This Week's Homework: (It is very short because of the exam).

Make sure you can do 2.3: 1, 2, 5, 7, 12 and read the material below on exciting conventions, as well as doing the exercise at the bottom of this page.

Turn in: 2.3: 3, 6, 11, 14 and 6.3: 3(a,c), 6 and the following:

1. Let V be an inner-product space and let W be a subspace of V. Show $V = W \oplus W^{\perp}$. (As usual you may assume that our field is the real numbers).

(notice this assignment was shortened on Wednesday)

Proposed Topics For this Week: The first exam will be handed out on Monday, and an extra copy will be posted on the web. If you find any typos please tell me so I can pass them along and post them on this page. In Monday's lecture we will show that the composition of linear maps can be achieved by matrix multiplication in the presence of specified finite bases. Wednesday we will continue our geometric exploration by defining the adjoint of an operator as well as showing that bilinear forms (like linear transformations) can also be understood via the use of matrices. On Friday we will discuss invertibility and isomorphism, sec 2.4.

Exam Hints and Some Corrections (largely answers to questions posed by students)

1.
Problem one is not designed to be difficult, especially the first four parts. Simply look at last Friday's lecture, where we explored the strikingly similar mapping

\begin{displaymath}P_u(v) = v - \left< v,u \right> u .\end{displaymath}

Note this is a very similar function, so please look into that lecture for some serious hints!
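
As a quick sanity check on that lecture's formula (assuming, as in that lecture, that u is a unit vector), note that $P_u(v)$ is always orthogonal to u:

\begin{displaymath}\left< P_u(v), u \right> = \left< v,u \right> - \left< v,u \right>\left< u,u \right> = 0 .\end{displaymath}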

Reminder:

(a)
In class we defined bilinear forms and inner products only on real vector spaces, so assume V is a real inner-product space! (i.e. you never need a bar).
(b)
The definition of isometry: an isometry from V to itself is a linear map L from V to itself satisfying

\begin{displaymath}\left< v,v \right> = \left<L(v),L(v) \right>\end{displaymath}

for every vector v.
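
For example (with the standard inner product on $R^2$; this is just an illustration, not a problem from the exam), rotation by an angle t,

\begin{displaymath}L(x,y) = ( x\cos(t) - y\sin(t),\ x\sin(t) + y\cos(t) ) ,\end{displaymath}

is an isometry, since $(x\cos(t) - y\sin(t))^2 + (x\sin(t) + y\cos(t))^2 = x^2 + y^2$.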

(c)
Recall a unit vector u is one where <u,u> = 1, and u and v are said to be orthogonal if <u,v> = 0. A set is said to be orthonormal if its elements are all unit vectors and every pair of distinct vectors u and v from this set is orthogonal.
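
For instance, in $R^2$ with the standard inner product (just an illustration, not part of the exam), the set

\begin{displaymath}\left\{ \frac{1}{\sqrt{2}}(1,1),\ \frac{1}{\sqrt{2}}(1,-1) \right\}\end{displaymath}

is orthonormal: each vector has <u,u> = 1/2 + 1/2 = 1, and the pair has <u,v> = 1/2 - 1/2 = 0.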

(d)
"orthonoral" in 1(e) should read orthonormal.

(e)
In the second line, "the reflection though u" should read "the reflection through u."

(f)
Notice $R_u$ is a linear mapping for each u; in other words, for each unit vector I can form such a mapping.

(g)
On problem 1(e) there have been some questions regarding the use of induction to "find" the right formula. I don't care how you guess the right formula! However, when you're done finding it you must use induction to verify its truth.

(h)
The $e_i$ in 1(h) should be an $e_1$.

2.
Problems two and three.

(a)
Notice $\beta_0 = \{e_1, e_2\}$ is the standard basis on $R^2$, i.e. $e_1 = (1,0)$ and $e_2 = (0,1)$.

(b)
Recall that $M^{1+1}$ is $R^2$ with the bilinear form H(-,-) on it defined by

\begin{displaymath}H(a e_1 + b e_2, c e_1 + d e_2) = ac - bd .\end{displaymath}

Also notice that for the problem you are never asked to make any computations with this bilinear form, only to explore the bases

\begin{displaymath}\beta_s = \{ \cosh(s) e_1 + \sinh(s)e_2,
\sinh(s) e_1 + \cosh(s)e_2\} .\end{displaymath}

Recall we verified in the first x-session that for each real number s, $\beta_s$ is an "ortho-normal" basis with respect to the above bilinear form H(-,-); we even graphed these bases in the $\beta_0$ coordinate plane.
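
For instance, the computation for the first vector of $\beta_s$ is just the hyperbolic identity in disguise:

\begin{displaymath}H(\cosh(s) e_1 + \sinh(s) e_2,\ \cosh(s) e_1 + \sinh(s) e_2) = \cosh^2(s) - \sinh^2(s) = 1 ,\end{displaymath}

while the second vector gives $\sinh^2(s) - \cosh^2(s) = -1$ and the cross term gives $\cosh(s)\sinh(s) - \sinh(s)\cosh(s) = 0$ (the $-1$ is presumably why "ortho-normal" appears in quotes).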

(c)
Notice we are denoting vectors in $M^{1+1}$ with the symbol p, and for each basis we get coordinates describing p; in particular we get a coordinate plane to view all such p simultaneously.

(d)
Notice the statement that

\begin{displaymath}\frac{x_1 -x_0}{(t_0+1) -t_0} = 1\end{displaymath}

in 3(a) is a hypothesis you may use to verify the needed computation.

5.
Problems four and five.

(a)
There are some notation issues. Namely

$Q^N = Q_N$

and in particular

$Q^3 = Q_3$.

Much more importantly, please re-index $\beta_Q$ and $\beta_Q^N$ by starting at zero rather than at 1, i.e.

\begin{displaymath}\beta_Q = \{\psi_i\}_{i=0}^{\infty}\end{displaymath}

and

\begin{displaymath}\beta^N_Q = \{\psi_i\}_{i=0}^{N} . \end{displaymath}

(b)
For us, applying the Gram-Schmidt procedure means forming an orthonormal set, not just an orthogonal set (i.e. it must consist of unit vectors). Notice you can easily check whether you did this correctly by doing 5(b).
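
Concretely, at the k-th stage the procedure subtracts off the components along the vectors already produced and then normalizes; a generic sketch (with $\psi_0, \ldots, \psi_{k-1}$ the orthonormal vectors already constructed and $v_k$ the next input vector) is

\begin{displaymath}\tilde{\psi}_k = v_k - \sum_{i=0}^{k-1} \left< v_k , \psi_i \right> \psi_i ,
\qquad \psi_k = \frac{\tilde{\psi}_k}{\sqrt{ \left< \tilde{\psi}_k , \tilde{\psi}_k \right> }} .\end{displaymath}

Skipping the final division is what produces a merely orthogonal (rather than orthonormal) set.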

For a revised copy of the exam click Here. (At the moment my converter program is having some problems, so the exam doesn't look so hot.)

The X-session Topic: I will go over the exam.

Some Exciting Conventions!

Transforming a linear mapping between finite-dimensional vector spaces into a matrix involves some choices, which I will call conventions. Let L be a linear mapping from V to W, let $\alpha = \{v_i\}_{i=1}^{n}$ be a basis of V, let $\beta = \{w_i\}_{i=1}^{m}$ be a basis of W, and let x be in V.

One is an indexing convention often called Einstein's notation, which we will denote ${\bf E}$. When using this convention we let

\begin{displaymath}a_{\mathrm{row}\ \mathrm{column}} = a^{\mathrm{row}}_{\mathrm{column}} .\end{displaymath}

(This convention is extremely useful when dealing with a generalization of matrices and vectors known as tensors. Tensors are the syntax of much of physics, and it's good to get used to this notation here amongst linear transformations.)

The other convention choice is whether to view composition as occurring on the left or on the right, i.e. (TU)(x) = T(U(x)) (L) or (TU)(x) = U(T(x)) (R). (I feel it's a bit 20th century (i.e. old-fashioned) to view composition on the left, but this is probably what you are most used to.)
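
The reason this choice matters is that it determines the order in which the corresponding arrays multiply when maps are composed. As a rough sketch (with $[\,\cdot\,]$ used loosely for "the associated array," and vectors written as columns for L and as rows for R):

\begin{displaymath}[(TU)(x)] = [T]\,[U]\,[x] \quad (\mbox{L}) ,
\qquad [(TU)(x)] = [x]\,[T]\,[U] \quad (\mbox{R}) ,\end{displaymath}

so in either case the arrays multiply in the same order in which the maps are written.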

The book's conventions are to not use the Einstein convention and to use left composition (not(E) and L)

\begin{displaymath}x =\sum_{i=1}^{n} a_i v_i \end{displaymath}


\begin{displaymath}L(v_j ) = \sum_{i=1}^{m} a_{ij} w_i .\end{displaymath}

With these conventions we let $[L]_{\alpha}^{\beta} = [a_{ij}].$

In class we will adopt the left composition convention, and will also use Einstein's notation, i.e. E and L:

\begin{displaymath}x =\sum_{i=1}^{n} a^i v_i \end{displaymath}


\begin{displaymath}L(v_j ) = \sum_{i=1}^{m} a^i_j w_i .\end{displaymath}

With these conventions we let $[L]_{\alpha}^{\beta} = [a^i_{j}].$

In my life I usually use E and R, i.e.

\begin{displaymath}x =\sum_{i=1}^{n} a_i v^i \end{displaymath}


\begin{displaymath}L(v^j ) = \sum_{i=1}^{m} a^j_i w^i .\end{displaymath}

In this convention we let $[L]_{\alpha}^{\beta} = [a^j_{i}].$ (In practice the difference is that vectors become rows and matrix multiplication of a vector is on the right.)

Last week I foolishly mixed up what the book was doing and used (not(E) and R)

\begin{displaymath}x =\sum_{i=1}^{n} a_i v_i \end{displaymath}


\begin{displaymath}L(v_j ) = \sum_{i=1}^{m} a_{ji} w_i .\end{displaymath}

In this convention we let $[L]_{\alpha}^{\beta} = [a_{ji}].$ (Ironically, in terms of the above convention possibilities this is the exact opposite of the convention we will be using in class.)

As an exercise, please redo the examples from last Thursday using the E and R notation and the book convention. Also note that the array obtained using R is the transpose of the L array. (This gives us our first taste of how fundamental the transpose operation really is.)
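
If it helps to have something concrete to check your work against, here is a small made-up example (not one of last Thursday's examples): let L be the map from $R^2$ to $R^3$ given by $L(x,y) = (x+y,\ x-y,\ 2y)$, with the standard bases. Under the book's convention (not(E) and L) the columns of the array are the images of the basis vectors, and the array acts on column vectors from the left:

\begin{displaymath}[L] = \left( \begin{array}{rr} 1 & 1 \\ 1 & -1 \\ 0 & 2 \end{array} \right),
\qquad \left( \begin{array}{rr} 1 & 1 \\ 1 & -1 \\ 0 & 2 \end{array} \right)
\left( \begin{array}{c} x \\ y \end{array} \right)
= \left( \begin{array}{c} x+y \\ x-y \\ 2y \end{array} \right) ,\end{displaymath}

while under R the array is the transpose of this one, its rows are the images of the basis vectors, and it acts on row vectors from the right:

\begin{displaymath}\left( \begin{array}{cc} x & y \end{array} \right)
\left( \begin{array}{rrr} 1 & 1 & 0 \\ 1 & -1 & 2 \end{array} \right)
= \left( \begin{array}{ccc} x+y & x-y & 2y \end{array} \right) .\end{displaymath}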





Math 24 Winter 2000
2000-02-02