
Linear Independence

A set of functions $f_1(t), f_2(t), \ldots, f_n(t)$ is said to be
linearly dependent if there exist (real) constants $c_1, \ldots, c_n$, not all zero, such that

\begin{displaymath}c_1 f_1(t) + c_2 f_2(t) + \cdots + c_n f_n(t) = 0\end{displaymath}

for all $t$ in some interval $I$.

A set of functions is linearly independent if it is not linearly dependent; i.e., if

\begin{displaymath}c_1 f_1(t) + c_2 f_2(t) + \cdots + c_n f_n(t) = 0 \quad \textrm{for all } t \textrm{ in } I\end{displaymath}


\begin{displaymath}\Longrightarrow \quad c_1 = 0,\quad c_2 = 0, \quad \ldots, \quad c_n = 0\end{displaymath}
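For instance (an illustration added here, not part of the original notes), the functions $f_1(t) = 1$, $f_2(t) = t$, and $f_3(t) = 2 + 3t$ are linearly dependent on any interval, since choosing $c_1 = 2$, $c_2 = 3$, $c_3 = -1$ gives

\begin{displaymath}2(1) + 3(t) - (2 + 3t) = 0 \quad \textrm{for all } t.\end{displaymath}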

We will frequently be considering sets of just two functions. In this case, if $f_1(t)$ and $f_2(t)$ are linearly dependent, then one of them must be a constant multiple of the other; for example,

\begin{displaymath}f_1(t) = c f_2(t)\end{displaymath}

for some constant $c$. Otherwise, $f_1(t)$ and $f_2(t)$ are linearly independent.
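
For example (again, an added illustration), $f_1(t) = e^{t}$ and $f_2(t) = e^{2t}$ are linearly independent on any interval: their ratio $e^{2t}/e^{t} = e^{t}$ is not constant, so neither function is a constant multiple of the other. On the other hand, $f_1(t) = t$ and $f_2(t) = 3t$ are linearly dependent, since $f_2(t) = 3 f_1(t)$.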

Solving Second Order Linear D.E.'s With Constant Coefficients

Consider the homogeneous D.E.

\begin{displaymath}ay'' + by' + cy = 0.\end{displaymath}

Let us suppose that we have a solution of the form
$y(t) = e^{rt}$. What would $r$ have to be?

\begin{eqnarray*}
ay'' + by' + cy & = & ar^2e^{rt} + bre^{rt} + ce^{rt}\\
& = & (ar^2 + br + c)e^{rt} = 0\\
&\Longrightarrow & ar^2 + br + c = 0
\end{eqnarray*}


The equation $ar^2 + br + c = 0$ is called the
characteristic equation of the D.E.
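
For example (an added illustration, not from the original notes), the D.E. $y'' - 5y' + 6y = 0$ has characteristic equation

\begin{displaymath}r^2 - 5r + 6 = (r - 2)(r - 3) = 0,\end{displaymath}

so $r = 2$ and $r = 3$, and both $y_1(t) = e^{2t}$ and $y_2(t) = e^{3t}$ are solutions.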





There are several cases to consider when solving the
characteristic equation (see the examples below):

1. The equation has two real, distinct roots
2. The equation has one real, repeated root
3. The equation has two complex, distinct roots
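
As a brief illustration of the three cases (added examples, not from the original notes): $r^2 - r - 2 = (r+1)(r-2) = 0$ has two real, distinct roots $r = -1$ and $r = 2$; $r^2 - 4r + 4 = (r-2)^2 = 0$ has the single repeated root $r = 2$; and $r^2 + 2r + 5 = 0$ has the two complex roots $r = -1 \pm 2i$.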

Euler's formula

We know that the Taylor series for $e^x$ is


\begin{displaymath}e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.\end{displaymath}

We derived this series from what we knew about the natural exponential function. Now, however, suppose we take this Taylor series to be the definition of $e^x$. This will allow us to do interesting things, like compute $e^{ix}$, for example (where $i = \sqrt{-1}$).


\begin{eqnarray*}
e^{ix} & = & \sum_{n=0}^{\infty} \frac{(ix)^n}{n!}\\
& = & 1 + ix - \frac{x^2}{2!} - \frac{i}{3!}x^3 + \frac{x^4}{4!} + \frac{i}{5!}x^5
- \frac{x^6}{6!} - \frac{i}{7!}x^7 + \cdots
\end{eqnarray*}


which is perhaps starting to look familiar... Grouping the even powers of $x$ (the real part) and the odd powers (the imaginary part), we recognize the Taylor series for $\cos{x}$ and $\sin{x}$:


\begin{eqnarray*}
& = & \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}x^{2n} +
i\sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}x^{2n+1} \\ [.1in]
& = & \cos{x} + i \sin{x}
\end{eqnarray*}


So we have just derived Euler's formula:


\begin{displaymath}e^{ix} = \cos{x} + i \sin{x}\end{displaymath}
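
As a quick illustration (added here, not in the original notes), setting $x = \pi$ gives the famous identity

\begin{displaymath}e^{i\pi} = \cos{\pi} + i\sin{\pi} = -1,\end{displaymath}

and setting $x = \pi/2$ gives $e^{i\pi/2} = i$.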

Case 3: complex roots

Given the D.E.

\begin{displaymath}ay'' + by' + cy = 0,\end{displaymath}

suppose $z_1 = \lambda + i\mu$ and $z_2 = \lambda - i\mu$ are roots of the characteristic equation $ar^2 + br + c = 0$.
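
For reference (a detail not spelled out in the original notes), the quadratic formula tells us exactly when this case occurs and what $\lambda$ and $\mu$ are: assuming $a \neq 0$, the roots are complex precisely when $b^2 - 4ac < 0$, in which case

\begin{displaymath}\lambda = -\frac{b}{2a} \quad \textrm{and} \quad \mu = \frac{\sqrt{4ac - b^2}}{2a}.\end{displaymath}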





Let's pretend for a moment that it doesn't matter that $z_1$ and $z_2$ are complex numbers. [Note: You can take derivatives and integrals of complex-valued functions in exactly the same way as real-valued functions.]





We will show that $y_1(t) = e^{z_1 t}$ and $y_2(t) = e^{z_2 t}$ are both solutions. Let's try $y_1(t)$:


\begin{displaymath}a(e^{z_1 t})'' + b(e^{z_1 t})' + c(e^{z_1 t}) = (az_1^2 + bz_1 + c)\,e^{z_1 t} = 0,\end{displaymath}

since $z_1$ is a root of $ar^2 + br + c = 0$.





Hence, $y_1(t) = e^{z_1 t}$ is a solution (and similarly,
$y_2(t) = e^{z_2 t}$ is also a solution).

But what do these solutions really look like?





To find out, we apply Euler's formula:

\begin{eqnarray*}
y_1(t) = e^{z_1t} & = & e^{(\lambda + i\mu)t}\\
& = & e^{\lambda t}e^{i\mu t}\\
& = & e^{\lambda t}(\cos{\mu t} + i\sin{\mu t})\\
& = & e^{\lambda t}\cos{\mu t} + ie^{\lambda t}\sin{\mu t}
\end{eqnarray*}


Similarly, we could derive that $y_2(t) = e^{\lambda t}\cos{\mu t} -
ie^{\lambda t}\sin{\mu t}$.





Unfortunately, the solutions we have derived are complex-valued. For many modelling problems, this just doesn't make sense.





But there is hope. We apply the principle of superposition:


\begin{eqnarray*}
y_1(t) + y_2(t) & = & (e^{\lambda t}\cos{\mu t} + ie^{\lambda t}\sin{\mu t}) + (e^{\lambda t}\cos{\mu t} - ie^{\lambda t}\sin{\mu t})\\
& = & 2e^{\lambda t}\cos{\mu t}\\ [.1in]
y_1(t) - y_2(t) & = & (e^{\lambda t}\cos{\mu t} + ie^{\lambda t}\sin{\mu t}) - (e^{\lambda t}\cos{\mu t} - ie^{\lambda t}\sin{\mu t})\\
& = & 2ie^{\lambda t}\sin{\mu t}
\end{eqnarray*}


Since the principle of superposition also tells us that any constant multiple of a solution is a solution, we can discard the constant coefficients of the sum and difference we just computed (even though one of them is imaginary) to obtain


\begin{displaymath}y_3(t) = e^{\lambda t}\cos{\mu t} \quad \textrm{and} \quad
y_4(t) = e^{\lambda t}\sin{\mu t}\end{displaymath}

and so we have, in fact, found two linearly independent, real-valued solutions.
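
In other words (summarizing the upshot, which is exactly what the example below uses), the general solution in the complex-roots case can be written as

\begin{displaymath}y(t) = c_1 e^{\lambda t}\cos{\mu t} + c_2 e^{\lambda t}\sin{\mu t}.\end{displaymath}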

Example:

Find the solution to the initial value problem:


\begin{displaymath}y'' - 6y' + 13y = 0, \hspace{.5in} y(0) = 1, \hspace{.5in} y'(0) = 7.\end{displaymath}





We observe that the characteristic equation is

\begin{displaymath}r^2 - 6r + 13 = (r - (3 + 2i))(r - (3 - 2i)) = 0,\end{displaymath}

so $\lambda = 3$ and $\mu = 2$, hence

\begin{displaymath}y_1(x) = e^{3x}\cos{2 x} \quad \textrm{and} \quad y_2(x) = e^{3x}\sin{2 x}.\end{displaymath}

So to construct the general solution, we invoke the
Principle of Superposition and take a linear combination of $y_1$ and $y_2$ to obtain

\begin{displaymath}y(x) = c_1 e^{3x}\cos{2 x} + c_2 e^{3x}\sin{2 x}\end{displaymath}

Now we apply the initial conditions, noting first that


\begin{eqnarray*}
y' & = & 3c_1e^{3x}\cos{2 x} - 2c_1 e^{3x}\sin{2 x} \\
&+& 3c_2e^{3x}\sin{2 x} + 2c_2 e^{3x}\cos{2 x} \\
& = & (3c_1 + 2c_2)e^{3x}\cos{2 x} + (-2c_1 + 3c_2) e^{3x}\sin{2 x}
\end{eqnarray*}


The first initial condition, $y(0) = 1$, yields

\begin{displaymath}y(0) = c_1 e^0 \cos{0} + c_2 e^0 \sin{0} = c_1 = 1\end{displaymath}

The second, $y'(0) = 7$, yields

\begin{displaymath}(3c_1 + 2c_2) e^0 \cos{0} + (-2c_1 + 3c_2) e^0 \sin{0} = 3 c_1 + 2 c_2 = 7\end{displaymath}


Since $c_1 = 1$, this gives $3 + 2c_2 = 7$, and

\begin{displaymath}\therefore \quad c_2 = 2\end{displaymath}

Therefore, our solution is


\begin{displaymath}y(x) = e^{3x}\cos{2x} + 2e^{3x}\sin{2x}.\end{displaymath}
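
As a quick check (added here, not part of the original notes): $y(0) = e^{0}\cos{0} + 2e^{0}\sin{0} = 1$, and from the expression for $y'$ above, $y'(0) = (3c_1 + 2c_2)e^{0}\cos{0} = 3(1) + 2(2) = 7$, so both initial conditions are satisfied.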



