San José State University
Existence of Solutions to Systems of Ordinary Differential Equations
The solution of differential equations has been the heart and soul of the physical sciences since the time of Isaac Newton. The question of whether or not a solution actually exists for an equation involving a derivative is vital. Fortunately, for linear equations constructive proofs of the existence of solutions are readily available. One goal of the analysis is to be able to say that for a system of n linear equations there exist n independent solutions.
For a square matrix M define its exponential function exp(M), which maps square matrices into square matrices of the same dimension, as

exp(M) = I + M + M²/2! + M³/3! + M⁴/4! + …

where I is the identity matrix of the same dimensions as M. Questions of convergence will be dealt with later.
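The series definition can be checked numerically. The sketch below (the matrix M is an arbitrary example, not from the text) sums the series term by term and compares it with SciPy's built-in matrix exponential.

```python
# A minimal sketch of the series exp(M) = I + M + M^2/2! + ...,
# checked against SciPy's expm. The matrix M is an arbitrary example.
import numpy as np
from scipy.linalg import expm

def exp_series(M, terms=30):
    """Sum the first `terms` terms of the matrix exponential series."""
    result = np.eye(M.shape[0])
    power = np.eye(M.shape[0])
    for k in range(1, terms):
        power = power @ M / k          # power is now M^k / k!
        result = result + power
    return result

M = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
print(np.allclose(exp_series(M), expm(M)))   # True
```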
For a matrix multiplied by a scalar t the definition reduces to

exp(Mt) = I + Mt + M²t²/2! + M³t³/3! + …
A set of linear differential equations of the form

dxᵢ/dt = Σⱼ aᵢ,ⱼ xⱼ   for i = 1, …, n

can be expressed as

dX/dt = AX

where X(t) is an n-dimensional column vector and A is the n×n matrix of the coefficients aᵢ,ⱼ. The solution is required to satisfy the initial condition X(0) = X₀.
Now consider the exponential matrix function

exp(At) = I + At + A²t²/2! + A³t³/3! + …

The derivative of the right-hand side (RHS) of the above with respect to t gives

d(exp(At))/dt = A + A²t + A³t²/2! + …
The matrix A can be factored out as a premultiplier from each term, so

d(exp(At))/dt = A[I + At + A²t²/2! + …]

What is left in the brackets is none other than exp(At). Therefore

d(exp(At))/dt = A·exp(At)
Now consider X(t) = exp(At)X₀. Differentiation with respect to t shows that

dX/dt = A·exp(At)X₀ = AX(t)

and since exp(A·0) = I, we have X(0) = X₀. Thus the function exp(At)X₀ satisfies the system of differential equations dX/dt = AX and the initial condition.
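This solution formula can be tested against a general-purpose numerical integrator. The sketch below uses an example 2×2 system (the matrix and initial vector are illustrative choices, not from the text).

```python
# Sketch: X(t) = exp(At) X0 solves dX/dt = AX. Checked here against
# SciPy's ODE integrator for an example 2x2 rotation system.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])    # dx1/dt = x2, dx2/dt = -x1
X0 = np.array([1.0, 0.0])

t = 2.0
closed_form = expm(A * t) @ X0
numeric = solve_ivp(lambda s, x: A @ x, (0.0, t), X0,
                    rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(closed_form, numeric, atol=1e-6))   # True
```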
Consider a system of equations of the form

dX/dt = AX + C

where C is a vector of constants. If A has an inverse then the system can be converted into the form

dX/dt = A(X + A⁻¹C)

Let Y(t) = X(t) + D, where D = A⁻¹C. Then the system becomes

dY/dt = AY

This system has the solution Y(t) = exp(At)Y(0) and thus

X(t) = exp(At)(X₀ + A⁻¹C) − A⁻¹C
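The affine solution formula can be verified numerically. In the sketch below the matrix A, vector C, and initial condition are arbitrary example data.

```python
# Sketch of the affine case dX/dt = AX + C: with D = A^{-1} C the solution
# is X(t) = exp(At)(X0 + D) - D. Verified against numerical integration.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
C = np.array([1.0, 1.0])
X0 = np.array([0.0, 0.0])
D = np.linalg.solve(A, C)        # D = A^{-1} C

t = 1.5
closed_form = expm(A * t) @ (X0 + D) - D
numeric = solve_ivp(lambda s, x: A @ x + C, (0.0, t), X0,
                    rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(closed_form, numeric, atol=1e-6))   # True
```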
The systems previously considered all had constant coefficients. If any coefficient is a non-trivial function of time then the system

dX/dt = A(t)X

has the solution

X(t) = exp(∫₀ᵗ A(s) ds)·X₀

provided that A(t) commutes with its own integral ∫₀ᵗ A(s) ds, as it does, for example, when A(t) = a(t)B for a scalar function a(t) and a constant matrix B.
If A(t) has an inverse for all t then the inhomogeneous system

dX/dt = A(t)X + C

can, as before, be written in the form dX/dt = A(t)(X + A(t)⁻¹C).
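The commuting time-varying case can be demonstrated numerically. The sketch below takes A(t) = cos(t)·B for an example constant matrix B, so the integral of A is sin(t)·B and the exponential formula applies.

```python
# Sketch of the time-varying case when A(t) commutes with its integral:
# here A(t) = cos(t) * B, its integral is sin(t) * B, and
# X(t) = exp(sin(t) B) X0. B and X0 are arbitrary example data.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

B = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
X0 = np.array([1.0, 2.0])

t = 2.0
closed_form = expm(np.sin(t) * B) @ X0
numeric = solve_ivp(lambda s, x: np.cos(s) * (B @ x), (0.0, t), X0,
                    rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(closed_form, numeric, atol=1e-6))   # True
```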
A symmetric square matrix M can be represented as

M = PΛPᵀ

where P is an orthogonal matrix; i.e., P⁻¹ = Pᵀ, the inverse of P is equal to the transpose of P. The matrix Λ is a diagonal matrix. (More generally, any diagonalizable matrix M can be written M = PΛP⁻¹.) Thus

Mᵏ = PΛᵏP⁻¹

This means that

exp(M) = P·exp(Λ)·P⁻¹

where exp(Λ) is the diagonal matrix whose entries are exp(λᵢ). The diagonal elements λᵢ of Λ are called the eigenvalues of the matrix M and the columns of P are its eigenvectors.
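The eigendecomposition route to the matrix exponential can be checked directly. The sketch below uses an example symmetric matrix, for which the eigenvector matrix is orthogonal.

```python
# Sketch: for symmetric M = P Λ P^T, exp(M) = P exp(Λ) P^T, where exp(Λ)
# is the diagonal matrix of exponentiated eigenvalues. Checked vs expm.
import numpy as np
from scipy.linalg import expm, eigh

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric example matrix
lam, P = eigh(M)                    # eigenvalues lam, orthogonal P
via_eigen = P @ np.diag(np.exp(lam)) @ P.T
print(np.allclose(via_eigen, expm(M)))   # True
```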
One small complication is that the eigenvalues of a matrix may be nonreal. That is no major difficulty in that exp(x+iy) is well defined; it is exp(x)(cos(y) + i·sin(y)). A more serious complication occurs when an eigenvalue occurs with a multiplicity greater than unity. If an eigenvalue is repeated then the second occurrence does not constitute another independent solution.
If an eigenvalue λ occurs with a multiplicity p then

exp(λt), t·exp(λt), t²·exp(λt), …, t^(p−1)·exp(λt)

are all solutions.
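The appearance of the t·exp(λt) term can be seen concretely in the exponential of a 2×2 Jordan block, the simplest example of a repeated eigenvalue with only one eigenvector.

```python
# Sketch of a repeated eigenvalue: for the 2x2 Jordan block J with
# eigenvalue lam, exp(Jt) = exp(lam*t) * [[1, t], [0, 1]], so a
# t*exp(lam*t) term appears in the solution.
import numpy as np
from scipy.linalg import expm

lam, t = -0.5, 2.0
J = np.array([[lam, 1.0],
              [0.0, lam]])
expected = np.exp(lam * t) * np.array([[1.0, t],
                                       [0.0, 1.0]])
print(np.allclose(expm(J * t), expected))   # True
```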
(To be continued.)
A differential equation of order k can be converted into a system of k first order equations. This just involves defining new variables such that

x₁ = x, x₂ = dx/dt, …, xₖ = d^(k−1)x/dt^(k−1)

so that dxᵢ/dt = xᵢ₊₁ for i = 1, …, k−1 and the original equation determines dxₖ/dt.
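As an illustration of the conversion, the second order equation x″ = −x (chosen here as an example) becomes a two-dimensional first order linear system, which the matrix exponential then solves.

```python
# Sketch: x'' = -x with x1 = x, x2 = dx/dt becomes dX/dt = AX.
# With x(0) = 1 and x'(0) = 0 the solution is x(t) = cos(t).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],     # dx1/dt = x2
              [-1.0, 0.0]])   # dx2/dt = -x1
X0 = np.array([1.0, 0.0])     # x(0) = 1, x'(0) = 0

t = 1.0
x_t = (expm(A * t) @ X0)[0]
print(np.isclose(x_t, np.cos(t)))   # True
```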
Suppose a system of ordinary differential equations is represented as

dX/dt = F(X(t), t)

One approach to a solution is by iteration. Start with an arbitrary X₀(t); then construct X₁(t) by integration of

X₁(t) = X(0) + ∫₀ᵗ F(X₀(s), s) ds

and likewise for X₂(t) and beyond as

Xₙ₊₁(t) = X(0) + ∫₀ᵗ F(Xₙ(s), s) ds
The question of convergence can be examined by looking at the differences yₙ(t) = Xₙ(t) − Xₙ₋₁(t). Then

yₙ₊₁(t) = ∫₀ᵗ [F(Xₙ(s), s) − F(Xₙ₋₁(s), s)] ds

The integrand on the RHS of the above can be approximated by (∂F/∂X)·yₙ(s). Thus

yₙ₊₁(t) ≈ ∫₀ᵗ (∂F/∂X)·yₙ(s) ds

Heuristically, the y's would go asymptotically to zero if the eigenvalues of the matrices (∂F/∂X) all have negative real part.
(To be continued.)
HOME PAGE OF Thayer Watkins