Project #10: Dominant Eigenvalue Computation
Introduction
The purpose of this project is to develop techniques for numerically calculating the largest eigenvalue of a matrix. The largest eigenvalue often gives a great deal of information about the stability of the solutions of many models, such as population models. Commands such as eigenvals in Maple try to calculate all of the eigenvalues and, in the case of larger matrices, can fail to do so. The method used here is the "Power Method".
Background
Basics of eigenvalues and eigenvectors; section 5.1 of Kolman, 6.1 of Bretscher, or 6.1 of Leon. You should be comfortable with the definition of eigenvalue/eigenvector; it will be applied throughout this project. You will also need to use software to compute matrix products and scalar multiples of matrices.
Assume in what follows that A is an n x n matrix with eigenvalues e1, e2, ..., en arranged in decreasing order of absolute value, so that
|e1| > |e2| > ... > |en|,
making e1 the so-called dominant eigenvalue. Also assume that the eigenvectors associated with e1, e2, ..., en are denoted by v1, v2, ..., vn (so A vi = ei vi, i = 1, ..., n).
Our problem is to find e1 and v1 for a given matrix A.
Some Simple Linear Algebra
In this project it may be helpful to remember that:
* any scalar multiple of an eigenvector is also an eigenvector (for the same eigenvalue)
* the acid test for being an eigenvalue and eigenvector is Ax = ex (if you multiply a nonzero vector by a matrix and get a scalar multiple of the same vector, then you have an eigenvector and the scalar is its eigenvalue)
* if u1 denotes the column vector (1,0,0,...,0)t then A u1 gives you the 1st column of A (try it! A quick numerical check is sketched below.). In general, if ui is the column vector of all 0s except a 1 in the ith position, then A ui yields the ith column of A
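The last fact is easy to verify numerically. Here is a minimal sketch in Python/NumPy (the same check can be done with Maple's matrix commands); the 3 x 3 matrix used is an arbitrary example chosen only for illustration.

    import numpy as np

    # Arbitrary example matrix, used only to illustrate the fact above.
    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

    # u2: all 0s except a 1 in the 2nd position.
    u2 = np.array([0, 1, 0])

    # A u2 picks out the 2nd column of A.
    print(A @ u2)      # [2 5 8]
    print(A[:, 1])     # the same column, read off directly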
The Power Method
Conceptually, the Power Method is straightforward. First take any vector x and expand it in terms of the eigenvectors of A:
x = c1 v1 + c2 v2 + ... + cn vn (so the c's are the coordinates of x, as before)
Multiply both sides by A and use the linearity of matrix multiplication:
Ax = c1 A v1 + c2 A v2 + ... + cn A vn
as well as the definition of eigenvectors:
Ax = c1 e1 v1 + c2 e2 v2 + ... + cn en vn
Repeating the multiplication by A on both sides k times, we obtain:
A^k x = c1 e1^k v1 + c2 e2^k v2 + ... + cn en^k vn
Now for large k, since |e1| is greater than each of the other |ei|, |e1|^k will be much greater than |ei|^k (for all i = 2, ..., n). Hence the first term on the right-hand side will dominate the others and we will have
A^k x ≈ c1 e1^k v1 for k large
Since any multiple of an eigenvector is still an eigenvector, we will have found an eigenvector corresponding to e1. Furthermore, if we compare the components of this vector to the corresponding components of the next power, A^(k+1) x, the component-wise ratios should all be (approximately) the same, and that common ratio should be e1!
The only question that remains is: what is the vector x? It may be taken more or less at random, but (if you look at the expansion) we would not want it to be one of the other eigenvectors or a combination of them alone, since then c1 = 0. OK; that's no help, since we don't know them. In the applications we have used in this course and the problems that follow, it may be shown that a safe choice for x is simply the column vector of all 1s: (1,1,1,...,1)t
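To make the procedure concrete, here is a minimal sketch of the Power Method in Python/NumPy (Maple's matrix commands can be used in exactly the same way). The 3 x 3 matrix below is a hypothetical example, not the matrix of Example 1, and the powers 10 and 11 are simply the ones used there.

    import numpy as np

    # Hypothetical example matrix (not the matrix from Example 1).
    A = np.array([[4.0, 1.0, 2.0],
                  [1.0, 3.0, 1.0],
                  [2.0, 1.0, 5.0]])

    # Starting vector of all 1s, as suggested above.
    x = np.ones(A.shape[0])

    # Form A^k x for two consecutive powers, k = 10 and k = 11.
    xk  = np.linalg.matrix_power(A, 10) @ x
    xk1 = np.linalg.matrix_power(A, 11) @ x

    # Scale each by its first component; if k is large enough, the two
    # scaled vectors agree to the desired number of places and give v1.
    print(xk / xk[0])
    print(xk1 / xk1[0])

    # The component-wise ratios of A^(k+1) x to A^k x approximate e1.
    print(xk1 / xk)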
Example 1
Suppose that A is given to be
Using x as above and the multiply command in Maple, we compute, for example, A^10 x and A^11 x:
At face value these contain little of assistance. However, if we divide each one by its first component, we get
suggesting that k is large enough to have gotten us near the limit (to 3 places). We check our result using the definition of eigenvalue/eigenvector:
showing that we did indeed find the dominant eigenvalue, 4.820, and an eigenvector corresponding to it, (1, 1.910, 2.648)t. How did we know we used a high enough power? Roughly, after scaling A^10 x and A^11 x we got the same result, to three decimal places. Had they differed, we would have gone on to two consecutive higher powers.
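The acid test at the end can also be scripted. The sketch below reuses the hypothetical matrix from the previous sketch (Example 1's matrix is not reproduced here): it scales A^10 x by its first component to get a candidate eigenvector v, reads off a candidate eigenvalue from a component-wise ratio, and then compares A v with that eigenvalue times v.

    import numpy as np

    # Hypothetical matrix, as in the earlier sketch (not Example 1's matrix).
    A = np.array([[4.0, 1.0, 2.0],
                  [1.0, 3.0, 1.0],
                  [2.0, 1.0, 5.0]])

    xk = np.linalg.matrix_power(A, 10) @ np.ones(3)
    v  = xk / xk[0]            # candidate eigenvector, scaled so v[0] = 1
    e1 = (A @ v)[0] / v[0]     # candidate eigenvalue from a component-wise ratio

    # Acid test: if v really is (close to) an eigenvector, then A v and
    # e1 * v agree component by component to the working precision.
    print(A @ v)
    print(e1 * v)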
For nonnegative matrices, the foundation for this method is provided by the powerful Perron-Frobenius Theorem, which actually has two versions, depending on what kind of nonnegative matrix one has.
Exercises:
1. Use the methods above to obtain the dominant eigenvalue and a corresponding eigenvector for each of the following matrices:
a. the original matrix from Project 4 (population dynamics and urban planning)
b. the transition matrix from any regular Markov chain, 4 x 4 or bigger (your choice)
c. the matrix
(In each case, your write-up should indicate the eigenvalue and an eigenvector, as well as the powers you used to obtain them. All results should be accurate to 3 decimal places.)
2. For each matrix from problem 1, use an appropriate command from your software package that computes eigenvalues and eigenvectors, and compare the results with your work from problem 1.
3. a. The method does not always work. If A has two dominant eigenvalues, that is, two eigenvalues with the same largest absolute value (like 8 and -8), then the asymptotic behavior described above does not happen. Examine the following matrix in light of this remark:
(In other words, try to apply the power method as we did earlier.) Your report should describe what happened and what went wrong.
b. Prove the following algebraic result: if x is an eigenvector of A with eigenvalue e, then x is also an eigenvector of (A + cIn) with eigenvalue e + c. (Hint: use the definition of eigenvalue/eigenvector.)
c. Use your result from part b to obtain the dominant eigenvalue/eigenvector for the matrix in part a by adding 2 down the diagonal. (A brief sketch of this shifted iteration appears after the exercises.)
4. One can also, in some cases, find the dominant eigenvalue by examining the graph of the characteristic polynomial of A. Remember that wherever the graph crosses the x-axis is an eigenvalue; you simply want the largest place it crosses. Use software commands to generate the characteristic polynomial of the matrix in 1c and plot it so as to find the largest root.
5. a. Algebra problem: prove that if A is nonnegative and has dominant eigenvalue e1 and dominant eigenvector x0 (which you may assume has all positive entries), then e1 lies between the smallest and the largest of the column sums of A. (Hint: scale x0 so its components add up to 1, then look at the definition of eigenvalue/eigenvector.)
Examine the problem in the discussion of this project for an example.
b. Use the above result to show that for a Markov chain, the dominant eigenvalue is
always 1.
c. Prove that if the row sums of a matrix are all the same, then (1,1,1,...,1)t is an eigenvector. What can you say about its eigenvalue?
6. In order for the Power Method to work, what implicit assumptions have we made about
A and its eigenvalues/eigenvectors?
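As referenced in exercise 3c, here is a minimal Python/NumPy sketch of the shift idea from exercises 3b and 3c. The 2 x 2 matrix used is a hypothetical stand-in whose two eigenvalues, 3 and -3, have the same absolute value; it is not the matrix from exercise 3a.

    import numpy as np

    # Hypothetical matrix with eigenvalues 3 and -3 (equal absolute value),
    # so the plain power method stalls: the component-wise ratios keep
    # oscillating instead of settling down. Not the matrix from exercise 3a.
    A = np.array([[2.0, 5.0],
                  [1.0, -2.0]])
    x = np.ones(A.shape[0])

    # Shift: by exercise 3b, A + cI has the same eigenvectors as A, and every
    # eigenvalue moves from e to e + c, which breaks the tie between 3 and -3.
    c = 2.0
    B = A + c * np.eye(A.shape[0])

    # Run the power method on the shifted matrix, as in exercise 3c.
    yk  = np.linalg.matrix_power(B, 20) @ x
    yk1 = np.linalg.matrix_power(B, 21) @ x

    e1_shifted = (yk1 / yk)[0]   # dominant eigenvalue of B = A + cI
    print(e1_shifted - c)        # subtracting c recovers an eigenvalue of A (here 3)
    print(yk / yk[0])            # a corresponding eigenvector, scaled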