MA 2071

 

Using Matlab for Linear Algebra

                   note:  Matlab commands entered by the user are shown after the  >>  prompt (in blue font)

 

1.     Creating Matrices

rule:  name = [ entries within a row separated by commas, with a ; at the end of each row ]

 

example:      >> A = [1,2,3; 4,5,6; 7,8,9]   creates a 3x3 matrix

 

A =

 

     1     2     3

     4     5     6

     7     8     9

 

Note that there is no semicolon at the end of the command I typed.  This means the output will be displayed (here a good idea).  If you put a semicolon at the end, then the output is suppressed (sometimes a good idea when there is a ton of output).
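
For example, re-entering the same matrix with a semicolon at the end should produce no output at all, even though A is still created and stored:

>> A = [1,2,3; 4,5,6; 7,8,9];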

 

example:  >> B = [3, -1, 2]    creates a 1x3 row matrix (not a column)

 

example:  >> C = [1, 0, 0]'  creates a 3x1 column matrix.  The apostrophe at the end is the Matlab equivalent of the matrix transpose.  (If you don't remember that bit of trivia, a transpose operation on a matrix turns rows into columns and vice versa.  In the textbook it is indicated by a superscript T.)
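
For instance, applying the apostrophe to the row matrix B defined above should turn it into a column:

>> B'

ans =

     3
    -1
     2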

 

If you need to use the identity matrix, Matlab has a function to build it for you called  eye.  You tell it the size and you have it.

 

>> I3 = eye(3)            ( I3 was my choice for a name – use anything you want except lowercase i, which Matlab reserves for the imaginary unit)

 

I3 =

 

1  0  0

0  1  0

0  0  1
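
As a quick check, multiplying A by I3 should give back A unchanged, which is exactly what an identity matrix is supposed to do:

>> A*I3

ans =

     1     2     3
     4     5     6
     7     8     9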

 

2.     Performing the Gauss-Jordan algorithm to get the Final Form (RREF) of a matrix is very easy!  If A is a matrix you have already defined and entered, then simply use rref:

 

                   >> A = [1,2,3;4,5,6;7,8,9]  

 

                   >> rref(A)

 

                   ans =

                        1     0    -1
                        0     1     2
                        0     0     0

 

which is the RREF or Final Form of this matrix. The matrix does not have to be square.
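
For instance, rref works just as well on a 2x3 matrix made from the first two rows of A, and should give something like:

>> rref([1,2,3; 4,5,6])

ans =

     1     0    -1
     0     1     2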

                            

 

3.     Matrix Arithmetic is demonstrated fairly easily by the following examples, some of which refer to the matrices created earlier.

 

multiplication:

 

            >>  H = A * C

 

            H =

                        1

                        4

                        7
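
Order matters in matrix multiplication.  A*B is not even defined here (the inner dimensions do not match), but the row matrix B from Section 1 can multiply A from the left, and should give:

>> B*A

ans =

    13    17    21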

 

inversion:

            >>  J = [2, 3; 4, 1 ]

 

            J =

                        2  3

                        4  1

            >> K = inv(J)

 

            K =

 

                        -0.1000    0.3000             (note that Matlab uses floating point displays)

                          0.4000   -0.2000

(check) >> J*K

ans =

 

             1.0000   -0.0000

               0          1.0000

 

rank:      (number of nonzero rows in RREF)

 

            >>rank(J)

            ans =

                        2
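
The matrix A from Section 1 had only two nonzero rows in its RREF (see Section 2), so its rank should come out as 2:

>> rank(A)

ans =

     2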

 

determinants:

 

            >>det(J)

            ans =

                        -10
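
Since A from Section 1 has rank 2, its determinant should be 0.  Because of floating point roundoff, Matlab may print a tiny number (something on the order of 10^-16) instead of an exact 0; treat that as zero:

>> det(A)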

 

4.     Eigenvalue and Eigenvector Calculations

 

>>  eig(A)     computes the eigenvalues of A and puts them in a column matrix

 

ans =                                   (for the matrix defined as A at the top of this page)

 

   16.1168

   -1.1168

   -0.0000

 

>> [P,D] = eig(A)             creates a square matrix P with eigenvectors as columns and another matrix D with the eigenvalues on the diagonal

 

 

P =

 

    0.2320    0.7858    0.4082

    0.5253    0.0868   -0.8165

    0.8187   -0.6123    0.4082

 

 

D =

 

   16.1168         0         0

         0   -1.1168         0

         0         0   -0.0000

 

This turns out to be absolutely perfect for the approach that Kolman takes in the textbook!!  The Principal Axis Theorem of Chapter 8 states that these new matrices are related to the original matrix A by the key formula

 

                                    A = P D P^(-1)

 

and we can easily demonstrate this in Matlab with minimal effort by now entering the command

 

>> P*D*inv(P)

 

and getting the output

 

ans =

 

    1.0000    2.0000    3.0000

    4.0000    5.0000    6.0000

    7.0000    8.0000    9.0000

 

which is the original matrix A !!!
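
As one more check of the eigenvector definition, each column p of P should satisfy A p = (eigenvalue) p, which written all at once is A*P = P*D.  Subtracting the two products should therefore give (essentially) the zero matrix:

>> A*P - P*D

(every entry will be 0 or a tiny roundoff number)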