In an earlier handout, we hinted at the possibility of solving a matrix equation $A\mathbf{x} = \mathbf{b}$ by multiplying both sides of the equation by an inverse of the matrix $A$. Now we are going to define the inverse matrix and see how to compute it. First, we need to define another concept:
The $n \times n$ identity matrix $I$ (sometimes denoted $I_n$, if the dimension is not clear from context) is the $n \times n$ matrix that has 1's down its main diagonal and 0's everyplace else:
$$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}.$$
The identity matrices are the 1's (the ``units,'' in technical mathematical language) of matrix multiplication. That is, if $I$ is the identity matrix, and $A$ and $B$ are any matrices of the right dimensions so that the following products are defined, then
$$AI = A \qquad \text{and} \qquad IB = B.$$
Now an $n \times n$ matrix $A$ has an inverse if there is an $n \times n$ matrix $B$ such that
$$AB = BA = I.$$
In that case $B$ is called the inverse of $A$ and is written $A^{-1}$.
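For instance, we can verify the definition directly for a particular pair of $2 \times 2$ matrices (chosen here just for illustration):
$$\begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
so each of these matrices is the inverse of the other.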
Now that we have a definition of a matrix inverse, how do we find one? First, we need a small but important fact:
Fact: If $A$ and $B$ are $n \times n$ matrices and $AB = I$, then also $BA = I$. (Remember that in general, $AB \neq BA$, so the truth of this fact is not obvious.)
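A quick numerical illustration of the fact (not a proof, and with matrices chosen just for the purpose); a small helper multiplies two matrices given as lists of rows:

```python
def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[2, 1],
     [1, 1]]
B = [[1, -1],
     [-1, 2]]
I = [[1, 0],
     [0, 1]]

print(matmul(A, B) == I)  # True: AB = I ...
print(matmul(B, A) == I)  # True: ... and then BA = I as well
```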
This fact means that if we can find an $n \times n$ matrix $B$ with the property that $AB = I$, then we will know that $BA = I$ also, so $B = A^{-1}$. And (guess what?) we know how to solve the matrix equation $AB = I$: write down the augmented matrix $[\,A \mid I\,]$, row-reduce it, then look at the equivalent matrix equation obtained from the new, row-reduced augmented matrix.
For example, let's try to find an inverse to the matrix
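Since the matrix from the original example is not shown here, a representative computation (with a $2 \times 2$ matrix chosen for illustration) proceeds like this:
$$\left(\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \end{array}\right) \xrightarrow{R_1 \to R_1 - R_2} \left(\begin{array}{cc|cc} 1 & 0 & 1 & -1 \\ 1 & 1 & 0 & 1 \end{array}\right) \xrightarrow{R_2 \to R_2 - R_1} \left(\begin{array}{cc|cc} 1 & 0 & 1 & -1 \\ 0 & 1 & -1 & 2 \end{array}\right).$$
The left block has become the identity, and the right block is the inverse:
$$\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}.$$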
This is one way things can work out when we try to find the inverse of a matrix $A$. Here's another: let us try to find an inverse to the matrix
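Again the original matrix is not shown, but here is what the failure mode looks like for an illustrative singular matrix:
$$\left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 2 & 4 & 0 & 1 \end{array}\right) \xrightarrow{R_2 \to R_2 - 2R_1} \left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 0 & -2 & 1 \end{array}\right).$$
The left block now has a row consisting entirely of zeroes, so it can never row-reduce to the identity, and this matrix has no inverse.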
These are the two possibilities. We can collect this information into a procedure:
To find the inverse of a square matrix $A$, write down the matrix $[\,A \mid I\,]$ and then row-reduce it. Either it will row-reduce to a matrix of the form $[\,I \mid B\,]$, in which case $B = A^{-1}$, or it will row-reduce to a matrix of the form $[\,C \mid D\,]$ where $C$ has a row consisting entirely of zeroes, in which case $A$ has no inverse.
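This procedure can be sketched in code. The following is a minimal implementation of Gauss-Jordan elimination on $[\,A \mid I\,]$, using exact rational arithmetic (`fractions.Fraction`) so that no rounding error creeps in; the function name and interface are our own choices for illustration:

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix via Gauss-Jordan elimination on [A | I].

    Returns the inverse as a list of rows of Fractions, or None if A
    is not invertible (the left block develops a row of zeroes).
    """
    n = len(A)
    # Build the augmented matrix [A | I] with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1) if j == i else Fraction(0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # no pivot available: A is singular
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # M is now [I | B]; the right block B is the inverse.
    return [row[n:] for row in M]

print(invert([[2, 1], [1, 1]]))   # the inverse [[1, -1], [-1, 2]], as Fractions
print(invert([[1, 2], [2, 4]]))  # None: a row of zeroes appears on the left
```

Using `Fraction` rather than floating point mirrors the hand computation exactly: the two branches of the procedure (invertible versus a zero row) are decided by exact comparisons, not by tolerances.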