How do you find the inverse of #A = ((1, 1, 2), (2, 2, 2), (2, 1, 1))#?
3 Answers
# A^-1 = ( (0, -1/2, 1), (-1, 3/2, -1), (1, -1/2, 0) ) #
Explanation:
We find the inverse of a matrix #A# using the following steps (a code sketch of these steps follows this list):
- Calculate the Matrix of Minors, #M#
- Form the Matrix of Cofactors, #cof(A)#
- Form the adjoint matrix, #adj(A)#
- Multiply #adj(A)# by #1/|A|# to form the inverse #A^-1#
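Here is a minimal Python sketch of these four steps, using exact `Fraction` arithmetic (the function names `minor`, `det` and `inverse` are illustrative, not from any particular library):

```python
from fractions import Fraction

def minor(m, i, j):
    # The submatrix left after "striking out" row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    # Laplace expansion about the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def inverse(m):
    n = len(m)
    d = det(m)  # |A| = -2 for the matrix in this question
    # cof(A)[i][j] = (-1)^(i+j) * minor determinant; adj(A) = cof(A)^T
    cof = [[(-1) ** (i + j) * det(minor(m, i, j)) for j in range(n)] for i in range(n)]
    adj = [[cof[j][i] for j in range(n)] for i in range(n)]
    return [[Fraction(adj[i][j], d) for j in range(n)] for i in range(n)]

A = [[1, 1, 2], [2, 2, 2], [2, 1, 1]]
for row in inverse(A):
    print([str(x) for x in row])
# ['0', '-1/2', '1']
# ['-1', '3/2', '-1']
# ['1', '-1/2', '0']
```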
At some point we need to calculate the determinant, #|A|#. With
# A=((1, 1, 2), (2, 2, 2), (2, 1, 1)) # ,
if we expand about the first row, "striking out" the row and column of each element to form a smaller determinant and alternating signs, we get:
# |A| = +(1)|(2, 2), (1, 1)| -(1) |(2, 2), (2, 1)| +(2)|(2, 2), (2, 1)| #
# \ \ \ \ \ = {(2)(1)-(1)(2)} -{(2)(1)-(2)(2)} +2{(2)(1)-(2)(2)} #
# \ \ \ \ \ = 0 - (-2) +2(-2) #
# \ \ \ \ \ = -2 #
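As a quick numerical cross-check of this hand calculation (a sketch using `numpy`, which the answer itself does not rely on):

```python
import numpy as np

A = np.array([[1, 1, 2], [2, 2, 2], [2, 1, 1]])
print(np.linalg.det(A))  # approximately -2.0, up to floating-point rounding
```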
Next we form the Matrix of Minors, #M#, where each entry is the determinant left after "striking out" that entry's row and column:
#M=( ( |(2, 2), (1, 1)|, |(2, 2), (2, 1)|, |(2, 2), (2, 1)| ), ( |(1, 2), (1, 1)|, |(1, 2), (2, 1)|, |(1, 1), (2, 1)| ), ( |(1, 2), (2, 2)|, |(1, 2), (2, 2)|, |(1, 1), (2, 2)| ) )#
#\ \ \ \ = ( (0, -2, -2), (-1, -3, -1), (-2, -2, 0) )#
We now form the matrix of cofactors by applying the pattern of signs
# ( (+, -, +), (-, +, -), (+, -, +) )#
where we change the sign of the elements in the minus positions to get:
# cof(A)= ( (0, 2, -2), (1, -3, 1), (-2, 2, 0) ) #
Then we form the adjoint matrix by transposing the matrix of cofactors,
#adj(A) = cof(A)^T#
#\ \ \ \ \ \ \ \ \ \ \ = ( (0, 2, -2), (1, -3, 1), (-2, 2, 0) )^T #
#\ \ \ \ \ \ \ \ \ \ \ = ( (0, 1, -2), (2, -3, 2), (-2, 1, 0) ) #
And then finally we multiply by the reciprocal of the determinant to get:
#A^-1 = 1/|A| adj(A)#
#\ \ \ \ \ \ \ = (-1/2) ( (0, 1, -2), (2, -3, 2), (-2, 1, 0) ) #
#\ \ \ \ \ \ \ = ( (0, -1/2, 1), (-1, 3/2, -1), (1, -1/2, 0) ) #
We can easily check that this is the correct answer, as we should have #A A^-1 = I#.
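For example, a quick check in Python (a sketch; using `numpy` is an assumption, not part of the answer):

```python
import numpy as np

A = np.array([[1, 1, 2], [2, 2, 2], [2, 1, 1]])
A_inv = np.array([[0, -1/2, 1], [-1, 3/2, -1], [1, -1/2, 0]])
print(np.allclose(A @ A_inv, np.eye(3)))  # True: A A^-1 = I
```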
Explanation:
Here's the row reduction way, with less to remember...
switch R2 and R3
Explanation:
Here's one method:
Given:
#A = ((1, 1, 2),(2, 2, 2),(2, 1, 1))#
Make an augmented matrix by adding three columns containing the entries of a #3xx3# identity matrix:
#((1, 1, 2, |, 1, 0, 0),(2, 2, 2, |, 0, 1, 0), (2, 1, 1, |, 0, 0, 1))#
Perform a series of row operations to make the left half of this augmented matrix into an identity matrix.
Subtract row 3 from row 2 to get:
#((1, 1, 2, |, 1, 0, 0),(0, 1, 1, |, 0, 1, -1), (2, 1, 1, |, 0, 0, 1))#
Add row 2 to row 3 to get:
#((1, 1, 2, |, 1, 0, 0),(0, 1, 1, |, 0, 1, -1), (2, 2, 2, |, 0, 1, 0))#
Divide row 3 by #2# to get:
#((1, 1, 2, |, 1, 0, 0),(0, 1, 1, |, 0, 1, -1), (1, 1, 1, |, 0, 1/2, 0))#
Subtract row 1 from row 3 to get:
#((1, 1, 2, |, 1, 0, 0),(0, 1, 1, |, 0, 1, -1), (0, 0, -1, |, -1, 1/2, 0))#
Subtract row 2 from row 1 to get:
#((1, 0, 1, |, 1, -1, 1),(0, 1, 1, |, 0, 1, -1), (0, 0, -1, |, -1, 1/2, 0))#
Add row 3 to row 1 and row 2 to get:
#((1, 0, 0, |, 0, -1/2, 1),(0, 1, 0, |, -1, 3/2, -1), (0, 0, -1, |, -1, 1/2, 0))#
Multiply row 3 by #-1# to get:
#((1, 0, 0, |, 0, -1/2, 1),(0, 1, 0, |, -1, 3/2, -1), (0, 0, 1, |, 1, -1/2, 0))#
Now we can read off the inverse #A^(-1)# from the right hand side of the augmented matrix:
#A^(-1) = ((0, -1/2, 1),(-1, 3/2, -1), (1, -1/2, 0))#
One of the advantages of this method is that it works for square matrices of any size.
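That remark translates directly into a general routine. Below is a minimal Gauss-Jordan sketch in Python for any #n xx n# matrix, using exact `Fraction` arithmetic (it assumes a nonzero pivot can always be found, i.e. that the matrix is invertible):

```python
from fractions import Fraction

def inverse(a):
    n = len(a)
    # Build the augmented matrix [A | I]
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Swap a row with a nonzero entry into the pivot position
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # Scale the pivot row so the pivot entry becomes 1
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        # Eliminate the pivot column from every other row
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    # The left half is now the identity; the right half is A^-1
    return [row[n:] for row in m]

A = [[1, 1, 2], [2, 2, 2], [2, 1, 1]]
for row in inverse(A):
    print([str(x) for x in row])
# ['0', '-1/2', '1']
# ['-1', '3/2', '-1']
# ['1', '-1/2', '0']
```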