# How do you solve linear systems using matrices?

Mar 26, 2016

Consider a system of $m$ linear equations with $n$ variables, ${x}_{1}$ through ${x}_{n}$:

$\left\{\begin{matrix}{a}_{1 1} {x}_{1} + {a}_{1 2} {x}_{2} + \ldots + {a}_{1 n} {x}_{n} = {b}_{1} \\ {a}_{2 1} {x}_{1} + {a}_{2 2} {x}_{2} + \ldots + {a}_{2 n} {x}_{n} = {b}_{2} \\ \ldots \\ {a}_{m 1} {x}_{1} + {a}_{m 2} {x}_{2} + \ldots + {a}_{m n} {x}_{n} = {b}_{m}\end{matrix}\right.$

Notice that we can swap the positions of two equations, multiply both sides of an equation by a nonzero constant, or add a multiple of one equation to another (the left-hand side to the left-hand side, and the right-hand side to the right-hand side); in each of these cases, the solution set of the system remains the same.

Now, using matrix multiplication, we find that we can rewrite the system as a matrix equation:

$\left(\begin{matrix}{a}_{1 1} & {a}_{1 2} & \ldots & {a}_{1 n} \\ {a}_{2 1} & {a}_{2 2} & \ldots & {a}_{2 n} \\ \ldots & \ldots & \ldots & \ldots \\ {a}_{m 1} & {a}_{m 2} & \ldots & {a}_{m n}\end{matrix}\right) \left(\begin{matrix}{x}_{1} \\ {x}_{2} \\ \ldots \\ {x}_{n}\end{matrix}\right) = \left(\begin{matrix}{b}_{1} \\ {b}_{2} \\ \ldots \\ {b}_{m}\end{matrix}\right)$
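
To make the matrix form concrete, here is a small sketch in Python with NumPy, using a made-up $2 \times 2$ system (not one from this answer): the matrix-vector product $A x$ reproduces the left-hand sides of the equations.

```python
import numpy as np

# A hypothetical 2x2 system:  2*x1 + x2 = 5,  x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # coefficient matrix
x = np.array([1.0, 3.0])     # candidate solution x1 = 1, x2 = 3
b = np.array([5.0, 10.0])    # constant terms

# A @ x computes a_11*x1 + a_12*x2 and a_21*x1 + a_22*x2,
# i.e. exactly the left-hand sides of the two equations.
print(np.allclose(A @ x, b))  # True: x solves the system
```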

If we perform any of the three operations mentioned before -- swapping positions, multiplying by a nonzero constant, or adding a multiple of one equation to another -- the same operation is applied both to the coefficient matrix and to the column of constant ${b}_{i}$ terms, while the column of variables remains unchanged. To focus on the parts that do change, we combine the coefficient matrix and the ${b}_{i}$ column into an augmented matrix:

$\left(\begin{matrix}{a}_{1 1} & {a}_{1 2} & \ldots & {a}_{1 n} & | & {b}_{1} \\ {a}_{2 1} & {a}_{2 2} & \ldots & {a}_{2 n} & | & {b}_{2} \\ \ldots & \ldots & \ldots & \ldots & | & \ldots \\ {a}_{m 1} & {a}_{m 2} & \ldots & {a}_{m n} & | & {b}_{m}\end{matrix}\right)$

Once we have the augmented matrix, we use those three operations -- swapping two rows, multiplying a row by a nonzero constant, or adding a multiple of one row to another -- to transform the matrix into row echelon form or reduced row echelon form. A standard way of doing this is Gaussian or Gauss-Jordan elimination.
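
Gauss-Jordan elimination can be sketched in a few lines of Python. The function below (`rref` is a hypothetical helper written for this answer, not a library routine) applies exactly the three row operations above, using exact rational arithmetic so there is no floating-point round-off:

```python
from fractions import Fraction

def rref(rows):
    """Reduce an augmented matrix to reduced row echelon form
    by Gauss-Jordan elimination with exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0])
    pivot_row = 0
    for col in range(ncols - 1):          # last column holds the b_i terms
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, nrows) if m[r][col] != 0), None)
        if pr is None:
            continue                       # no pivot in this column
        m[pivot_row], m[pr] = m[pr], m[pivot_row]          # operation 1: swap rows
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]   # operation 2: scale pivot to 1
        for r in range(nrows):             # operation 3: clear the column elsewhere
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == nrows:
            break
    return m
```

For example, for the augmented matrix of $\left\{{x}_{1} + {x}_{2} = 1 , \setminus {x}_{1} - {x}_{2} = 0\right\}$, `rref([[1, 1, 1], [1, -1, 0]])` returns the rows `[1, 0, 1/2]` and `[0, 1, 1/2]`, i.e. ${x}_{1} = {x}_{2} = \frac{1}{2}$.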

Finally, we need to interpret the result. We will assume the matrix is in reduced row echelon form, so there is no need for back-substitution or for converting back to the original equation form.

• Any rows with all $0$s may be discarded, as they provide no information.

• If any row contains all $0$s in the left portion, but a nonzero constant on the right, then the system has no solution
(it is equivalent to saying $0 {x}_{1} + 0 {x}_{2} + \ldots + 0 {x}_{n} = c$ where $c \ne 0$, which is a contradiction). One system which would produce this result is
$\left\{\begin{matrix}{x}_{1} + {x}_{2} = 1 \\ {x}_{1} + {x}_{2} = 2\end{matrix}\right.$
(Is it clear why this cannot have a solution?)

• If the left portion of some row contains more than one nonzero entry (so some variable has no pivot of its own and is free to take any value), and the previous situation does not occur, then the system has infinitely many solutions. One system which would produce this result is
$\left\{\begin{matrix}{x}_{1} + {x}_{2} + {x}_{3} = 0 \\ {x}_{1} + {x}_{2} - {x}_{3} = 0\end{matrix}\right.$
(Is it clear why this has infinitely many solutions?)
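
The no-solution and infinitely-many-solutions cases above can also be detected numerically by comparing matrix ranks (the Rouché-Capelli theorem): the system is inconsistent when the augmented matrix has larger rank than the coefficient matrix, and has infinitely many solutions when the common rank is less than the number of variables. A sketch with NumPy, where the function name `classify` is made up for illustration:

```python
import numpy as np

def classify(A, b):
    """Classify a linear system A x = b by comparing the rank of A,
    the rank of the augmented matrix [A | b], and the variable count."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    aug = np.hstack([A, b])                      # the augmented matrix
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    n = A.shape[1]                               # number of variables
    if r_A < r_aug:
        return "no solution"                     # a row like (0 ... 0 | c), c != 0
    return "unique solution" if r_A == n else "infinitely many solutions"
```

Both example systems above classify as expected: `classify([[1, 1], [1, 1]], [1, 2])` returns `"no solution"`, and `classify([[1, 1, 1], [1, 1, -1]], [0, 0])` returns `"infinitely many solutions"`.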

• If neither of the prior situations occurred, then after removing any all-$0$ rows, the final result should look like this:

$\left(\begin{matrix}1 & 0 & 0 & \ldots & 0 & | & {c}_{1} \\ 0 & 1 & 0 & \ldots & 0 & | & {c}_{2} \\ 0 & 0 & 1 & \ldots & 0 & | & {c}_{3} \\ \ldots & \ldots & \ldots & \ldots & \ldots & | & \ldots \\ 0 & 0 & 0 & \ldots & 1 & | & {c}_{n}\end{matrix}\right)$

and the solution to the original system is

$\left\{\begin{matrix}{x}_{1} = {c}_{1} \\ {x}_{2} = {c}_{2} \\ \ldots \\ {x}_{n} = {c}_{n}\end{matrix}\right.$
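
In practice, once you know a square system has a unique solution, a linear-algebra library can solve it in one call rather than by hand row-reduction. For example, with NumPy:

```python
import numpy as np

# Solve { x1 + x2 = 1,  x1 - x2 = 0 } directly.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)   # requires A to be square and invertible
print(x)                    # [0.5 0.5], i.e. x1 = x2 = 1/2
```

Internally, routines like this use an LU factorization, which is essentially Gaussian elimination organized for numerical stability.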