Linear Algebra Note #2: Elimination
This note is based on MIT 18.06📒
Content
- #1: Row, Column & Matrix
- #2: Elimination👈
Elimination Operation
Elimination is used to solve systems of linear equations.
Given 3 linear equations:
\[ \begin{aligned} x+2y+z &= 2\\\\ 3x+8y+z &= 12\\\\ 4y+z &= 2 \end{aligned} \]
Represent them as an augmented matrix:
\[ \begin{gathered} \begin{bmatrix} 1 & 2 & 1 & | & 2 \\\\ 3 & 8 & 1 & | & 12 \\\\ 0 & 4 & 1 & | & 2 \end{bmatrix} \end{gathered} \]
Elimination
First, choose a pivot, multiply its row by a suitable factor, and subtract it from the rows below, so that the subtraction knocks out the corresponding coefficient in those equations. This also changes the other elements in the rows being subtracted from.
Here, we choose \(A_{11}\) as the pivot, multiply its row by \(3\), and subtract it from the second row so the element under the pivot becomes \(0\):
\[ \begin{gathered} \begin{bmatrix} \underline{1} & 2 & 1 & | & 2 \\\\ 0^* & 2^* & -2^* & | & 6^* \\\\ 0 & 4 & 1 & | & 2 \end{bmatrix} \end{gathered} \]
Note: There is no need to subtract the third row as the element under the pivot is already \(0\).
Then, we select the second pivot in column 2, \(A_{22}\), multiply its row by \(2\), and subtract it from the third row:
\[ \begin{gathered} \begin{bmatrix} 1 & 2 & 1 & | & 2 \\\\ 0 & \underline{2} & -2 & | & 6 \\\\ 0 & 0^* & 5^* & | & -10^* \end{bmatrix} \end{gathered} \]
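The two elimination steps above can be sketched in NumPy (a minimal sketch, not part of the lecture):

```python
import numpy as np

# Augmented matrix [A | b] for the system above.
M = np.array([[1., 2., 1., 2.],
              [3., 8., 1., 12.],
              [0., 4., 1., 2.]])

# Step 1: pivot A11 = 1; subtract 3 * row 1 from row 2.
M[1] -= 3 * M[0]
# Step 2: pivot A22 = 2; subtract 2 * row 2 from row 3.
M[2] -= 2 * M[1]

print(M)
# [[  1.   2.   1.   2.]
#  [  0.   2.  -2.   6.]
#  [  0.   0.   5. -10.]]
```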
Back-Substitution
After repeated multiplication and subtraction, we reach the last pivot in the last row (\(A_{33}\) here). That row's equation involves only one unknown, with the pivot as its coefficient, so we can solve it directly:
\[ 5z=-10 \]
Substituting back row by row up to the top, we then get the solutions for \(y\) and \(x\) from the following equations, respectively:
\[ 2y-2z=6 \]
\[ x+2y+z=2 \]
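Back-substitution on the triangular system above can be sketched as a short loop (NumPy assumed; a sketch, not the lecture's code):

```python
import numpy as np

# Upper-triangular system U x = b produced by elimination.
U = np.array([[1., 2., 1.],
              [0., 2., -2.],
              [0., 0., 5.]])
b = np.array([2., 6., -10.])

# Solve from the last row upward: each row has only one new unknown.
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print(x)  # [ 2.  1. -2.]
```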
How could elimination fail?
When a pivot position holds \(0\) and no row exchange can fix it, even though we haven't gone through all the columns.
Explanation: this means the coefficient matrix lacks information in at least one dimension. In other words, its rows cannot fully represent the whole \(n\)-dimensional space.
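A hypothetical failing case (my own example, not from the lecture): make row 3 equal to row 1 plus row 2, so elimination runs out of pivots.

```python
import numpy as np

# Row 3 = row 1 + row 2, so the matrix carries only 2 dimensions of info.
A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [4., 10., 2.]])

A[1] -= 3 * A[0]   # knock out A21: row 2 -> [0, 2, -2]
A[2] -= 4 * A[0]   # knock out A31: row 3 -> [0, 2, -2]
A[2] -= 1 * A[1]   # knock out A32: row 3 -> [0, 0, 0]

print(A[2])  # [0. 0. 0.] -- no third pivot exists, elimination fails
```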
Matrix Operation
Column Combination
Column combination of a matrix \(A\) can be expressed as \(Ax\), which is actually the linear combination of the columns of \(A\):
\[ \begin{gathered} \begin{bmatrix} | & | & | \\\\ | & | & | \\\\ | & | & | \end{bmatrix} \begin{bmatrix} 3 \\\\ 4 \\\\ 5 \end{bmatrix} = 3 \mathrm{col_1} + 4 \mathrm{col_2} + 5 \mathrm{col_3} \end{gathered} \]
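We can check this identity numerically (a sketch with an arbitrary \(A\); NumPy assumed):

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [0., 4., 1.]])
x = np.array([3., 4., 5.])

# A @ x equals the same combination built column by column.
combo = 3 * A[:, 0] + 4 * A[:, 1] + 5 * A[:, 2]
print(np.allclose(A @ x, combo))  # True
```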
Row Combination
Row combination of a matrix \(A\) can be expressed as \(xA\) (with \(x\) a row vector), which is actually the linear combination of the rows of \(A\):
\[ \begin{gathered} \begin{bmatrix} 1 & 2 & 7 \end{bmatrix} \begin{bmatrix} - & - & - \\\\ - & - & - \\\\ - & - & - \end{bmatrix} = 1 \mathrm{row_1} + 2 \mathrm{row_2} + 7 \mathrm{row_3} \end{gathered} \]
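The row-side identity can be checked the same way (again a sketch with an arbitrary \(A\)):

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [0., 4., 1.]])
x = np.array([1., 2., 7.])

# x @ A (row vector times matrix) equals the same combination of rows.
combo = 1 * A[0] + 2 * A[1] + 7 * A[2]
print(np.allclose(x @ A, combo))  # True
```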
Express Elimination as Matrix Operation
Each row-operation step can be expressed as an Elementary Matrix \(E\). Here is an example of \(E_{21}A\), where \(E_{21}\) aims to knock out the element at \(A_{21}\):
\[ \begin{gathered} \begin{bmatrix} 1 & 0 & 0 \\\\ -3 & 1 & 0 \\\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 \\\\ 3 & 8 & 1 \\\\ 0 & 4 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 1 \\\\ 0 & 2 & -2 \\\\ 0 & 4 & 1 \end{bmatrix} \end{gathered} \]
Insight:
- Each row in \(E\) corresponds to the same row in the result matrix.
- The elements of each row in \(E\) are the coefficients of the linear combination of rows in \(A\).
- Summary: each row in \(E\) corresponds to a linear combination of rows in \(A\).
Repeat the elimination operation:
\[ E_{32}(E_{21}A) = U \]
Applying the associative law, we can use a single matrix to do the whole job:
\[ E_{32}(E_{21}A) = (E_{32}E_{21})A = EA = U \]
Note: the commutative law does not hold for matrix multiplication, so the order of the \(E\)'s must be preserved.
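The whole pipeline can be verified numerically (a sketch; \(E_{32}\) here is my own matrix for the step that subtracts \(2\times\) row 2 from row 3):

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [3., 8., 1.],
              [0., 4., 1.]])
E21 = np.array([[1., 0., 0.],
                [-3., 1., 0.],
                [0., 0., 1.]])
E32 = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., -2., 1.]])

U = E32 @ (E21 @ A)
E = E32 @ E21          # associativity: one matrix does the whole job
print(np.allclose(E @ A, U))  # True
print(U)
# [[ 1.  2.  1.]
#  [ 0.  2. -2.]
#  [ 0.  0.  5.]]
```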
Permutation Matrix \(P\)
Permutation matrix \(P\) in \(PA\) is used to exchange the rows of A:
\[ \begin{gathered} \begin{bmatrix} 0 & 1 \\\\ 1 & 0 \end{bmatrix} \begin{bmatrix} a & b \\\\ c & d \end{bmatrix} = \begin{bmatrix} c & d \\\\ a & b \end{bmatrix} \end{gathered} \]
We can acquire the needed \(P\) easily by exchanging the rows of an identity matrix, because:
\[ PI=P \]
A permutation matrix multiplied on the right side of \(A\) exchanges its columns instead:
\[ \begin{gathered} \begin{bmatrix} a & b \\\\ c & d \end{bmatrix} \begin{bmatrix} 0 & 1 \\\\ 1 & 0 \end{bmatrix} = \begin{bmatrix} b & a \\\\ d & c \end{bmatrix} \end{gathered} \]
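Both directions in one sketch (numbers substituted for the symbolic \(a,b,c,d\)):

```python
import numpy as np

P = np.array([[0, 1],
              [1, 0]])
A = np.array([[1, 2],
              [3, 4]])

print(P @ A)  # rows exchanged:    [[3 4] [1 2]]
print(A @ P)  # columns exchanged: [[2 1] [4 3]]
```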
Inverse of Elimination
How do we get from \(U\) back to \(A\)?
Apply the inverse matrix of \(E\), \(E^{-1}\), which can be seen as undoing \(E\)'s operation.
E.g., if \(E\) represents the operation of subtracting 3 times row 1 from row 2, then its inverse \(E^{-1}\) should add 3 times row 1 back to row 2:
\[ \begin{gathered} \begin{bmatrix} 1 & 0 & 0 \\\\ 3 & 1 & 0 \\\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\\\ -3 & 1 & 0 \\\\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \end{bmatrix} \end{gathered} \]
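We can confirm that this undo matrix is exactly the inverse NumPy computes (a sketch):

```python
import numpy as np

E = np.array([[1., 0., 0.],
              [-3., 1., 0.],
              [0., 0., 1.]])
E_inv = np.array([[3 if (i, j) == (1, 0) else float(i == j)
                   for j in range(3)] for i in range(3)])

print(np.allclose(E_inv @ E, np.eye(3)))          # True: undo then do = I
print(np.allclose(E_inv, np.linalg.inv(E)))       # True: matches inv(E)
```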