Gaussian Elimination
The Gaussian elimination algorithm, sometimes called the row reduction algorithm, was originally developed to solve systems of linear equations using a few simple operations on the rows of a matrix. The allowed operations are:
- Swapping two rows
- Multiplying a row by a non-zero number
- Adding one row to another row. Combined with the previous operation of multiplying a row, this lets you add a multiple of one row to another. The multiple can also be negative, which amounts to subtracting a multiple of one row from another row.
These operations are allowed because they do not change the solution set of the system of linear equations. When you solve a system of linear equations the conventional way, using algebra, you are performing the same operations, just written differently.
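As a small sketch of these three operations in code, here using NumPy on an arbitrary example matrix (the numbers carry no special meaning):

```python
import numpy as np

# An arbitrary example matrix; any matrix would do.
A = np.array([
    [ 2.0,  1.0, -1.0],
    [ 4.0, -6.0,  0.0],
    [-2.0,  7.0,  2.0],
])

# 1. Swap two rows (here rows 0 and 1).
A[[0, 1]] = A[[1, 0]]

# 2. Multiply a row by a non-zero number (here row 0 by 1/2).
A[0] *= 0.5

# 3. Add a multiple of one row to another row
#    (here subtract 2 times row 0 from row 1, i.e. add -2 times row 0).
A[1] -= 2 * A[0]

print(A)
```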
Given the system of linear equations:
You would probably solve it by eliminating one variable at a time. For example, let's solve the first equation for :
Now we can substitute this into the second equation:
and the third equation:
The above step could also have been achieved by adding/subtracting the correct multiple of the first equation to/from the second and third equations, which is what the Gaussian elimination algorithm does. To remove the variable from the second equation, we could subtract the first equation multiplied by from the second equation:
First we multiply the first equation by :
Now we subtract this from the second equation:
And we have the same result as before. The same process can be repeated for the third equation. We can then repeat it once more to eliminate the variable from the third equation by adding/subtracting the correct multiple of the second equation to/from it.
Then, to finally solve for the variables, we can perform back substitution, i.e. take the found value for and substitute it into the second equation to solve for , and then substitute the found values for and into the first equation to solve for .
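As a minimal sketch of this whole procedure on a small made-up system (the numbers are purely illustrative):

```latex
\begin{align*}
x + y + z &= 6, & 2x + 3y + z &= 11, & 2x + y + 2z &= 10 \\
\intertext{Subtract $2\times$ the first equation from the second and the third to eliminate $x$:}
x + y + z &= 6, & y - z &= -1, & -y &= -2 \\
\intertext{Add the second equation to the third to eliminate $y$:}
x + y + z &= 6, & y - z &= -1, & -z &= -3 \\
\intertext{Back substitution, from the last equation upwards:}
z &= 3, & y &= z - 1 = 2, & x &= 6 - y - z = 1
\end{align*}
```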
Row Echelon Form
Above we have seen the idea behind the algorithm, but before looking at the algorithm itself, we need to define what it means for a matrix to be in row echelon form. A matrix is in row echelon form if it satisfies the following conditions:
- All rows that only contain zeros are at the bottom of the matrix.
- The first nonzero element in each row, called the leading entry or pivot, is to the right of all the pivots of the rows above it.
In row echelon form:
Notice that in the third row the pivot is not directly to the right of the pivot in the second row, but it is still to the right of it; it doesn't matter if there are zeros in between.
Not in row echelon form:
Because in the third row, the pivot is not to the right of the pivot in the second row.
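Since the two conditions are easy to state programmatically, here is a small sketch of a checker (the function name is_row_echelon is just illustrative, not a library routine):

```python
import numpy as np

def is_row_echelon(A: np.ndarray) -> bool:
    """Check the two row echelon form conditions for a matrix A."""
    last_pivot_col = -1      # column of the pivot in the previous row
    seen_zero_row = False    # whether a zero row has appeared already
    for row in A:
        nonzero = np.nonzero(row)[0]
        if nonzero.size == 0:
            # Zero rows are allowed, but only at the bottom.
            seen_zero_row = True
            continue
        if seen_zero_row:
            return False     # nonzero row below a zero row
        pivot_col = nonzero[0]
        if pivot_col <= last_pivot_col:
            return False     # pivot not strictly to the right of the one above
        last_pivot_col = pivot_col
    return True

print(is_row_echelon(np.array([[1, 2, 3], [0, 0, 4], [0, 0, 0]])))  # True
print(is_row_echelon(np.array([[1, 2, 3], [0, 4, 5], [0, 6, 0]])))  # False
```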
Reduced Row Echelon Form
Reduced row echelon form is a stricter form of row echelon form. A matrix is in reduced row echelon form if it satisfies the following conditions:
- It is in row echelon form.
- The pivot in each row is equal to 1.
- Each pivot is the only nonzero entry in its column.
Solving Linear Systems
Originally, the Gaussian elimination algorithm was developed to solve systems of linear equations.
In practice it is most often applied to square systems, i.e. systems with as many equations as unknowns, although the algorithm itself works for matrices of any shape.
Before the algorithm can be used to solve a system of linear equations, the system must be transformed into a matrix, or more specifically, into an augmented matrix. An augmented matrix contains the coefficients of the variables on the left side of the vertical line and the constants on the right side.
The system of linear equations:
Has the coefficient matrix:
And the augmented matrix:
Once we have the augmented matrix, we can use the Gaussian elimination algorithm to solve the system of linear equations: we use the allowed operations to transform the augmented matrix into row echelon form. Once the augmented matrix is in row echelon form, we can easily solve the system by back substitution.
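A minimal sketch of this forward-elimination phase in code, bringing an augmented matrix into row echelon form (the helper name forward_eliminate and the example numbers are made up for illustration):

```python
import numpy as np

def forward_eliminate(aug: np.ndarray) -> np.ndarray:
    """Bring an augmented matrix [A | b] into row echelon form."""
    A = aug.astype(float)
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols - 1):          # the last column holds the constants
        # Find a row (at or below pivot_row) with a nonzero entry in this column.
        candidates = np.nonzero(A[pivot_row:, col])[0]
        if candidates.size == 0:
            continue                      # no pivot in this column
        # Swap that row up so it becomes the pivot row.
        swap = pivot_row + candidates[0]
        A[[pivot_row, swap]] = A[[swap, pivot_row]]
        # Eliminate every entry below the pivot.
        for r in range(pivot_row + 1, rows):
            A[r] -= (A[r, col] / A[pivot_row, col]) * A[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

# Made-up system:  x + y + z = 6,  2x + 3y + z = 11,  2x + y + 2z = 10
aug = np.array([[1.0, 1.0, 1.0,  6.0],
                [2.0, 3.0, 1.0, 11.0],
                [2.0, 1.0, 2.0, 10.0]])
print(forward_eliminate(aug))
# [[ 1.  1.  1.  6.]
#  [ 0.  1. -1. -1.]
#  [ 0.  0. -1. -3.]]
```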
The Gaussian elimination algorithm works because the allowed operations do not change the solution set of the system of linear equations. For swapping this is obvious: simply changing the order of the equations cannot change the solutions. To see why adding one equation to another is allowed, imagine you have the following system of linear equations:
You can't simply add 100 to one side, because you have to add the same amount to both sides of an equation; the equation would then no longer be in its standard form, and bringing it back into standard form would mean subtracting the 100 again, which results in a whole lot of nothing.
However, what you can do is add the whole equation: add its left-hand side to the left-hand side of the first equation and add 100 to the right-hand side, because that left-hand side is equal to 100. You can think of it as adding 100 to both sides while keeping the equation in its standard form.
Multiplying an equation by a nonzero number is allowed because it is simply multiplying both sides of the equation by the same number, which doesn't change its solutions at all.
Combining these two operations, we can see that we can add a multiple of one equation to another equation and it will not change the solution set!
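As a tiny illustration with made-up numbers: any pair (x, y) that satisfies both equations also satisfies the new equation obtained by subtracting a multiple of one from the other, because equal quantities are subtracted from both sides, and the step can be undone by adding that multiple back:

```latex
\begin{align*}
x + y &= 5 \\
2x - y &= 1 \\[6pt]
\text{subtracting 2 times the first from the second:}\quad
(2x - y) - 2(x + y) &= 1 - 2 \cdot 5 \\
-3y &= -9
\end{align*}
```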
So now that we know why the Gaussian elimination algorithm works, let's actually solve our system of linear equations from above using it.
And we have our augmented matrix in row echelon form!
Why did we first swap the rows? Well, we did that because we wanted to make the first pivot 1. We could have also multiplied the first row by but that would have resulted in a fraction and we don't want that.
Why did we subtract 2 times the first row from the second row? Well, we did that because we wanted to make the entry below the first pivot 0, so that the pivot of the second row moves to the right.
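To make these steps concrete, here is a sketch with purely illustrative numbers, chosen only so that the operations described above (a swap, then subtracting 2 times the first row) apply:

```latex
\[
\left[\begin{array}{cc|c}
2 & 9 & 13 \\
1 & 2 & 4
\end{array}\right]
\;\xrightarrow{R_1 \leftrightarrow R_2}\;
\left[\begin{array}{cc|c}
1 & 2 & 4 \\
2 & 9 & 13
\end{array}\right]
\;\xrightarrow{R_2 \to R_2 - 2R_1}\;
\left[\begin{array}{cc|c}
1 & 2 & 4 \\
0 & 5 & 5
\end{array}\right]
\]
```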
Back Substitution
Now that we have our augmented matrix in row echelon form, we can solve the system of linear equations by back substitution. Back substitution is the process of solving for the variables starting from the bottom of the matrix and working our way up. We can do this in two ways: either by using algebra, or by continuing the Gaussian elimination even further.
We can take the bottom row of the augmented matrix and, thanks to the row echelon form, quickly solve it for by dividing both sides by 5.
Now that we know , we can substitute it into the first row of the augmented matrix and solve for .
Alternatively, to completely solve the system of linear equations, we can use the Gaussian elimination algorithm to further transform the augmented matrix from row echelon form into reduced row echelon form. This makes reading off the variables trivial.
And now we have our augmented matrix in reduced row echelon form! We can read off directly that and , and that is it!
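A minimal back-substitution sketch in code, assuming the augmented matrix is in row echelon form with a square coefficient part and nonzero pivots (so the solution is unique); applied here to the illustrative matrix from the sketch above:

```python
import numpy as np

def back_substitute(aug: np.ndarray) -> np.ndarray:
    """Solve [U | b] where U is upper triangular with nonzero pivots."""
    A = aug.astype(float)
    n = A.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the contributions of the already-solved variables,
        # then divide by the pivot on the diagonal.
        x[i] = (A[i, -1] - A[i, i + 1:n] @ x[i + 1:]) / A[i, i]
    return x

# Illustrative row echelon form from the sketch above: x + 2y = 4, 5y = 5.
print(back_substitute(np.array([[1.0, 2.0, 4.0],
                                [0.0, 5.0, 5.0]])))  # -> [2. 1.]
```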
Compacted Form
When working with large matrices, having to write out all the rows and columns for each operation can be cumbersome. To make this easier, it can be useful to compact the notation by only writing out the rows and columns that still need to be processed.
Gauss-Jordan Elimination
Reduced Row Echelon Form
Free Variables
The variables corresponding to columns that do not contain a pivot are called free variables. The number of free variables is equal to the number of columns of the coefficient matrix minus the number of pivots.
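As a small made-up illustration, consider an augmented matrix already in reduced row echelon form; the third column has no pivot, so the third variable is free and the solutions can be written in terms of one parameter t:

```latex
\[
\left[\begin{array}{ccc|c}
1 & 0 & 2 & 3 \\
0 & 1 & -1 & 4 \\
0 & 0 & 0 & 0
\end{array}\right]
\qquad\Longrightarrow\qquad
x_1 = 3 - 2t, \quad x_2 = 4 + t, \quad x_3 = t, \quad t \in \mathbb{R}
\]
```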
Number of Solutions
Using the rank of the matrix, we can determine the number of solutions of a system of linear equations.
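A brief summary of the standard rank criterion (the Rouché–Capelli theorem) for a system with coefficient matrix A, augmented matrix (A | b), and n unknowns:

```latex
\[
\begin{cases}
\operatorname{rank}(A) < \operatorname{rank}(A \mid b): & \text{no solution,} \\
\operatorname{rank}(A) = \operatorname{rank}(A \mid b) = n: & \text{exactly one solution,} \\
\operatorname{rank}(A) = \operatorname{rank}(A \mid b) < n: & \text{infinitely many solutions.}
\end{cases}
\]
```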
Meaning of regular and singular matrices???
Also something about homogeneous and non-homogeneous systems. Trivial solutions and non-trivial solutions.
Consistency conditions