Matrices
You can think of a matrix as a rectangle of objects arranged in rows and columns, like a chessboard. For computer scientists, you can also imagine a matrix as a 2D array. A matrix with $m$ rows and $n$ columns is called an $m \times n$ matrix. The number of rows and columns therefore defines the size or dimensions of a matrix. The objects in a matrix are commonly referred to as elements or entries. A matrix containing only real-numbered elements is called a real matrix and can be defined as follows:

$$\mathbf{A} = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} \in \mathbb{R}^{m \times n}$$
The elements of a matrix are indexed by their row and then their column. So $a_{ij}$, or sometimes also $[\mathbf{A}]_{ij}$, is the element in the $i$-th row and $j$-th column. This means that a matrix can also be defined as $\mathbf{A} = [a_{ij}]$.
The number of rows and columns of a matrix are always written in the order of rows first and columns second! So a matrix with 2 rows and 3 columns is written as a $2 \times 3$ matrix, not a $3 \times 2$ matrix, so $\mathbf{A} \in \mathbb{R}^{2 \times 3}$.
A matrix with only one row, i.e. a $1 \times n$ matrix, is called a row vector or n-tuple, and a matrix with only one column, i.e. an $m \times 1$ matrix, is called a column vector. When talking about vectors we usually refer to column vectors. A matrix that only has one row and one column can be thought of as a normal number and is called a scalar, and we therefore omit the matrix brackets. To differentiate between the three types we usually use bold capital letters for matrices, bold lower case letters for column vectors and normal lower case letters for scalars.
A column vector, i.e. our usual vector, can be written as follows:

$$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{bmatrix} \in \mathbb{R}^{m}$$
A row vector can be written as follows:

$$\mathbf{v} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \in \mathbb{R}^{1 \times n}$$
A scalar can be written as follows:

$$s \in \mathbb{R}$$
Square Matrix
As you can imagine, a square matrix is a matrix where the number of rows and columns are equal. The number of rows or columns is referred to as the order of the square matrix. So a square matrix of order $n$ is an $n \times n$ matrix, i.e. $\mathbf{A} \in \mathbb{R}^{n \times n}$. Square matrices have a number of useful properties and are therefore often used in many different applications.
Null/Zero Matrix
If a matrix only contains elements that are zero it is called a zero or null matrix. A null matrix is denoted as $\mathbf{0}$, as the dimensions are usually implied by the context. If you want to be explicit you can also write $\mathbf{0}_{m \times n}$ where $m$ is the number of rows and $n$ is the number of columns.
If the matrix is a column vector we usually use the bold lower case $\mathbf{0}$ to denote a null or zero vector.
Diagonal Matrix
A diagonal matrix is a square matrix where all the elements that are not on the main diagonal are zero. The main diagonal is the diagonal that goes from the top left to the bottom right of the matrix, in other words where the row and column index are equal, i.e. where $i = j$.
So a diagonal matrix is $\mathbf{D} = [d_{ij}]$ where $d_{ij} = 0$ for all $i \neq j$. If a diagonal matrix is defined by the diagonal elements $d_1, d_2, \ldots, d_n$ then the matrix can be written as:

$$\mathbf{D} = \operatorname{diag}(d_1, d_2, \ldots, d_n) = \begin{bmatrix}
d_1 & 0 & \cdots & 0 \\
0 & d_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_n
\end{bmatrix}$$
The definition of a diagonal matrix can also be extended to non-square matrices. In this case the matrix is still a diagonal matrix if all the elements that are not on the main diagonal are zero.
The following non-square matrices are still diagonal matrices:

$$\begin{bmatrix}
d_1 & 0 & 0 \\
0 & d_2 & 0
\end{bmatrix} \qquad \begin{bmatrix}
d_1 & 0 \\
0 & d_2 \\
0 & 0
\end{bmatrix}$$
Identity Matrix
An identity matrix, sometimes also called a unit matrix or the identity of order $n$, is a square diagonal matrix where all the diagonal elements are equal to $1$. The identity matrix is often denoted as $\mathbf{I}$ or $\mathbf{I}_n$, where $n$ is the number of rows and columns in the matrix.
Triangular Matrix
A triangular matrix is a square matrix where all the elements above or below the main diagonal are $0$. If all the elements above the diagonal are $0$ it is called a lower triangular matrix, because its non-zero elements lie only on or below the diagonal.
If all the elements below the diagonal are $0$ it is called an upper triangular matrix, because its non-zero elements lie only on or above the diagonal.
A lower triangular matrix $\mathbf{L}$:

$$\mathbf{L} = \begin{bmatrix}
l_{11} & 0 & \cdots & 0 \\
l_{21} & l_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & \cdots & l_{nn}
\end{bmatrix}$$
An upper triangular matrix $\mathbf{U}$:

$$\mathbf{U} = \begin{bmatrix}
u_{11} & u_{12} & \cdots & u_{1n} \\
0 & u_{22} & \cdots & u_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & u_{nn}
\end{bmatrix}$$
Matrix Operations
Matrix Addition
Two matrices can be added together if they have the same dimensions, i.e. $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{B} \in \mathbb{R}^{m \times n}$. The addition of two matrices is defined as the element-wise addition of the matrices:

$$\mathbf{A} + \mathbf{B} = \begin{bmatrix}
a_{11} + b_{11} & \cdots & a_{1n} + b_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn}
\end{bmatrix}$$
Or more formally:

$$[\mathbf{A} + \mathbf{B}]_{ij} = a_{ij} + b_{ij}$$
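The element-wise definition maps directly onto two nested loops. The following is a minimal sketch, assuming a matrix is stored as a simple vector of row vectors (this representation is just an assumption for the example):

```cpp
#include <stdexcept>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Element-wise addition of two matrices with the same dimensions.
Matrix add(const Matrix& A, const Matrix& B) {
    if (A.size() != B.size() || A[0].size() != B[0].size()) {
        throw std::invalid_argument("matrices must have the same dimensions");
    }
    Matrix C(A.size(), std::vector<double>(A[0].size()));
    for (std::size_t i = 0; i < A.size(); i++) {
        for (std::size_t j = 0; j < A[0].size(); j++) {
            C[i][j] = A[i][j] + B[i][j];
        }
    }
    return C;
}
```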
Additive Identity
As you would expect, the addition of any matrix with the null matrix results in the original matrix:

$$\mathbf{A} + \mathbf{0} = \mathbf{A}$$
Additive Inverse
We can also expect to find a matrix $\mathbf{B}$ that, when added to $\mathbf{A}$, results in the null matrix:

$$\mathbf{A} + \mathbf{B} = \mathbf{0}$$
The matrix $\mathbf{B}$ is called the additive inverse of $\mathbf{A}$ and is denoted as $-\mathbf{A}$, where each element is defined as the negative of the corresponding element in the matrix $\mathbf{A}$:

$$[-\mathbf{A}]_{ij} = -a_{ij}$$
By using the additive inverse we can also subtract two matrices. The subtraction of two matrices is defined as the addition of the first matrix and the additive inverse of the second matrix:

$$\mathbf{A} - \mathbf{B} = \mathbf{A} + (-\mathbf{B})$$
Scalar Multiplication
A matrix can be multiplied by a scalar, i.e. a constant. This is defined as the element-wise multiplication of the matrix with the scalar:

$$s\mathbf{A} = \begin{bmatrix}
s\,a_{11} & \cdots & s\,a_{1n} \\
\vdots & \ddots & \vdots \\
s\,a_{m1} & \cdots & s\,a_{mn}
\end{bmatrix}$$
Or more formally:

$$[s\mathbf{A}]_{ij} = s\,a_{ij}$$
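For example, scaling an arbitrarily chosen $2 \times 2$ matrix by the scalar $2$ just doubles every entry:

$$2 \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix} = \begin{bmatrix}
2 & 4 \\
6 & 8
\end{bmatrix}$$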
Matrix Multiplication
Matrix multiplication is a bit more complex than addition and scalar multiplication. One might think that multiplying two matrices would simply mean multiplying each element of the first matrix with the corresponding element of the second matrix, making it an element-wise operation, just like addition, that is only defined for matrices with the same dimensions. However, this is not the case; that operation is the Hadamard product of two matrices. The reason for the more complex definition is the relationship that matrix multiplication has with linear transformations, which we will come to later.
Matrix multiplication is only defined for matrices where the number of columns in the first matrix is equal to the number of rows in the second matrix. We will see why this is the case later. The result of multiplying two matrices together is a new matrix where the number of rows is equal to the number of rows in the first matrix and the number of columns is equal to the number of columns in the second matrix. So the dimensions in a matrix multiplication are defined as follows:

$$\mathbf{A} \in \mathbb{R}^{m \times n}, \quad \mathbf{B} \in \mathbb{R}^{n \times p} \quad \Rightarrow \quad \mathbf{C} = \mathbf{A}\mathbf{B} \in \mathbb{R}^{m \times p}$$
The actual calculation of the elements in the resulting matrix is a bit more complex. The element in the $i$-th row and $j$-th column of the resulting matrix is defined as the sum of the products of the elements in the $i$-th row of the first matrix and the $j$-th column of the second matrix. So the element $c_{ij}$ in the resulting matrix is calculated as follows:

$$c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}$$
or more formally:

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$
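The sum-of-products definition translates directly into three nested loops. The following is a minimal sketch, again assuming the vector-of-vectors matrix representation used above:

```cpp
#include <stdexcept>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Multiply an m x n matrix A with an n x p matrix B into an m x p matrix C.
Matrix multiply(const Matrix& A, const Matrix& B) {
    std::size_t m = A.size(), n = A[0].size(), p = B[0].size();
    if (B.size() != n) {
        throw std::invalid_argument("columns of A must equal rows of B");
    }
    Matrix C(m, std::vector<double>(p, 0.0));
    for (std::size_t i = 0; i < m; i++) {
        for (std::size_t j = 0; j < p; j++) {
            for (std::size_t k = 0; k < n; k++) {
                C[i][j] += A[i][k] * B[k][j];  // sum of products over the shared dimension n
            }
        }
    }
    return C;
}
```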
There are three different ways to think about matrix multiplication: the element view, the column view and the row view. The first way is to think of how a single element in the resulting matrix is calculated. A single element in row $i$ and column $j$ is calculated as the sum of the products of the elements in the $i$-th row of the first matrix and the $j$-th column of the second matrix. Later on you will see that this is the dot product of the $i$-th row and the $j$-th column as vectors.
The second way to think about matrix multiplication is to see how a column in the resulting matrix is calculated. The first column in the resulting matrix corresponds to the linear combination of the columns in the left matrix where the weights are the elements in the first column of the right matrix. This pattern carries on for the other columns in the resulting matrix.
The third way to think about matrix multiplication is to see how a row in the resulting matrix is calculated. The first row in the resulting matrix corresponds to the linear combination of the rows in the right matrix where the weights are the elements in the first row of the left matrix. This pattern carries on for the other rows in the resulting matrix.
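To make these three views concrete, here is a small example with arbitrarily chosen values:

$$\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix} \begin{bmatrix}
5 & 6 \\
7 & 8
\end{bmatrix} = \begin{bmatrix}
19 & 22 \\
43 & 50
\end{bmatrix}$$

In the element view, the top-left entry is the dot product of the first row and the first column: $1 \cdot 5 + 2 \cdot 7 = 19$. In the column view, the first column of the result is a linear combination of the columns of the left matrix with weights $5$ and $7$:

$$5 \begin{bmatrix} 1 \\ 3 \end{bmatrix} + 7 \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 19 \\ 43 \end{bmatrix}$$

In the row view, the first row of the result is a linear combination of the rows of the right matrix with weights $1$ and $2$:

$$1 \begin{bmatrix} 5 & 6 \end{bmatrix} + 2 \begin{bmatrix} 7 & 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \end{bmatrix}$$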
Commutative Property
Matrix multiplication is not commutative, so in general:

$$\mathbf{A}\mathbf{B} \neq \mathbf{B}\mathbf{A}$$
This means that the order in which you multiply is important! This already becomes apparent when you look at the dimensions of the matrices. If you multiply a $2 \times 3$ matrix with a $3 \times 2$ matrix you get a $2 \times 2$ matrix. However, if you multiply a $3 \times 2$ matrix with a $2 \times 3$ matrix you get a $3 \times 3$ matrix.
Even if the dimensions of the matrices are the same the result of the matrix multiplication can be different depending on the order of the matrices.
For the element in the first row and first column we can clearly see why the order of the matrices is important:

$$[\mathbf{A}\mathbf{B}]_{11} = a_{11}b_{11} + a_{12}b_{21} + \cdots + a_{1n}b_{n1} \qquad \text{whereas} \qquad [\mathbf{B}\mathbf{A}]_{11} = b_{11}a_{11} + b_{12}a_{21} + \cdots + b_{1n}a_{n1}$$

which are in general not equal.
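As a concrete illustration with arbitrarily chosen values:

$$\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix} \begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix} = \begin{bmatrix}
2 & 1 \\
4 & 3
\end{bmatrix} \qquad \text{but} \qquad \begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix} \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix} = \begin{bmatrix}
3 & 4 \\
1 & 2
\end{bmatrix}$$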
There are, however, some special matrices that are commutative with each other. These matrices are called commutative matrices, or we say that they commute with each other, so:

$$\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A}$$
For three matrices $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ we can calculate the products in both orders and compare them. If, for example, $\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A}$ but $\mathbf{A}\mathbf{C} \neq \mathbf{C}\mathbf{A}$, then only the matrices $\mathbf{A}$ and $\mathbf{B}$ commute with each other.
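A simple family of examples: diagonal matrices of the same size always commute with each other, because the product in either order just multiplies the diagonal elements. For instance:

$$\begin{bmatrix}
2 & 0 \\
0 & 3
\end{bmatrix} \begin{bmatrix}
4 & 0 \\
0 & 5
\end{bmatrix} = \begin{bmatrix}
8 & 0 \\
0 & 15
\end{bmatrix} = \begin{bmatrix}
4 & 0 \\
0 & 5
\end{bmatrix} \begin{bmatrix}
2 & 0 \\
0 & 3
\end{bmatrix}$$

The identity matrix also commutes with every square matrix, as we will see below.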
Associative Property
Matrix multiplication is associative, so:

$$(\mathbf{A}\mathbf{B})\mathbf{C} = \mathbf{A}(\mathbf{B}\mathbf{C})$$
This means that the order in which you group the matrices in a multiplication does not matter. However, this does not mean that the order in which you multiply the matrices does not matter; as we have seen before, matrix multiplication is not commutative.
The associative property is very useful for linear transformations as it allows you to combine multiple transformations into a single transformation.
Given three matrices $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ with compatible dimensions, computing $\mathbf{A}\mathbf{B}$ first and then multiplying by $\mathbf{C}$ gives the same result as computing $\mathbf{B}\mathbf{C}$ first and then multiplying by $\mathbf{A}$ from the left.
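For instance, with the arbitrarily chosen matrices

$$\mathbf{A} = \begin{bmatrix}
1 & 2 \\
0 & 1
\end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix}
2 & 0 \\
1 & 1
\end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix}
1 & 1 \\
0 & 2
\end{bmatrix}$$

both groupings give the same result:

$$(\mathbf{A}\mathbf{B})\mathbf{C} = \begin{bmatrix}
4 & 2 \\
1 & 1
\end{bmatrix} \begin{bmatrix}
1 & 1 \\
0 & 2
\end{bmatrix} = \begin{bmatrix}
4 & 8 \\
1 & 3
\end{bmatrix} = \begin{bmatrix}
1 & 2 \\
0 & 1
\end{bmatrix} \begin{bmatrix}
2 & 2 \\
1 & 3
\end{bmatrix} = \mathbf{A}(\mathbf{B}\mathbf{C})$$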
Distributive Property
Matrix multiplication is left distributive and right distributive over addition, so:

$$\mathbf{A}(\mathbf{B} + \mathbf{C}) = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C} \qquad \text{and} \qquad (\mathbf{A} + \mathbf{B})\mathbf{C} = \mathbf{A}\mathbf{C} + \mathbf{B}\mathbf{C}$$
For matrices $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{C}$ with matching dimensions both sides of each equation give the same matrix.
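For instance, with the arbitrarily chosen matrices

$$\mathbf{A} = \begin{bmatrix}
1 & 0 \\
2 & 1
\end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix}
1 & 2 \\
0 & 1
\end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix}
3 & 1 \\
1 & 0
\end{bmatrix}$$

we get the same result on both sides of the left distributive law:

$$\mathbf{A}(\mathbf{B} + \mathbf{C}) = \begin{bmatrix}
1 & 0 \\
2 & 1
\end{bmatrix} \begin{bmatrix}
4 & 3 \\
1 & 1
\end{bmatrix} = \begin{bmatrix}
4 & 3 \\
9 & 7
\end{bmatrix} = \begin{bmatrix}
1 & 2 \\
2 & 5
\end{bmatrix} + \begin{bmatrix}
3 & 1 \\
7 & 2
\end{bmatrix} = \mathbf{A}\mathbf{B} + \mathbf{A}\mathbf{C}$$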
Multiplicative Identity
With regards to matrix multiplication the identity matrix is a special matrix. The identity matrix functions as the multiplicative identity, also called the neutral element, for matrix multiplication. This means that if you multiply any matrix with the identity matrix you get the same matrix back. The identity matrix also commutes with any square matrix:

$$\mathbf{A}\mathbf{I} = \mathbf{I}\mathbf{A} = \mathbf{A}$$
The reason why the identity matrix functions as the multiplicative identity can quite easily be seen if you think of the first row in the identity matrix as selecting all the elements in the first row of the second matrix. The second row in the identity matrix then selects all the elements in the second row of the second matrix, and so on. So the 1 in each row/column of the identity matrix can be thought of as selecting the corresponding row/column of the other matrix:

$$\mathbf{I}\mathbf{A} = \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} = \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} = \mathbf{A}$$
The same is true for the other way around:

$$\mathbf{A}\mathbf{I} = \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} = \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} = \mathbf{A}$$
Permutation Matrix
We have seen that the identity matrix is a special matrix that functions as the multiplicative identity for matrix multiplication and can be used to select the corresponding row/column in another matrix. A permutation matrix, commonly denoted as $\mathbf{P}$, is a matrix that can be used to permute the rows or columns of another matrix. A permutation matrix is a square matrix where each row and column contains exactly one 1 and all the other elements are 0. Depending on whether you want to permute the rows or the columns, you use the permutation matrix as the left or right matrix in the multiplication.
To permute the rows of a matrix you can use the permutation matrix as the left matrix in the multiplication. By swapping the rows of the identity matrix you can permute the rows of the matrix.
For example, we can swap the first and second row of the matrix $\mathbf{A}$:

$$\mathbf{P}\mathbf{A} = \begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix} \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} = \begin{bmatrix}
a_{21} & a_{22} \\
a_{11} & a_{12}
\end{bmatrix}$$
To permute the columns of a matrix you can use the permutation matrix as the right matrix in the multiplication. By swapping the columns of the identity matrix you can permute the columns of the matrix.
For example, we can swap the first and second column of the matrix $\mathbf{A}$:

$$\mathbf{A}\mathbf{P} = \begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix} \begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix} = \begin{bmatrix}
a_{12} & a_{11} \\
a_{22} & a_{21}
\end{bmatrix}$$
Strassen's Algorithm
Winograd's Algorithm
Transpose
The transpose of a matrix is a matrix where the rows and columns are swapped. The transpose of the matrix $\mathbf{A}$ is written as $\mathbf{A}^T$. Formally, the element in the $i$-th row and $j$-th column of the matrix $\mathbf{A}$ becomes the element in the $j$-th row and $i$-th column of the matrix $\mathbf{A}^T$.
Or also more formally:

$$[\mathbf{A}^T]_{ij} = a_{ji}$$
This results in the transpose of a row vector being a column vector and the transpose of a column vector being a row vector. The same goes for a matrix, where the transpose results in a matrix with its dimensions swapped. So the transpose of an $m \times n$ matrix is an $n \times m$ matrix.
Transposing a matrix:

$$\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23}
\end{bmatrix}^T = \begin{bmatrix}
a_{11} & a_{21} \\
a_{12} & a_{22} \\
a_{13} & a_{23}
\end{bmatrix}$$
Transposing a column vector is very useful when you want to write vectors in a readable way inside a line of text. Most commonly in textbooks, when a vector is defined inline it is written as a transposed row vector, e.g. $\mathbf{v} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}^T$.
Transposing a row vector:

$$\begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}^T = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$$
Properties of Transpose
There are a few useful properties of the transpose of a matrix that are worth remembering. The first is that if a matrix is transposed twice it is the same as the original matrix:

$$(\mathbf{A}^T)^T = \mathbf{A}$$
Another important property is that when you apply a transpose to some expression in brackets the order of the matrices is reversed. For addition this does not matter as it is commutative, but for multiplication it does matter. So:

$$(\mathbf{A} + \mathbf{B})^T = \mathbf{A}^T + \mathbf{B}^T \qquad \text{and} \qquad (\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T$$
Symmetric Matrix
If a matrix is equal to its transpose it is called a symmetric matrix. So $\mathbf{A}^T = \mathbf{A}$ and $a_{ij} = a_{ji}$ for each element in the matrix. As you can quite easily imagine, a prerequisite for a matrix to be symmetric is that it is a square matrix, as otherwise the transpose of the matrix would have different dimensions.
Another thing that makes sense is that a diagonal matrix is always symmetric as all the elements that are not on the diagonal are zero.
A symmetric matrix:

$$\begin{bmatrix}
a & b & c \\
b & d & e \\
c & e & f
\end{bmatrix}$$
A diagonal matrix is also symmetric:

$$\begin{bmatrix}
d_1 & 0 & 0 \\
0 & d_2 & 0 \\
0 & 0 & d_3
\end{bmatrix}$$
Skew-Symmetric Matrix
A skew-symmetric matrix is a matrix where the elements of the matrix are equal to the negative of the elements in the transpose of the matrix. So $\mathbf{A}^T = -\mathbf{A}$ and $a_{ij} = -a_{ji}$ for each element in the matrix.
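Because $a_{ii} = -a_{ii}$ has to hold, all the diagonal elements of a skew-symmetric matrix must be zero. A small example with arbitrarily chosen values:

$$\begin{bmatrix}
0 & 2 & -1 \\
-2 & 0 & 3 \\
1 & -3 & 0
\end{bmatrix}$$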
Transposing on a Computer
When you want the transpose of a matrix you don't actually need to perform any operations. You can just change the way you access the elements of the matrix. Rather than accessing the elements row by row you can access them column by column.
Depending on the size of the matrix and how many times you need to access the elements of the matrix this can be a lot faster than actually transposing the matrix. However, if you need to access the elements of the matrix multiple times it is probably faster to transpose the matrix first and then access the elements due to memory locality.
To transpose a square matrix in-place you can use the following algorithm which you can think of as swapping the elements of the matrix along the diagonal:
```cpp
// Swap each element above the diagonal with its mirror element below the diagonal.
for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
        std::swap(A[i][j], A[j][i]);
    }
}
```
For a non-square matrix you need to use a slightly more complex algorithm.
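If extra memory is not a concern, the simplest approach for a non-square matrix is an out-of-place transpose into a new matrix; a true in-place transpose of a non-square matrix stored in a flat array requires following permutation cycles and is considerably more involved. A minimal out-of-place sketch, again assuming the vector-of-vectors representation:

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Out-of-place transpose: works for any m x n matrix, but allocates a new n x m matrix.
Matrix transpose(const Matrix& A) {
    std::size_t m = A.size(), n = A[0].size();
    Matrix T(n, std::vector<double>(m));
    for (std::size_t i = 0; i < m; i++) {
        for (std::size_t j = 0; j < n; j++) {
            T[j][i] = A[i][j];
        }
    }
    return T;
}
```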
Inverse
The inverse of a matrix is a very important and special matrix that when multiplied with the original matrix results in the identity matrix. The inverse of a matrix $\mathbf{A}$ is denoted as $\mathbf{A}^{-1}$:

$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$$
Not all matrices have an inverse. A matrix that has an inverse is called a regular/invertible/non-singular matrix. A matrix that does not have an inverse is called a degenerate/singular/non-invertible matrix. If a matrix is invertible then the inverse is unique, i.e. there is only one inverse for a matrix. For now we will not discuss how to calculate the inverse of a matrix or when a matrix is invertible, but we will do so later on in the corresponding sections on matrix rank, determinants and matrix inversion.
Similarly to the transpose of a matrix, the inverse of a matrix has some properties that are worth remembering:

$$(\mathbf{A}^{-1})^{-1} = \mathbf{A} \qquad (\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1} \qquad (\mathbf{A}^T)^{-1} = (\mathbf{A}^{-1})^T$$
For example, the following matrix is invertible:

$$\mathbf{A} = \begin{bmatrix}
2 & 1 \\
1 & 1
\end{bmatrix}, \qquad \mathbf{A}^{-1} = \begin{bmatrix}
1 & -1 \\
-1 & 2
\end{bmatrix}$$
The following simple matrix is not invertible, because its second row is a multiple of its first row, so no matrix multiplied with it can produce the identity matrix:

$$\begin{bmatrix}
1 & 2 \\
2 & 4
\end{bmatrix}$$
Parts below here are still a work in progress and might not belong here.
Frobenius Norm
The Frobenius norm is a way to measure the size of a matrix. It is defined as the square root of the sum of the squares of all the elements in the matrix. So for an $m \times n$ matrix $\mathbf{A}$ the Frobenius norm is defined as follows:

$$\|\mathbf{A}\|_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2}$$
You can also think of it as just taking the matrix and flattening it into a vector and then calculating the length of that vector, i.e. the Euclidean/L2 norm of the vector.
If we define the matrix $\mathbf{A}$ as:

$$\mathbf{A} = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}$$

Then the Frobenius norm of $\mathbf{A}$ is:

$$\|\mathbf{A}\|_F = \sqrt{1^2 + 2^2 + 3^2 + 4^2} = \sqrt{30} \approx 5.48$$
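A minimal sketch of the Frobenius norm in code, assuming the vector-of-vectors representation used above:

```cpp
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Square root of the sum of squares of all elements.
double frobenius_norm(const Matrix& A) {
    double sum = 0.0;
    for (const auto& row : A) {
        for (double a : row) {
            sum += a * a;
        }
    }
    return std::sqrt(sum);
}
```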
Trace
The trace of a matrix is the sum of all the diagonal elements in the matrix and is denoted as $\operatorname{tr}(\mathbf{A})$. Because it is the sum of the diagonal elements it is only defined for square matrices, i.e. $\mathbf{A} \in \mathbb{R}^{n \times n}$:

$$\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{n} a_{ii}$$
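A minimal sketch of the trace in code, assuming a square matrix in the vector-of-vectors representation used above:

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Sum of the diagonal elements of a square n x n matrix.
double trace(const Matrix& A) {
    double sum = 0.0;
    for (std::size_t i = 0; i < A.size(); i++) {
        sum += A[i][i];
    }
    return sum;
}
```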
Add properties of the trace and a proof that it is the sum of the eigenvalues.
Orthogonal / Orthonormal Matrix
Very unclear what the difference is between these two. One might expect an orthogonal matrix to be a matrix whose columns are merely orthogonal to each other, and an orthonormal matrix to be one whose columns are orthogonal and also normalised, i.e. have a length of $1$. In standard usage, however, an orthogonal matrix is defined as a square matrix whose columns are orthonormal, so that $\mathbf{Q}^T\mathbf{Q} = \mathbf{Q}\mathbf{Q}^T = \mathbf{I}$.
This section is still a work in progress!