Vectors
We have already seen that vectors are just special types of matrices with either one row or one column, primarily with one column. However, depending on the field you are working in, a vector can mean many different things. In computer science, a vector is just a list of numbers, i.e. a 1D array or n-tuple. The elements of a vector are more commonly called components rather than elements.
In maths, a vector can be thought of as an arrow in space that has a starting and an ending point, also referred to as the tail and the head of the vector. Most commonly vectors are denoted by a lowercase letter either in bold or with an arrow above it, e.g. $\boldsymbol{v}$ or $\vec{v}$. If the vector is defined by two coordinate points, $A$ and $B$, then the vector is denoted as $\vec{AB}$; this type of vector is a position vector. Usually the starting point of the vector is at the origin, $(0, 0)$ in 2D space or $(0, 0, 0)$ in 3D space etc., making the head of the vector the point $(x, y)$ or $(x, y, z)$ etc.; this vector is then in the standard position. However, it is important to note that the vector is independent of the starting point, i.e. the vector is the same no matter where it starts; only the direction and length of the vector matter.
Vectors are easily visualized in 2D and 3D space, but can be extended to any number of dimensions. If we define that all vectors have the same starting point, they can be uniquely defined by their ending point, which is the same as the direction and length of the vector. The length of the vector is also called the magnitude.
This is also in line with vectors in physics, where vectors are used to represent physical quantities that have both magnitude and direction such as velocity, force, acceleration, etc. For example, a force vector indicates both the magnitude of the force and the direction in which it is applied. Unlike position vectors, these vectors do not necessarily have a fixed starting point at the origin. Instead, they can be applied at any point in space, and their effects are determined by their magnitude and direction. The length of the vector represents the magnitude of the physical quantity, and the direction of the arrow indicates the direction in which the quantity acts.
Vector Addition
To add two vectors together, we simply add the corresponding components of the vectors together, i.e. we just add element-wise. This also means that the two vectors must have the same number of components/dimensions. This is equivalent to matrix addition. So we can add two vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ as follows:

$$
\boldsymbol{a} + \boldsymbol{b} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \\ \vdots \\ a_n + b_n \end{bmatrix}
$$
We can also visualize vector addition nicely in 2D and 3D space with position vectors. Geometrically we can think of vector addition as moving the tail of the second vector to the head of the first vector. The resulting vector is the vector that starts at the tail of the first vector and ends at the head of the second vector. This also results in the two vectors forming the sides of a parallelogram, with the sum as its diagonal.
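As a quick numerical sketch (using NumPy and made-up component values, not an example from the text), element-wise addition looks like this:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, -1])

# Vector addition is element-wise: each component of a is added
# to the corresponding component of b.
print(a + b)  # [4 1]
```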
Scalar Multiplication
We can multiply a vector by a scalar, i.e. a number, by multiplying each component of the vector by the scalar. So if we have a vector $\boldsymbol{v}$ and a scalar $s$, then we can multiply them together just like when we multiply a matrix by a scalar:

$$
s\boldsymbol{v} = s \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} s v_1 \\ s v_2 \\ \vdots \\ s v_n \end{bmatrix}
$$
We can also visualize scalar multiplication nicely in 2D and 3D space. Geometrically we can think of scalar multiplication as stretching or shrinking the vector by the scalar. This is why scalar multiplication is also called vector scaling and the number is called the scalar. If the scalar is negative, then the vector will be flipped around, i.e. it will point in the opposite direction.
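A minimal sketch of vector scaling, again with illustrative numbers:

```python
import numpy as np

v = np.array([1, 2])

# Every component is multiplied by the scalar.
print(3 * v)   # [3 6]
print(-1 * v)  # [-1 -2], a negative scalar flips the vector around
```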
Subtraction
Subtracting two vectors is the same as adding the first vector to the negative of the second vector, i.e. multiplying the second vector by $-1$. So if we have two vectors $\boldsymbol{a}$ and $\boldsymbol{b}$, we can subtract them as follows:

$$
\boldsymbol{a} - \boldsymbol{b} = \boldsymbol{a} + (-1)\boldsymbol{b} = \begin{bmatrix} a_1 - b_1 \\ a_2 - b_2 \\ \vdots \\ a_n - b_n \end{bmatrix}
$$
When visualizing the subtraction of two vectors, we can think of it as adding the negative of the second vector to the first vector, i.e. moving the tail of the second vector after flipping it to the head of the first vector.
Another geometric interpretation of vector subtraction is that the resulting vector is the vector that points from the head of the second vector to the head of the first vector after moving the tail of the second vector to the tail of the first vector. From this interpretation, we can clearly see that the subtraction of two vectors is not commutative, i.e. the order in which we subtract the vectors matters, as the resulting vector will point in the opposite direction, just like in normal subtraction: $\boldsymbol{a} - \boldsymbol{b} = -(\boldsymbol{b} - \boldsymbol{a})$. If you think of $\boldsymbol{c}$ as the vector $\boldsymbol{a} - \boldsymbol{b}$, then you can also see that $\boldsymbol{b} + \boldsymbol{c} = \boldsymbol{a}$, and after rewriting the equation we get $\boldsymbol{c} = \boldsymbol{a} - \boldsymbol{b}$ visually.
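A small sketch showing that subtraction is addition of the negated vector and is not commutative (illustrative values):

```python
import numpy as np

a = np.array([4, 3])
b = np.array([1, 2])

# Subtracting b is the same as adding (-1) * b.
print(a - b)         # [3 1]
print(a + (-1) * b)  # [3 1], the same result
print(b - a)         # [-3 -1], the opposite direction, so not commutative
```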
Linear Combination
If we combine the concepts of vector addition and scalar multiplication, we get the concept of a linear combination. A linear combination of vectors is the sum of the vectors scaled by some scalars. So if we have a set of vectors $\boldsymbol{v}_1, \boldsymbol{v}_2, \ldots, \boldsymbol{v}_n$ and a set of scalars $\lambda_1, \lambda_2, \ldots, \lambda_n$, then we can combine them as follows:

$$
\boldsymbol{x} = \lambda_1 \boldsymbol{v}_1 + \lambda_2 \boldsymbol{v}_2 + \cdots + \lambda_n \boldsymbol{v}_n = \sum_{i=1}^{n} \lambda_i \boldsymbol{v}_i
$$
The scalars are the weights of the vectors, i.e. they determine how much each vector contributes to the resulting vector.
If and are defined as:
We can combine them as follows:
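The concrete vectors of the example above are not reproduced here, so the following sketch uses its own small vectors and weights to show how a linear combination is computed:

```python
import numpy as np

v1 = np.array([1, 0])
v2 = np.array([0, 1])

# Scale each vector by its weight (scalar) and sum the results.
c1, c2 = 2, 3
print(c1 * v1 + c2 * v2)  # [2 3]
```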
Linear combinations are the basis(hehe) of linear algebra and are used to define many more complex concepts such as linear independence, vector spaces, and linear transformations.
We can show that all vectors are linear combinations of other vectors. For example, we can create all vectors with two components by combining the following two vectors:

$$
\boldsymbol{e}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \boldsymbol{e}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
$$
This becomes pretty clear if we think of the two vectors as the x and y axes in 2D space. Any point in 2D space can be defined by its x and y coordinates, i.e. as a linear combination of the two vectors. These vectors are called the standard basis vectors and we will go into more detail about them later. However, these are not the only vectors that can be used to create all vectors in 2D space, we could also use the following two vectors:
We can show that any vector can be created by combining the two vectors by setting up a system of equations and seeing if we can solve it.
So we can see that depending on what values we want our vector to have, we can find the scalars $\lambda_1$ and $\lambda_2$ that create the vector. This is the basis of linear combinations. However, not just any two vectors can be used to create all vectors in 2D space; the two vectors must form a basis for the space. In other words they must be linearly independent and span the space. We will go into more detail about this later.
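One way to carry out the "set up a system of equations and solve it" step numerically is to put the two vectors as the columns of a matrix and solve for the weights. The vectors below are assumptions for illustration, not the ones used in the text:

```python
import numpy as np

# Two (assumed) linearly independent vectors as the columns of a matrix.
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])
target = np.array([3.0, 1.0])

# Solve B @ [c1, c2] = target for the scalars c1 and c2.
c = np.linalg.solve(B, target)
print(c)      # [2. 1.]
print(B @ c)  # [3. 1.], the linear combination reproduces the target
```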
Special Combinations
Depending on the scalars we use in the combination, we can create some special types of combinations:
- Linear: We have already seen this case where the scalars can be any real number, so $\lambda_i \in \mathbb{R}$.
- Affine: This is a linear combination where the sum of the scalars is equal to one, i.e. $\sum_{i=1}^{n} \lambda_i = 1$. The affine combination is used to create a point that lies on the line or plane defined by the vectors. This is because the combination can be rewritten as follows: $\boldsymbol{x} = \boldsymbol{v}_1 + \sum_{i=2}^{n} \lambda_i (\boldsymbol{v}_i - \boldsymbol{v}_1)$.
- Conic: This is a linear combination where the scalars are non-negative, i.e. $\lambda_i \geq 0$. The conic combination is used to create a point that lies inside the convex cone spanned by the vectors, also called the conic hull, which is the smallest convex cone that contains all the vectors.
- Convex: This is a mix between the affine and conic combinations, i.e. the scalars are non-negative and the sum of the scalars is equal to one, i.e. $\lambda_i \geq 0$ and $\sum_{i=1}^{n} \lambda_i = 1$. The convex combination is used to create a point that lies inside the convex hull of the vectors, the smallest convex set that contains all the vectors (see the sketch after this list).
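A small sketch that classifies a set of weights according to the conditions above (the helper function and the example weights are made up for illustration):

```python
import numpy as np

def combination_type(weights):
    """Classify weights as linear, affine, conic or convex."""
    w = np.asarray(weights, dtype=float)
    affine = np.isclose(w.sum(), 1.0)  # weights sum to one
    conic = bool(np.all(w >= 0))       # weights are non-negative
    if affine and conic:
        return "convex"
    if affine:
        return "affine"
    if conic:
        return "conic"
    return "linear"

print(combination_type([0.3, 0.7]))   # convex
print(combination_type([2.0, -1.0]))  # affine
print(combination_type([2.0, 3.0]))   # conic
print(combination_type([2.0, -3.0]))  # linear
```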
Multiplication between Vectors and Matrices
A vector can be left or right multiplied by a matrix. The multiplication between a vector and a matrix is defined just like the matrix multiplication, so the dimensions of the two must be compatible. If we have a matrix $\boldsymbol{A} \in \mathbb{R}^{m \times n}$ and a column vector $\boldsymbol{x} \in \mathbb{R}^n$, then we can multiply them together as follows:

$$
\boldsymbol{A}\boldsymbol{x} = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = x_1 \begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + \cdots + x_n \begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix}
$$

As we can see from the definition above, the result is another column vector with $m$ components. We can also see that the resulting vector is a linear combination of the columns of the matrix where the weights are the components of the vector $\boldsymbol{x}$. This is why when the matrix is square the multiplication is often called a linear transformation, because it transforms the vector into another vector. If the vector is on the left side of the matrix, then the vector needs to be transposed for the multiplication to work, i.e. the dimensions of the vector must be compatible with the matrix. So if we have a row vector $\boldsymbol{x}^T \in \mathbb{R}^{1 \times m}$ and a matrix $\boldsymbol{A} \in \mathbb{R}^{m \times n}$, then we can multiply them together as follows:

$$
\boldsymbol{x}^T\boldsymbol{A} = \begin{bmatrix} x_1 & \cdots & x_m \end{bmatrix} \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} = x_1 \begin{bmatrix} a_{11} & \cdots & a_{1n} \end{bmatrix} + \cdots + x_m \begin{bmatrix} a_{m1} & \cdots & a_{mn} \end{bmatrix}
$$

Just like when right multiplying a vector by a matrix, the result is a linear combination; here it is a row vector with $n$ components. We can also see that the resulting vector is a linear combination of the rows of the matrix where the weights are the components of the vector $\boldsymbol{x}$.
A right multiplication of a matrix and a vector:
A left multiplication of a matrix and a vector:
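The worked examples above are not reproduced here, so the following sketch uses its own small matrix and vectors to show both multiplications and the linear-combination view:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([2, -1])

# Right multiplication A @ x: a linear combination of the columns of A.
print(A @ x)                            # [0 2 4]
print(x[0] * A[:, 0] + x[1] * A[:, 1])  # [0 2 4], the same result

# Left multiplication y @ A: a linear combination of the rows of A.
y = np.array([1, 0, 2])
print(y @ A)                                    # [11 14]
print(y[0] * A[0] + y[1] * A[1] + y[2] * A[2])  # [11 14], the same result
```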
Linear Independence
Two vectors are linearly independent if neither of them can be written as a linear combination of the other. In other words, two vectors are linearly independent if they are not scalar multiples of each other. It is, however, easier to define and check for linear dependence. The vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ are linearly dependent if:

$$
\boldsymbol{x} = \lambda \boldsymbol{y}
$$

this can also be written as:

$$
\boldsymbol{x} - \lambda \boldsymbol{y} = \boldsymbol{0}
$$

where $\lambda$ is some scalar and $\boldsymbol{0}$ is the zero vector. This means that the vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ are linearly dependent if they are collinear, i.e. they lie on the same line. The two equations above can also be used to define linear independence, we just replace the equal sign with a not equal sign.
If and are defined as:
then and are linearly dependent because:
However, if and are defined as:
then and are linearly independent because no scalar multiple of can be equal to .
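Since the example vectors above are not shown here, the following sketch checks linear dependence for its own illustrative pairs by looking at the rank of the matrix whose columns are the two vectors:

```python
import numpy as np

def dependent(v, w):
    """Two vectors are linearly dependent iff stacking them as columns
    gives a matrix of rank less than 2."""
    return np.linalg.matrix_rank(np.column_stack([v, w])) < 2

print(dependent(np.array([1, 2]), np.array([2, 4])))  # True, w = 2 * v
print(dependent(np.array([1, 2]), np.array([2, 1])))  # False
```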
Linear Independence of More Than Two Vectors
Never used this.
Dot Product
The dot product, also called the inner product, is the most common type of vector multiplication. It is defined just like the matrix multiplication; however, the two vectors must have the same number of components. To make the dimensions compatible for the matrix multiplication, the first vector is transposed, turning it into a row vector. The dot product is often denoted as $\boldsymbol{x} \cdot \boldsymbol{y}$, but sometimes also as $\langle \boldsymbol{x}, \boldsymbol{y} \rangle$ rather than $\boldsymbol{x}^T \boldsymbol{y}$ to avoid confusion with the matrix multiplication or scalar multiplication. So if we have two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$, then we can multiply them together as follows:

$$
\boldsymbol{x} \cdot \boldsymbol{y} = \boldsymbol{x}^T \boldsymbol{y} = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n = \sum_{i=1}^{n} x_i y_i
$$
From the dimensions we can also clearly see that the dot product results in a scalar which is why it is also called the scalar product, not to be confused with scalar multiplication!
Unlike the matrix multiplication, the dot product is commutative, meaning that the order in which we multiply the vectors together does not matter.
Written as a matrix multiplication, the dot product of real vectors is also commutative, i.e. $\boldsymbol{x}^T \boldsymbol{y} = \boldsymbol{y}^T \boldsymbol{x}$, as long as the first vector is the one that is transposed in both cases. This is because the dot product is the sum of the products of the corresponding components of the two vectors, and the pairs of components are the same in both cases.
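A minimal sketch of the dot product and its commutativity (illustrative values):

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])

# Sum of the products of corresponding components: 1*4 + 2*5 + 3*6 = 32.
print(np.dot(x, y))  # 32
print(x @ y)         # 32, the same computation
print(np.dot(y, x))  # 32, the dot product is commutative
```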
Norms
A norm is a function denoted by $\|\cdot\|$ that maps vectors to real values and satisfies the following properties:
- It is positive definite, meaning it assigns a non-negative real number, i.e. a length or size, to each vector.
- If the norm of a vector is zero, then the vector is the zero vector.
- The norm of a vector scaled by a scalar is equal to the absolute value of the scalar times the norm of the vector, i.e. $\|s\boldsymbol{v}\| = |s| \, \|\boldsymbol{v}\|$ where $s \in \mathbb{R}$ and $\boldsymbol{v} \in \mathbb{R}^n$.
- The triangle inequality holds which we will see later.
In simple terms the norm of a vector is the length of the vector. There are many different types of norms, but the most common ones are the $L_1$ and $L_2$ norms, also known as the Manhattan and Euclidean norms respectively. The $L_p$ norm is a generalization of the $L_1$ and $L_2$ norms. We denote a vector's norm by writing it in between two vertical bars, e.g. $\|\boldsymbol{v}\|$, and the subscript denotes the type of norm, e.g. $\|\boldsymbol{v}\|_1$ or $\|\boldsymbol{v}\|_2$ etc. If the subscript is omitted, then the $L_2$ norm is assumed.
Manhattan Norm
The Manhattan norm or $L_1$ norm is defined as the sum of the absolute values of the vector's components:

$$
\|\boldsymbol{v}\|_1 = \sum_{i=1}^{n} |v_i|
$$
It is called the Manhattan norm because it can be thought of as the distance between two points along the axes of a rectangular grid, like the streets of Manhattan or any other city with a grid-like structure. No matter which roads of Manhattan we take, as long as we only move towards the destination, the distance covered between the two points is always the same.
If is defined as:
then the norm of is:
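The example values above are not reproduced here, so this sketch uses its own vector to compute the L1 norm:

```python
import numpy as np

v = np.array([3, -4])

# L1 (Manhattan) norm: sum of the absolute values of the components.
print(np.linalg.norm(v, ord=1))  # 7.0
print(np.abs(v).sum())           # 7, the same by definition
```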
Euclidean Norm
As the name suggests, the Euclidean norm or $L_2$ norm is the distance between two points in Euclidean space, i.e. the straight-line distance between two points. For the 2D case, the Euclidean norm is just the Pythagorean theorem, i.e. the length of the hypotenuse of a right-angled triangle:

$$
\|\boldsymbol{v}\|_2 = \sqrt{\sum_{i=1}^{n} v_i^2} = \sqrt{\boldsymbol{v} \cdot \boldsymbol{v}}
$$
From the definition above we can actually see that the Euclidean norm is the square root of the dot product of the vector with itself.
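A small sketch (illustrative vector) confirming that the L2 norm is the square root of the dot product of the vector with itself:

```python
import numpy as np

v = np.array([3, -4])

# L2 (Euclidean) norm.
print(np.linalg.norm(v))      # 5.0
print(np.sqrt(np.dot(v, v)))  # 5.0, the square root of v . v
```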
Cauchy-Schwarz Inequality
The Cauchy-Schwarz inequality states that the absolute value of the dot product of two vectors is always less than or equal to the product of the two vectors' norms:

$$
|\boldsymbol{x} \cdot \boldsymbol{y}| \leq \|\boldsymbol{x}\| \, \|\boldsymbol{y}\|
$$

We want to prove that for any two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$, the inequality holds.
Case 1: When one of the vectors is the zero vector
If one of the vectors is the zero vector, then the inequality holds because the dot product is zero and the product of the norms is also zero. So then the inequality becomes:

$$
0 \leq 0
$$
Case 2: If both vectors are unit vectors
If both vectors are unit vectors, then the inequality becomes the following:

$$
|\boldsymbol{x} \cdot \boldsymbol{y}| \leq 1
$$

We can then rewrite the dot product as the cosine of the angle between the two vectors; because the norms are one, this simplifies to:

$$
|\cos(\theta)| \leq 1
$$
The cosine of the angle between two vectors is always between -1 and 1. The inequality however also takes the absolute value of the dot product, so the inequality holds.
Case 3: Any two vectors
If the vectors are not unit vectors, then we can scale them to be unit vectors. We don't need to worry about dividing by zero, as we've already shown that if either of the vectors is the zero vector the inequality holds trivially.

From above we know that $\left| \frac{\boldsymbol{x}}{\|\boldsymbol{x}\|} \cdot \frac{\boldsymbol{y}}{\|\boldsymbol{y}\|} \right| \leq 1$, so we can write the following:

$$
|\boldsymbol{x} \cdot \boldsymbol{y}| \leq \|\boldsymbol{x}\| \, \|\boldsymbol{y}\|
$$
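This is not a proof, just a quick numerical sanity check of the inequality on random vectors (the seed and dimension are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)
y = rng.normal(size=5)

# |x . y| <= ||x|| * ||y|| should hold for any pair of vectors.
lhs = abs(np.dot(x, y))
rhs = np.linalg.norm(x) * np.linalg.norm(y)
print(lhs <= rhs)  # True
```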
Triangle Inequality
The triangle inequality states that the norm of the sum of two vectors is less than or equal to the sum of the norms of the two vectors:

$$
\|\boldsymbol{x} + \boldsymbol{y}\| \leq \|\boldsymbol{x}\| + \|\boldsymbol{y}\|
$$
This can also visually be seen in the 2D case, where the direct path from one point to another is always shorter than the path that goes through another point. Or also that the hypotenuse of a triangle is always shorter than the sum of the other two sides.
vectorTriangleInequality.png
Let's first look at the norm of the sum of two vectors squared:

$$
\|\boldsymbol{x} + \boldsymbol{y}\|^2 = (\boldsymbol{x} + \boldsymbol{y}) \cdot (\boldsymbol{x} + \boldsymbol{y}) = \|\boldsymbol{x}\|^2 + 2(\boldsymbol{x} \cdot \boldsymbol{y}) + \|\boldsymbol{y}\|^2
$$

Now we can use the Cauchy-Schwarz inequality on the middle term and get:

$$
\|\boldsymbol{x}\|^2 + 2(\boldsymbol{x} \cdot \boldsymbol{y}) + \|\boldsymbol{y}\|^2 \leq \|\boldsymbol{x}\|^2 + 2\|\boldsymbol{x}\|\|\boldsymbol{y}\| + \|\boldsymbol{y}\|^2 = (\|\boldsymbol{x}\| + \|\boldsymbol{y}\|)^2
$$

So we can rewrite the norm of the sum of two vectors squared as $\|\boldsymbol{x} + \boldsymbol{y}\|^2 \leq (\|\boldsymbol{x}\| + \|\boldsymbol{y}\|)^2$ and take the square root to get the triangle inequality.
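Again just a numerical sanity check with illustrative vectors, not part of the proof:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, -4.0, 0.0])

# ||x + y|| <= ||x|| + ||y||
print(np.linalg.norm(x + y))                  # ~4.90
print(np.linalg.norm(x) + np.linalg.norm(y))  # 8.0
```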
P-Norm
The idea of the $L_p$ norm is to generalize the $L_1$ and $L_2$ norms. The $L_p$ norm is defined as:

$$
\|\boldsymbol{v}\|_p = \left( \sum_{i=1}^{n} |v_i|^p \right)^{\frac{1}{p}}
$$

An arbitrary $L_p$ norm is rarely used in practice; most commonly the $L_1$ and $L_2$ norms are used. For some use-cases the $L_\infty$ norm is used, which is defined as:

$$
\|\boldsymbol{v}\|_\infty = \max_i |v_i|
$$

In other words, the $L_\infty$ norm is the vector component with the largest absolute value.
If is defined as:
then the norm of is:
and the norm of is:
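Since the example above is not reproduced here, this sketch compares several norms of its own small vector (the choice p = 3 is arbitrary):

```python
import numpy as np

v = np.array([3, -4, 1])

print(np.linalg.norm(v, ord=1))       # 8.0, the L1 norm
print(np.linalg.norm(v, ord=2))       # ~5.10, the L2 norm
print(np.linalg.norm(v, ord=3))       # ~4.51, the L3 norm
print(np.linalg.norm(v, ord=np.inf))  # 4.0, the largest absolute component
```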
Meaning of the Dot Product
The question now is what is the dot product actually?
We can also visualize the dot product nicely in 2D and 3D space. The dot product of two vectors is the cosine of the angle between the two vectors multiplied by the lengths of the two vectors if we place their tails at the same point. So if we have two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$, then we can calculate the dot product as follows:

$$
\boldsymbol{x} \cdot \boldsymbol{y} = \|\boldsymbol{x}\| \, \|\boldsymbol{y}\| \cos(\theta)
$$

where $\theta$ is the angle between the two vectors. We can also calculate the angle between the two vectors by rewriting the equation above as follows:

$$
\theta = \cos^{-1}\left( \frac{\boldsymbol{x} \cdot \boldsymbol{y}}{\|\boldsymbol{x}\| \, \|\boldsymbol{y}\|} \right)
$$

where $\cos^{-1}$ is the inverse cosine function, also called the arccosine function.
If and are defined as:
then the angle between and is:
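The example values above are not shown here, so this sketch computes the angle between two illustrative vectors:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

# cos(theta) = (x . y) / (||x|| * ||y||), then apply the arccosine.
cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
theta = np.arccos(cos_theta)
print(np.degrees(theta))  # 45.0
```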
Orthogonal Vectors
We call two vectors orthogonal if the angle between them is 90 degrees, i.e. they are perpendicular to each other. If two vectors are orthogonal, then their dot product is zero, because $\cos(90°) = 0$. So if we have two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$, then we can check if they are orthogonal as follows:

$$
\boldsymbol{x} \cdot \boldsymbol{y} = \|\boldsymbol{x}\| \, \|\boldsymbol{y}\| \cos(90°) = 0
$$
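A one-line check with illustrative vectors:

```python
import numpy as np

x = np.array([1, 2])
y = np.array([-2, 1])

# Orthogonal vectors have a dot product of zero.
print(np.dot(x, y) == 0)  # True
```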
Outer Product
The outer product of two vectors is in a sense the opposite of the dot product: instead of a scalar, it results in a matrix. So if we have two vectors $\boldsymbol{x} \in \mathbb{R}^m$ and $\boldsymbol{y} \in \mathbb{R}^n$, then we can multiply them together as follows to get a matrix $\boldsymbol{A} \in \mathbb{R}^{m \times n}$. The outer product can be denoted as a matrix multiplication $\boldsymbol{x}\boldsymbol{y}^T$ or as $\boldsymbol{x} \otimes \boldsymbol{y}$ with the symbol $\otimes$.

Or more formally:

$$
\boldsymbol{x} \otimes \boldsymbol{y} = \boldsymbol{x}\boldsymbol{y}^T = \begin{bmatrix} x_1 y_1 & x_1 y_2 & \cdots & x_1 y_n \\ x_2 y_1 & x_2 y_2 & \cdots & x_2 y_n \\ \vdots & \vdots & \ddots & \vdots \\ x_m y_1 & x_m y_2 & \cdots & x_m y_n \end{bmatrix}
$$

From the above we can see that the outer product of two vectors results in a matrix where the columns are the first vector scaled by the components of the second vector and the rows are the second vector scaled by the components of the first vector. So the matrix forms a dependent set of vectors, i.e. the columns/rows of the matrix are linearly dependent. Because the size of the largest set of linearly independent vectors is 1, the rank of the matrix is 1.
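A small sketch of the outer product and its rank (illustrative vectors):

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5])

# Outer product: a 3x2 matrix with entries M[i, j] = x[i] * y[j].
M = np.outer(x, y)
print(M)
# [[ 4  5]
#  [ 8 10]
#  [12 15]]
print(np.linalg.matrix_rank(M))  # 1, the columns/rows are linearly dependent
```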
Matrix Multiplication as Outer Product
A fourth view of matrix multiplication is that it is the sum of the outer products of the columns of the first matrix and the rows of the second matrix. So you can interpret each outer product as a layer of the resulting matrix. This in turn shows that any matrix can be written as a sum of rank 1 matrices.
The matrix multiplication of two matrices $\boldsymbol{A}$ and $\boldsymbol{B}$ can be written as the sum of the outer products of the columns of $\boldsymbol{A}$ and the rows of $\boldsymbol{B}$:

$$
\boldsymbol{A}\boldsymbol{B} = \sum_{k=1}^{n} \boldsymbol{a}_k \boldsymbol{b}_k^T
$$

where $\boldsymbol{a}_k$ is the $k$-th column of $\boldsymbol{A}$ and $\boldsymbol{b}_k^T$ is the $k$-th row of $\boldsymbol{B}$.
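A quick sketch verifying the sum-of-outer-products view against the ordinary matrix product (illustrative matrices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Sum the outer products of the k-th column of A with the k-th row of B.
layers = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
print(layers)  # [[19 22]
               #  [43 50]]
print(A @ B)   # the same result
```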
Linear Projections
Linear projections are a bigger topic. They can be related to matrix transformations via the outer product of two vectors, and they are also related to the dot product, the angle between the two vectors, and the lengths of the two vectors.
Normalization
Normalizing means to bring something into some sort of normal or standard state. In the case of vectors, normalizing means to scale the vector in a way that its length is equal to one. Often we denote a normalized vector by adding a hat to the vector, e.g. $\hat{\boldsymbol{v}}$ is the normalized vector of $\boldsymbol{v}$. So we can say if $\|\hat{\boldsymbol{v}}\| = 1$, then $\hat{\boldsymbol{v}}$ is normalized. From this definition we can see that to normalize a vector, we simply divide the vector by its length, i.e. we divide the vector by a scalar. So if we have a vector $\boldsymbol{v}$, then we can normalize it as follows:

$$
\hat{\boldsymbol{v}} = \frac{\boldsymbol{v}}{\|\boldsymbol{v}\|}
$$

This normalized vector will have the same direction as the original vector, but its length will be equal to one. By eliminating the length of the vector, we can uniquely identify a vector by its direction. This is useful because we can now compare vectors based on their direction, without having to worry about their length. All these normalized vectors are also called unit vectors and if they are placed at the origin in 2D they span the unit circle.
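A minimal sketch of normalizing a vector (illustrative values):

```python
import numpy as np

v = np.array([3.0, 4.0])

# Divide the vector by its length to get a unit vector.
v_hat = v / np.linalg.norm(v)
print(v_hat)                  # [0.6 0.8]
print(np.linalg.norm(v_hat))  # 1.0, same direction but length one
```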
Orthonormal Vector
We can now combine the idea of orthogonal vectors and normalized vectors to get orthonormal vectors. Orthonormal vectors are vectors that are orthogonal to each other and have a length of one.
Standard Unit Vectors
An example of orthonormal vectors are the standard unit vectors. The standard unit vectors can be thought of as the vectors that correspond to the axes of a coordinate system. Later on we will see that these vectors can be used to span any vector space and form the standard basis of the vector space. The standard unit vectors are denoted as $\boldsymbol{e}_i$ where $i$ is the index of the vector. The index $i$ also corresponds to the index of the component that is one, while all other components are zero. The dimensionality of the vector is not fixed by the notation, so $\boldsymbol{e}_1$ could be a 1D, 2D or 3D vector etc. depending on the context.
It is quite easy to see that the standard unit vectors are orthonormal, because they are orthogonal to each other and have a length of one. It is also easy to see that any vector can be written as a linear combination of the standard unit vectors, which is why they are so useful and will become an important concept later on when talking about vector spaces and bases.
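A short sketch of the 3D standard unit vectors, showing that they are orthonormal and that any vector is a linear combination of them with its own components as weights:

```python
import numpy as np

# The standard unit vectors in 3D are the rows (or columns) of the identity matrix.
e1, e2, e3 = np.eye(3)
print(e1, e2, e3)          # [1. 0. 0.] [0. 1. 0.] [0. 0. 1.]
print(np.dot(e1, e2))      # 0.0, orthogonal
print(np.linalg.norm(e1))  # 1.0, unit length

v = np.array([2.0, -1.0, 5.0])
print(v[0] * e1 + v[1] * e2 + v[2] * e3)  # [ 2. -1.  5.]
```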