Linear Algebra (Get Started Here)

Are you looking to advance in mathematics, or maybe even in the sciences? Like calculus, linear algebra is crucial to scientists and engineers, playing a key role in many modern advancements. Linear Algebra uses both theory and examples to help you learn the power of this valuable tool.

Introduction to Linear Algebra

Linear algebra is commonly used in statistics, engineering, computer science, economics, and of course mathematics. But what is linear algebra anyway? When I think of linear algebra, the first thing that comes to mind is the use of matrices and vectors. Technically, however, linear algebra is the study of vectors and the linear functions that act on them.

Even broken down this way, it still sounds like quite a bit. Next, let’s look more closely at how vectors, linear functions, and matrices are related. A vector is a quantity with both a length and a direction in the coordinate plane. Similarly, linear functions are functions whose graphs are straight lines in the coordinate plane. It turns out that we can write linear functions in terms of sums of scaled vectors, and those sums are easily organized with a matrix.

You might be wondering, why bother using vectors and matrices with linear functions? Aren’t linear functions already as easy as it gets, function-wise? In fact, the very reason this branch of mathematics exists is that linear functions are so nice to work with. Often, you want to begin by modeling a situation with a linear model. However, even a system of linear functions can become complicated quickly, so you need a good way to organize your functions and perform operations on them. That’s why it can be useful to think of the functions as sums of smaller vector parts and then use matrices to organize them. Linear algebra will help you develop the skills to do this.

Linear Algebra Starts with Linear Equations

Now that you have an idea of what linear algebra is, let’s look at how linear equations, matrices and vectors work together to solve systems.

When you put a system of linear equations into a matrix, you can change how it looks and make it easier to solve. You do this by using row operations, and the process of using row operations to eliminate variables and find solutions is called Gaussian elimination.
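As a concrete, purely illustrative sketch, here is one way to carry out that row reduction in Python with SymPy; the particular system and numbers below are my own example, not one from the lesson.

```python
# Hypothetical system:   x + 2y = 5
#                        3x + 4y = 6
from sympy import Matrix

augmented = Matrix([[1, 2, 5],
                    [3, 4, 6]])

# rref() applies row operations until the matrix reaches reduced row echelon form,
# the end state of Gaussian (really Gauss-Jordan) elimination.
reduced, pivot_columns = augmented.rref()
print(reduced)   # Matrix([[1, 0, -4], [0, 1, 9/2]])  ->  x = -4, y = 9/2
```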

After you have learned how to solve systems with Gaussian elimination, the next step is to start thinking about the connection between matrices and vectors. Many of the operations on matrices have close parallels with the operations you can perform on vectors.

Remember when you solved linear equations in an algebra course? You needed at least as many equations as variables to solve for. If you didn’t have enough unique equations, you couldn’t pin down all of the unknowns. Sometimes two equations were really the same equation just written in a different way. When you collect the equations in a matrix, each equation becomes a row and each variable a column. This leads us to an important definition known as the rank of a matrix: the rank is the number of equations (rows) that are not linear combinations of the others, in other words the number of genuinely independent equations.
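To make that concrete, here is a small hypothetical example in Python using NumPy’s matrix_rank; the matrix is invented for illustration.

```python
import numpy as np

# Three equations in two unknowns, but the third row is just
# the sum of the first two, so it adds no new information.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [4.0, 6.0]])   # row 3 = row 1 + row 2

print(np.linalg.matrix_rank(A))   # 2 -> only two independent equations
```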

Linear Transformations

To a large extent, linear algebra is the study of linear transformations. A linear transformation must meet two criteria. First, it maps the members of one vector space to another vector space. Second, it preserves vector addition and scalar multiplication. Furthermore, a linear transformation can be represented by a matrix, so transformations and matrices are two equivalent ways of looking at the same idea.
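In symbols, the preservation rules say that for any vectors u and v and any scalar c:

```latex
T(u + v) = T(u) + T(v), \qquad T(c\,u) = c\,T(u)
```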

Interestingly, many of the transformations that you studied in geometry are linear transformations. A quick way to check that a transformation of the plane is linear: the origin stays in the same place, and every line remains a line after the transformation. Do you recall learning in geometry about dilations, contractions, orthogonal projections, reflections, rotations, and vertical and horizontal shears? If so, you might be happy to know that they are all examples of linear transformations, and each of them can be written as a matrix.
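For instance, here is a small sketch (my own example, not one from the lesson) of a rotation written as a matrix in Python:

```python
import numpy as np

# Rotate a point 90 degrees counterclockwise about the origin.
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

point = np.array([1.0, 0.0])
print(rotation @ point)   # approximately [0., 1.]
```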

It’s nice to have these new tools and ways of thinking, but we are still missing a key piece. Remember when you used addition, subtraction, multiplication, and division to rewrite linear equations with inverse operations? It would be nice to have the same kind of tool for matrix equations, and we do. If an inverse exists for a particular matrix, then the matrix is said to be invertible. Unfortunately, not every matrix has an inverse. As you will see in the lesson, only square matrices can have an inverse, and not even all of them do.
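Here is a brief, hypothetical sketch of computing an inverse with NumPy; the matrix is made up for illustration, and in real code you would compare the determinant to a small tolerance rather than to exact zero.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

# A square matrix is invertible exactly when its determinant is nonzero;
# np.linalg.inv raises LinAlgError if the matrix is singular.
if np.linalg.det(A) != 0:
    A_inv = np.linalg.inv(A)
    print(A_inv @ A)   # approximately the 2x2 identity matrix
```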

Vector Spaces and Subspaces

Let’s look a bit more closely at the properties of vector spaces. A vector space is a set of vectors that is closed under vector addition and scalar multiplication. To clarify, this means that for the set to be a vector space, vector addition must be commutative and associative, there must be an additive identity (the zero vector), and every vector must have an additive inverse. In addition, scalar multiplication must be associative and must distribute over both sums of scalars and sums of vectors.

Subspaces are vector spaces carved out of larger vector spaces. In order to be a subspace, three properties must hold: the subspace contains the zero vector of the original vector space, and it is closed under vector addition and under scalar multiplication. Furthermore, in linear algebra, when we work in a vector space we also need to establish a coordinate system, so that we have an idea of what the space looks like geometrically.

Finite- and Infinite-Dimensional Vector Spaces

You create a linear combination when you add two or more vectors together, where each vector can have a scalar multiple in front of it. The set of all possible linear combinations of a list of vectors is called its span, and the span is the smallest subspace containing all of those vectors.
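As a tiny illustrative sketch (my own example) in Python:

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# Any vector in the plane is a linear combination  a*v1 + b*v2,
# so the span of {v1, v2} is all of R^2.
a, b = 3.0, -2.0
print(a * v1 + b * v2)   # [ 3. -2.]
```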

A vector space is finite-dimensional if you can write down a finite list of vectors that spans the space. If no such finite list exists, you call it an infinite-dimensional vector space.

Kernels, Transformations and Linear Maps

Remember that a linear map is a transformation that preserves both vector addition and scalar multiplication. Thus, linear maps are also called linear transformations, and in some situations they are simply called functions. A linear map sends the original vector space to its image, and it also comes with a second important subspace: the kernel.

The kernel of a matrix is the null space of the matrix: it contains all of the vectors that the matrix sends to the zero vector under matrix multiplication. Remember that a linear space like this is actually a set of solutions. The kernel answers the question “which vectors, multiplied by the original matrix, give zero?”, so it is the solution set of the homogeneous system of equations.
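Here is a brief sketch in Python, assuming SciPy is available; the matrix is a made-up example.

```python
import numpy as np
from scipy.linalg import null_space

# The second column is twice the first, so the matrix sends
# some nonzero vectors to zero.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

kernel_basis = null_space(A)   # columns form an orthonormal basis of the kernel
print(kernel_basis)            # a multiple of (-2, 1), up to sign
print(A @ kernel_basis)        # approximately the zero vector
```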

Now let’s look at the rank-nullity theorem. First, recall that the rank of a matrix is the dimension of its column space, and the nullity of a matrix is the dimension of its null space. Putting the matrix in reduced row echelon form changes neither the rank nor the nullity, which makes both easy to read off. The theorem says that the rank of the matrix plus the nullity of the matrix equals the number of columns.
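Written out as a formula, for an m × n matrix A (so n is the number of columns):

```latex
\operatorname{rank}(A) + \operatorname{nullity}(A) = n
```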

Linear Independence

When you have a set of vectors, that set is classified as either linearly dependent or linearly independent. A set of vectors is linearly dependent if at least one of the vectors in the set is a linear combination of the others. It follows that if a set of vectors is not linearly dependent, it is linearly independent.
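One practical way to test this in Python (a hypothetical example, using the fact that the rank equals the number of independent columns):

```python
import numpy as np

# Stack the vectors as columns; they are linearly independent exactly
# when the rank equals the number of vectors.
vectors = np.column_stack([[1.0, 0.0, 1.0],
                           [2.0, 1.0, 0.0],
                           [3.0, 1.0, 1.0]])   # third vector = first + second

print(np.linalg.matrix_rank(vectors))   # 2, not 3, so the set is linearly dependent
```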

Orthogonality in Linear Algebra

Two vectors are perpendicular to one another if their dot product is equal to zero; when we are talking about vectors, we usually say orthogonal instead of perpendicular. It is equally important to remember that a vector of length one is called a unit vector. A set of vectors that are all unit vectors and mutually orthogonal is called an orthonormal set.
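A quick, made-up check in Python:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

print(np.dot(u, v))        # 0.0 -> u and v are orthogonal
print(np.linalg.norm(u))   # 1.0 -> u is a unit vector; together u and v are orthonormal
```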

Similarly, recall that a basis is a linearly independent set of vectors that spans the vector space. Naturally, we call a basis made only of orthonormal vectors an orthonormal basis. Furthermore, remember that we can build a matrix whose columns are orthonormal vectors; a square matrix of this kind is called an orthogonal matrix.

Sometimes in linear algebra, you want to take a basis that is not orthonormal and change it to one that is. You can do this with something called QR factorization. You might be wondering, why bother changing to an orthonormal basis? In fact, rewriting the basis as an orthonormal one can make the matrix easier to manipulate and compute with. The Gram-Schmidt process is one of the most common ways to perform a QR factorization; there is a short sketch after the list below. To read more about orthogonal vectors and their special properties, see Inner Products and Orthonormal Bases:

  1. Orthonormal Bases and Orthogonal Projections
  2. Orthogonal Matrix and Orthogonal Projection Matrix
  3. Gram-Schmidt Process and QR Factorization
  4. Inner Products and Orthonormal Bases
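
Here is a minimal sketch of QR factorization in Python with NumPy; the matrix is an invented example whose columns play the role of a non-orthonormal basis.

```python
import numpy as np

# Hypothetical basis stored as the columns of A (not orthonormal).
A = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])

# np.linalg.qr returns Q with orthonormal columns and an upper-triangular R,
# essentially packaging the outcome of the Gram-Schmidt process.
Q, R = np.linalg.qr(A)
print(Q.T @ Q)   # approximately the 2x2 identity: the columns of Q are orthonormal
print(Q @ R)     # reconstructs A (up to floating-point error)
```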

Determinants

If you have a square matrix, then you can find its determinant. Interestingly, the determinant of a matrix is just a single number, which can be positive, negative, or zero. It is a single number because it measures the (signed) volume of the box spanned by the rows of the matrix. The determinant is important because a matrix has an inverse exactly when its determinant is nonzero, and because determinants are only defined for square matrices, only square matrices can be invertible. The determinant also shows up in other matrix calculations, such as finding eigenvalues and eigenvectors.

The determinant is easiest to find for a two-by-two matrix, where a short formula does the job. For larger matrices you can use the Laplace (cofactor) expansion, but this method quickly becomes tedious because you have to break the matrix down into many smaller pieces. In addition to the definition, you can also use Gauss-Jordan elimination to find the determinant.
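The two-by-two formula mentioned above is the standard one:

```latex
\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc
```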

Eigenvalues and Eigenvectors in Linear Algebra

An eigenvector of a matrix is a nonzero vector that the matrix merely stretches or shrinks: multiplying the matrix by the eigenvector gives the same result as multiplying the eigenvector by a scalar. That scalar is called the eigenvalue, and the transformed vector points along the same direction as the original eigenvector. You can determine whether a particular scalar is an eigenvalue of a matrix: check whether the determinant of the matrix minus the scalar times the identity matrix is equal to zero.
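In symbols, a nonzero vector v is an eigenvector of A with eigenvalue λ, and λ is found from the characteristic equation:

```latex
A v = \lambda v \quad (v \neq 0), \qquad \det(A - \lambda I) = 0
```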

A diagonal matrix has non-zero entries only along its main diagonal. Therefore, if a matrix is diagonalizable, you can turn it into a diagonal matrix with a change of basis. This is related to eigenvectors because an n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors.
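A short, hypothetical sketch in Python using NumPy’s eig (the matrix is invented, and the order of the returned eigenvalues may vary):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 2.0 and 3.0 for this particular matrix

# With two linearly independent eigenvectors, this 2x2 matrix is diagonalizable:
# P^{-1} A P is a diagonal matrix with the eigenvalues on its diagonal.
P = eigenvectors
print(np.linalg.inv(P) @ A @ P)   # approximately a diagonal matrix with 2 and 3 on the diagonal
```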

Jordan Blocks

An operator in linear algebra is a map from a vector space to itself: it moves vectors around to produce other vectors in the same space. A linear operator is an operator that follows the rules of a linear transformation, so it preserves addition and scalar multiplication. We often want to consider a subspace that an operator maps back into itself; this situation is important enough to have its own name, the invariant subspace.

A Jordan matrix (the matrix form you get when you write an operator with respect to a Jordan basis) has zeros everywhere except on the main diagonal and the diagonal just above it. The entries on the main diagonal are the eigenvalues, and the entries on the diagonal above it are either zeros or ones.
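For example, a single 3 × 3 Jordan block with eigenvalue λ looks like this:

```latex
J = \begin{pmatrix}
\lambda & 1 & 0 \\
0 & \lambda & 1 \\
0 & 0 & \lambda
\end{pmatrix}
```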

Making the Complex Simple

Linear algebra is typically defined as the branch of mathematics dealing with vector spaces, linear equations, and linear transformations. These items are important because linear equations are very easy to solve (once you know how to do it!). Thus, they give efficient approximations for more complex equations. This holds true even for whole systems of equations in more dimensions than you can really picture in your mind, making it vital for some of the most important fields today such as quantum mechanics. And even something as everyday as compressing an image into a .jpeg file format; yup, that’s linear algebra. This Introduction to Linear Algebra book will help you master the basics in the field so you can continue on to great things.
