Hello everyone, I hope you are all surviving your semester, wherever you may be! In this post, I want to show that (non-zero) alternating n-linear forms always exist on any vector space of dimension n. We already know that, if a non-zero alternating n-linear form $\omega$ on $V$ exists, then any other alternating n-linear form must be a scalar multiple of $\omega$. We’ll use this fact, in a somewhat roundabout way, to define the determinant of a linear transformation. Then we can show the connections between determinants and matrices. The proof of the existence statement uses facts about the dual space of $V$, so I want to review some basic facts about this space first.

**Definitions:** If $V$ is a vector space (finite or infinite dimensional) over a scalar field $F$, a linear operator $\varphi : V \to F$ is called a *linear functional* on $V$. The vector space of all linear functionals on $V$ is called the *dual space* of $V$ and is denoted by $V^*$. The dual space of $V^*$ is often called the *double dual* of $V$ and is often denoted $V^{**}$.

The reader should verify that $V^*$ is indeed a vector space under the usual definitions of function addition and scalar multiplication. For the purposes of this post, we will only care about finite dimensional vector spaces, but it is interesting to note that for an infinite dimensional space $V$, the (algebraic) dual $V^*$ is much larger than $V$. For this reason, functional analysts often restrict themselves to working with continuous linear functionals on $V$. In the future I hope to write about some of the differences and similarities between “algebraic” linear algebra and “analytic” linear algebra. Many of the results for infinite dimensional spaces (in functional analysis) are similar to results for finite dimensional linear algebra (algebraic results); however, new proofs need to be introduced to handle the analytic notions of continuity and convergence. This is a beautiful area of mathematics that I hope to cover eventually. For now though, we have the following:

**Proposition:** If $V$ is finite dimensional, say of dimension n, then $V^*$ also has dimension n. In particular, these spaces are isomorphic.

*Proof:* That $V \cong V^*$ follows immediately from the fact that the two spaces have the same finite dimension, so it suffices to prove the first statement. Recall that any linear transformation $T$ on $V$ is determined by its action on a basis of $V$ (to see this, given $v \in V$, write $v = \sum_{i=1}^{n} a_i e_i$, where this expansion is unique. Then by linearity, $T(v) = \sum_{i=1}^{n} a_i T(e_i)$). Given such a basis $e_1, \dots, e_n$, define $e_i^* \in V^*$ by $e_i^*(e_j) = \delta_{ij}$, so $e_i^*(e_j) = 1$ if $i = j$, otherwise $e_i^*(e_j) = 0$. I claim $e_1^*, \dots, e_n^*$ is a basis for $V^*$. To this end, suppose $\sum_{i=1}^{n} a_i e_i^* = 0$. Then for each $j$ we have $0 = \left(\sum_{i=1}^{n} a_i e_i^*\right)(e_j) = a_j$, so in particular every $a_j = 0$. Thus the $e_i^*$ are linearly independent. Moreover, given any $\varphi \in V^*$, write $a_i = \varphi(e_i)$. Then I claim $\varphi = \sum_{i=1}^{n} a_i e_i^*$. To see this, given $v \in V$, write $v = \sum_{j=1}^{n} b_j e_j$. Then $\varphi(v) = \sum_{j=1}^{n} b_j a_j$, and for each $i$ we have $a_i e_i^*(v) = a_i b_i$, so summing over $i$ gives the result. $\blacksquare$
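The dual-basis computation above is easy to check numerically. Here is a small sketch (my own illustration, not part of the proof) in Python with NumPy, identifying functionals on $\mathbb{R}^3$ with row vectors: if the basis vectors are the columns of a matrix $B$, the condition $e_i^*(e_j) = \delta_{ij}$ says exactly that the dual basis consists of the rows of $B^{-1}$.

```python
import numpy as np

# A (non-standard) basis of R^3, stacked as the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Each dual basis vector e_i^* is a row vector; e_i^*(e_j) = delta_ij
# says (rows) @ B = I, so the rows of B^{-1} are the dual basis.
B_inv = np.linalg.inv(B)
assert np.allclose(B_inv @ B, np.eye(3))

# Any functional phi (a row vector) expands as sum_i phi(e_i) e_i^*:
phi = np.array([2.0, -1.0, 3.0])   # phi(v) = 2 v_1 - v_2 + 3 v_3
coeffs = phi @ B                   # a_i = phi(e_i)
reconstructed = coeffs @ B_inv     # sum_i a_i e_i^*
assert np.allclose(reconstructed, phi)
```

The asserts mirror the two halves of the proof: linear independence (via invertibility) and the expansion $\varphi = \sum_i \varphi(e_i)\, e_i^*$.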

This result shows then that $V \cong V^{**}$ for finite dimensional spaces as well. Now, back to our discussion of alternating forms: why do we care about linear functionals? Well, for one thing, a linear functional is (trivially) an alternating 1-linear form on $V$, so this fact shows there are non-zero 1-linear forms on any non-trivial vector space, a fact that we will use in the next proof. Moreover, we will use these notions to build our larger n-linear form on $V$. Dual spaces are very interesting, but they are not the main focus of this post. Let’s now turn to the main theorem of this post, and probably the most important theorem in this series on determinants:

**Theorem:** Let $V$ be an n-dimensional vector space. Then there is a non-zero alternating n-linear form on $V$.

*Proof:* I’ll show, by induction on $k$, that for any $1 \le k \le n$ there is a non-zero alternating $k$-linear form on $V$. If $k = 1$, then the result holds because the dual space of $V$ is non-trivial. Now let $1 \le k < n$ and suppose that $\omega$ is a non-zero alternating $k$-linear form. Thus there are $v_1, \dots, v_k \in V$ with $\omega(v_1, \dots, v_k) \neq 0$. Choose a vector $v_{k+1} \notin \operatorname{span}(v_1, \dots, v_k)$ (such a vector exists because $k < n$). There is a linear functional $\varphi \in V^*$ such that $\varphi(v_i) = 0$ for $1 \le i \le k$ and $\varphi(v_{k+1}) = 1$ (for instance we could choose $\varphi = v_{k+1}^*$, the standard dual basis vector, since $v_1, \dots, v_{k+1}$ must be linearly independent by a proposition in the previous post). Define a function $\Omega$ on $V^{k+1}$ by

$$\Omega(u_1, \dots, u_{k+1}) = \sum_{i=1}^{k+1} (-1)^{i+1}\, \varphi(u_i)\, \omega(u_1, \dots, \widehat{u_i}, \dots, u_{k+1}),$$

where the hat indicates that the entry $u_i$ is omitted; so, for instance, if $k = 2$, then

$$\Omega(u_1, u_2, u_3) = \varphi(u_1)\,\omega(u_2, u_3) - \varphi(u_2)\,\omega(u_1, u_3) + \varphi(u_3)\,\omega(u_1, u_2).$$

The reader may verify that $\Omega$ is indeed a $(k+1)$-linear form on $V$. Moreover, $\Omega$ is non-zero, for we have $\Omega(v_1, \dots, v_{k+1}) = (-1)^{k}\,\omega(v_1, \dots, v_k) \neq 0$ by construction (since $\varphi(v_i) = 0$ for $1 \le i \le k$, only the last term of the sum survives), and so it remains to show $\Omega$ is alternating. To that end, suppose $u_1, \dots, u_{k+1} \in V$ with $u_i = u_j$ for some $i < j$. Now, because $\omega$ is alternating, and $u_i$ and $u_j$ both occur in the argument of $\omega$ in all but two terms of the sum defining $\Omega$, the sum collapses to the two terms in which $u_i$ or $u_j$ is omitted. There are two cases: if $j = i + 1$, then the sum is

$$(-1)^{i+1}\,\varphi(u_i)\,\omega(u_1, \dots, u_{i-1}, u_{i+1}, u_{i+2}, \dots, u_{k+1}) + (-1)^{i+2}\,\varphi(u_{i+1})\,\omega(u_1, \dots, u_{i-1}, u_i, u_{i+2}, \dots, u_{k+1}),$$

which is clearly zero because $u_i = u_{i+1}$, so the two terms have identical arguments but opposite signs. Otherwise, if $j > i + 1$, let $\sigma$ be the cyclic permutation that slides $u_i$ rightward into the slot left by the omitted $u_j$ (a product of $j - i - 1$ adjacent transpositions), so that the two remaining terms look like

$$(-1)^{i+1}\,\varphi(u_i)\,\omega(u_1, \dots, \widehat{u_i}, \dots, u_j, \dots, u_{k+1}) + (-1)^{j+1}\,\varphi(u_j)\,\omega(u_1, \dots, u_i, \dots, \widehat{u_j}, \dots, u_{k+1}),$$

and from this we can see that if we apply the permutation $\sigma$ to the arguments of $\omega$ in the term on the right, then the arguments of $\omega$ in the two terms agree (recall $u_i = u_j$), so since $\omega$ is alternating the two values differ by only the sign $\operatorname{sgn}(\sigma) = (-1)^{j-i-1}$ (recall this fact from a previous post). Combined with the prefactors $(-1)^{i+1}$ and $(-1)^{j+1}$ and with $\varphi(u_i) = \varphi(u_j)$, the two terms cancel, and hence the sum is indeed zero. This shows that $\Omega$ is alternating, which completes the proof. $\blacksquare$
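To make the construction concrete, here is a small numerical sketch (the names `extend`, `omega`, `phi`, and `Omega` are mine, not from the proof): starting from the alternating 2-form $\omega(u, v) = u_1 v_2 - u_2 v_1$ on $\mathbb{R}^3$ and the functional $\varphi(u) = u_3$, the sum from the proof produces a non-zero alternating 3-linear form.

```python
import numpy as np

def extend(omega, phi, k):
    """Build Omega(u_1, ..., u_{k+1}) = sum_i (-1)^{i+1} phi(u_i) *
    omega(u_1, ..., u_i omitted, ..., u_{k+1}) from an alternating
    k-form omega and a linear functional phi."""
    def Omega(*us):
        assert len(us) == k + 1
        # enumerate is 0-based, so (-1)**i matches the 1-based (-1)**(i+1)
        return sum((-1) ** i * phi(u) * omega(*(us[:i] + us[i + 1:]))
                   for i, u in enumerate(us))
    return Omega

omega = lambda u, v: u[0] * v[1] - u[1] * v[0]  # alternating 2-form on R^3
phi = lambda u: u[2]                            # kills e1, e2; phi(e3) = 1

Omega = extend(omega, phi, 2)

e1, e2, e3 = np.eye(3)
assert Omega(e1, e2, e3) == 1.0                     # non-zero, as promised
u, v, w = np.random.randn(3, 3)
assert np.isclose(Omega(u, v, w), -Omega(v, u, w))  # a swap flips the sign
assert np.isclose(Omega(u, v, u), 0.0)              # repeated argument: zero
```

The three asserts check exactly the three claims of the proof: $\Omega(v_1, v_2, v_3) \neq 0$, and $\Omega$ is alternating.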

So, that was messy! What did we do? Well, we started by finding a non-zero alternating $k$-linear form, and associated with it the subspace $\operatorname{span}(v_1, \dots, v_k)$ of $V$. Then we picked something in the annihilator of that subspace (the functional $\varphi$, together with a corresponding vector $v_{k+1}$ outside the subspace) and built out of it an alternating $(k+1)$-linear form. We can be sure that it is non-zero because we set up our subspaces of $V$ so that this function is non-zero when the vectors are chosen correctly. Moreover, to evaluate the function, we hold a lot of entries fixed and permute two different entries, and these permutation arguments, combined with the alternating signs in the construction of $\Omega$, ensure that this new form is alternating. It’s not a very insightful proof, but you can certainly see the connections to determinants of matrices, where we fix one row (which corresponds to $\varphi$) and vary the remaining arguments (which corresponds to taking determinants of the sub-matrices in a cofactor expansion). Now, at long last, we are ready to define the determinant of a linear transformation. After that, in the following post, I will connect all these notions to matrices and prove that the matrix determinant is in fact an alternating n-linear form when thought of as a function of the rows.

So how do we construct the determinant? Recall that any linear transformation on a 1 dimensional space must be multiplication by some scalar $c$. To see this, note that in a one dimensional space every vector can be written $v = a u$ for some fixed non-zero vector $u$ and scalar $a$; thus $T(u) = c u$ for some scalar $c$, and for any vector $v = a u$ we have $T(v) = a\,T(u) = a c u = c v$. Now let $T$ be a linear operator on $V$. Define a map $T^*$ on the space of alternating n-linear forms on $V$ by $(T^*\omega)(v_1, \dots, v_n) = \omega(T v_1, \dots, T v_n)$. Then the reader may verify that $T^*$ is a linear operator. Since the space of alternating n-linear forms is one dimensional, there is a unique scalar $c$ so that $T^*\omega = c\,\omega$ for every alternating n-linear form $\omega$ on $V$. This unique scalar is called the *determinant* of the linear transformation $T$ and is denoted by $\det T$. For fun, let’s compute a few values of $\det$. Suppose $T(v) = \lambda v$ for all $v \in V$, i.e. $T$ is multiplication by some fixed scalar $\lambda$. Then

$$(T^*\omega)(v_1, \dots, v_n) = \omega(\lambda v_1, \dots, \lambda v_n) = \lambda^n\, \omega(v_1, \dots, v_n),$$

and so we have $\det T = \lambda^n$. In particular then $\det(\operatorname{id}_V) = 1$ and $\det(0) = 0$. Finally, if $S, T$ are both linear transformations, let $U = S \circ T$ (the composition) and note that

$$(U^*\omega)(v_1, \dots, v_n) = \omega(S T v_1, \dots, S T v_n) = (S^*\omega)(T v_1, \dots, T v_n) = \big(T^*(S^*\omega)\big)(v_1, \dots, v_n),$$

and thus $U^* = T^* \circ S^*$. But then for any alternating n-linear form $\omega$ on $V$ we have

$$\det(U)\,\omega = U^*\omega = T^*(S^*\omega) = T^*\big(\det(S)\,\omega\big) = \det(S)\,T^*\omega = \det(S)\det(T)\,\omega,$$

from which it follows that $\det(S \circ T) = \det(S)\det(T)$, since scalars commute. Okay! That was a lot for one post! Now that we have defined the determinant, all we have left to do is connect it to matrices, and then I’ll show you some cool theorems involving determinants and volumes of sets in $\mathbb{R}^n$. For now though, you’ll have to be content with just this. Enjoy!
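As a sanity check on these determinant facts, here is a quick numerical sketch (my own, with `np.linalg.det` standing in for a chosen non-zero alternating n-form $\omega$, and `det_via_forms` a name I made up): we read off $\det T$ by comparing $T^*\omega$ with $\omega$ on the standard basis.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

# omega = the standard volume form on R^n, our chosen non-zero
# alternating n-linear form (unique up to a scalar).
omega = lambda *vs: np.linalg.det(np.column_stack(vs))

def det_via_forms(T):
    """det T as the scalar with T^* omega = (det T) omega, read off by
    evaluating both sides on the standard basis e_1, ..., e_n."""
    basis = np.eye(n)
    return omega(*[T @ e for e in basis]) / omega(*basis)

S = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))
lam = 2.5

assert np.isclose(det_via_forms(lam * np.eye(n)), lam ** n)  # det = lambda^n
assert np.isclose(det_via_forms(np.eye(n)), 1.0)             # identity map
assert np.isclose(det_via_forms(np.zeros((n, n))), 0.0)      # zero map
# multiplicativity: det(S T) = det(S) det(T)
assert np.isclose(det_via_forms(S @ T), det_via_forms(S) * det_via_forms(T))
```

Of course this is circular as mathematics (NumPy's `det` *is* a determinant), but it illustrates that the scalar $c$ in $T^*\omega = c\,\omega$ behaves exactly as derived above.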

If possible, in your next post will you also talk about why the product of the eigenvalues equals the determinant? And whether there is a way to think about that with your geometric interpretation.