Hello everyone! Welcome to, and thank you for reading, my first non-test post, i.e. my first post about some actual mathematics! Perhaps because my teaching assignment this semester is a homework help session for introductory linear algebra courses (and perhaps because this is a fairly easy first topic for me to write on), I have decided to write about a very useful tool in matrix theory: the determinant. In my experience, introductory courses in linear algebra tend to skip over many of the details involved with defining the determinant and present it as a black box for checking if a matrix is invertible. Hopefully I can shed some light on this box for those who have never seen it before, and perhaps provide a nice refresher on basic linear algebra for those who have.

As is usually the case in mathematics, one needs to make many preliminary definitions and prove some initial results before developing more complex ideas, and this is certainly the case for the determinant. Since I do not intend to develop the theory of vector spaces from the ground up, I’ll assume the reader is familiar with the definition of a vector space, as well as the ideas of bases, dimension, and linear independence.

I want to write down one definition that is most likely familiar to the reader, but just in case it is not, here is the formal idea of a product of vector spaces. Let $V$ and $W$ be two vector spaces (over the same field $F$). Then define the *(external) direct sum* of $V$ and $W$ to be the vector space

$$V \times W = \{ (v, w) : v \in V, w \in W \},$$

together with the operations

$$(v_1, w_1) +_{V \times W} (v_2, w_2) = (v_1 +_V v_2, w_1 +_W w_2)$$ and $$c \cdot_{V \times W} (v, w) = (c \cdot_V v, c \cdot_W w),$$

where $c \in F$, $v, v_1, v_2 \in V$, and $w, w_1, w_2 \in W$. It is easy to see that $V \times W$ is a vector space over $F$. Notationally, we usually drop the subscripts on the operations (they are clear from context), and often write $V \oplus W$ for this space. There is much more that could be said about the direct sum, but for our purposes, we will really only need to know that this space is in fact a vector space with the above operations.
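To make the construction concrete, here is a minimal sketch in Python, taking $V = W = \mathbb{R}$ for simplicity so that an element of the direct sum is just a pair of numbers; the helper names `ds_add` and `ds_scale` are my own, not notation from the post.

```python
# Elements of the direct sum of V and W modeled as Python pairs, with both
# operations defined componentwise. For simplicity we take V = W = R (the
# real numbers), so each component is just a float.

def ds_add(p, q):
    """(v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)."""
    (v1, w1), (v2, w2) = p, q
    return (v1 + v2, w1 + w2)

def ds_scale(c, p):
    """c * (v, w) = (c * v, c * w)."""
    v, w = p
    return (c * v, c * w)

print(ds_add((1.0, 2.0), (3.0, 4.0)))   # (4.0, 6.0)
print(ds_scale(2.0, (1.0, 2.0)))        # (2.0, 4.0)
```

With $V = W = \mathbb{R}$ this is of course just $\mathbb{R}^2$ in disguise, which is exactly the point of the construction.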

Okay, now that those preliminaries are out of the way, let’s get to a definition that may not be as familiar. Let $V_1, \dots, V_k$ be vector spaces over the same field $F$. We say a function $f \colon V_1 \times \cdots \times V_k \to F$ is a *k-linear form* if for all $i \in \{1, \dots, k\}$ the identity

$$f(v_1, \dots, a v_i + b v_i', \dots, v_k) = a \, f(v_1, \dots, v_i, \dots, v_k) + b \, f(v_1, \dots, v_i', \dots, v_k)$$

holds for all $a, b \in F$ and all $v_j \in V_j$ (so $v_i$ and $v_i'$ are arbitrary vectors in $V_i$). In other words, if we hold all but one of the variables constant, the function is linear in the remaining slot. For example, the dot product on $\mathbb{R}^n$ is a 2-linear (bilinear) form on $\mathbb{R}^n \times \mathbb{R}^n$. Like products of vector spaces, there is much that could be said about the general theory of k-linear forms, but we will concern ourselves with a special type. When all the factors are the same space $V$, a k-linear form $f$ on $V \times \cdots \times V$ is an *alternating k-linear form* if $f(v_1, \dots, v_k) = 0$ whenever $v_i = v_j$ for some $i \neq j$.
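As a quick numerical sanity check (not a proof!) of the bilinearity of the dot product, one can test the defining identity on a few concrete vectors in $\mathbb{R}^3$; the vectors and helper functions below are my own illustrative choices.

```python
# Checking linearity of the dot product in its first slot on concrete
# vectors; by symmetry of the dot product the second slot works the same way.

def dot(u, v):
    """Standard dot product on R^n."""
    return sum(x * y for x, y in zip(u, v))

def scale(c, v):
    return tuple(c * x for x in v)

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

u, up, w = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), (2.0, -1.0, 5.0)
a, b = 3.0, -2.0

lhs = dot(add(scale(a, u), scale(b, up)), w)   # f(a*u + b*u', w)
rhs = a * dot(u, w) + b * dot(up, w)           # a*f(u, w) + b*f(u', w)
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)   # both 42.0
```

Note the dot product is bilinear but not alternating: $u \cdot u = \lVert u \rVert^2 \neq 0$ for $u \neq 0$, so it is exactly the kind of form the alternating condition rules out.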

At first glance, this may seem to be a strange definition to make. Why should we concern ourselves with such functions when we are trying to compute determinants of matrices? Well, remember that the determinant of a matrix is zero if any two of its rows are the same. While I will actually approach determinants from the perspective of linear transformations (and then apply them to matrices), this gives a little insight into where this discussion is going. To close this post, I’ll prove one result about alternating k-linear forms to give you the flavor of what is to come in the next few posts.

**Proposition:** If $f$ is an alternating n-linear form on $V \times \cdots \times V$ ($n$ copies), where $V$ is an n-dimensional vector space and $v_1, \dots, v_n \in V$ are linearly dependent vectors, then $f(v_1, \dots, v_n) = 0$.

*Remark:* In the case above, where $f$ is defined on the product of n copies of $V$, an n-dimensional vector space, we often say that $f$ is an alternating n-linear form on $V$ for sake of notation. This property of an alternating linear form certainly holds for determinants of matrices, since we know $\det(A) = 0$ if and only if the rows of $A$ are linearly dependent.

*Proof:* If any of the $v_i$ are zero, the result is trivial, so suppose they are all non-zero. Because the set $\{v_1, \dots, v_n\}$ is linearly dependent, there is some index $i$ so that $v_i$ is in the span of the remaining $n-1$ vectors. Without loss of generality, we may assume $i = 1$ (by simply re-indexing the finite list of vectors), so say $v_1 = \sum_{i=2}^{n} c_i v_i$. But then by holding the second through nth slots constant, by the linearity of $f$ in the first slot we have

$$f(v_1, v_2, \dots, v_n) = f\left( \sum_{i=2}^{n} c_i v_i, \, v_2, \dots, v_n \right) = \sum_{i=2}^{n} c_i \, f(v_i, v_2, \dots, v_n).$$

But because $f$ is alternating, each term in the sum on the right is zero (in $f(v_i, v_2, \dots, v_n)$ the vector $v_i$ appears twice), so the whole sum is zero, which completes the proof.
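The proposition is easy to check numerically for the familiar determinant, viewed as a function of the rows of a matrix; the matrix below is my own example, with its third row a linear combination of the first two.

```python
# Numerical illustration of the proposition for the determinant as an
# alternating n-linear form in the rows of a matrix. The third row equals
# 2*row0 + 3*row1, so the rows are linearly dependent.
import numpy as np

rows = np.array([
    [1.0, 2.0, 0.0],
    [0.0, 1.0, 3.0],
    [2.0, 7.0, 9.0],   # = 2*[1,2,0] + 3*[0,1,3]
])
print(np.linalg.det(rows))   # 0 up to floating-point error
```

Of course a numerical check on one matrix proves nothing; the point of the proposition is that *every* alternating n-linear form must vanish here, whatever its normalization.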

This proof, in light of the definition of an alternating n-linear form, was quite simple. Indeed the definition of an alternating n-linear form is so strong that it’s not even clear such a function exists. We will see that, in fact, there is such a function for every $n$, and it is unique up to multiplication by a constant factor. It will take some work, but we will also show that the usual matrix determinant is such an alternating n-linear form and hence is unique up to re-normalization. Admittedly, this post was somewhat dry, but the definitions we made here are necessary for a proper treatment of determinants. Bear with me and I assure you the next few posts will be more interesting!

Sorry for commenting almost a month after this post. I thought the definition of an alternating multilinear form was a multilinear form whose sign changes when you transpose two of its arguments. The definition you gave follows from this one: if two arguments are equal, then transposing them changes the sign but not the value, so the form must evaluate to zero. I don’t imagine that the definition you provided implies the other definition, which might be preferable since it reflects the property exhibited by the determinant when you switch two rows, two columns, etc.

So, since you ask, perhaps I should prove this then; the fact follows from what I wrote down in post two about permutations applied to n-linear forms. “Switching two rows” is essentially applying a transposition $\sigma$ to the arguments of the n-linear form $f$, which as I note in the next post yields $\operatorname{sgn}(\sigma) f = -f$. I avoided the proof of this fact in the post because it’s mostly just a computation that doesn’t give much insight. Perhaps a sketch is in order: if you have an alternating n-linear form $f$ and vectors $v_1, \dots, v_n$, let’s say you want to switch $v_1$ and $v_2$ (for notation of course! The proof easily generalizes). Then

$$0 = f(v_1 + v_2, v_1 + v_2, v_3, \dots, v_n) = f(v_1, v_1, v_3, \dots, v_n) + f(v_1, v_2, v_3, \dots, v_n) + f(v_2, v_1, v_3, \dots, v_n) + f(v_2, v_2, v_3, \dots, v_n),$$

where I used linearity and the alternating property of $f$. But again since $f$ is alternating, the terms on the right with a repeated vector are zero, so we have after rearranging $f(v_2, v_1, v_3, \dots, v_n) = -f(v_1, v_2, v_3, \dots, v_n)$. Of course you can then generalize this to any product of transpositions (i.e. any permutation, with the sign determined by its parity) by induction.
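The sign flip under a transposition is likewise easy to check numerically for the determinant; the $2 \times 2$ matrix below is my own example.

```python
# Swapping two rows flips the sign of the determinant, matching the
# transposition argument sketched in the comment above.
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
B = A[[1, 0], :]   # A with its two rows swapped

print(np.linalg.det(A))   # -5.0 (up to floating-point error)
print(np.linalg.det(B))   #  5.0 (up to floating-point error)
```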

Pingback: Basic AC, Part II: Choice in Algebra | whateversuitsyourboat