Last post I defined and talked a little bit about alternating n-linear forms on a vector space $V$ (or, more precisely, on the n-fold product of $V$ with itself). It’s not hard to see that the n-linear forms on a vector space themselves form a vector space (under the usual operations of addition and scalar multiplication of functions), and one can check that the space of alternating forms is indeed a subspace of this bigger space. While we do not quite yet know that this subspace is not the trivial subspace, we can prove that it has dimension at most 1. First some notation:
Definition: Given an alternating n-linear form $w$ on $V$ and a permutation $\pi$ of $\{1, \dots, n\}$, define the alternating n-linear form $\pi w$ on $V$ by $(\pi w)(x_1, \dots, x_n) = w(x_{\pi(1)}, \dots, x_{\pi(n)})$.
Thus $\pi w$ is the n-linear form obtained by permuting the input vectors of $w$ according to $\pi$. This is a somewhat special property of n-linear forms, and it will turn out to be useful for a few proofs. We call an n-linear form $w$ skew-symmetric if $\pi w = -w$ for every odd permutation $\pi$. I leave it to the reader to show that any alternating n-linear form is skew-symmetric and that $w$ is skew-symmetric if and only if $\pi w = (\operatorname{sgn} \pi)\, w$ for all permutations $\pi$. In essence, just show the statements hold for transpositions and then write an arbitrary permutation as a product of transpositions. With these facts in hand, we can prove the following proposition:
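As a quick numeric sanity check (not part of the proof), we can verify the identity $\pi w = (\operatorname{sgn}\pi)\, w$ for a concrete alternating 3-linear form: the 3×3 determinant of three row vectors, which is a hypothetical stand-in for the forms we will construct later. The helper names here (`sgn`, `w`) are my own choices for illustration.

```python
from itertools import permutations

def sgn(perm):
    """Sign of a permutation given as a tuple of 0-based indices."""
    sign, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not seen[i]:
            # trace the cycle through i; a cycle of length L contributes (-1)**(L-1)
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            if length % 2 == 0:
                sign = -sign
    return sign

def w(x, y, z):
    """An alternating 3-linear form on R^3: the 3x3 determinant of the rows x, y, z."""
    return (x[0] * (y[1] * z[2] - y[2] * z[1])
            - x[1] * (y[0] * z[2] - y[2] * z[0])
            + x[2] * (y[0] * z[1] - y[1] * z[0]))

vectors = [(1, 2, 3), (0, 1, 4), (5, 6, 0)]
base = w(*vectors)
for pi in permutations(range(3)):
    permuted = w(*(vectors[pi[i]] for i in range(3)))  # (pi w)(x1, x2, x3)
    assert permuted == sgn(pi) * base                  # pi w = (sgn pi) w
```

All six permutations check out: odd permutations flip the sign, even ones leave it alone.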
Proposition: If $w$ is a non-zero alternating n-linear form on an n-dimensional vector space $V$ and $\{x_1, \dots, x_n\}$ is a set of $n$ linearly independent vectors in $V$, then $w(x_1, \dots, x_n) \neq 0$.
Proof: The vectors $x_1, \dots, x_n$ form a basis for $V$, so choose arbitrary vectors $y_1, \dots, y_n \in V$. Write each $y_i$ in its unique expansion with respect to the basis and use the multilinearity of $w$ to write $w(y_1, \dots, y_n)$ as a linear combination of terms of the form $w(x_{j_1}, \dots, x_{j_n})$, where each $j_i \in \{1, \dots, n\}$. If it happens that $j_i = j_k$ for some $i \neq k$, then since $w$ is alternating, that term is zero; otherwise $(j_1, \dots, j_n) = (\pi(1), \dots, \pi(n))$ for some permutation $\pi$, so by skew-symmetry the term is $\pm w(x_1, \dots, x_n)$. But if $w(x_1, \dots, x_n) = 0$, it follows that $w(y_1, \dots, y_n) = 0$, and since $y_1, \dots, y_n$ is an arbitrary collection of vectors in $V$, this implies $w$ is the zero form, which we supposed it was not.
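The expansion in the proof can be watched in action for $n = 2$, with the 2×2 determinant playing the role of $w$. Below, $y_1, y_2$ are expanded in a (hypothetical, arbitrarily chosen) basis $x_1, x_2$; the terms with a repeated basis vector vanish, and every surviving term is a multiple of $w(x_1, x_2)$:

```python
def w(u, v):
    """An alternating 2-linear form on R^2: the 2x2 determinant of the rows u, v."""
    return u[0] * v[1] - u[1] * v[0]

# An arbitrarily chosen basis for R^2 (illustrative, any basis works).
x1, x2 = (2.0, 1.0), (1.0, 1.0)
basis = [x1, x2]

# Coordinates of two arbitrary vectors y1, y2 with respect to that basis.
a = [[3.0, -1.0], [2.0, 5.0]]
y1 = tuple(a[0][0] * e1 + a[0][1] * e2 for e1, e2 in zip(x1, x2))
y2 = tuple(a[1][0] * e1 + a[1][1] * e2 for e1, e2 in zip(x1, x2))

# Expand w(y1, y2) by multilinearity over all index choices (j1, j2);
# terms with j1 == j2 are zero because w is alternating.
expansion = sum(a[0][j1] * a[1][j2] * w(basis[j1], basis[j2])
                for j1 in range(2) for j2 in range(2))
assert abs(w(y1, y2) - expansion) < 1e-12

# The whole value collapses to a scalar multiple of w(x1, x2):
coeff = a[0][0] * a[1][1] - a[0][1] * a[1][0]
assert abs(w(y1, y2) - coeff * w(x1, x2)) < 1e-12
```

So if $w(x_1, x_2)$ were zero, every value of $w$ would be zero, exactly as the proof argues.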
So, combining this fact with the last proof I did in the last post, we have essentially proven one of the most useful properties of determinants, i.e. the determinant of an $n \times n$ matrix is non-zero if and only if the rows of the matrix are linearly independent. The trick in this proof was splitting up an arbitrary vector into its decomposition with respect to a fixed basis and then further realizing that the values of the form on permutations of this basis are really just positives and negatives of each other. This fact is closely related to the familiar fact about determinants of matrices that interchanging two rows of a matrix switches the sign of the determinant. For now though, we can get even more mileage out of this technique of writing the value of an n-linear form as a multiple of its value on a basis. In fact, when cast in this light, my statement above that any two alternating n-linear forms must be dependent seems almost trivial! Any alternating n-linear form is determined by a single scalar (its value on a basis), so in some sense two such forms are determined by two scalars. But since any two scalars are linearly dependent (the equation $a_1 c_1 + a_2 c_2 = 0$ always has non-trivial solutions $a_1, a_2$ in a field), we can extend this idea to show that the forms themselves are also dependent. That’s the key idea of the proof, and really all that remains is to write it down.
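Both determinant facts mentioned above can be previewed numerically (a sketch only; we have not yet constructed the determinant in this series, so the 3×3 formula below is borrowed as a stand-in):

```python
def det3(rows):
    """3x3 determinant, an alternating 3-linear form in the rows."""
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

dependent = [(1, 2, 3), (2, 4, 6), (0, 1, 1)]    # row 2 = 2 * row 1
independent = [(1, 2, 3), (0, 1, 4), (5, 6, 0)]
assert det3(dependent) == 0       # dependent rows force a zero determinant
assert det3(independent) != 0     # independent rows give a non-zero value

# Interchanging two rows switches the sign of the determinant:
swapped = [independent[1], independent[0], independent[2]]
assert det3(swapped) == -det3(independent)
```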
Proposition: If $w_1$ and $w_2$ are alternating n-linear forms on an n-dimensional vector space $V$, then $w_1$ and $w_2$ are linearly dependent.
Proof: If $w_1 = 0$ or $w_2 = 0$ the statement is trivial, so suppose they are not. I aim to show that there are scalars $a_1, a_2$, not both 0, so that $a_1 w_1 + a_2 w_2 = 0$, i.e. that this combination is the zero function on $V^n$. To this end pick $y_1, \dots, y_n \in V$ and expand them in terms of a basis $x_1, \dots, x_n$ for $V$. As above, write $w_1(y_1, \dots, y_n) = c\, w_1(x_1, \dots, x_n)$ and $w_2(y_1, \dots, y_n) = c\, w_2(x_1, \dots, x_n)$ (the two expansions must have the same scalar $c$, signs included, because they come from the same input vectors and the same basis). Now, because $w_1(x_1, \dots, x_n)$ and $w_2(x_1, \dots, x_n)$ are scalars (non-zero by the previous proposition), they are linearly dependent, so there are scalars $a_1, a_2$, at least one of which is non-zero, so that $a_1 w_1(x_1, \dots, x_n) + a_2 w_2(x_1, \dots, x_n) = 0$. But then combining these facts, we infer that $a_1 w_1(y_1, \dots, y_n) + a_2 w_2(y_1, \dots, y_n) = c\left(a_1 w_1(x_1, \dots, x_n) + a_2 w_2(x_1, \dots, x_n)\right) = 0$, and since $y_1, \dots, y_n$ were arbitrary, $a_1 w_1 + a_2 w_2$ is the zero form with at least one of $a_1, a_2$ non-zero.
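The proof even tells us which scalars to use: $a_1 = w_2(x_1, \dots, x_n)$ and $a_2 = -w_1(x_1, \dots, x_n)$. A small check for $n = 2$, where `w1` is the 2×2 determinant and `w2` is a hypothetical second alternating form (here just a scaled copy, since on a 2-dimensional space every alternating 2-linear form is such a multiple):

```python
def w1(u, v):
    """The 2x2 determinant of the rows u, v: an alternating 2-linear form."""
    return u[0] * v[1] - u[1] * v[0]

def w2(u, v):
    """Another alternating 2-linear form (a scaled copy, for illustration)."""
    return 3.0 * (u[0] * v[1] - u[1] * v[0])

x1, x2 = (1.0, 0.0), (0.0, 1.0)  # a basis for R^2

# The dependence witnesses from the proof: a1*w1 + a2*w2 kills the basis...
a1, a2 = w2(x1, x2), -w1(x1, x2)
assert (a1, a2) != (0.0, 0.0)

# ...and therefore kills every input, since all values are multiples of the basis value.
for u, v in [((3.0, -1.0), (2.0, 5.0)), ((7.0, 7.0), (0.0, 2.0))]:
    assert a1 * w1(u, v) + a2 * w2(u, v) == 0.0
```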
Okay, I think this is enough for tonight. I anticipate two more posts on determinants: one to prove the existence of alternating n-linear forms on n-dimensional vector spaces (it’s kind of a pain) and to define the determinant, and a second to make the connection between determinants of linear transformations and determinants of matrices (we are taking the linear transformation approach, as opposed to the traditional “matrix first” approach many courses usually follow). I should also mention that I am following Halmos, P. Finite-Dimensional Vector Spaces, Springer, 1987 fairly closely for this material, just to get that out there. Anyways, until next time, enjoy!