Determinants – Part 2

Last post I defined and talked a little bit about alternating n-linear forms on a vector space V (or, more precisely, on the n-fold product of V with itself). It’s not hard to see that the space of all n-linear forms on a vector space is itself a vector space (under the usual pointwise operations of addition and scalar multiplication of functions), and one can check that the space of alternating forms is indeed a subspace of this bigger space. While we do not quite yet know that this subspace is not the trivial subspace, we can prove that it has dimension at most 1. First, some notation:

Definition: Given an alternating n-linear form \omega on V and a permutation \sigma\in S_n, define the alternating n-linear form \sigma\omega on V by \sigma\omega(v_1,\dots,v_n)=\omega(v_{\sigma(1)},\dots,v_{\sigma(n)}).

Thus \sigma\omega is the n-linear form obtained by permuting the input vectors before applying \omega. The ability to permute inputs like this is a somewhat special property of n-linear forms, and it will turn out to be useful for a few proofs. We call an n-linear form \omega skew-symmetric if \pi\omega=-\omega for every odd permutation \pi. I leave it to the reader to show that any alternating n-linear form is skew-symmetric, and that \omega is skew-symmetric if and only if \pi\omega=(\operatorname{sgn}\pi)\omega for all permutations \pi. In essence, just show the statements hold for transpositions (i,j) and then write an arbitrary permutation \pi as a product of transpositions.
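For concreteness, here is the transposition step of that exercise spelled out (just the hint above, written down): suppose \omega is alternating and \tau=(i,j) is a transposition. Placing v_i+v_j in both the i-th and j-th slots and expanding by multilinearity gives

0=\omega(\dots,v_i+v_j,\dots,v_i+v_j,\dots)=\omega(\dots,v_i,\dots,v_j,\dots)+\omega(\dots,v_j,\dots,v_i,\dots),

since the two terms with a repeated input vanish because \omega is alternating. Hence \tau\omega=-\omega. With these facts in hand, we can prove the following proposition: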

Proposition: If \omega is a non-zero alternating n-linear form on an n-dimensional vector space V and \{v_i\}_{i=1}^n is a set of linearly independent vectors in V, then \omega(v_1,\dots,v_n)\neq0.
Proof: Since \dim V=n, the vectors \{v_i\}_{i=1}^n form a basis for V. Choose arbitrary vectors \{y_i\}_{i=1}^n\subset V, write each y_k=\sum_{i=1}^nc_{ki}v_i in its unique expansion with respect to the v_i basis, and use the multilinearity of \omega to write \omega(y_1,\dots,y_n) as a linear combination of terms of the form \omega(x_1,\dots,x_n), where each x_j=v_k for some k. If it happens that x_j=x_m for some j\neq m, then since \omega is alternating, that term is zero; otherwise \omega(x_1,\dots,x_n)=\pi\omega(v_1,\dots,v_n)=\pm\omega(v_1,\dots,v_n) for some permutation \pi. So if \omega(v_1,\dots,v_n)=0, it follows that \omega(y_1,\dots,y_n)=0, and since \{y_i\}_{i=1}^n is an arbitrary collection of vectors in V, this would make \omega the zero form, which we supposed it was not.
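We have not yet constructed a non-zero alternating form, but taking on faith for the moment that the determinant of the matrix of coordinates is one (this is exactly what a later post will establish), here is a quick NumPy sanity check of the proposition; the specific vectors are just illustrative choices:

```python
import numpy as np

# The determinant of the matrix whose rows are the input vectors is
# an alternating 3-linear form on R^3 (to be proven in a later post).
def omega(v1, v2, v3):
    return np.linalg.det(np.array([v1, v2, v3], dtype=float))

# On linearly independent vectors, omega is non-zero:
print(omega([1, 0, 0], [1, 1, 0], [1, 1, 1]))   # ≈ 1.0

# On linearly dependent vectors (the third is the sum of the first
# two), omega vanishes:
print(omega([1, 0, 0], [1, 1, 0], [2, 1, 0]))   # ≈ 0.0
```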

So, combining this fact with the last proof from the previous post, we have essentially proven one of the most useful properties of determinants: the determinant of an n\times n matrix is non-zero if and only if the rows of the matrix are linearly independent. The trick in this proof was splitting up an arbitrary vector y into its decomposition with respect to a fixed basis, and then realizing that the values of \omega on permutations of this basis are really just positives and negatives of each other. This fact is closely related to the familiar rule that interchanging two rows of a matrix flips the sign of the determinant. For now, though, we can get even more mileage out of this technique of writing an n-linear form as a multiple of its value on a basis. In fact, when cast in this light, my statement above that any two alternating n-linear forms must be dependent seems almost trivial! Any alternating n-linear form is determined by its value on a basis, so in some sense two such forms are determined by two scalars. But since any two scalars are linearly dependent (the equation \alpha x+\beta y=0 always has non-trivial solutions in a field), we can extend this idea to show that the forms themselves are also dependent. That’s the key idea of the proof, and really all that remains is to write it down.
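Before writing that down, here is a quick numerical illustration of the row-interchange fact just mentioned, again with the determinant standing in as our alternating form (the matrix is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = A[[1, 0], :]   # interchange the two rows

print(np.linalg.det(A))   # ≈ 1.0
print(np.linalg.det(B))   # ≈ -1.0: a row swap flips the sign
```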

Proposition: If \omega_1 and \omega_2 are alternating n-linear forms on V, then \omega_1 and \omega_2 are linearly dependent.
Proof: If \omega_1=0 or \omega_2=0 the statement is trivial, so suppose they are not. I aim to show that there are scalars \alpha,\beta, not both 0, so that \alpha\omega_1+\beta\omega_2=0, i.e. so that this combination is the zero function on V. To this end, pick \{y_i\}_{i=1}^n\subset V and expand them in terms of a basis \{v_i\}_{i=1}^n for V. As above, write \omega_1(y_1,\dots,y_n) and \omega_2(y_1,\dots,y_n) as linear combinations of terms \pm\omega_1(v_1,\dots,v_n) and \pm\omega_2(v_1,\dots,v_n); the coefficients and signs in the two expansions are identical, because they depend only on the y_i and not on the form. Now, because \omega_1(v_1,\dots,v_n) and \omega_2(v_1,\dots,v_n) are scalars (non-zero by the previous proposition), they are linearly dependent, so there are scalars \alpha,\beta, at least one of which is non-zero, so that \alpha\omega_1(v_1,\dots,v_n)+\beta\omega_2(v_1,\dots,v_n)=0. Combining these facts, we infer that \alpha\omega_1(y_1,\dots,y_n)+\beta\omega_2(y_1,\dots,y_n)=0; since the y_i were arbitrary, \alpha\omega_1+\beta\omega_2=0 with at least one of \alpha,\beta non-zero.
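To see the recipe of this proof in action numerically, take \omega_1 to be the determinant form from before and \omega_2=3\omega_1 (a hypothetical second alternating form, chosen only for illustration). The scalars \alpha,\beta come straight from the values on a basis:

```python
import numpy as np

def omega1(*rows):
    return np.linalg.det(np.array(rows))

def omega2(*rows):
    # A hypothetical second alternating form, for illustration only.
    return 3 * omega1(*rows)

# Values on the standard basis of R^3 give the scalars from the proof:
basis = list(np.eye(3))
alpha = omega2(*basis)    # 3.0
beta = -omega1(*basis)    # -1.0

# alpha*omega1 + beta*omega2 vanishes on arbitrary inputs:
rng = np.random.default_rng(0)
y1, y2, y3 = rng.standard_normal((3, 3))
print(alpha * omega1(y1, y2, y3) + beta * omega2(y1, y2, y3))   # ≈ 0.0
```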

Okay, I think this is enough for tonight. I anticipate two more posts on determinants: one to prove the existence of alternating n-linear forms on n-dimensional vector spaces (it’s kind of a pain) and to define the determinant, and a second to make the connection between determinants of linear transformations and determinants of matrices (we are taking the linear-transformation approach, as opposed to the traditional “matrix first” approach many courses follow). I should also mention, just to get it out there, that I am following Halmos, P. R., Finite-Dimensional Vector Spaces, Springer, 1987 fairly closely for this material. Anyways, until next time, enjoy!



9 Responses to Determinants – Part 2

  1. JCummings says:

    If you take requests:
    Maybe it is just one post’s worth and so you’ll cover it in your fourth post, but I’d definitely like to read whatever intuition you have about what determinants measure (e.g. in terms of the linear transformation, what does a large positive value mean compared to a small positive, negative, etc.), different ways in which they are useful, and how they relate to eigenvalues, eigenvectors, canonical forms, etc.

    • soffer801 says:

      I totally agree. With this definition, is there a way to approach the Cayley-Hamilton Theorem that is more or less obvious? The only proof I’ve ever seen does it for diagonalizable matrices, and then shows that these are dense. Is there a more algebraic proof?

  2. Ryan says:

    The way I’ve been taught to think of a determinant is in terms of volume. In some sense it tells you how much a linear transformation scales the volume (so, the measure) of a set of vectors when you apply the transformation. E.g. a rotation has determinant 1, and if you were to rotate an object in (for example) three-space you certainly would not change its volume (at least under Lebesgue measure =)), whereas the transformation that multiplies every basis vector by two has determinant 2^n (where n=dim(V)); so if, for example, n=2 and you were looking at the circle, you would multiply its radius by 2, which of course increases the 2-dimensional volume (area) by a factor of 4. In fact, this is kind of the idea behind the change-of-variables formula (the multivariable version of “u-substitution”), where we actually multiply a function by the determinant of a Jacobian matrix. I’ll try and write something about this in the fourth post, and I might get to say something about Radon-Nikodym derivatives while I’m at it! As far as Cayley-Hamilton, I can’t remember the proof we saw in class two years ago; I’ll look at my notes, but I don’t think it used density of diagonalizable matrices. Otherwise, I’ll see what Halmos does. Should be fun to write about though!
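    (A quick NumPy check of the two examples above; the rotation angle and dimension are arbitrary choices:)

    ```python
    import numpy as np

    theta = 0.7  # an arbitrary rotation angle
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    print(np.linalg.det(rotation))   # ≈ 1.0: rotations preserve volume

    n = 2
    doubling = 2 * np.eye(n)         # multiplies every basis vector by two
    print(np.linalg.det(doubling))   # 4.0 = 2^n
    ```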

    • JCummings says:

      In this volume way of thinking about it, is it just the magnitude of the determinant that matters? Does the sign just tell you something about how the object was transformed? How does a determinant of zero fit into this? Is it always zero if one of the dimensions of the object whose volume we’re measuring collapses to zero? Or is the better way to think about it the actual volume of the transformed object, since it will usually not end up with zero volume, correct?

      • Ryan says:

        Haha Jay, you’re ruining all the stuff for me to write about in post 4! Without giving away too much, the volume picture basically applies to the magnitudes of non-zero determinants. The zero ones correspond to non-invertible transformations. The rules for applying them to integrals are not quite the same, but in a rough way of thinking about it, a determinant-zero transformation will somehow “kill off” a dimension. Think of projection onto an axis: in some sense you’re taking an n-dimensional object and collapsing it to something smaller, which of course will have Lebesgue measure zero (a line has no “volume” in the everyday physical sense of the word). The sign essentially tells you about how the transformation works. Remember every invertible transformation can be written as a product of elementary transformations, which sort of tell you how to build up the transformation. For example, we could send basis vector b_1 to -b_1, which has determinant -1.
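        (In the same spirit, a small NumPy check of the projection and sign examples; the matrices are illustrative choices:)

        ```python
        import numpy as np

        # Projection onto the x-axis "kills off" a dimension: determinant 0.
        projection = np.array([[1.0, 0.0],
                               [0.0, 0.0]])
        print(np.linalg.det(projection))   # 0.0

        # Sending b_1 to -b_1 while fixing b_2: determinant -1.
        flip = np.array([[-1.0, 0.0],
                         [ 0.0, 1.0]])
        print(np.linalg.det(flip))         # -1.0
        ```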

        • JCummings says:

          Sorry Hotovy! I actually wasn’t expecting you to reply to any of this stuff in the comments section. I figured you’d just make sure to talk about it in a post when you got there. Ok, no more questions! I look forward to reading Post 4!

  3. Z Norwood says:

    Dear Hotovy,

    Please see this comment on Andy’s blog and make sure I haven’t done anything stupid.

    Zach
