Determinants – Part 3

Hello everyone, I hope you are all surviving your semester, wherever you may be! In this post, I want to show that (non-zero) alternating n-linear forms always exist on any vector space of dimension n. We already know that, if a non-zero alternating n-linear form \omega on V exists, then any other alternating n-linear form must be a scalar multiple of \omega. We’ll use this fact, in a somewhat roundabout way, to define the determinant of a linear transformation. Then we can show the connections between determinants and matrices. The proof of the existence statement uses facts about the dual space V' of V, so I first want to review some basics about this space.

Definitions: If V is a vector space (finite or infinite dimensional) over a scalar field \mathcal{F}, a linear map \phi:V\to\mathcal{F} is called a linear functional on V. The vector space of all linear functionals on V is called the dual space of V and is denoted by V'. The dual space (V')' of V' is often called the double dual of V and is often denoted V''.

The reader should verify that V' is indeed a vector space under the usual pointwise definitions of function addition and scalar multiplication. For purposes of this post, we will only care about finite dimensional vector spaces, but it is interesting to note that for an infinite dimensional space, the (algebraic) dual is much larger than V. For this reason, functional analysts often restrict themselves to working with continuous linear functionals on V. In the future I hope to write about some of the differences and similarities between “algebraic” linear algebra and “analytic” linear algebra. Many of the results for infinite dimensional spaces (in functional analysis) are similar to their finite dimensional (algebraic) counterparts; however, new proofs need to be introduced to accommodate the analytic notions of continuity and convergence. This is a beautiful area of mathematics that I hope to cover eventually. For now though, we have the following:

Proposition: If V is finite dimensional, say of dimension n, then V' also has dimension n. In particular these spaces are isomorphic.
Proof: That V\simeq V' follows immediately from the fact that the two spaces have the same finite dimension, so it suffices to prove the first statement. Recall that any linear transformation L on V is determined by its action on a basis \{x_i\}_{i=1}^n of V (to see this, given x\in V, write x=\sum_i c_ix_i, where this expansion is unique; then by linearity, Lx=\sum_i c_iLx_i). Given such a basis \{x_i\}, define y_i\in V' by y_i(x_j)=\delta_{i,j}, so y_i(x_j)=1 if i=j and y_i(x_j)=0 otherwise. I claim \{y_i\}_{i=1}^n is a basis for V'. To this end, suppose y=\sum_i \alpha_iy_i=0. Then for each x\in V we have y(x)=0, so in particular 0=y(x_j)=\sum_i \alpha_iy_i(x_j)=\alpha_j. Thus the y_i are linearly independent. Moreover, given any y\in V', write \eta_i=y(x_i). Then I claim y=\sum_i\eta_iy_i. To see this, given x\in V, write x=\sum_i\alpha_i x_i. Then y(x)=\sum_i \alpha_i\eta_i, and for each j we have \eta_jy_j(x)=\eta_j\alpha_j, so summing over j gives the result.
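The dual basis construction above is easy to check numerically. Here is a minimal sketch (assuming numpy, with a hypothetical basis of \mathbb{R}^3 stored as columns of a matrix B): since B^{-1}x gives the coordinates of x with respect to the basis, the functional y_i is just "read off the i-th coordinate of B^{-1}x".

```python
import numpy as np

# A (hypothetical, invertible) basis of R^3, stored as the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual basis functional y_i reads off the i-th coordinate of x
# with respect to the basis, i.e. y_i(x) = (B^{-1} x)_i.
B_inv = np.linalg.inv(B)

def y(i, x):
    return B_inv[i] @ x

# Verify the defining property y_i(x_j) = delta_{ij}.
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(y(i, B[:, j]), expected)
```

Any other choice of invertible B works the same way; the point is only that y_i(x_j)=\delta_{i,j} pins the y_i down uniquely.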

This result then shows that for finite dimensional spaces V''\simeq V as well. Now, back to our discussion of alternating forms: why do we care about linear functionals? Well, for one thing, a linear functional is (vacuously) an alternating 1-linear form on V, so this proposition shows there are non-trivial alternating 1-linear forms on any non-trivial vector space, a fact that we will use in the next proof. Moreover, we will use these notions to build our larger n-linear form on V. Dual spaces are very interesting, but they are not the main focus of this post. Let’s now turn to the main theorem in this post, and probably the most important theorem in this series on determinants:

Theorem Let V be an n-dimensional vector space. Then there is a non-zero alternating n-linear form on \displaystyle\bigoplus_{i=1}^nV.
Proof: I’ll show by induction that for each 1\leq k\leq n there is a non-zero alternating k-linear form on V. If k=1, then the result holds because the dual space of V is non-trivial. Now let 1\leq k<n and suppose that \omega is a non-zero alternating k-linear form. Then there are \{z_i\}_{i=1}^k\subset V with \omega(z_1,\dots,z_k)\neq0. Choose a vector z_{k+1}\notin\text{span}\{z_1,\dots,z_k\} (such a vector exists because k<n=\dim(V)). There is a linear functional \phi\in V' such that \phi(z_i)=0 for 1\leq i\leq k and \phi(z_{k+1})\neq 0 (for instance, since z_1,\dots,z_k must be linearly independent by a proposition in the previous post, we may extend z_1,\dots,z_{k+1} to a basis and choose \phi to be the dual basis vector corresponding to z_{k+1}). Define a function \nu by

\dagger\hspace{3mm} \displaystyle\nu(x_1,\dots,x_k,x_{k+1})=\left(\sum_{i=1}^k\omega(x_1,\dots,x_{i-1},x_{k+1},x_{i+1},\dots,x_k)\phi(x_i)\right)-\omega(x_1,\dots,x_k)\phi(x_{k+1})

(each summand is \omega(x_1,\dots,x_k)\phi(x_{k+1}) with the transposition (i,k+1) applied to its arguments)

so, for instance, if k=3, then

\displaystyle\nu(x_1,x_2,x_3,x_4)=\omega(x_4,x_2,x_3)\phi(x_1)+\omega(x_1,x_4,x_3)\phi(x_2)+\omega(x_1,x_2,x_4)\phi(x_3)-\omega(x_1,x_2,x_3)\phi(x_4).
The reader may verify that \nu is indeed a (k+1)-linear form on V. Moreover, \nu is non-zero, for we have \nu(z_1,\dots,z_k,z_{k+1})=-\omega(z_1,\dots,z_k)\phi(z_{k+1})\neq0 by construction (since \phi(z_j)=0 for 1\leq j\leq k, every summand in \dagger vanishes), and so it remains to show \nu is alternating. To that end suppose \{x_1,\dots,x_{k+1}\}\subset V with x_j=x_i for some 1\leq i< j\leq k+1. Now, because \omega is alternating, any term of \dagger in which both x_i and x_j appear as arguments of \omega vanishes, so the right-hand side of \dagger collapses to a sum of two terms. There are two cases: if j=k+1, then the surviving terms are

\displaystyle\omega(x_1,\dots,x_{i-1},x_{k+1},x_{i+1},\dots,x_k)\phi(x_i)-\omega(x_1,\dots,x_k)\phi(x_{k+1})
which is clearly zero because x_i=x_{k+1}. Otherwise, if 1\leq i<j<k+1, then write x=x_i=x_j (note the final term -\omega(x_1,\dots,x_k)\phi(x_{k+1}) also vanishes here, since x appears twice among the arguments of \omega), so the two surviving terms look like

\omega(x_1,\cdots,x_{k+1},\cdots,x,\cdots,x_k)\phi(x)+\omega(x_1,\cdots,x,\cdots ,x_{k+1},\cdots, x_k)\phi(x)

and from this we can see that if we apply the permutation (i,j) to the \omega term on the right, then the arguments in \omega agree, so since it is alternating they differ by only a sign (recall this fact from a previous post), and hence the sum is indeed zero. This shows that \nu is alternating, which completes the proof.
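The construction can be checked numerically in a small case. The following is a sketch (assuming numpy) with V=\mathbb{R}^3 and k=2: here \omega is the 2\times2 determinant of the first two coordinates, and \phi reads off the third coordinate, so \phi vanishes on e_1,e_2 but not e_3, mirroring the choice of \phi in the proof.

```python
import numpy as np

# omega: an alternating 2-linear form on R^3
# (the 2x2 determinant of the first two coordinates).
def omega(x1, x2):
    return x1[0] * x2[1] - x1[1] * x2[0]

# phi: a linear functional vanishing on e1, e2 but not on e3.
def phi(x):
    return x[2]

# The formula (dagger) for k = 2: swap x_{k+1} into slot i and pair it
# with phi(x_i), then subtract the unpermuted term.
def nu(x1, x2, x3):
    return (omega(x3, x2) * phi(x1)
            + omega(x1, x3) * phi(x2)
            - omega(x1, x2) * phi(x3))

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))

# nu is alternating: it vanishes on repeated arguments...
assert np.isclose(nu(x, x, z), 0.0)
assert np.isclose(nu(x, y, y), 0.0)
# ...and, being a non-zero alternating 3-linear form on R^3, it is a
# scalar multiple of the 3x3 determinant (here the multiple is -1).
assert np.isclose(nu(x, y, z), -np.linalg.det(np.column_stack([x, y, z])))
```

That \nu comes out as a scalar multiple of the determinant is exactly the uniqueness fact from the previous post: the space of alternating n-linear forms is one dimensional.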

So, that was messy! What did we do? Well, we started with a non-zero alternating k-linear form and a set of vectors on which it is non-zero. Then we picked a linear functional in the annihilator of the span of those vectors (together with a corresponding vector outside that span) and built out of these an alternating (k+1)-linear form. We could be sure it is non-zero because we set up our subspaces of V so that this function is non-zero when the vectors are chosen correctly. Moreover, to evaluate the function, we hold most arguments fixed and permute two entries at a time, and these permutation constructions, combined with the minus sign in the definition of \nu, ensure that the new form is alternating. It’s not a very insightful proof, but you can certainly see the connections to determinants of matrices, where we fix one row (which corresponds to \phi) and expand along it (the determinants of the sub-matrices correspond to \omega). Now, at long last, we are ready to define the determinant of a linear transformation. After that, in the following post, I will connect all these notions to matrices and prove that the matrix determinant is in fact an alternating n-linear form when thought of as a function on rows.

So how do we construct the determinant? Recall that any linear transformation L on a one dimensional space must be multiplication by some scalar c_L. To see this, fix a non-zero vector x; since Lx lies in the span of x, we have Lx=c_Lx for some scalar c_L, and then for any vector y=\alpha x we have Ly=L(\alpha x)=\alpha Lx=\alpha c_Lx=c_L(\alpha x)=c_Ly. Now let A be a linear operator on V. Define a map \overline{A} on the space of alternating n-linear forms on V by [\overline{A}(\omega)](x_1,\dots,x_n)=\omega(Ax_1,\dots,Ax_n). The reader may verify that \overline{A} is a linear operator. Since the space of alternating n-linear forms is one dimensional, there is a unique scalar \delta_A so that \overline{A}(\omega)=\delta_A\omega for every alternating n-linear form \omega on V. This unique scalar \delta_A is called the determinant of the linear transformation A and is denoted by det(A). For fun, let’s compute a few values of det. Suppose Ax=\alpha x for all x\in V, i.e. A is multiplication by some fixed scalar \alpha. Then

[\overline{A}(\omega)](x_1,\dots,x_n)=\omega(\alpha x_1,\dots,\alpha x_n)=\alpha^n\omega(x_1,\dots,x_n)

and so we have det(A)=\alpha^n. In particular, det(I)=1 and det(0)=0. Finally, if A,B are both linear transformations, let C=AB (the composition) and note that

\displaystyle[\overline{C}(\omega)](x_1,\dots,x_n)=\omega(ABx_1,\dots,ABx_n)=[\overline{A}(\omega)](Bx_1,\dots,Bx_n)=[\overline{B}(\overline{A}(\omega))](x_1,\dots,x_n)
and thus \overline{C}=\overline{B}\overline{A}. But then for any alternating n-linear form \omega on V we have

\displaystyle\delta_C\omega=\overline{C}(\omega)=\overline{B}(\overline{A}(\omega))=\overline{B}(\delta_A\omega)=\delta_A\overline{B}(\omega)=\delta_A\delta_B\omega
from which it follows that det(AB)=det(A)det(B), since scalars commute. Okay! That was a lot for one post! Now that we have defined the determinant, all we have left to do is connect it to matrices and then I’ll show you some cool theorems involving determinants and volumes of sets in \mathbb{R}^n. For now though, you’ll have to be content with just this. Enjoy!
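As a final sanity check, here is a small numerical sketch (assuming numpy) of the definition: we compute det(A) as the scalar \delta_A satisfying \overline{A}(\omega)=\delta_A\omega, using numpy's n\times n determinant applied to columns as our reference alternating n-linear form \omega, and confirm both that \delta_A matches the usual determinant and that det(AB)=det(A)det(B).

```python
import numpy as np

# Reference alternating n-linear form: the determinant of the matrix
# whose columns are the arguments.
def omega(*xs):
    return np.linalg.det(np.column_stack(xs))

# delta_A is defined by [Abar(omega)](x_1,...,x_n)
#   = omega(A x_1, ..., A x_n) = delta_A * omega(x_1,...,x_n).
def delta(A, xs):
    return omega(*(A @ x for x in xs)) / omega(*xs)

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
basis = list(np.eye(n))   # omega(e_1,...,e_n) = 1, so no division trouble

# delta_A agrees with the usual matrix determinant...
assert np.isclose(delta(A, basis), np.linalg.det(A))
# ...and multiplicativity det(AB) = det(A)det(B) holds.
assert np.isclose(delta(A @ B, basis), delta(A, basis) * delta(B, basis))
```

Since any non-zero alternating n-linear form is a scalar multiple of this \omega, evaluating on the standard basis (or any other set of n linearly independent vectors) gives the same \delta_A.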


About Ryan

I'm a software developer at Hudl where I work on awesome software. Before that, I was a grad student in mathematics, interested in probability theory as well as analysis, more on the side of functional analysis and less on the side of PDEs. Apart from that I'm pretty lame. Though I do enjoy watching football, playing golf, and playing the trumpet.
This entry was posted in Linear Algebra.

1 Response to Determinants – Part 3

  1. JCummings says:

    If possible, in your next post will you also talk about why the product of the evalues equals the determinant? If there is a way to think about that with your geometric interpretation.
