Hello everyone! I realize it’s been far too long since I last posted, and I decided I don’t really want to write about Radon-Nikodym anymore. Maybe someday, if I get requests, I’ll write a couple more posts in that series, but for now I’m done. Long story short: we can often use the determinant of a linear transformation and some given reference measure to compute measures of sets under said linear transformations (or, if you like, one can generalize to smooth coordinate transformations by using the Jacobian, which gives a “locally linear” coordinate change).

Anyways, I want to write about something different. Last year in the office, we began discussing the neatest results we had seen in our undergraduate courses. We listed some neat facts, and as I recall Jay enjoyed the result from linear algebra that given any linear functional $\varphi$ on the space $\mathcal{P}_n$ of polynomials of degree at most $n$, there is a unique polynomial $q \in \mathcal{P}_n$ so that $\varphi(p) = \int_0^1 p(t)q(t)\,dt$ for all $p \in \mathcal{P}_n$. This is of course a special case of the famous Riesz Representation theorem. This theorem has many different statements depending on the context it’s used in. For our linear algebra class, we were given this particular one:

**Theorem: Riesz Representation Theorem:** Let $V$ be a finite dimensional vector space over $\mathbb{F}$ (where $\mathbb{F}$ is $\mathbb{R}$ or $\mathbb{C}$) and $\langle \cdot, \cdot \rangle$ an inner product on $V$. If $\varphi$ is a linear functional on $V$, then there is a unique vector $w \in V$ so that $\varphi(v) = \langle v, w \rangle$ for all $v \in V$.

The result about polynomials is then a special case, where $V = \mathcal{P}_n$ (on $[0,1]$), $\langle p, q \rangle = \int_0^1 p(t)q(t)\,dt$, and $w$ is the representing polynomial $q$. The proof of this particular version of the RRT is not too difficult. When working in a finite dimensional space, one simply writes down a vector that works (expand $\varphi$ against an orthonormal basis), and uniqueness follows from a simple calculation. I leave it to the reader to verify the proof of this particular version of Riesz. If you get stuck, check out (for example) Axler’s book *Linear Algebra Done Right* or Halmos’s *Finite Dimensional Vector Spaces*.
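Here is a small numerical sketch of this special case. The functional $\varphi(p) = p(1/2)$ (evaluation at $1/2$) is my own illustrative choice: expanding everything in the monomial basis of $\mathcal{P}_4$ turns the equation $\varphi(p) = \langle p, q \rangle$ into a linear system for the coefficients of $q$.

```python
import numpy as np

# A sketch of the finite dimensional Riesz representation theorem on
# V = P_4, polynomials of degree <= 4 on [0,1], with the inner product
# <p, q> = \int_0^1 p(t) q(t) dt.  The functional phi(p) = p(1/2)
# (evaluation at 1/2) is a hypothetical example of my own choosing.

n = 4  # work in P_n with monomial basis {1, t, ..., t^n}

# Gram matrix of the monomial basis: G[i, j] = \int_0^1 t^{i+j} dt = 1/(i+j+1)
G = np.array([[1.0 / (i + j + 1) for j in range(n + 1)] for i in range(n + 1)])

# b[i] = phi(t^i) = (1/2)^i
b = np.array([0.5 ** i for i in range(n + 1)])

# If q = sum_j c_j t^j, then phi(p) = <p, q> for all p forces G c = b
c = np.linalg.solve(G, b)

# Sanity check on a random p in P_4: phi(p) should equal <p, q> = a^T G c
rng = np.random.default_rng(0)
a = rng.standard_normal(n + 1)
phi_p = sum(a[i] * 0.5 ** i for i in range(n + 1))   # p(1/2)
inner = a @ G @ c                                    # \int_0^1 p(t) q(t) dt
print(phi_p, inner)  # these agree up to roundoff
```

The same Gram-matrix computation works for any functional on $\mathcal{P}_n$; only the vector $b$ of values $\varphi(t^i)$ changes.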

One of the (perhaps shocking) results in mathematics is that this result carries over to the case of infinite dimensions, provided we restrict ourselves to *continuous* functionals and *complete* inner product spaces. It turns out that in finite dimensional normed spaces (hence inner product spaces), every linear map is continuous, and all norms are equivalent. In fact, we can pretty much show that any finite dimensional space looks like $\mathbb{F}^n$ (which is complete), where $n$ is the dimension of the space in question. But what about the case of infinite dimensions? What if I were to replace $V$ with the space $C[0,1]$ of all continuous functions on $[0,1]$? Maybe I want to use the space $\ell^2$ whose elements are all square summable sequences of complex numbers. What if I don’t have an inner product? Is there still a way to represent functionals by other vectors in my space (or a similar space)? In this set of posts, I’ll develop the machinery to state, and ultimately prove, stronger versions of the Riesz Representation theorem that apply to infinite dimensional vector spaces as well.

As I mentioned above, we need to put a few more restrictions on the functions and spaces we will consider when we pass over to infinite dimensions. I’ll spend the rest of this post talking about this. If we are working in an inner product space, we get a natural norm given by $\|v\| = \sqrt{\langle v, v \rangle}$. In fact, many of these results about functionals will carry over to general normed spaces, so let’s consider these. As I mentioned above, in finite dimensional linear algebra, any two norms on a space are equivalent, and it turns out every linear map is continuous (with respect to the norms), so often in a basic linear algebra course (or even in an advanced linear algebra course), continuity is not discussed. Unfortunately, when we pass to infinite dimensions we do in fact lose this pleasantry. I’ll give you an example of a discontinuous linear map shortly, but first, here’s a nice characterization of continuity for linear maps.

**Proposition** Let $X$ and $Y$ be normed linear spaces and let $T : X \to Y$ be a linear map. Then $T$ is continuous if and only if there is a constant $C \geq 0$ so that $\|Tx\| \leq C\|x\|$ for all $x \in X$. Moreover, $T$ is continuous if and only if it is continuous at $0$.

I’ll leave the proof of this fact to the reader; it’s a relatively straightforward calculation. The important thing is that we can now make the following definition:

**Definition** Let $T : X \to Y$ be a linear map between normed spaces. If $T$ is continuous, we say $T$ is *bounded*. We denote $\|T\| = \sup_{\|x\| \leq 1} \|Tx\|$, and we call this number the norm of the function $T$. We denote by $B(X, Y)$ the set of all bounded linear maps $T : X \to Y$. If $Y = \mathbb{F}$ is the scalar field, we call $B(X, \mathbb{F})$ the *dual space* of $X$ and denote it by $X^*$. The reader may verify that the function $T \mapsto \|T\|$ is a norm on $B(X, Y)$ for any spaces $X$ and $Y$.
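The sup in this definition can be seen concretely in finite dimensions. A small sketch (the matrix $T$ below is a made-up example): for Euclidean norms, the operator norm is the largest singular value, and a crude random search over the unit sphere approaches it from below.

```python
import numpy as np

# A sketch of the operator norm ||T|| = sup_{||x|| <= 1} ||Tx|| for a concrete
# linear map T : R^3 -> R^2 (the matrix below is a made-up example).  With
# Euclidean norms on both sides, this sup is the largest singular value of T,
# so a random search over the unit sphere should approach it from below.

T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

exact = np.linalg.norm(T, 2)  # largest singular value of T

rng = np.random.default_rng(1)
best = 0.0
for _ in range(20000):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)              # restrict to the unit sphere
    best = max(best, np.linalg.norm(T @ x))

print(exact, best)  # best <= exact, and best is close to exact
```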

**Examples**

- Let $C^1[0,1]$ be the set of all continuously differentiable functions on $[0,1]$, and define $D : C^1[0,1] \to C[0,1]$ by $Df = f'$, where we use the uniform norm on both spaces, i.e. $\|f\|_\infty = \sup_{t \in [0,1]} |f(t)|$. Then $D$ is linear (by elementary properties of derivatives), but $\|D(t^n)\|_\infty = \|nt^{n-1}\|_\infty = n$ while $\|t^n\|_\infty = 1$, so $D$ is not bounded and hence not continuous. It’s perhaps interesting to note that $C^1[0,1]$ is not complete in this norm! For example, the Stone-Weierstrass theorem tells us that the function $f(t) = |t - 1/2|$ can be uniformly approximated by polynomials (which are continuously differentiable infinitely many times!), but of course $f'$ is not even defined at $t = 1/2$. To resolve this, one can use the norm $\|f\|_{C^1} = \|f\|_\infty + \|f'\|_\infty$ on $C^1[0,1]$, where $\|\cdot\|_\infty$ is the uniform norm. Then (exercise) $C^1[0,1]$ is complete with respect to this norm. Moreover, for any $f \in C^1[0,1]$, we have $\|Df\|_\infty = \|f'\|_\infty \leq \|f\|_\infty + \|f'\|_\infty = \|f\|_{C^1}$, so indeed $D$ is bounded with respect to this norm, and $\|D\| \leq 1$! This shows that the two norms $\|\cdot\|_\infty$ and $\|\cdot\|_{C^1}$ are not equivalent on $C^1[0,1]$. As you can see, things really do start to behave strangely in infinite dimensions!

- Let $V$ be a finite dimensional inner product space with orthonormal basis $\{e_1, \ldots, e_n\}$ and norm $\|v\| = \sqrt{\langle v, v \rangle}$. Define $T : V \to \mathbb{F}^n$ (with its usual norm) by $Te_j = \delta_j$, where $\{\delta_1, \ldots, \delta_n\}$ denotes the standard basis for $\mathbb{F}^n$. By the Pythagorean theorem, for all $v = \sum_{j=1}^n a_j e_j \in V$, we have $\|Tv\|^2 = \sum_{j=1}^n |a_j|^2 = \|v\|^2$. Thus $T$ is continuous, $\|T\| = 1$, and in fact $\|Tv\| = \|v\|$ for all $v \in V$. $T$ is an example of a unitary operator; these are the isomorphisms in the category whose objects are Hilbert spaces and whose morphisms are bounded linear maps between them.

- Let $X$ be a normed space. For $x \in X$, define $\hat{x} : X^* \to \mathbb{F}$ by $\hat{x}(\varphi) = \varphi(x)$. The reader may verify that $\hat{x}$ is linear, bounded, and $\|\hat{x}\| = \|x\|$. The space $(X^*)^*$ is often denoted $X^{**}$. The map $x \mapsto \hat{x}$ is a linear isometry (i.e. a norm-preserving linear map; note that if $T$ is an isometry, then $Tx = Ty$ implies $\|x - y\| = \|Tx - Ty\| = 0$, thus $x = y$, so any isometry is automatically an injection, and hence an isomorphism onto its range) mapping $X$ into $X^{**}$. I’ll often denote by $\hat{X}$ the image of $X$ under this correspondence. A *Banach space* is a normed vector space that is also complete with respect to the metric $d(x, y) = \|x - y\|$. The reader may verify that $X^*$ is always a Banach space (in fact, $B(X, Y)$ is always a Banach space whenever $Y$ is), and from this it follows that $X$ is complete if and only if $\hat{X}$ is a closed subspace of $X^{**}$. Moreover, if $X$ is not complete, the map $x \mapsto \hat{x}$ embeds $X$ as a dense subspace of the closure of $\hat{X}$, which is itself a Banach space. The closure of $\hat{X}$ in $X^{**}$ is called the *completion* of $X$ (with respect to its norm). If $\hat{X} = X^{**}$, we say $X$ is *reflexive*.
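To see Example 1 numerically, here’s a small grid-based sketch (the sample exponents are my own choice): the ratio $\|Df\|_\infty / \|f\|_\infty$ for $f(t) = t^n$ grows like $n$, while the same ratio measured against the $C^1$ norm stays at most $1$.

```python
import numpy as np

# Numerical sketch of Example 1: on [0,1], f_n(t) = t^n has ||f_n||_inf = 1
# while ||D f_n||_inf = ||n t^{n-1}||_inf = n, so the ratio is unbounded.
# With the C^1 norm ||f|| = ||f||_inf + ||f'||_inf the same ratio stays <= 1.

t = np.linspace(0.0, 1.0, 10001)

for n in (1, 5, 25, 125):
    f = t ** n
    df = n * t ** (n - 1)
    sup_f, sup_df = f.max(), df.max()
    ratio_sup = sup_df / sup_f             # = n: unbounded in the sup norm
    ratio_c1 = sup_df / (sup_f + sup_df)   # <= 1: bounded in the C^1 norm
    print(n, ratio_sup, ratio_c1)
```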
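Example 2 can also be checked numerically. In this sketch, the inner product matrix $A$ and the use of Gram-Schmidt are my own choices: we build an orthonormal basis for $\mathbb{R}^3$ with the inner product $\langle u, v \rangle = u^T A v$ and verify that the coordinate map preserves norms.

```python
import numpy as np

# Sketch of Example 2: equip R^3 with <u, v> = u^T A v for a symmetric
# positive definite A of my own choosing, run Gram-Schmidt to get an
# orthonormal basis {e_1, e_2, e_3}, and check that the coordinate map
# T v = (<v, e_1>, <v, e_2>, <v, e_3>) into Euclidean R^3 preserves norms.

A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.2],
              [0.0, 0.2, 3.0]])

def ip(u, v):
    return u @ A @ v

# Gram-Schmidt on the standard basis, with respect to <., .>_A
basis = []
for v in np.eye(3):
    w = v - sum(ip(v, e) * e for e in basis)
    basis.append(w / np.sqrt(ip(w, w)))

rng = np.random.default_rng(2)
v = rng.standard_normal(3)
Tv = np.array([ip(v, e) for e in basis])   # coordinates of v

norm_V = np.sqrt(ip(v, v))                 # norm in (R^3, <., .>_A)
norm_Fn = np.linalg.norm(Tv)               # Euclidean norm of T v
print(norm_V, norm_Fn)  # equal: T is unitary
```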
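And for Example 3, here’s a sketch in the finite dimensional (but non-Hilbert) space $X = (\mathbb{R}^4, \|\cdot\|_1)$, my own choice of setting: functionals on $X$ act as dot products against vectors $a$, with dual norm $\|a\|_\infty$, and the double-dual norm $\|\hat{x}\| = \sup_{\|a\|_\infty \leq 1} |a \cdot x|$ recovers $\|x\|_1$, illustrating that $x \mapsto \hat{x}$ is an isometry.

```python
import numpy as np

# Sketch of the embedding x -> x_hat into the double dual, for the
# non-Hilbert space X = (R^4, ||.||_1) (my own choice).  Functionals are
# phi_a(x) = a . x with dual norm ||phi_a|| = ||a||_inf, so
# ||x_hat|| = sup over ||a||_inf <= 1 of |a . x| should equal ||x||_1.

x = np.array([1.0, -2.0, 0.5, 3.0])

rng = np.random.default_rng(3)
best = 0.0
for _ in range(5000):
    a = rng.uniform(-1.0, 1.0, size=4)   # random functional with ||a||_inf <= 1
    best = max(best, abs(a @ x))

# The sup is attained at a = sign(x), giving exactly ||x||_1
attained = np.sign(x) @ x
print(best, attained, np.abs(x).sum())   # best <= attained = ||x||_1
```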

Well, I don’t have a lot more to say for tonight as far as theorems and propositions go, but I’ll make just a few more comments on the above examples before I finish this post. In example three, the space $X^{**}$ is called the *double dual* of $X$. If $X$ is finite dimensional with some basis $\{e_1, \ldots, e_n\}$, define $e_j^* \in X^*$ by $e_j^*(e_i) = \delta_{ij}$, i.e. $e_j^*(e_j) = 1$ and $e_j^*(e_i) = 0$ for $i \neq j$. The set $\{e_1^*, \ldots, e_n^*\}$ is a basis for $X^*$, and hence $X$ is isomorphic to its dual. The same argument shows that $X^* \cong X^{**}$, so in finite dimensions every space is reflexive, and so the map $x \mapsto \hat{x}$ is indeed surjective. In infinite dimensions, there are examples of spaces that are not reflexive. WARNING: if $X$ is an infinite dimensional vector space, let $X'$ be the set of all linear functionals on $X$, and $X''$ the set of all linear functionals on $X'$ (i.e. $X'$ and $X''$ are the *algebraic* dual and double dual of $X$). Then $X$ is NEVER isomorphic to $X''$. This is one of the first places where our analytic notions pay some dividends. For example, the space $\ell^2$ from above will turn out to be reflexive, even though its algebraic dual is much larger than the original space.
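The finite dimensional dual basis construction above can be sketched concretely (the basis matrix below is a made-up example): if the columns of an invertible matrix $B$ are the basis vectors $e_i$, then the rows of $B^{-1}$ implement the functionals $e_j^*$.

```python
import numpy as np

# Sketch of the dual basis in finite dimensions: take a basis {e_1, e_2, e_3}
# of R^3 (the columns of an invertible matrix of my own choosing); the dual
# functionals e_j^* are the rows of the inverse matrix, and they satisfy
# e_j^*(e_i) = delta_ij.

B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [2.0, 0.0, 1.0]])   # columns are the basis vectors e_i

dual = np.linalg.inv(B)           # row j is the functional e_j^*

# e_j^*(e_i) = dual[j] @ B[:, i], so the matrix of all pairings is dual @ B
pairings = dual @ B
print(np.round(pairings, 12))     # the identity matrix
```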

One final comment: if $V$ is a finite dimensional inner product space, the map $w \mapsto \langle \cdot, w \rangle$ gives a (somewhat) canonical identification of $V$ with $V^*$. In infinite dimensions, such a correspondence need not exist. We could try a similar trick, by writing down a basis $\{e_\alpha\}$ for $X$ and defining functionals $e_\alpha^*$ in the same way. Unfortunately, these maps need not be continuous. We could try to resolve this by defining $e_\alpha^*$ only on a subspace where it is bounded. The Hahn-Banach theorem (which maybe Adam could write about sometime?) tells us that a bounded functional on a subspace can be extended to a bounded map on all of $X$, but unfortunately we have absolutely no idea how to evaluate the extension at vectors outside that subspace! One of the nice consequences of the Riesz Representation theorem is that it gives us a canonical map from an inner product space to its dual. We have this for finite dimensional spaces, and hopefully in the next couple of posts, I’ll show you that in fact we can do this for infinite dimensional spaces as well. Until then, enjoy!

Great post, Hotpants! You mentioned in Example 1 that $C^1[0,1]$ is an infinite dimensional space. I’m curious if you have a basis in mind. Maybe a Fourier basis? Or is there a “simpler” one?

Good question, Jay! That actually brings up some interesting questions. First, note that $\{1, t, t^2, t^3, \ldots\}$ is an infinite, linearly independent subset of $C^1[0,1]$, so this space really is infinite dimensional. When you say basis, you have to be careful though! There are two notions of basis. One is an algebraic basis (sometimes called a Hamel basis), in which case I’m not sure I can write one down for you (as Zach has written about, the existence of a Hamel basis for an arbitrary vector space is equivalent to the axiom of choice). The thing with Hamel bases is that every element of the space must be expressible as a *finite* linear combination of basis elements. Note that since we are using finite sums, there are no issues of convergence. This is different from an analytic (Schauder) basis, where we essentially allow infinite sums. To be precise, a Schauder basis is a sequence $\{x_n\}_{n=1}^\infty$ in a Banach space $X$ so that for every $x \in X$ there is a unique sequence of scalars $(a_n)$ so that $x = \sum_{n=1}^\infty a_n x_n$, where convergence is with respect to the norm. Note, I think you can do this construction for non-separable spaces; you just have to ensure that for each $x$ all but countably many coefficients are zero. I know you can do this for a Hilbert space (I will talk about this in later posts), but I’m not 100% sure for arbitrary Banach spaces. Now, by Stone-Weierstrass, polynomials are uniformly dense in $C[0,1]$, and I believe one can show by a greedy algorithm argument that a Schauder basis exists here (I’m 95% sure this is true; Wikipedia claims that $C[0,1]$ admits a Schauder basis in the uniform norm. I’m pretty sure this construction also works for $C^1[0,1]$.) The Fourier basis is used for the Hilbert space $L^2[0,1]$ and its associated norm. I’ll talk about this in a later post, but essentially it will happen that given any $f \in L^2[0,1]$, we have $f = \sum_n \langle f, e_n \rangle e_n$, where convergence is now in the $L^2$ norm, i.e. the integral of the square difference between $f$ and the partial sums goes to zero.
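To make that last kind of convergence concrete, here’s a sketch. The test function $f(t) = t$ and the real Fourier system on $[0,1]$ are my own choices: the $L^2$ error of the partial sums shrinks toward $0$ as we add more basis functions.

```python
import numpy as np

# Sketch of Schauder-basis-style convergence in L^2[0,1]: expand f(t) = t
# (my own choice of test function) against the orthonormal Fourier system
# {1, sqrt(2) cos(2 pi n t), sqrt(2) sin(2 pi n t)} and watch the L^2 error
# of the partial sums shrink toward 0.

t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
f = t.copy()

def integrate(g):
    # trapezoidal rule on the uniform grid over [0, 1]
    return (g[0] / 2 + g[1:-1].sum() + g[-1] / 2) * dt

partial = np.full_like(t, integrate(f))   # constant term: \int_0^1 f = 1/2
errors = []
for n in range(1, 41):
    c = np.sqrt(2.0) * np.cos(2 * np.pi * n * t)
    s = np.sqrt(2.0) * np.sin(2 * np.pi * n * t)
    partial = partial + integrate(f * c) * c + integrate(f * s) * s
    errors.append(np.sqrt(integrate((f - partial) ** 2)))

print(errors[0], errors[-1])   # the L^2 error decreases toward 0
```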

So, sorry for the long response… hope this answers your questions!

Don’t apologize for the long response, that was great! Thanks! I would say that you have made me look forward even more to your next post, but as I thought that, my email dinged, telling me that you have *already* posted again. Excellent!