Littlewood’s Three Principles (2/3)

Littlewood’s three principles provide useful intuition for those first learning measure theory:

“The extent of knowledge [of real analysis] required is nothing like as great as is sometimes supposed. There are three principles, roughly expressible in the following terms:

  1. Every set is nearly a finite sum of intervals.
  2. Every function is nearly continuous.
  3. Every convergent sequence is nearly uniformly convergent.”

– John Littlewood

In my last post, I furnished a proof of the first statement. I should mention in passing that the principle holds in \mathbb{R}^n as well: every measurable set of finite measure in \mathbb{R}^n is nearly a finite union of cubes (the only change in the proof is that, unlike in \mathbb{R}, you cannot write an arbitrary open set as a countable disjoint union of open cubes). Today I’ll go out of order and prove the third principle. The reason for this order will be made clear Friday, when I’ll employ the third principle to prove the second one.

The third principle states that “every convergent sequence is nearly uniformly convergent.” Let’s recall the difference between these two modes of convergence. To do so, I’ll employ an analogy.

Let’s fix a sequence \{f_n\}_{n=1}^{\infty} of complex-valued functions on a measure space X and play the following game. I have two bags, labeled X and \mathbb{R}^+, with the first bag containing every element of X and the second bag containing every positive real number. You have one bag, labeled \mathbb{N}, which contains every natural number. At the beginning of the game, you name a complex-valued function f. Also, since I’m no Hercules, we’ll assume X has finite measure, i.e. \mu(X)<\infty.

In the first turn, I pull an \epsilon>0 out of the bag labeled \mathbb{R}^+. If you can find me an N\in \mathbb{N} large enough so that f_n(x) is within \epsilon of f(x) for all x\in X whenever n\ge N, then you win the first round.

Then, in the second turn, I pull an \epsilon>0 out of the bag labeled \mathbb{R}^+, but this time I also pull an x out of the bag labeled X. If you can find me an N\in \mathbb{N} large enough so that f_n(x) is within \epsilon of f(x) for this specific x whenever n\ge N, then you win the second round.

If you can find a function f that always guarantees a first-round win, no matter what \epsilon>0 I choose, then we say f_n converges uniformly to f. We call this uniform convergence, since you don’t need to know where x sits in space in order to guarantee that f_n(x) is close to f(x) – i.e. their distance is bounded uniformly in x. However, if you can never find such an f, then I win.

If you can find a function f that always guarantees a second-round win, no matter what \epsilon>0 and x\in X I choose, then we say f_n converges pointwise to f, since you have to be “wise to the point” I’ve selected in order to guarantee that f_n(x) is close to f(x) – i.e. the distance depends on your position in space. If your second-round win is only guaranteed “almost surely” (this is a precise term, actually), that is, if the set of x\in X for which you cannot win has measure zero, then we say f_n converges pointwise almost everywhere to f.

Now here’s the tl;dr: We say that \{f_n\}_{n=1}^{\infty} converges to a function f

  • pointwise if, for all x\in X and \epsilon>0, there is an N\in \mathbb{N} (dependent on both x and \epsilon) so that \left|{f_n(x)-f(x)}\right|<\epsilon provided n\ge N.
  • uniformly if, for all x\in X and \epsilon>0, there is an N\in \mathbb{N} (dependent only on \epsilon) so that \left|{f_n(x)-f(x)}\right|<\epsilon provided n\ge N.

Example. Suppose I give you the collection of functions f_n: [0,1]\to \mathbb{R} given by f_n(x)=x^n. If you graph these functions, you’ll see that raising the exponent smashes the graph down to zero, except at x=1, where f_n(1)=1 for every n. That is, f_n converges pointwise to the function

f(x)=\left\{\begin{array}{ll}0 & \text{if }x\in [0,1)\\ 1 & \text{if }x=1\end{array}\right.

To see why, suppose I give you x\in (0,1) and \epsilon>0. If \epsilon>1, pick whatever N you’d like in response. Otherwise, you just need to solve the inequality x^n<\epsilon, so any N of at least \frac{\ln(\epsilon)}{\ln(x)} works (this quantity is positive since \epsilon,x<1). The cases x=0,1 are trivial, and so you win in round two. But you can never win in round one! Why? For each 0<\epsilon<1, we see that x^n\ge \epsilon for all x\in [\epsilon^{1/n},1). So no matter what N you choose, unless you know the position of x, you can’t get f_n(x) uniformly within \epsilon of f(x).
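As a quick numerical sanity check, here’s a short Python sketch (my illustration, not from the original post; the helper pointwise_threshold is made up for this demo). It computes the pointwise threshold N for f_n(x)=x^n and shows it blowing up as x approaches 1, which is exactly why no single N can work uniformly:

```python
import math

def pointwise_threshold(x, eps):
    """Smallest N such that x**n < eps for all n >= N (for 0 < x, eps < 1).

    Solving x**n < eps gives n > ln(eps)/ln(x), so take the next integer.
    """
    return math.floor(math.log(eps) / math.log(x)) + 1

eps = 0.01
for x in [0.5, 0.9, 0.99, 0.999]:
    print(f"x = {x}: need n >= {pointwise_threshold(x, eps)}")
# x = 0.5: need n >= 7
# x = 0.9: need n >= 44
# x = 0.99: need n >= 459
# x = 0.999: need n >= 4603

# Uniformly, there is no winning N: for any n, every x in [eps**(1/n), 1)
# still has x**n >= eps, so the sup of |f_n - f| over [0, 1) is always 1.
```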

Now, suppose you get sick and tired of losing in the first round. So you demand a rule change: at the beginning of each game, you may remove as many x‘s as you want from my bag labeled X. But, still wanting to make the game attractive to me, you stipulate that I may limit the measure of the set of x‘s you remove, making it as small as I’d like (provided it’s positive). After all, the problematic set in the example above shrinks as n grows, so hopefully you can pull off a win if you can cut that piece out. Egorov’s Theorem shows that, under these modified rules, you can always guarantee a win – in the first turn.

Theorem (Egorov’s Theorem) Suppose \mu(X)<\infty and \{f_n\}_{n=1}^{\infty} is a sequence of measurable complex-valued functions on X that converges pointwise to f almost everywhere. Then for every \epsilon>0 there exists a measurable set E\subseteq X such that \mu(E)<\epsilon and f_n\to f uniformly on E^c.

Proof. Assume without loss of generality that f_n\to f everywhere on X (otherwise, redefine each f_n and f to be zero on the null set where convergence fails). For each k,n\in \mathbb{N}, let

E_n(k)=\bigcup_{m=n}^{\infty}\{x: \left|{f_m(x)-f(x)}\right|\ge k^{-1}\}.

So E_n(k) is the set of all x\in X for which f_m(x) is at least \frac{1}{k} away from f(x) for some m\ge n. Fix a k\in \mathbb{N}. The intersection \cap_{n=1}^{\infty}E_n(k) consists of all x for which \left|{f_m(x)-f(x)}\right|\ge \frac{1}{k} for infinitely many m. But f_n\to f everywhere, and so \left|{f_n(x)-f(x)}\right|\to 0 for all x\in X. So \cap_{n=1}^{\infty}E_n(k)=\emptyset. Moreover, it is easy to see that E_{n+1}(k)\subseteq E_{n}(k) for all n\in \mathbb{N}.
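To make the sets E_n(k) concrete, take the example f_n(x)=x^n on [0,1] from above. For x\in [0,1) we have f(x)=0 and x^m decreases in m, so \left|{f_m(x)-f(x)}\right|\ge k^{-1} for some m\ge n precisely when x^n\ge k^{-1}, i.e. when x\ge k^{-1/n}. Hence E_n(k)=[k^{-1/n},1) (the point x=1 never qualifies, since f_m(1)=f(1) for all m), and \mu(E_n(k))=1-k^{-1/n}\to 0 as n\to \infty: a shrinking sliver next to 1, exactly the problematic set from the example.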

Since \mu(X)<\infty, the continuity of \mu from above shows that \mu(E_n(k))\to 0 as n\to \infty. Let \epsilon>0. For each k\in \mathbb{N}, choose n_k so large that \mu(E_{n_k}(k))<\epsilon 2^{-k} and let E=\bigcup_{k=1}^{\infty}E_{n_{k}}(k). Then

\mu(E)\le \sum_{k=1}^{\infty}\mu(E_{n_{k}}(k))<\epsilon\sum_{k=1}^{\infty}2^{-k}=\epsilon.

Now suppose x\not\in E. Then for each k\in \mathbb{N} we have \left|{f_n(x)-f(x)}\right|<k^{-1} for all n>n_k. Given \delta>0, choose k with k^{-1}<\delta: then \left|{f_n(x)-f(x)}\right|<\delta for all n>n_k and all x\in E^c simultaneously. So f_n\to f uniformly on E^c, as desired.
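To see the proof’s bookkeeping in action, here’s a short Python sketch (again my illustration, not part of the proof) that carries out the construction for f_n(x)=x^n, using the formula E_n(k)=[k^{-1/n},1) computed above. It picks each n_k with \mu(E_{n_k}(k))<\epsilon 2^{-k} and verifies uniform closeness off E:

```python
def mu_E(n, k):
    """mu(E_n(k)) for f_n(x) = x**n on [0,1], where E_n(k) = [k**(-1/n), 1)."""
    return 1.0 - k ** (-1.0 / n)

def pick_n_k(k, eps):
    """Smallest n with mu(E_n(k)) < eps * 2**(-k), as in the proof."""
    n = 1
    while mu_E(n, k) >= eps * 2.0 ** (-k):
        n += 1
    return n

eps = 0.1
for k in range(2, 7):  # k = 1 is trivial here: x**n < 1 everywhere on [0, 1)
    n_k = pick_n_k(k, eps)
    # Every x outside E lies below k**(-1/n_k), so for all n > n_k the error
    # |f_n(x) - f(x)| = x**n (for x < 1) stays below the bound printed here:
    bound = (k ** (-1.0 / n_k)) ** (n_k + 1)
    print(f"k={k}: n_k={n_k}, error off E < {bound:.4f} <= 1/k = {1/k:.4f}")
```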

I should mention that one may relax the condition that \mu(X)<\infty and instead stipulate that \left|{f_n}\right|\le g for all n and some g\in L^1(\mu); the theorem then follows from the same argument, using the triangle inequality and a simple inclusion.
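In more detail: since f_n\to f pointwise almost everywhere, \left|{f}\right|\le g almost everywhere as well, so the triangle inequality gives \left|{f_m-f}\right|\le 2g almost everywhere. Hence each E_n(k) is contained (up to a null set) in \{x: 2g(x)\ge k^{-1}\}, a set of measure at most 2k\left\|{g}\right\|_{L^1} by Chebyshev’s inequality. In particular every E_n(k) has finite measure, so the continuity-from-above step in the proof goes through unchanged.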

Next time (Friday), I’ll prove the second principle, better known as Lusin’s Theorem. Thanks for reading.
