Introduction to Randomized Algorithms; Part 1

Hi everyone. I am going to start a brief series on randomized algorithms. I'll probably build all the posts around one or two examples so that I can avoid cluttering them up with too many probabilistic techniques that I haven't talked about yet. Alright, enjoy Post 1.

An algorithm is a set of instructions to accomplish some task. A set of driving directions from Lincoln, NE to La Jolla, CA is an algorithm. I could tell you an algorithm to solve a Rubik's cube. One step might say: If the upper-right square of the face that you are looking at is a different color than the upper-left square, turn the cube in such-and-such a way. If they are the same color, do nothing.

The algorithms I have described so far are deterministic algorithms. Each step tells you exactly how to move from the current state to the next one. So, in particular, if two different people performed the algorithm from the same starting conditions, then after each step they would be in exactly the same state. Moreover, they are guaranteed to succeed in performing whatever task the algorithm is there for. (Assuming the algorithm's a good one.)

So what is a randomized algorithm? It is an algorithm where one or more of the steps is not deterministic; instead, some randomization is built in. Maybe one of the steps says: “Now flip a coin. If heads, put the ball in bin number 1. If tails, put the ball in bin number 2.” Each time you run the algorithm you do not necessarily get the same result.
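In code, a randomized step is just a call to a random number generator. A minimal sketch in Python (the variable name is mine, and the bins are just the ones from the example above):

```python
import random

# The randomized step: flip a fair coin to choose a bin.
bin_number = 1 if random.random() < 0.5 else 2
```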

You might be wondering whether a randomized algorithm is ever better than a best-possible deterministic one. Well, the answer is yes. There are several reasons why one might prefer to build in some randomization, and over the next four posts I will focus on one of them.

There is an example that I want to show you that I think is just completely mind-blowing. I first heard this question two summers ago from Andrey Grishpun. I heard it again this year from Fan Chung. Each had a different solution to the problem. I will share both with you, starting in this post with Fan's solution. But the fact that a solution could possibly exist is, in my opinion, quite spectacular and unexpected.

Question.  Say I have some continuous probability distribution over the reals and I pick two numbers independently under this distribution.  I show you the first one, but I have hidden both the second number and the probability distribution.  I ask you whether you want to keep that number or take the second number.  Your goal is to end up with the larger of the two numbers.  Can you come up with a way to decide whether or not to accept the first number so that your chance of ending up with the larger number is strictly greater than 50%?

Of course, it is super easy to get exactly 50%. You could flip a coin to decide. Or, deterministically, you could always take the number presented to you: by independence, with probability 0.5 you were presented with the larger number to begin with.

But how do you do better than 50%? Note that you know essentially nothing. You don't know anything about the second number or about the probability distribution. You are randomly presented with a number and you have to say “I want it” or “I do not want it”. And there really is nothing about one number that makes it seem better than any other. Under one probability distribution 10^{10} might be a great number to take; under another it might be ridiculously far below the mean. And the number line is just that, a line. Keeping the first number because it's, say, bigger than 0 and tossing it if it is less than 0 also seems pointless. Lines do not have center points, and any choice of one in such a problem is arbitrary and artificial.

I hope I have convinced you to some extent that you'd have to be crazy to think there is any possible way to get better than 50%. I mean, it's just ludicrous to think so. And whenever I tell this problem to someone, I always make them think about it and squirm for a while, until they have all but convinced themselves that I have made a horrible mistake, am dead wrong, and that they will very soon figure out what I am overlooking to make such a foolish assertion. That was precisely how I felt when Andrey first told me the problem. So, if you are not to that point yet, please take a few moments to get there.

. . . . .

Very good.  And I must admit that to some extent you’re right.  No deterministic algorithm will get you the larger number with probability strictly greater than 50%.

So how, when given no information, can you tell the difference between a random number and something else that you know absolutely nothing about? Well, I'm glad you asked. Here's what you do.

The Algorithm. First, let's say that the number presented to you is a and the one hidden is b. Pick some other probability distribution over the reals in which every interval has positive probability. For instance, we might as well just pick the normal distribution centered at 17. (Recall that the normal distribution's density never hits zero: it trails off toward positive and negative infinity.)

Now pick a real number from that distribution; call it c. We keep a if a>c, and we reject a and take b if a\leq c. That's it. A very simple algorithm. But why does it work? Think about the options for where c lies.
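Here is a minimal sketch of this rule in Python; the function name and the standard deviation of 1 are my own choices (any positive spread works):

```python
import random

def keep_first(a):
    """Decide whether to keep the revealed number a.

    Draw a threshold c from a normal distribution centered at 17;
    any distribution whose density is positive on all of R works.
    """
    c = random.gauss(17, 1)  # mean 17, standard deviation 1
    return a > c             # True: keep a; False: take the hidden b
```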

If c > \max\{a,b\}, then we will necessarily reject a and be right precisely when a is the smaller of the two numbers, i.e. 50% of the time.  If c< \min\{a,b\} then we will accept a and be right precisely when a is the larger of the two numbers, i.e. 50% of the time.  So far it’s a wash.  But!  When c \in (\min\{a,b\},\max\{a,b\}), then we will be right 100% of the time!  If a<c<b then we will reject a and take b, perfect!  And if b<c<a, then we will accept a and again end up with the larger number!
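To make the bookkeeping explicit: if p>0 denotes the probability that c lands strictly between \min\{a,b\} and \max\{a,b\}, then combining the three cases gives a success probability of

\frac{1}{2}(1-p) + 1\cdot p = \frac{1}{2} + \frac{p}{2} > \frac{1}{2}.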

So because with positive probability c lands inside the interval from \min\{a,b\} to \max\{a,b\}, we are guaranteed to push our chance of success strictly above 50%. Amazing.
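If you want to see the edge empirically, here is a quick Monte Carlo sketch. The hidden distribution below is something I made up for the demo; the edge is big when the threshold distribution overlaps it, and tiny (but still positive) when it does not:

```python
import random

def trial():
    # Two hidden numbers, drawn independently from a continuous
    # distribution the player never sees (arbitrary choice for this demo).
    a = random.gauss(10, 20)
    b = random.gauss(10, 20)
    c = random.gauss(17, 1)       # the player's random threshold
    kept = a if a > c else b      # the algorithm's decision
    return kept == max(a, b)      # did we end up with the larger number?

n = 10**6
wins = sum(trial() for _ in range(n))
print(wins / n)  # noticeably above 0.5 (around 0.73 for these parameters)
```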

So there you have it.  Sometimes a randomized algorithm can be strictly better than the best-possible deterministic one.

Two posts from now I will tell you Andrey's solution, after I first take a post to clean up a slightly paradoxical-sounding issue that came up above. It is: Pick a random number from \mathbb{R} based on, say, the normal distribution. Call it c. But the event \{c\} has measure 0; that is, the probability that we were going to pick c was 0. But somehow we did pick it, didn't we? How can you do a random experiment and end up getting something that you had probability 0 of getting? What's really going on? Find out next time.


About Jay Cummings

I am a fifth year mathematics Ph.D. student at UCSD.

3 Responses to Introduction to Randomized Algorithms; Part 1

  1. Ryan says:

    Why do you insist on using a distribution that gives non-zero probability to every interval? Does something change significantly if you were to use, say, the uniform distribution on [-1,1]? What if I were to use a finitely supported distribution, maybe a uniform distribution on Graham’s number many natural numbers? Given how fast the Gaussian falls off to zero (heck, it can be uniformly approximated by functions with compact support) it seems like this wouldn’t make too much of a difference (and might be computationally easier? In practice, if you’re actually picking a random number, everything is “discrete” in some sense). Is this just a minor detail, or is there some part of the argument that I’m missing?

    • Jay Cummings says:

The reason is that you want to guarantee that there is a positive probability that you land between \min\{a,b\} and \max\{a,b\}, no matter where those two (distinct with probability 1) numbers end up. So it is critical, because the probability that you do so is precisely what pushes you above 50%.

With, say, the uniform distribution on [-1,1], if a happens to be 3 and b happens to be 4, then c lands between them with probability 0, and hence the success probability is back at exactly 50%.

  2. Pingback: A Probability “Paradox” as per the Previous Post | whateversuitsyourboat
