How do we know that 1/2 + 1/4 + 1/8 + … = 1?

(For a similar question in another historical context, see:  How do we know that atoms really exist?)

Here’s one way to think about it:

There are other ways to think about this.  For instance, let’s call the sum s:

    \[ s=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\dots \]

One thing the diagram shows that might not be immediately obvious from the equation is that each term in the series is half of the preceding term.  So instead of writing

    \[ s=\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\dots, \]

we can write:

    \[ s=\frac{1}{2}+\left(\frac{1}{2}\right)\times\frac{1}{2}+\left(\frac{1}{2}\right)\times\frac{1}{4}+\left(\frac{1}{2}\right)\times\frac{1}{8}+\dots \]

But now look at something:

The series shows up again!  It’s just shifted one place to the right, with every term multiplied by \frac{1}{2}:

    \[ s=\frac{1}{2}+\left(\frac{1}{2}\right)\times\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\dots\right) \]

This just means you can substitute the series itself back into the equation:

    \[ s=\frac{1}{2}+\frac{1}{2}s \]

And we can solve this equation with simple algebra to get s=1.  This trick works any time each term of a series is the preceding term multiplied by the same fixed number.  We can try it with \frac{1}{3}:

    \[ s=\frac{1}{3}+\frac{1}{9}+\frac{1}{27}+\dots, \]

    \[ s=\frac{1}{3}+\left(\frac{1}{3}\right)\times\frac{1}{3}+\left(\frac{1}{3}\right)\times\frac{1}{9}+\left(\frac{1}{3}\right)\times\frac{1}{27}+\dots \]

    \[ s=\frac{1}{3}+\left(\frac{1}{3}\right)\times\left(\frac{1}{3}+\frac{1}{9}+\frac{1}{27}+\dots\right) \]

    \[ s=\frac{1}{3}+\frac{1}{3}s \]

which gives us s=\frac{1}{2}.  In fact, we can replace the multipliers \frac{1}{2} or \frac{1}{3} with any number r to get:

    \[ s=r+rs \]

or, in other words,

    \[ s=\frac{r}{1-r} \]

We’ll call this the “geometric series formula.”
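To see the formula in action numerically, here’s a minimal Python sketch (the helper name partial_sum is mine, not from the post): for any r with |r|<1, the partial sums of r+r^2+r^3+\dots creep up on \frac{r}{1-r}.

```python
# A quick numerical check of the geometric series formula s = r / (1 - r):
# the partial sums of r + r^2 + r^3 + ... approach that value when |r| < 1.

def partial_sum(r, n_terms):
    """Add up the first n_terms of the series r + r^2 + r^3 + ..."""
    return sum(r ** k for k in range(1, n_terms + 1))

for r in (1 / 2, 1 / 3):
    formula = r / (1 - r)
    approx = partial_sum(r, 60)
    print(f"r = {r:.4f}: partial sum = {approx:.10f}, formula = {formula:.10f}")
```

Sixty terms is already far more than enough here: for r=\frac{1}{2} the partial sum agrees with 1 to well beyond ten decimal places.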

The reason 1/2 + 1/4 + 1/8 + … is called a geometric series is because any number in the series is what’s known as the “geometric mean” of its two adjacent numbers. So for example, take the term 1/4 in the series above. The terms before it and after it in the series are 1/2 and 1/8. If you have a rectangle where one side is 1/2 units long and the other side is 1/8 units long, then the area of that rectangle will equal the area of a square with sides that are 1/4 units long. So from a geometric point of view, 1/4 is an average, or mean, of the values 1/2 and 1/8.
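The rectangle-and-square claim is easy to check numerically for every term of the series at once; a small sketch (variable names are mine):

```python
import math

# The "geometric mean" claim from above: each term of 1/2 + 1/4 + 1/8 + ...
# is the geometric mean sqrt(a * c) of its two neighbors a and c.
terms = [1 / 2 ** k for k in range(1, 8)]   # 1/2, 1/4, 1/8, ..., 1/128

for a, b, c in zip(terms, terms[1:], terms[2:]):
    # a rectangle with sides a and c has the same area as a square with side b
    assert math.isclose(a * c, b * b)
print("every term is the geometric mean of its neighbors")
```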

Let’s Get Pedantic

“So hold up, r can be any number?” you say, incredulously.

“Well, almost any number,” I say.

“What if I choose the number r to be -1?  Then the series goes (-1)+1+(-1)+1+(-1)+\dots  If I group every two numbers together, I get (1-1)+(1-1)+(1-1)+\dots=0+0+0+\dots=0.”

“Okay,” I say.

“But if I group the 2nd and 3rd together, and the 4th and 5th together, and leave the first out, I get -1+(1-1)+(1-1)+(1-1)+\dots=-1+0+0+0+\dots=-1.”


“And if I do it your way, with the geometric series formula you just told me, I get:

    \[ s=\frac{r}{1-r}=\frac{-1}{1-(-1)}=-\frac{1}{2} \]

So what gives?”

Alright, I may have overstated the case for my little formula just a tad.  But let’s take a closer look at the series.  If you just start adding terms together, you get -1 after the first term, (-1)+1=0 after the second term, (-1)+1+(-1)=-1 after the third term, and so on.  So in a sense, -1 and 0 are both acceptable answers, as the partial sums oscillate back and forth between them.  And even -\frac{1}{2} is kind of an acceptable answer, since it’s halfway between the two.

We can make this notion more precise.  Look at the sequence of partial sums formed by adding successive terms of the series: -1,0,-1,0,\dots  Averaging the terms of this sequence gives a new sequence:

    \[ \frac{-1}{1}=-1,\quad\frac{-1+0}{2}=-\frac{1}{2},\quad\frac{-1+0+(-1)}{3}=-\frac{2}{3},\quad\frac{-1+0+(-1)+0}{4}=-\frac{1}{2},\quad\dots \]

This sequence does converge to -\frac{1}{2}, meaning that, as we expected, the average value of the series (-1)+1+(-1)+1+\dots is -\frac{1}{2}.  This idea of averaging the partial sums of a series that oscillates back and forth indefinitely is called Cesaro summation, after Ernesto Cesaro, a 19th-century Italian mathematician.  It turns out that, as long as it isn’t infinite, the Cesaro sum will always equal the result of the geometric series formula derived in the first section.
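Here’s a small Python sketch of that averaging process (names like cesaro_means are mine): compute the partial sums, then their running averages, and watch the averages settle at -\frac{1}{2} even though the partial sums never stop bouncing.

```python
from itertools import accumulate

# Cesaro summation of (-1) + 1 + (-1) + 1 + ...: the partial sums bounce
# between -1 and 0 forever, but their running averages settle at -1/2.
N = 100_000
terms = [(-1) ** k for k in range(1, N + 1)]        # -1, 1, -1, 1, ...
partials = list(accumulate(terms))                  # -1, 0, -1, 0, ...
# running averages of the partial sums
cesaro_means = [s / n for n, s in enumerate(accumulate(partials), start=1)]

print(cesaro_means[:4])   # -1.0, -0.5, -0.666..., -0.5, matching the averages above
print(cesaro_means[-1])   # essentially -0.5
```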

“But what about…”

“Alright,” you say, “that was a one-off case.  What if, instead of choosing r=-1, we chose r=2?  Then we get s=2+4+8+16+\dots, which clearly doesn’t bounce back and forth like the other series.  You can’t even average the values together, because the series keeps growing and growing. (Mathematicians would say that this series is not Cesaro summable, or that the Cesaro sum is infinite).  But if we use the geometric series formula, we get:

    \[ s=\frac{r}{1-r}=\frac{2}{1-2}=-2 \]

So you’re telling me that if we start adding together bigger and bigger positive numbers, and never subtract or add negative numbers, and the series always gets bigger and bigger, that all of those sums added together will give us -2?  Tell me the truth; you’re just making this up as you go along.”

This is precisely what Leibniz and Euler were doing in the 17th and 18th centuries.  Leibniz was happy to accept the geometric series formula as gospel, but Euler looked for a deeper explanation.  He, too, had a hard time denying the power of the geometric series formula in other situations like the one we mentioned in the previous section.  He thought about the graph of the formula s=\frac{r}{1-r}:

For our standard converging geometric series, as r gets closer and closer to 1 from below, the sum gets bigger and bigger (just as the series for r=\frac{1}{2} has a bigger sum than r=\frac{1}{3}).  Eventually, the sum goes to infinity and then “wraps around” to negative infinity, at which point it begins to get smaller.  Euler reckoned that this process was exactly what was happening with the series 2+4+8+\dots, that the numbers were getting so big, they were “wrapping around” infinity to negative infinity and coming out the other side to become negative.  This is how he reasoned that the sum of ever-increasing positive numbers would give a negative number.  He even figured out a formula to do this for all sorts of series.  We call this formula an Euler summation.  For geometric series, the Euler sum is always the same as the geometric series formula.
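Absent the original graph, a quick numerical sketch of s=\frac{r}{1-r} near r=1 shows the behavior Euler was looking at (the helper name geometric_formula is mine): the formula blows up toward +\infty as r approaches 1 from below, then reappears hugely negative just past 1 and shrinks toward -2 at r=2.

```python
# Euler's picture of s = r/(1-r): climbing toward r = 1 sends the formula
# to +infinity; just past r = 1 it comes back hugely negative and shrinks
# toward -2 at r = 2.

def geometric_formula(r):
    return r / (1 - r)

for r in (0.9, 0.99, 0.999, 1.001, 1.01, 1.1, 2.0):
    print(f"r = {r:>6}: s = {geometric_formula(r):12.3f}")
```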

“Alright, smart guy, one more”

“Okay, that last one was really reaching, but I’ll let it slide for now.  But what about r=1?  Even Euler can’t deny that \frac{1}{1-1}=\frac{1}{0}, and every math teacher I’ve ever had tells me that you can’t divide by zero.  Ha!  I finally have you!”

It’s true: the geometric series formula has run out of tricks and is no longer of use to us.  The Cesaro sum is infinite and the Euler sum doesn’t exist.  Is there any way to make sense of this series?  Well, first of all, let’s take a look at what the series looks like.  If the first term is 1 and each successive term is just the preceding term multiplied by r=1, then the series is just s=1+1+1+1+\dots  So can it be summed?

Amazingly, it can!  The groundwork was laid by Euler and later by Riemann, who were studying sums of the form:

    \[ s=\frac{1}{1^x}+\frac{1}{2^x}+\frac{1}{3^x}+\dots \]

or, in more modern notation:

    \[ \zeta(x)=\sum_{n=1}^{\infty}\frac{1}{n^x} \]

This is known as the Riemann zeta function, and it has incredibly deep properties that mathematicians are still trying to unravel, even after two centuries.  For our purposes, it suffices to recognize that our series is just what happens when you set x=0 in the Riemann zeta function:

    \[ \zeta(0)=\frac{1}{1^0}+\frac{1}{2^0}+\frac{1}{3^0}+\dots \]

Since any number to the zero power is 1, by plugging in zero for all the powers, we get back our original series 1+1+1+1+\dots  Now the only question is how we evaluate the Riemann zeta function at x=0.
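A quick Python sketch (the helper name zeta_partial is mine) shows both behaviors of the defining sum: for x=2 it converges nicely, approaching the well-known value \pi^2/6, while at x=0 the partial sums just count terms and grow without bound:

```python
import math

# The zeta sum converges only when x > 1; at x = 0 every term is 1, so the
# partial sums simply count how many terms we've added.

def zeta_partial(x, n_terms):
    return sum(1 / n ** x for n in range(1, n_terms + 1))

print(zeta_partial(2, 1_000_000))                 # approaches pi^2/6 = 1.6449...
print(math.pi ** 2 / 6)
print(zeta_partial(0, 10), zeta_partial(0, 100))  # 10 and 100: no convergence
```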

It would be nice if we could relate this series to another series that we know how to sum.  For instance, our current series, 1+1+1+1+\dots, looks a lot like the series (-1)+1+(-1)+1+\dots, which we know has a Cesaro sum of -\frac{1}{2}.  We’ll actually consider another very closely related series, 1-1+1-1+\dots, which bounces back and forth between 1 and 0 and has a Cesaro sum of \frac{1}{2}.  So how do we turn the first series into the second series?  Like this:

    \[ \begin{array}{r} 1+1+1+1+\dots\\ {}+\;\;0-2+0-2+\dots\\ \hline 1-1+1-1+\dots \end{array} \]

The first series is just 1+1+1+1+\dots=\zeta(0).  We can take all the zeros out of the second series to get -2-2-2-2-\dots=-2\times\zeta(0).  And we mentioned earlier that the series 1-1+1-1+\dots is Cesaro summable to \frac{1}{2}.  So we have the equation:

    \[ \zeta(0)-2\zeta(0)=\frac{1}{2} \]

which we can solve to get \zeta(0)=-\frac{1}{2}.
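As a sanity check on the ingredient this argument leans on, here is a small Python sketch confirming that the Cesaro sum of 1-1+1-1+\dots really is \frac{1}{2} (variable names are mine):

```python
from itertools import accumulate

# Cesaro sum of 1 - 1 + 1 - 1 + ...: partial sums are 1, 0, 1, 0, ...,
# and their running averages settle at 1/2.
N = 100_000
partials = list(accumulate((-1) ** k for k in range(N)))   # 1, 0, 1, 0, ...
means = [s / n for n, s in enumerate(accumulate(partials), start=1)]

print(means[-1])   # essentially 0.5, the value used in the equation above
```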

Why It Matters

We’ve seen that mathematicians over the years have come up with ways to add up infinite sets of numbers with varying degrees of success and plausibility.  But a lot of this seems counterintuitive.  You might reasonably ask whether the math that we’ve done here has any bearing on the real world.  In fact, it does, in a technique called renormalization that’s used by physicists today.  It turns out that a lot of important theories in physics generate series like the ones above.  If we were to naively add up these series, we would get really weird results, like electrons with infinite electric charge.  Physicists can use procedures like the ones we’ve mentioned here to get sensible results and make reasonable predictions that can be tested.  But many physicists are suspicious of these methods and hypothesize that there’s a deeper reason, not yet understood, for why this math works.  For now, these methods allow us to tackle important physical problems, in addition to being the key to how we know that \frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\dots=1.

