The Riemann Hypothesis: The Story of a Millennium Problem

Updated: Apr 6, 2020

Hello everybody! This week, I wanted to give you an idea of the story behind a 160-year-old problem and a glimpse into what cutting-edge math looks like. I've been informally looking at a super famous problem called the Riemann Hypothesis for a little while now, and I wanted to see how much of it I could make graspable for all of you. It's such a fascinating problem, and I hope I'm able to give you a bit of perspective on where it comes from and why mathematicians care about it.


I'm going to assume my readers know a little bit about complex numbers (you know, a + bi?) just in terms of what they are and what the real and imaginary parts are, but other than that, I should hopefully have broken everything down enough that you don't need any other prerequisite knowledge. So let's begin!


A fair disclaimer: This article is definitely the hardest to read of any I've written. Don't get frustrated with it. Set it down if it's confusing. Come back (even years later, when you've had more math). The best way to learn math is to let it sink in. Let it simmer until it becomes intuitive, then continue. If this isn't for you, that's fine too! Most of us like to spend more time in the concrete than the abstract. All that being said, understanding the Riemann Hypothesis is a process, but it's one that's extremely rewarding. In reading this article, I hope you can come to understand what exactly mathematicians research, and why they find their subject so pleasing.


A Bit of an Intro


The Riemann Hypothesis is one of the seven Millennium Prize Problems issued by the Clay Mathematics Institute of Cambridge, Massachusetts in 2000. For each of the problems, there's a $1 million reward for a solution! You might have heard of some of them: they include the P vs NP problem and the Poincaré Conjecture, but today, we're going to focus on the Riemann Hypothesis.


The history of the problem is fascinating. Bernhard Riemann was a prodigy who grew up fascinated more by geometry and dimensions than by number theory. Yet somehow his first and only paper in number theory (which was only six pages due to his fear of publishing an unpolished idea) completely transformed the subject. The hypothesis itself was only a passing remark in the paper, and it only gained the spotlight of extensive mathematical research when David Hilbert announced it as one of the most important open problems in his famous address at the International Congress of Mathematicians in 1900. Today, mathematicians truly recognize the impact a proof would have, and the problem currently has a million-dollar bounty attached to it.


If you know a little bit about the Riemann Hypothesis, you know that it promises an answer to an extremely fundamental question. Ever since the human race began to conceptualize math, prime numbers have fascinated us. The sequence 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ... is so fundamental, yet it seems to have no order at all, twisting and turning every which way. If you read my series on prime numbers (part 1 and part 2), you probably heard me mention this fascination and even allude to the hypothesis itself in relation to the primes. The Riemann Hypothesis may just have an answer to why the primes behave the way they do, and that's what I'm going to attempt to explain.


But first, let me take what seems like a sidestep to define a very strange function. The mere fact that this function and the primes are related at all is a remarkable revelation in itself, and that relationship may be just what we need to understand what the Riemann Hypothesis is asking, and what results it could yield.


The Zeta Function

To start out, we're going to define a function. The zeta function ζ(s) is expressed as follows:

ζ(s) = 1/1^s + 1/2^s + 1/3^s + 1/4^s + 1/5^s + ... (one term 1/n^s for every positive integer n)

This function is super famous. You might have seen from one of my earlier posts that ζ(2) = 1 + 1/4 + 1/9 + 1/16 + 1/25 + ... = π^2/6. This is called the Basel problem, and its proof is fascinating in its own right.

Let's think about where it makes sense for the zeta function to be defined. It turns out the series created by the zeta function converges (meaning it approaches a specific number) when s is a real number greater than 1.


To see why, note that when s = 1, we have a series called the harmonic series: ζ(1) = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ... which diverges (meaning it never approaches a specific number and instead approaches infinity). The proof of why the harmonic series diverges usually goes something like this:

ζ(1) = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ...

ζ(1)/2 = 1/2 + 1/4 + 1/6 + 1/8 + 1/10 + ... Doubling this (that is, writing each term twice) gets us back to ζ(1):

ζ(1) = 1/2 + 1/2 + 1/4 + 1/4 + 1/6 + 1/6 + 1/8 + 1/8 + 1/10 + 1/10 + ...

Now let's compare the first and the second series term by term. Since 1 > 1/2, 1/3 > 1/4, 1/5 > 1/6, and so on (while the remaining terms match exactly: 1/2 = 1/2, 1/4 = 1/4, ...), the comparison implies that ζ(1) > ζ(1)!

This is clearly impossible if ζ(1) is a finite number, so the series does not converge to a finite number, and instead grows to infinity.
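If you want to watch this slow climb for yourself, here's a minimal Python sketch (just an illustration of the divergence, not a proof):

```python
# The harmonic partial sums 1 + 1/2 + ... + 1/n never level off;
# they keep growing forever, just very slowly (roughly like ln n).
total = 0.0
n = 0
for target in (10, 1_000, 100_000, 10_000_000):
    while n < target:
        n += 1
        total += 1.0 / n
    print(f"first {n:>10,} terms sum to {total:.4f}")
```

Even after ten million terms, the sum is only around 16.7, but it will eventually pass any number you name.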


The proof that ζ(s) approaches a finite number for every real s greater than 1 is a little harder (it usually involves calculus), but the intuition is that for s > 1, the terms 1/n^s shrink quickly enough that the partial sums level off instead of creeping to infinity.
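For contrast, here's the same experiment at s = 2 (again, just numerics in plain Python, not a proof): the partial sums settle down right next to the π^2/6 value from the Basel problem.

```python
import math

# Partial sums of zeta(2) = 1 + 1/4 + 1/9 + ... level off,
# unlike the harmonic series above.
total = 0.0
for n in range(1, 100_001):
    total += 1.0 / n**2

print(total)            # 1.6449240668... (after 100,000 terms)
print(math.pi**2 / 6)   # 1.6449340668...
```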


But mathematicians didn't stop there. They did something called analytic continuation, where they considered what would happen if they extended the definition of ζ(s). What if we were able to find a finite value for ζ(s) even when the series definition above diverges? And to go even further, what if we made ζ(s) give a value for complex numbers? Through analytic continuation, mathematicians were able to find a value for ζ(s) for nearly any complex value of s.


Analytic continuation is a process of extending a function beyond its original definition while preserving its essential properties (usually that means keeping a property called differentiability, but you don't need to know what that means to understand this). In fact, when mathematicians realized that the square root function only seemed to make sense for nonnegative real numbers and defined the square roots of negative numbers as imaginary numbers, that was an extension in the same spirit! Even when people first defined negative numbers, that was a kind of continuation.


The important thing to realize here is that we're not simply extending the zeta function willy-nilly. We're doing it in a way that's not arbitrary, a way that makes sense, and a way that will be meaningful when applied (just like the complex numbers are across the scientific world).


The crazy thing is that this works! We can extend the zeta function so that we can plug in any s in the complex plane, with the single exception of s = 1, where even the extended function blows up. If you've ever delved pretty far into the abstract math world, you might have seen this famous equation:

1 + 2 + 3 + 4 + 5 + 6 + ... = -1/12

No, 1 + 2 + 3 + 4 + 5 + 6 + ... does not really equal -1/12. It approaches infinity, since the partial sums 1 + 2 + 3 + ... + n grow past any finite bound as we keep adding terms. But, in a sense, 1 + 2 + 3 + 4 + 5 + 6 + ... does equal -1/12, even though that value is negative and even though it's a fraction. And this is mind-blowing.

In a way, this is because plugging s = -1 into the original series formally gives 1 + 2 + 3 + 4 + ..., and the analytically continued zeta function assigns that point the value ζ(-1) = -1/12.
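If you'd like to check this on a computer, the mpmath Python library (an assumption on my part that you're willing to install it; it's a standard arbitrary-precision math package) implements the analytically continued zeta function:

```python
# Assumes mpmath is installed (pip install mpmath). Its zeta()
# implements the analytic continuation, so it returns a finite value
# even where the series 1 + 2 + 3 + ... blows up.
from mpmath import zeta

print(zeta(-1))  # -0.0833333333333333, i.e. -1/12
print(zeta(2))   # 1.64493406684823, i.e. pi^2/6 as a sanity check
```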

There's way more I could say about this, but for now, I'll have to placate you with a great video on the topic so we can get back to the task at hand.

Essentially, raising a number to a complex power represents not just repeated multiplication (from the real part) but also a rotation (generated by the imaginary part). As a result, we're able to find values of ζ(s) even if s = 3 - i or s = 2 + 3i or even, as we discussed, s = -1.
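Here's a tiny sketch of that idea in Python, which handles complex exponents natively (the particular s and n below are arbitrary picks for illustration):

```python
import cmath
import math

# n^(-s) for s = a + bi has size n^(-a) (set by the real part) and is
# rotated by the angle -b*ln(n) (set by the imaginary part), because
# n^(-a - bi) = n^(-a) * e^(-i*b*ln(n)).
s = 2 + 3j
n = 5
term = n ** (-s)                # Python exponentiates complex numbers natively
size, angle = cmath.polar(term)

print(size, n ** (-s.real))     # the sizes match: n^(-Re(s))
print(angle, (-s.imag * math.log(n)) % (2 * math.pi))  # angles match (mod 2*pi)
```

The upshot: each term 1/n^s of the zeta series is a little arrow in the complex plane, shrunk by the real part of s and spun by the imaginary part.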


Analytic continuation is especially hard to understand if you've never seen it before, so I swear by 3Blue1Brown's incredible video on the zeta function, which allows you to both visualize and understand how analytic continuation works.

Yes, this concept is super abstract (I know!) but it's at the heart of this problem, and watching his video will explain it quite a bit better than I can, or than the actual equations of the zeta function themselves can.


Now, we're in a position to state the Riemann Hypothesis.

The Riemann Hypothesis states that all the nontrivial zeros of the zeta function (values of s such that ζ(s) = 0) have real part 1/2.


Let me try to break this down. Analyzing when the zeta function equals 0 reveals that all the negative even numbers (-2, -4, -6, -8, etc.) definitely give ζ(s) = 0, but we call those the trivial zeros. We know they exist and they're not central to our investigation, so we call them "trivial." The nontrivial zeros are any other values of s that give ζ(s) = 0. The problem is to prove that all of these zeros can be expressed in the form 1/2 + ai for some real number a.
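If you want to poke at these zeros yourself, here's a small sketch using mpmath again (zeta and zetazero are mpmath's own functions; treat this as exploration, not proof):

```python
# Assumes mpmath is installed. zetazero(k) returns the k-th
# nontrivial zero; every one computed so far has real part 1/2.
from mpmath import zeta, zetazero

print(zeta(-2))    # 0.0 -- a trivial zero, as expected

rho = zetazero(1)  # the first nontrivial zero
print(rho)         # (0.5 + 14.1347251417347j)
print(zeta(rho))   # ~0, up to numerical precision
```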


Mathematicians have already managed to prove that all of the nontrivial zeros have real parts strictly between 0 and 1 (they call this zone the critical strip), and the problem is to narrow this strip down to a single line (real part exactly 1/2).


So what could this possibly have to do with primes? It seems we're just analyzing a random function. This is where this problem gets really fascinating, and that's the next step of my explanation.


The Primes

The first thing I'm going to talk about is another way to represent the zeta function. In a stroke of brilliance, Leonhard Euler was able to write ζ(s) in a different form:

ζ(s) = (1 + 1/2^s + 1/4^s + 1/8^s + ...)(1 + 1/3^s + 1/9^s + 1/27^s + ...)(1 + 1/5^s + 1/25^s + 1/125^s + ...)...
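Each parenthesized factor is a geometric series running over the powers of a single prime, so it collapses to 1/(1 - p^(-s)), and there's one factor for each prime 2, 3, 5, 7, ... If you want to convince yourself numerically that this product really matches the sum, here's a quick Python check at s = 2 (the cutoff of 500 is just an arbitrary choice of mine):

```python
import math

# Multiply the Euler factors 1/(1 - p^(-s)) over all primes below 500
# and compare against zeta(2) = pi^2/6.
primes = [p for p in range(2, 500)
          if all(p % d for d in range(2, int(p**0.5) + 1))]

s = 2
product = 1.0
for p in primes:
    product *= 1 / (1 - p ** (-s))

print(product)           # just shy of the target; more primes get closer
print(math.pi ** 2 / 6)  # ~1.6449, which is zeta(2)
```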