Updated: Apr 6, 2020
Hello everybody! This week, I wanted to give you an idea of the story behind a centuries-old problem and a glimpse into what cutting-edge math looks like. I've been informally looking at a super famous problem called the Riemann Hypothesis for a little while now, and I wanted to see what aspects of it all of you can grasp as well. It's such a fascinating problem and I hope I'm able to give you a bit of a perspective of where it comes from and why mathematicians care about it.
I'm going to assume my readers know a little bit about complex numbers (you know, a + bi?) just in terms of what they are and what the real and imaginary parts are, but other than that, I should hopefully have broken everything down enough that you don't need any other prerequisite knowledge. So let's begin!
A fair disclaimer: This article is definitely the hardest to read that I've ever written. Don't get frustrated with it. Set it down if it's confusing. Come back (even years later when you've had more math). The best way to learn math is to let it sink in slowly. Let it simmer until it becomes intuitive, then continue. If this isn't for you, that's fine too! Most of us like to spend more time in the concrete than the abstract. All that being said, understanding the Riemann Hypothesis is a process—but it's one that's extremely rewarding. In reading this article, I hope you can come to understand what exactly mathematicians research, and why they find their subject so pleasing.
A Bit of an Intro
The Riemann Hypothesis is one of the seven Millennium Prize Problems posed by the Clay Mathematics Institute of Cambridge, Massachusetts in 2000. For each of the problems, there's a $1 million reward for a solution! You might have heard of some of them: they include the P vs NP problem and the Poincaré Conjecture, but today, we're going to focus on the Riemann Hypothesis.
The history of the problem is fascinating. Bernhard Riemann was a prodigy who grew up fascinated more by geometry and dimensions than number theory. Yet, somehow, his first and only paper in number theory (which ran just a handful of pages, owing to his reluctance to publish unpolished ideas) completely transformed the subject. The hypothesis itself was only a passing remark in the paper, and it only gained the spotlight of extensive mathematical research when David Hilbert included it in his famous list of problems at the 1900 International Congress of Mathematicians. Today, mathematicians truly recognize the impact this problem would have, and it currently has a million-dollar bounty attached to its proof.
If you know a little bit about the Riemann Hypothesis, you know that the problem may hold the answer to an extremely fundamental question. Ever since the human race began to conceptualize math, prime numbers have fascinated us. The sequence 2, 3, 5, 7, 11, 13, 17, 19, 23, 29... is so fundamental, yet it seems to have no order at all, twisting and turning every which way. If you read my series on prime numbers (part 1 and part 2), you've heard me mention this fascination and even allude to the hypothesis itself in relation to primes. The Riemann Hypothesis may just have an answer to why the primes behave the way they do, and that's what I'm going to attempt to explain.
But first, let me take what seems like a sidestep to define a very strange function. The fact that this function and the primes are related at all is a remarkable revelation in itself, and this relationship may be just what we need to understand what the Riemann Hypothesis is asking—and what results it could yield.
The Zeta Function
To start out, we're going to define a function. The zeta function ζ(s) is expressed as follows:

ζ(s) = 1/1^s + 1/2^s + 1/3^s + 1/4^s + ... = Σ 1/n^s, summed over all positive integers n
This function is super famous. You might have seen from one of my earlier posts that ζ(2) = 1 + 1/4 + 1/9 + 1/16 + 1/25 + ... = π^2/6. This is called the Basel problem, and its proof is also fascinating:
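As a quick numerical sanity check (my own addition, not part of the original proof), summing the first million terms in Python lands very close to π^2/6:

```python
import math

# Partial sum of the Basel series 1 + 1/4 + 1/9 + ... out to a million terms
partial = sum(1 / n**2 for n in range(1, 1_000_001))

print(partial)         # ≈ 1.6449331
print(math.pi**2 / 6)  # ≈ 1.6449341
```

The leftover gap of about one millionth comes from the tail of the series we didn't sum.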
Let's think about where it makes sense for the zeta function to be defined. It turns out the series created by the zeta function converges (meaning it approaches a specific number) when s > 1.
To see why, note that when s = 1, we have a series called the harmonic series: ζ(1) = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ... which diverges (meaning it never approaches a specific number and instead approaches infinity). The proof of why the harmonic series diverges usually goes something like this:
ζ(1) = 1 + 1/2 + 1/3 + 1/4 + 1/5 + ...
ζ(1)/2 = 1/2 + 1/4 + 1/6 + 1/8 + 1/10 + ... , so
ζ(1) = 1/2 + 1/2 + 1/4 + 1/4 + 1/6 + 1/6 + 1/8 + 1/8 + 1/10 + 1/10 + ...
Now let's compare the first and the second series term by term. Since 1 > 1/2, 1/3 > 1/4, 1/5 > 1/6, and so on (while the remaining terms are equal), the comparison implies that ζ(1) > ζ(1)!
This is clearly not possible if ζ(1) is a finite number, so the series is not a finite number, and therefore approaches infinity.
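To see just how slowly this divergence happens, here's a small Python check (my addition; the sample values of n are arbitrary). The partial sums creep upward like the natural logarithm of n, which grows without bound but takes its sweet time:

```python
import math

def harmonic(n):
    # The n-th partial sum of the harmonic series: 1 + 1/2 + ... + 1/n
    return sum(1 / k for k in range(1, n + 1))

for n in (10, 1_000, 100_000):
    # The partial sum stays close to ln(n) plus a constant (about 0.577)
    print(n, harmonic(n), math.log(n))
```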
The proof that for every real number s greater than 1, ζ(s) does indeed approach a finite number is a little harder (it usually involves calculus), but if you think about it, it begins to make intuitive sense that ζ(s) does indeed converge for s > 1.
But mathematicians didn't stop there. They did something called analytic continuation, where they considered what would happen if they extended the definition of ζ(s). What if we could assign a finite value to ζ(s) even when the series definition above diverges? And to go even further, what if we made ζ(s) give a value for complex numbers? Through analytic continuation, mathematicians were able to find a value of ζ(s) for any complex value of s other than s = 1.
Analytic continuation is a process of extending a function beyond its original definition while preserving its essential properties (usually a property called differentiability, but you don't need to know what that means to understand this). It's in the same spirit as older extensions of our number system: when mathematicians refused to let the square root function stop at nonnegative numbers and defined the square roots of negatives as imaginary numbers, or even when people first accepted negative numbers, they were extending a familiar concept beyond its original domain. (Those weren't analytic continuation in the technical sense, but the idea is the same.)
The important thing to realize here is we're not simply extending the zeta function willy-nilly however we want. We're doing it in a way that's not arbitrary, a way that makes sense, and a way that will be meaningful when applied (just like the complex numbers are across the scientific world).
The crazy thing is that this works! We can extend the zeta function so that we can plug in any s in the complex plane (except s = 1, where it still blows up)! If you've ever delved pretty far into the abstract math world, you might have seen this famous equation:
1 + 2 + 3 + 4 + 5 + 6 + ... = -1/12
No, 1 + 2 + 3 + 4 + 5 + 6 + ... does not really equal - 1/12. It approaches infinity, as it's clear that no matter what finite value 1 + 2 + 3 + ... + n is, we can get substantially higher than it by adding the next term. But, in a sense, 1 + 2 + 3 + 4 + 5 + 6 + ... does equal -1/12, even though it's negative and even though it's a fraction. And this is mind-blowing.
In a way, this is because ζ(-1) = -1/12.
There's way more I could say about this, but for now, I'll have to placate you with a great video so we can get back to the task at hand:
Essentially, raising a number to a complex power represents not just repeated multiplication (from the real part) but also a rotation (generated by the imaginary part). Concretely, n^(a+bi) = n^a · (cos(b·ln n) + i·sin(b·ln n)): the real part a controls the size, while the imaginary part b spins the result around the origin. As a result, we're able to find values of ζ(s) even if s = 3-i or s = 2+3i or even, as we discussed, s = -1.
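To make that concrete, here's a tiny Python check (my own illustration, not from the original post; the number 14.134725 is approximately the height of the first nontrivial zeta zero, used here just as an interesting exponent):

```python
# A single term 1/n^s of the zeta series, with n = 2 and a complex exponent s
s = 0.5 + 14.134725j
val = 2 ** -s

# The magnitude depends only on the real part of s: |2^-s| = 2^(-0.5)
print(abs(val), 2 ** -0.5)  # both ≈ 0.7071067811865476
```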
Analytic continuation is especially hard to understand if you've never seen it before, so I swear by 3Blue1Brown's incredible video that allows you to both visualize and understand how analytic continuation works:
Yes, this concept is super abstract (I know!) but it's central to the heart of this problem, and watching his video will help explain it quite a bit better than I can—or the actual equations of the zeta function themselves can.
Now, we're in a position to state the Riemann Hypothesis.
The Riemann Hypothesis states that all the nontrivial zeroes of the zeta function (values of s such that ζ(s) = 0) have real part 1/2.
Let me try to break this down. Analyzing when the zeta function equals 0 reveals that all the negative even numbers (-2, -4, -6, -8, etc) definitely give ζ(s) = 0, but we call those the trivial zeros. We know they exist and they're not central to our investigation, so we call them "trivial." The nontrivial zeros are any other values of s that give ζ(s) = 0. The problem is to prove that all of these zeros can be expressed in the form 1/2 + ai for some real number a.
Mathematicians have already managed to prove that all of the nontrivial zeros have real parts between 0 and 1 (they call this zone the critical strip), and the problem is to narrow down this strip to a line (just real part = 1/2).
So what could this possibly have to do with primes? It seems we're just analyzing a random function. This is where this problem gets really fascinating, and that's the next step of my explanation.
The first thing I'm going to talk about is another way to represent the zeta function. In a stroke of brilliance, Leonhard Euler was able to write ζ(s) in a different form:
ζ(s) = (1 + 1/2^s + 1/4^s + 1/8^s + ...)(1 + 1/3^s + 1/9^s + 1/27^s + ...)(1 + 1/5^s + 1/25^s + 1/125^s + ...)...
It's the product, taken over all the primes, of the sums of the reciprocals of each prime's powers (each raised to the s). Notice that since every single positive integer can be represented as a product of primes, every single reciprocal of a positive integer can be represented as a product of reciprocals of prime powers (shown here with s = 1 to keep the fractions simple):
For example, 12 = 2*2*3 and 1/12 = (1/2)(1/2)(1/3) = (1/4)(1/3)
or 100 = 2*2*5*5 and 1/100 = (1/2)(1/2)(1/5)(1/5) = (1/4)(1/25)
and the same reasoning applies to every other integer.
Thus, every term of the zeta function can be formed as a product of terms in what came to be called Euler's Product above, and (because prime factorizations are unique) every finite product of terms above matches exactly one term of the zeta function. Therefore, the two are equal.
The formula way of writing Euler's Product Formula is this. (∏ here just means "multiply all this," the same way Σ means "add all this.")

ζ(s) = ∏ 1/(1 - 1/p^s), multiplied over all primes p
And the way it is often proven is by the fact that multiplying the zeta function by 1 - 1/p^s essentially "removes" every term that contains p, and multiplying ζ(s) by all such products formed from all such primes removes every term, leaving only 1. Therefore dividing out all of the 1 - 1/p^s factors gives us the expression above. The start to that proof is below and a full explanation is here.
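If you want to see the two sides agree numerically, here's a quick Python sketch (my own addition; the truncation points of 100,000 series terms and primes up to 1,000 are arbitrary):

```python
import math

def primes_up_to(n):
    # Simple Sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2

# Truncated zeta series: sum of 1/n^s
zeta_sum = sum(1 / n**s for n in range(1, 100_001))

# Truncated Euler product: each factor is the geometric series 1/(1 - 1/p^s)
euler_prod = 1.0
for p in primes_up_to(1000):
    euler_prod *= 1 / (1 - p**-s)

print(zeta_sum, euler_prod, math.pi**2 / 6)  # all three agree to ~3 decimals
```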
It is this relationship—the fact that the zeta function is intrinsically related to the primes—that is eventually going to lead us to the fascinating final conclusion.
But before we get directly into how this explains the primes, I have to introduce something called the Prime Number Theorem.
At some point in the history of analyzing primes, mathematicians stopped caring about the primes as individual entities and the direct differences between them (after all, the sequence of gaps 3-2, 5-3, 7-5, 11-7, 13-11, 17-13, 19-17 = 1, 2, 2, 4, 2, 4, 2... held no real answers), and they started looking at the big picture. They defined a function called π(x) that spits out the number of primes less than or equal to a given x. For example, π(12) = 5, and π(23) = 9. (Pi and circles have absolutely nothing to do with what's going on here: π is simply the Greek letter used to represent the function.)
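Here's a short Python version of π(x) (my own sketch, using a basic Sieve of Eratosthenes) that reproduces the examples above:

```python
def prime_pi(x):
    """Count the primes less than or equal to x."""
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            # Cross off every multiple of p starting at p*p
            for m in range(p * p, x + 1, p):
                sieve[m] = False
    return sum(sieve)

print(prime_pi(12))  # 5  (the primes 2, 3, 5, 7, 11)
print(prime_pi(23))  # 9
```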
I actually talked a good bit about both π(x) and the Prime Number Theorem in one of my last articles, and it gives a full and complete explanation of what this means. I'd like you to read the beginning of that article before continuing this one, so I'll keep it brief here: The Prime Number Theorem gives an approximation to π(x).
It states that π(x) gets closer and closer to another function that's much easier to grasp, x/ln(x), and, more precisely, to a function called the logarithmic integral. (It probably looks scary if you haven't had calculus, but I swear it's not that bad):

Li(x) = ∫ dt/ln(t), integrated from 2 to x
This means that mathematicians finally have a grasp on how the prime numbers as a whole behave. This theorem was revolutionary not only because it gave an approximation to π(x) but because it actually proved that Li(x) gets closer and closer to π(x), in the sense that the ratio π(x)/Li(x) approaches 1 as x gets larger and larger.
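To see the theorem in action, here's a quick numerical comparison (my own sketch; the choice of x = 100,000 and the trapezoid-rule integration with 100,000 steps are arbitrary):

```python
import math

def prime_pi(x):
    # Count primes <= x with a simple Sieve of Eratosthenes
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, x + 1, p):
                sieve[m] = False
    return sum(sieve)

def li(x, steps=100_000):
    # Trapezoid-rule estimate of the logarithmic integral of dt/ln(t) from 2 to x
    a, b = 2.0, float(x)
    h = (b - a) / steps
    total = 0.5 * (1 / math.log(a) + 1 / math.log(b))
    for i in range(1, steps):
        total += 1 / math.log(a + i * h)
    return total * h

x = 100_000
count = prime_pi(x)
# Li(x) lands much closer to pi(x) than x/ln(x) does
print(count, x / math.log(x), li(x))
```

At x = 100,000 there are 9,592 primes; x/ln(x) undershoots by roughly 900, while Li(x) overshoots by only a few dozen.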
But after all, this is just an approximation. Is there a way to take this approximation and find an exact value for π(x)?
This is the moment you've been waiting for. The importance of the Riemann Hypothesis is that it would give us a way to express the primes exactly, something that millennia worth of mathematicians only dreamed of doing.
It turns out that there is a way to express the zeta function in terms of its nontrivial zeros (almost like quadratics can be expressed in terms of their zeros, as in x^2 + 2x - 3 = (x+3)(x-1), but a bit more complicated), and that—together with the fact that the zeta function can be expressed as the Euler Product Formula in terms of the primes—can be manipulated into this equation:

J(x) = Li(x) - Σ Li(x^ρ) - ln(2) + ∫ dt/(t(t^2 - 1)ln(t)), where the sum runs over the nontrivial zeros ρ and the integral runs from x to ∞
Here, J(x) is not quite π(x), but it's very closely related. It's called the prime power counting function, and it's expressed as follows:

J(x) = π(x) + (1/2)π(x^(1/2)) + (1/3)π(x^(1/3)) + (1/4)π(x^(1/4)) + ...
You essentially get one point for a prime, half a point for a prime squared, a third of a point for a prime cubed, etc. It's a little bit different and less direct than π(x) is, but it is just as magical.
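In Python, that scoring rule can be sketched as follows (my own illustration; the integer_root helper simply avoids floating-point rounding errors when taking n-th roots):

```python
def prime_pi(x):
    # Count primes <= x with a simple Sieve of Eratosthenes
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, x + 1, p):
                sieve[m] = False
    return sum(sieve)

def integer_root(x, n):
    # Largest integer r with r**n <= x, computed without float surprises
    r = round(x ** (1 / n))
    while r**n > x:
        r -= 1
    while (r + 1) ** n <= x:
        r += 1
    return r

def J(x):
    # One point per prime, 1/2 per prime square, 1/3 per prime cube, ...
    total, n = 0.0, 1
    while integer_root(x, n) >= 2:
        total += prime_pi(integer_root(x, n)) / n
        n += 1
    return total

print(J(100))  # 25 + 4/2 + 2/3 + 2/4 + 1/5 + 1/6 ≈ 28.533
```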
So, what do all these crazy equations mean? They mean that we can find an exact expression for J(x) (and indirectly π(x)) in terms of the zeros of the zeta function! That second term above is a sum of logarithmic integrals Li(x^ρ), where ρ runs over the nontrivial zeros of the zeta function. We can pretty much disregard the last two terms, as they're simply a constant and a small error term, and focus on the fact that we just represented a fundamental prime-counting function exactly by using functions of the nontrivial zeros of the zeta function. This brings order to the primes.
One subtlety: this formula holds whether or not the Riemann Hypothesis is true, but how much order it brings depends on where the zeros ρ actually sit. If all of them have real part 1/2, each correction term Li(x^ρ) stays as small as possible, and we get the tightest possible control over the primes.
For a longer explanation of exactly how these equations came to be—and more on what the zeta function has to do with the primes, watch another incredible video, James Grime's explanation of the Riemann Hypothesis:
What Does All This Mean?
At this point, everything I just showed you was incredibly abstract. We walked through a bunch of concepts, a bunch of equations, and a bunch of ideas that we can't really fully understand until we spend years on the problem (and more than likely, get a degree). If you get too lost in the details, this may seem to mean nothing at all, so let's zoom out a bit now. Let's focus on the big picture.
Proving the Riemann Hypothesis would bring order to the primes. The equations may look insane, but they break down into an explanation of how primes (and in turn, numbers as a whole) work. This explanation is not only far from intuitive, it's quite beautiful. It's bizarre that the zeta function, which seems so far removed from the primes, reveals so much about them. The Riemann Hypothesis wouldn't tell us everything we ever need to know about the primes—mathematicians would still have quite a ways to go—but it would tell us something incredible about a field in which we've made so little progress for centuries. There have been so few fundamental statements about the primes at all, because they're so hard to understand, and this would tell us that we can do something, anything, to lead us on the way to progress.
Moreover, in giving us an exact formula for J(x), the Riemann Hypothesis would settle many other problems along with it. Hundreds (even thousands) of papers have been written assuming the Riemann Hypothesis to be true, proving countless results that would become theorems the moment it was solved. To give a taste of its power: the Weak Goldbach Conjecture (every odd number greater than 5 can be expressed as the sum of three primes) was first established under the Generalized Riemann Hypothesis, years before Harald Helfgott finally proved it unconditionally in 2013, and hundreds of other amazing results still wait on the hypothesis itself.
It would also transform how our digital security system works. To steal a paragraph from my previous article, let me say this. Our computers today rely on prime numbers. When our messages travel from computer to computer in code, they're often encrypted with absurdly large prime numbers. To simplify it a lot, the sending computer will encrypt the message using a large number formed by multiplying two large primes together, and the receiving computer can decrypt it because it knows the key: the two primes that were used to encrypt it. Any computer that intercepts the message can't decode it, because with what we currently know about primes, it's practically impossible to factor a huge number and recover the two primes that were used to make it. (If you want to know more, look up the RSA algorithm!)
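To make the scheme concrete, here's "textbook RSA" in a few lines of Python (a toy sketch using the standard tiny worked example, nothing like real key sizes, and missing the padding that real systems add):

```python
# Toy RSA with tiny primes (real keys use primes hundreds of digits long)
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120, used to derive the private key
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

m = 65                     # the "message", encoded as a number < n
c = pow(m, e, n)           # encrypt: anyone who knows (e, n) can do this
assert pow(c, d, n) == m   # decrypt: only works if you know d
print(n, e, d, c)
```

The security rests on the fact that recovering d from the public pair (e, n) requires factoring n back into p and q; with n = 3233 that's trivial, but with a modern 2048-bit n it's believed to be infeasible.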
To tell a bit of a funny story: in 1997, the Princeton mathematician Enrico Bombieri sent out an April Fool's email claiming that the Riemann Hypothesis had been solved, and the convinced and very worried NSA (the US National Security Agency) sent agents down to Princeton to figure out just what this would do to national security! That is to say, the Riemann Hypothesis certainly has a lot to do with our current world. It would take us so much closer to understanding where the primes will land, and it could fundamentally change how cryptography is conducted. It also touches so many other fields involving primes (remember the cicadas? So much is related to primes). We might end up breaking parts of how our current computing system works in the process, but we'll figure out so much that advances technology and knowledge in general in the long run. Primes are so fundamental that any advancement affects practically everything in a domino effect, and the Riemann Hypothesis may be the key to all of it.
So where might the solution lie? It turns out that there is a whole class of functions called L-functions, a group of zeta-like functions that each have their own similarly stated Riemann Hypothesis. In tracing a path from one L-function to the next, mathematicians are searching for the thread that will unravel it all, and prove the elusive statement. Here's a great video on that:
Hold on to your hats, because maybe, just maybe, this baffling, centuries-old problem will be proven in our lifetimes, and then we'll be living through brilliant mathematical history.
My three favorite resources are the three videos I embedded in this article (3Blue1Brown's visualization, James Grime aka singingbanana's explanation, and the one just above on L-functions from Numberphile), but I also love these two articles:
And, if you're looking for a laugh, this is probably the funniest math paper I've ever read:
If you're looking to investigate some on your own, check out the L-function database:
There are also two books on the Riemann Hypothesis I would recommend:
The Music of the Primes by Marcus du Sautoy
Prime Obsession by John Derbyshire
Both go into the history of the problem a lot more than I did here, and I think reading about the history of math is (almost!) as fascinating as reading about math itself, so I would recommend them both, especially the second.
Again, if you read (and watched) the whole way through and it's still hard for you to grasp what's going on here, it's no big deal! This problem is so insanely abstract that it's hard to comprehend what is happening—and even to comprehend what it is asking in the first place. Thousands of articles have been written on this, and there's honestly way more I've learned that I could not possibly put in this article. This is an introduction, not a comprehensive manual, and I certainly don't know much more than the tip of the iceberg.
I do hope however that this article was informative for those who do care. If you have a question, some feedback or even if you find an error, leave it in the comments or shoot me a message through the contact form. We're all learning together, and I'd always appreciate some advice and direction. Have a great Spring Break everybody!
Cover Image courtesy of 3Blue1Brown and YWD