
Imagine you bite into an apple and find a beheaded worm.
Eeeh.
But it could have been worse.
If you had found only half a worm in the apple, you’d now have the other half in your mouth.
And a quarter of worm in the apple would be even worse.
Or a hundredth.
Or a thousandth.
If we extrapolate this, we find that the worst apple ever is one without a worm.
Eh, no, this can’t be right, can it?
What went wrong?
I borrowed the story of the wormy apple from Michael Berry, who has used it to illustrate
a "singular limit".
In this video, I will explain what a singular limit is and what we can learn from it.
A singular limit is also sometimes called a "discontinuous limit", and it means that as some variable gets closer to a certain point, you do not get a better and better approximation
for the value of a function at that point.
In the case of the apple, the variable is the length of the worm that remains in the
apple, and the point you are approaching is a worm-length of zero.
The function is what you could call the yuckiness of the apple.
The yuckiness increases as less and less worm is left in the apple, but at a worm-length of exactly zero it suddenly jumps to
totally okay.
This is a discontinuity, or a singular limit.
You can easily simulate such a function on your smartphone: punch in a positive number smaller
than one and square it repeatedly.
This will always give zero, eventually, regardless of how close your original number was to 1.
But if you start from 1 exactly, you will stay at 1.
So, if you define a function from the limit of squaring a number over and over, that
would be f(x) equals the limit n to infinity of x to the power of 2 to the n, where n is
a natural number, then this function makes a sudden jump at x equal to 1.
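If you would rather not wear out your smartphone keys, here is a small sketch of the same experiment in Python (the function name and the number of squaring steps are my own choices):

```python
# Repeated squaring: x -> x^2 -> x^4 -> x^8 -> ...
# For 0 <= x < 1 the values shrink toward zero (and eventually underflow
# to exactly 0.0 in floating point), but x = 1 stays at 1 forever.
def repeated_square(x, steps=200):
    for _ in range(steps):
        x = x * x
    return x

print(repeated_square(0.9999))  # 0.0 -- even a number very close to 1 collapses
print(repeated_square(1.0))     # 1.0 -- the limit function jumps at x = 1
```

No matter how close to 1 you start, the limit is zero; only x = 1 itself survives. That jump is the singular limit.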
This is a fairly obvious example, but singular limits are not always easy to spot.
Here is an example from John Baez that will blow your mind, trust me, even if you are
used to weird math.
Look at this integral.
Looks like a pretty innocent integral over the positive, real numbers.
You are integrating the function sin(t) over t, and the result turns out to be Pi over
two.
Nothing funny going on.
You can make this integral a little more complicated by multiplying the function you are integrating
with another function.
This other function is the same as the previous one, except that the
integration variable is divided by 101.
If you integrate the product of these two functions, it comes out to be Pi over 2 again.
You can multiply these two functions by a third function, in which you divide the integration
variable by 201.
The result is Pi over 2 again.
And so on.
We can write these integrals in a nicely closed form because 100 times zero plus 1 is just
one, so the first factor is again sin(t) over t.
So, for an arbitrary number of factors, which we can call N, you get an integral over this
product.
And you can keep on evaluating these integrals, which will give you Pi over 2, Pi over 2,
Pi over 2 until you give up at N equals 2000 or what have you.
It certainly looks like this sequence of integrals just gives Pi over 2 regardless of N.
But it doesn’t.
When N takes on this value:
The result of the integral is, for the first time, not Pi over 2, and it never becomes
Pi over 2 for any N larger than that.
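The details are in the reference below, but the known criterion behind this family (the so-called Borwein integrals) is, roughly, that the result stays Pi over 2 as long as the sum 1/101 + 1/201 + ... + 1/(100N+1) does not exceed 1. Assuming that criterion, a short Python sketch shows why the breakdown can only happen at an absurdly large N:

```python
import math

# Borwein-integral criterion (stated here as an assumption, see the
# reference below the video): the N-th integral equals pi/2 as long as
#   1/101 + 1/201 + ... + 1/(100*N + 1) <= 1.
def partial_sum(N):
    """Sum of 1/(100n + 1) for n = 1..N (only feasible for modest N)."""
    return sum(1.0 / (100 * n + 1) for n in range(1, N + 1))

# Even at N = 2000 the sum is nowhere near 1, so the pattern still holds:
print(partial_sum(2000))  # roughly 0.08

# The sum grows like ln(N)/100, so it first exceeds 1 only near N ~ e^100,
# which is of the order 10^43 -- that is where the pattern finally breaks.
print(f"{math.exp(100):.2e}")
```

This is exactly the trap: every N you can reach by patience confirms the pattern, and the failure point sits some forty orders of magnitude beyond your patience.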
I leave you a reference for the proof in the information below the video.
The details of the proof don’t matter here, I am just telling you about this to show that
mathematics can be far weirder than it appears at first sight.
And this matters because a lot of physicists act like the only numbers in mathematics are
2, pi, and Euler’s number.
If they encounter anything else, then that’s supposedly “unnatural”.
Like, for example, the strength of the electromagnetic force relative to the gravitational force
between, say, an electron and a proton.
That ratio turns out to be about ten to the thirty-nine.
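You can check that ratio yourself with a back-of-the-envelope calculation; the constants below are the usual textbook values, rounded:

```python
# Ratio of the electric to the gravitational force between an electron
# and a proton. Both forces fall off as 1/r^2, so the distance cancels
# and the ratio is a pure, unit-free number.
k = 8.988e9        # Coulomb constant, N m^2 / C^2
e = 1.602e-19      # elementary charge, C
G = 6.674e-11      # Newton's constant, N m^2 / kg^2
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg

ratio = (k * e**2) / (G * m_e * m_p)
print(f"{ratio:.1e}")  # about 2.3e39, of the order ten to the thirty-nine
```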
So what, you may say.
Well, physicists believe that a number like this just cannot come out of the math all
by itself.
They call it the "Hierarchy Problem", and it supposedly requires new physics to
"explain" where this large number comes from.
But pure mathematics can easily spit out numbers that large.
There isn’t a priori anything wrong with the physics if a theory contains a large number.
We just saw one such oddly specific large number come out of a rather innocent-looking
sequence of integrals.
This number is of the order of magnitude ten to the forty-three.
Another example of a large number coming out of pure math is the order of the monster
group, which is about 8 times 10^53.
So the integral series is not an isolated case.
It’s just how mathematics is.
Let me be clear that I am not saying these particular numbers are somehow relevant for
physics.
I am just saying if we find experimentally that a constant without units is very large,
then this does not mean math alone cannot explain it and it must therefore be a signal
for new physics.
That’s just wrong.
But let me come back to the singular limits because there’s more to learn from them.
You may put the previous examples down as mathematical curiosities, but they are
very vivid demonstrations of how badly naïve extrapolations can fail.
And this is something we do not merely encounter in mathematics, but also in a lot of physical
systems.
I am not thinking here of the man who falls off the roof and, as he passes the 2nd floor,
thinks "so far, so good".
In this case we know full well that his good luck will soon come to an end, because the
surface of the earth is in the way of his well-being.
We have merely ignored this information because otherwise it would not be funny.
So, this is not what I am talking about.
I am talking about situations where we observe sudden changes in a system that are not due
to just willfully ignoring information.
An example you are probably familiar with is phase transitions.
If you cool down water, it is liquid, liquid, liquid, until suddenly it isn’t.
You cannot extrapolate from the water being liquid to it being a solid.
It’s a pattern that does not continue.
There are many such phase transitions in physical systems where the behavior of a system suddenly
changes, and they usually come along with observable properties that make sudden jumps,
like entropy or viscosity.
These are singular limits.
Singular limits are all over the place in condensed matter physics, but in other areas,
physicists seem to have a hard time acknowledging their existence.
An example that you frequently find in the popular science press is calculations done in
a universe with a negative cosmological constant, the so-called Anti-de Sitter space,
which falsely give the impression that they tell us something about the real
world, which has a positive cosmological constant.
A lot of physicists believe the one case tells us something about the other because, well,
you could take the limit from a very small but negative cosmological constant to a very
small but positive cosmological constant, and then, so they argue, the physics should
be kind of the same.
But.
We know that the limit from a small negative cosmological constant to zero and then on
to positive values is a singular limit.
Space-time has a conformal boundary for all values strictly smaller than zero, but no
longer for exactly zero.
We have therefore no reason to think these calculations that have been done for a negative
cosmological constant tell us anything about our universe, which has a positive cosmological
constant.
Here are a few examples of such misleading headlines.
They usually tell stories about black holes or wormholes because that’s catchy.
Please do not fall for this.
These calculations tell us nothing, absolutely nothing, about the real world.
Thanks for watching, see you next week.