
Wednesday 26 June 2024

The unreasonable effectiveness of memorable metaphors

I read Wigner's notorious essay on the "unreasonable effectiveness" of mathematics recently, but I wasn't overly impressed. The famous phrase seemed to be the only good thing about it. But then an article from Quanta Magazine came along with the even more provocative title, "How is science even possible ?", and that takes things in some interesting directions.

Wigner's basic argument boils down to this : mathematics works so very well that the universe must truly operate on mathematical principles. The counter-argument is that mathematics is, and can only be, a human description of what the universe does. In that case it's not at all surprising that we can describe the universe with extreme precision, but we haven't figured out anything about what's really going on. Huge swathes of mathematical theorems exist with no applicability to observed reality, so we're just picking and choosing the ones that happen to work.

After mulling this one over, my sympathies went in all kinds of directions before eventually recoalescing. Let's do Wigner first.

 

Wigner begins with an admission (albeit in truly torturous language) that there's nothing intrinsically preventing there being alternative models which provide equally good descriptions of the same phenomena, as per my favourite example here. That would seem to shut down the "we really understand what's going on" argument straight off : how could we, if all our models were riddled with degeneracies ? But...

Those mathematical concepts which do work, work widely. The Gaussian distribution turns up everywhere : the statistics of noise from a radio telescope have much in common with those of balls dropped through a series of pins in a Galton board. Linear and quadratic relations abound. Forces of all kinds, generated by totally different underlying physical principles, decay according to the inverse square law. Laws governing the spread of disease can be applied to those governing the dissemination of information. And so on.
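To make the Galton board point concrete, here's a minimal sketch (my own, not from Wigner or the original post) : nothing but repeated left-or-right coin flips, yet the familiar bell curve emerges, the same shape that describes receiver noise built up from many small independent disturbances.

```python
# A minimal sketch: simulate a Galton board and show that the final bin
# positions of the balls form a bell curve, despite each step being nothing
# more than a fair coin flip.
import random
from collections import Counter

def galton_board(n_balls=10_000, n_pins=30):
    """Drop n_balls through n_pins rows; each pin deflects the ball left (0) or right (1)."""
    final_bins = [sum(random.randint(0, 1) for _ in range(n_pins)) for _ in range(n_balls)]
    return Counter(final_bins)

if __name__ == "__main__":
    counts = galton_board()
    # Crude text histogram : the Gaussian shape emerges from pure coin flips.
    for bin_index in sorted(counts):
        print(f"{bin_index:2d} {'#' * (counts[bin_index] // 50)}")
```

This is the central limit theorem doing the heavy lifting, which is a large part of why the same distribution keeps reappearing in superficially unrelated settings.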

There is no need to be surprised by this : situations which look dissimilar at first glance actually have more in common than appearances suggest, at least in the ways which matter for the observations. What's more interesting is that we don't need a single unique mathematical model for every situation : broad trends exist which allow us to re-use the same basic models over and over again. That we've found other descriptions which are mathematically valid but inapplicable is not relevant, says Wigner : the fact that our limited toolkit is so widely applicable is what's interesting.

He goes on to say that while our most basic mathematical principles were indeed derived directly from observations, the same isn't true of the more advanced stuff. Complex numbers, for example, aren't directly visible from real data and were devised purely as an exploration of mathematics in itself, yet they have numerous applications to describing real-world data. The language of mathematics appears to be the language of reality itself... says Wigner, at any rate.

Perhaps his most interesting point is that the "physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena". Or indeed to that of individual phenomena, as he then elaborates. Newton could only measure data to test his theory of gravitation to a precision of about 4%, but later data improved on this by many orders of magnitude before a discrepancy was found. That is, we don't need to use a new mathematical description each time our data improves a little bit; yes, eventually theories do tend to break, but we have to push observations much deeper than anything justified by our initial models to find their breaking point. Often, when testing models in realms for which they were never designed, "we got something out of the equations that we did not put in".
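As a worked illustration of just how far Newton's theory had to be pushed before it cracked (a standard textbook example, not one Wigner uses), general relativity predicts an extra perihelion advance per orbit of

```latex
\Delta\phi \;=\; \frac{6\pi G M_\odot}{c^{2}\, a\,(1-e^{2})}
```

where a and e are the orbit's semi-major axis and eccentricity. For Mercury this amounts to roughly 43 arcseconds per century, out of a total observed precession of about 5,600 arcseconds per century : a residual of well under one percent, utterly invisible at the few-percent accuracy Newton had to work with.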

Wigner admits that we may never find an Ultimate Theory, no full unification of all our "laws". They may, in the end, only work well in the domains in which they were designed and be in outright conflict elsewhere. Nevertheless, he concludes that mathematics is so widely applicable, so beautifully elegant, that it is indeed "unreasonably effective". We have no reason to expect it to be as useful as it is.

But still, mathematical models do have breaking points – you can't use Newton's theory of gravity in all situations. The mathematics Einstein needed to correct what appeared to be small deviations is massively more complex than Newton's simple algebra, to say nothing of the truly horrendous process by which he derived General Relativity or the massive conceptual leaps needed to make such a theory possible. Einstein broke Newton hard.

From that point of view, mathematics could only be said to be unreasonably effective until it isn't. We have no idea what sort of revolutionary thinking will be needed to replace GR, or the kind of discovery it will take to categorically disprove it.

It feels perhaps... evolutionary. A crude light-sensing organ tells you something about your world, the ability to sense direction gives you considerably more information, colour and higher resolution still more. Each step allows you to create ever more accurate and more precise models of the external world, which can be perfectly adequate for describing your observations, but ultimately reach a breaking point. One could never infer the existence of galaxies beyond Andromeda from human vision alone. Eventually you just need better data, just as you need better mathematics. True, your existing models might do better than you expect for a while, but not forever. At some point a discovery is world-shattering, paradigm-shifting. However good your simpler mathematics is at the earlier scales, in the new ones it becomes so much wasted ink. Wrong is, ultimately, wrong, even if the scale of the wrongness initially makes it hard to spot.

To try a different metaphor, it might be like painting a picture. If you can't make out the details of a distant hill, you can usually imagine them well enough to add them in anyway. And lo, when you go closer to the hill, most of the time you will find that yes, those yellow fields were indeed cornfields, because past knowledge is a pretty good guide in most cases. But every once in a while, you'll find it was something completely unexpected, like hundreds of people wearing yellow hats. Your ability to describe reality may have remarkable accuracy and precision most of the time, but even a single failure points away from its deeper truth.

This is all very Humean, more so than I'd like. Is it still interesting, from the Wignerian perspective, that your simplified view of the world does do better than expected most of the time ?

Possibly. The argument may be made that the world does indeed operate mathematically, but our mathematics is still too imprecise to give us a truly correct description (that is, the language we use to express it, not (just) the specific formulae and equations we contrive). We may yet reach a point where our data have such exquisite quality, and our mathematics is so brilliantly expressed, that we devise a model so harmoniously perfect that it can be extrapolated and interpolated with infinite perfection. We would have painted the ultimate picture, majestic in its beauty, stupendous in its grandeur, terrifying to behold.

Or perhaps not. Time to move on to the QM piece.


It starts off badly, musing about how weird it is that our brains are able to do maths at all. You might as well wonder how it is we're able to do any form of cognition at that point : a fascinating bit of neuroscience, no doubt, but a useless starting point for any philosophy. You have to take it for granted that we can think. And that we, being evolved from the universe, should be able to apprehend it to some mediocre degree is scarcely a revelation either.

It gets better. They link back to Wigner, noting that the surprise lies in being able to predict hitherto-unexpected phenomena, and in making those predictions with uncanny accuracy. But, they say, this only ever applies where all the variables are known. Newton might only have been able to observe planetary motions with finite accuracy, but the only variable at work here is gravity. As long as his observational systematic and random errors were sufficiently well-behaved, it's not surprising his theory came out to be so astonishingly accurate : he could see and model the underlying trend extremely well, and all other effects (e.g. thermal radiation) are negligible to the nth degree for something like this. His view of the data was already good enough to see the underlying principles at work; further observations only sharpened the image rather than changing the channel, so to speak.
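A minimal sketch of that idea (my own illustration, not from the Quanta piece) : when a single clean law underlies the data, even measurements carrying a few percent of noise recover it, and better data only sharpen the estimate rather than demanding a new model.

```python
# Fit a power law to noisy "planetary" data and recover Kepler's third law,
# T^2 proportional to a^3, i.e. an exponent of 1.5 in T = a**1.5.
import math
import random

def fit_power_law(periods, distances):
    """Least-squares slope of log(T) against log(a); Kepler predicts 1.5."""
    xs = [math.log(a) for a in distances]
    ys = [math.log(t) for t in periods]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope

if __name__ == "__main__":
    random.seed(1)
    # Rough solar-system semi-major axes in AU; periods follow T = a**1.5
    # with a few percent of multiplicative noise, standing in for Newton's
    # roughly 4% observational accuracy.
    distances = [0.4, 0.7, 1.0, 1.5, 5.2, 9.5, 19.2, 30.1]
    periods = [a ** 1.5 * random.gauss(1.0, 0.04) for a in distances]
    print(f"fitted exponent: {fit_power_law(periods, distances):.3f}  (Kepler predicts 1.500)")
```

The underlying trend dominates; the noise merely blurs it a little, which is exactly the situation where a reductive, few-variable model shines.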

This is why and when a reductive approach works, when there really are just a very few known and knowable processes at work. In this case you indeed get an "unreasonable effectiveness of mathematics", but even that stands revealed as unsurprising : simple processes lead to simple effects, provided everything is well-measured.

But if you don't have simple processes... "things that happen at such small scales inside a nucleon at very high energies have nothing to do with, you know, why a bird can fly or stuff like that."

Biological evolutionary results can't be derived from basic physics, for two reasons. First is simply the astronomically large number of processes at work. Second, more interestingly, are emergent phenomena like temperature and pressure, which are physically meaningful but can't be reduced to molecular, let alone sub-atomic, physics. Conversely, you can approximate the physical processes of enormously complex systems using extremely simple mathematics, e.g. Newton's gravity again, or thermodynamics. As the QM piece puts it, you can make "a model of a model of a model of a model" and it still works. Such gross simplifications don't happen in biology, or at least not nearly as often as you can get away with them in physics.
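To give the emergence point a concrete shape (a sketch of my own, assuming nothing beyond the textbook Maxwell-Boltzmann distribution) : temperature is a property of the ensemble, not of any single molecule, yet averaging over many molecules recovers it cleanly, a statistical model sitting on top of the molecular one.

```python
# Temperature as an emergent average : sample molecular velocities at a known
# temperature, then infer that temperature back from nothing but the mean
# kinetic energy, via <KE> = (3/2) k T. No individual molecule "has" a temperature.
import random

K_BOLTZMANN = 1.380649e-23   # J/K
MASS_N2 = 4.65e-26           # kg, roughly one nitrogen molecule

def emergent_temperature(n_molecules=100_000, true_temp=300.0):
    """Sample Maxwell-Boltzmann velocities at true_temp, then recover T from mean KE."""
    sigma = (K_BOLTZMANN * true_temp / MASS_N2) ** 0.5   # per-component velocity spread
    total_ke = 0.0
    for _ in range(n_molecules):
        vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
        total_ke += 0.5 * MASS_N2 * (vx * vx + vy * vy + vz * vz)
    mean_ke = total_ke / n_molecules
    return 2.0 * mean_ke / (3.0 * K_BOLTZMANN)

if __name__ == "__main__":
    print(f"recovered temperature: {emergent_temperature():.1f} K  (input was 300.0 K)")
```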

Not all physics can be simplified, of course. Some of it is fantastically hard. And chemistry ? Let's not even go there. 

Which reveals another flaw in the "unreasonable effectiveness of mathematics" : it often doesn't work at all. You can't do it for economics or sociology, even if you might be able to disentangle broad trends. It works really well, occasionally, in very simple systems, but is in no sense an inescapable truth. Things, they say, which are dependent on many other things, past histories and so on, will never be well-modelled with simple mathematics in the way that the gravitational field of a planet can be.


I would like to suggest that this non-reductive approach has a fun consequence : it reconciles infinity with physics. I've suggested this before (see also links therein) but I want to make it a bit more explicit and emphasise things a little differently. Suppose that there is no real smallest scale to reality, that if we could keep shrinking we would do so indefinitely. We go from the ordinary objects of tables and chairs down to the size of an ant, where surface tension and airflow and static charges start playing a massively bigger role, down to the high-speed world of atoms and molecules where the whole concept of a solid substance vanishes, down to the realm of probabilistic electron fields, down, down, down through to the quantum foam itself, and find...

Who knows ? Not us. Never us, in fact.

Reality below these scales ceases to be meaningful; the answer to "what would happen if we kept on slicing the smallest particle ?" would forever be "we'd have another smallest particle". Or more likely, something fully beyond human imagination. Nothing on this scale would have any direct effect on us at all.

All this I posited before, suggesting that this saves causation, but I'll go a step further here and say it allows the harmonious union of an infinite reality with a finite universe. Or if you prefer, you can think of the universe as both finite and infinite at the same time.

What I mean is that our observable universe would be finite, truly finite, with a limited number of stars and planets and elephants and whatnot. The numbers might be incomprehensibly large but they would be ordinary real numbers. But the broader reality itself would be truly infinite. It would not be a fractal but something of staggeringly greater complexity, with layers within layers, worlds within worlds each unknown and literally unknowable to the other. I mean that even in the strongest sense, that the one could never observe the other; the yawning gulf of physics between them forever unbridgeable.

Importantly, this would not be a multiverse of the "there's one where Hitler won WWII" or "where Kennedy didn't get shot" variety. No, in this scenario, Kennedy and Hitler were both absolutely unique, singular individuals; there's no problem whatever in asking "which is the real Hitler ?" because there was only ever the one. Things would be infinite but not in a way that could be invoked as a statistical solution to any physics problem, e.g. inflation. It would be neither a fractal nor repeating in any sense : an infinite set of finite worlds. Each might have its own rules of operation but each would be utterly unique. This kind of non-foundational model (where the bottom literally falls out of the world) is of such wild insanity as to make a Simulation Hypothesis of infinite Hitlers look like a cosy tea party.

Wigner's Toolkit would perhaps be partially saved in this case. There would only be a finite number of different processes and circumstances, so it would be even less surprising that a small number of models could have a wide variety of applications. Mathematics would only be "unreasonably effective" if those higher-order complexities were actually applicable to those other worlds, something we could never know. There is perhaps a pleasing tangent to neutral monism here, but we wouldn't need to say that the other levels of reality consisted of our secret dreams or mystical visions or whatnot : this would give them no greater substance than being imaginary in the everyday common-sense meaning of the word. Nevertheless, it would make reality itself unknowable but observational reality comprehensible, which to me at least is rather satisfying.

And the depth of the Utmost Infinite extends in all directions... beyond the quantum foam lies no smallest scale nor a point of origin, beyond the superclusters of galactic filaments lies yet more unknown; and in time too, there is no shortest interval, no moment of creation, nor any longest duration, no end in any sight.

Brain fart over, you may go about your business.
