Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 6 April 2026

The Wisdom of Russell Howard's Grandmother

Today's post resumes my unusual habit of amateur epistemology.

I've explored the definition of understanding many times, concluding that it's the knowledge of how things connect and interrelate. The more we understand something, the better we can predict how it behaves in novel situations. Fair enough as far as it goes, but I've always found the main issue with this is what happens when we reach some seemingly irreducible fact that we can't understand. We can always memorise a mathematical operator, however complex it might be, but that's not at all the same as being able to apply it in anger.

Here I can offer two explanations for why we reach such limits. The first I can suggest immediately, while the second will take a bit longer and be developed over the rest of the post.

The first explanation is simply hardware. It may be that we just can't hold more than a certain number of connections to some mental objects, just as we can't process things at arbitrarily high speeds. Perhaps the brain allocates only a certain volume to each topic, and when we reach its limit, we simply can't add anything more into that particular bucket. It might be that either we just cannot absorb anything more on a broad subject*, or that an apparently singular individual item requires too many connections to too many others for us to properly understand it – in essence, a straw that breaks the camel's back**.

* Like when Homer Simpson forgets how to drive because he takes a wine-making course.
** Perhaps somewhat literally. I had several lecture courses which imparted negative knowledge, meaning I had less understanding of the subject than when I went in.

This, I think, gets us a long way, but still doesn't fully address the more fundamental limits of understanding. For that, I'm going to pursue the more philosophical issue of wisdom. Even Plato never really came up with a convincing definition of this, so you can't accuse me of lacking ambition.

It's probably helpful here to recap the last time I examined such issues. In that post I ventured four main ideas :

  • Analytical thinking asks, "what if this is true ?", exploring the full consequences of a proposition.
  • Critical thinking asks, "is this actually true ?", being a concern for accuracy rather than with exploring any consequences.
Intimately connected with these two main points were two other slightly more amorphous concepts :
  • Curiosity is the yearning for more knowledge. It can take different forms, such as the desire to learn about more and more topics (e.g. consuming endless Buzzfeed lists) versus the desire to verify existing claims, but the essence of it is the same.
  • Multi-level thinking is the ability to consider a position on different scales, e.g. whether each line of code is syntactically correct versus whether the underlying method is doing what it's supposed to be doing. Grammar Nazis versus fact-checkers, I guess.
All of these are closely related, and separating them like this is somewhat artificial... but, as we shall see, useful.

Which all leads to my proposed definition of wisdom : knowing the best thing to do.

Hmm. That seems a bit trivial to bother with.

A slightly less compressed form might help : knowing how to carry out the best solution to a problem. But this is probably still too compressed to seem of any use, so let's deconstruct this more fully. It's been carefully phrased to include two key aspects. First, the wise thinker must be able to assess a proposed solution and realise if there's a better approach. But secondly, the alternative they suggest must be something they actually know how to enact. After all, there's no wisdom in realising that everyone would be happier (say) if you gave them all more money unless you have a workable scheme to raise the necessary funding.

This definition works, I think, for both moral and purely logical problems. To give a recent example of the latter : I gave ChatGPT a coding question, saying I'd found a particular method which should solve my problem. It explained that there was a much better approach, and went off and implemented that instead (in this case it worked perfectly – and this was a problem I'd previously spent some weeks trying to figure out from first principles*).

* For the interested reader : I wanted to use scipy's binary_fill_holes to fill in meshes in Blender with an integer grid of vertices. ChatGPT realised that there was a much better, though badly-documented, Blender-internal solution that was the ideal way to meet my objective. With hindsight, my own solution was actually pretty darn close, but the specific implementation was full of holes... pun intended, sorry not sorry.
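To make that footnote a little more concrete, here's a minimal sketch of the binary_fill_holes idea on a toy occupancy grid – this is an illustration of the general approach, not my actual Blender code, and the grid below is a made-up stand-in for the real vertex data :

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Toy example : a 2D occupancy grid, True where a mesh vertex exists.
# The ring below is a closed boundary with an empty interior.
grid = np.zeros((7, 7), dtype=bool)
grid[1:6, 1] = grid[1:6, 5] = True   # left and right walls
grid[1, 1:6] = grid[5, 1:6] = True   # top and bottom walls

# Fill any region fully enclosed by True cells.
filled = binary_fill_holes(grid)

# The cells that changed from False to True are where new
# vertices would need to be created.
new_vertices = np.argwhere(filled & ~grid)
```

For a real mesh you'd first have to rasterise the boundary vertices onto an integer grid, which is exactly where my own implementation sprang its leaks.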

Morally, the obvious case is Jurassic Park. Sure, you could bring back dinosaurs, but that famously doesn't mean that you should. Or as Russell Howard says, sure, legally you can wake up your gran while dressed as Hitler, but that doesn't mean it's a good idea.

In some sense this could be described as supercritical thinking. It's concerned not just with all scales of the problem itself, but goes beyond that to consider its full consequences in context, to address whether the proposed solution would be a good approach or even whether the problem is one that needs solving at all. It's critical, analytical, and multi-level thinking combined and expanded. Rather than knowing what sort of thinking to apply, as I tentatively suggested previously, wisdom might be better described as turning rational thought up to 11.

And this takes us back to the limits of understanding. Wisdom here might be in recognising that these limits simply cannot be broken, that trying to probe any deeper won't result in anything useful – that we should stop when we have a definition that's actionable and have shown that further investigation won't bring any more improvements*. Likewise, I could go further with this post to better define what I mean by "best solution". But this would open up an enormous can of worms that probably wouldn't help and would make everything much longer. The wiser course of action seems to be to stop here as far as the definition goes. What remains is only a few clarifications and some practical consequences.

* In this particular case, we end up trying to define words by using other words, and hit the limit of what pure language can convey. I have some further musings on this which I may or may not get around to writing up eventually.

Wisdom, like intelligence, is sometimes used as a synonym for raw knowledge. If instead we say it means knowing what's the best thing to do, then clearly this requires knowledge, just as it needs both critical and analytical thinking skills. But it's not the same as any of these. We can immediately see that the correlation could be imperfect, that someone might have a huge breadth and depth of knowledge but be unable to see the relevant similarities across different fields.

This strongly suggests that wisdom is a thing that can be taught... at least, to the same extent that knowledge can be taught. The behaviour of LLMs, as per the earlier example, might offer some clues here. For these, I suspect that knowledge and wisdom are tightly coupled, that all you need to make them wiser is a larger training data set and a bigger context window – they'll consider more information from more fields of expertise at more and more scales automatically*. That said, you can't really "teach" an LLM anything except by fully retraining it, which in effect gives you a whole new model. All you can really do is instruct them.

* Though of course, the quality of the output still depends on the quality of the input training data as well as the prompt.

This is not much like humans, who can certainly be taught and indeed are (mostly) capable of learning from their mistakes. Indeed, wisdom requires knowing what to avoid just as much as when to proceed. Whereas in an LLM wisdom may emerge naturally as a function of training data size, this is (I think) likely only a trivial result of that training data containing more and more wise behaviour. It's much harder to gauge whether this happens for humans through absorbing sheer volume of knowledge, though I suspect that if we're only ever told, "these things are true, learn these parrot-fashion" instead of, "here's how to evaluate knowledge", the result tends to be someone who's neither wise nor critical. Our own training certainly does matter.

Of course, LLMs don't really have beliefs and opinions in the same way that humans do. An LLM is a mass of statistical information and probabilistic weights, with no real fixed ideas at all – certainly not between conversations in the way that humans hold some ideas as almost permanently fixed. But likewise, updating our own world view in response to new information is seldom easy, just as including it in an LLM isn't as simple as telling it something in a single conversation. The analogies are interesting as much for their differences as for their similarities.

In any case, the ability to update one's world view is not the definition of wisdom, but it does follow directly from it. The wise thinker knows when a single fact is of limited consequence and when it may necessitate a paradigm shift. They are able to judge when the new information is itself likely wrong and when it's their own existing ideas which are at fault. They consider also the metadata of who said it and why – they do not evaluate it purely on its own merits*. So the ancient Greeks were right to value self-knowledge, as understanding one's own biases is essential in understanding how we respond to new data. But this isn't wisdom in itself, just as the ability to learn from experience is part of wisdom without being its definition.

* The idea that ignoring the source of information is somehow actually the correct, rational approach is a curiously persistent and incredibly widespread error. 

Finally, it's obvious how the Jurassic Park scientists were intelligent but unwise. Russell Howard, by contrast, is much less intelligent but much wiser : having a really dumb idea but realising that this would be a Bad Thing To Do. What of his grandmother ? Well, if she wakes up convinced that Hitler has returned, she's not very wise at all, but if she realises that this is so incompatible with her well-established knowledge that Hitler is long dead, then probably she's a lot wiser than her grandson. So c'mon Russell, put it to the test. For SCIENCE !

