Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 23 February 2026

The Truth About Utility

What makes a useful definition ? Originally I had a much more philosophically pretentious post semi-drafted for this, and I may still do that one separately. But various recent discussions have taken me down a very different path, one which might be more, err... useful. So let's start with this one and see if I ever get back to the nature-of-reality version in a future post. 

A good definition, I think, must surely be something which is widely applicable but also specific : it should describe things which happen frequently but not always, and be readily distinguished from similar counter-examples. Crucially, it cannot be something which either never happens or always happens : it shouldn't forbid its own existence entirely, nor make itself inevitable or ubiquitous. It should describe a specific thing that actually sometimes happens, or is at least conceptually valid and distinct from other, similar terms. 

What I see people doing is trying to make things true or false by setting their definition up in such a way that it cannot ever fail, and that to me seems like a mistake. This doesn't mean we can't have productive discussions, but it does, I think, impose some extremely unhelpful limitations. 

Let's do this one by example. First I'll look at cases where people define things such that they can't ever happen, and then the reverse, related case of defining them such that they're inevitable. Both in my view are counterproductive mistakes. They are terminology problems but they prevent us from getting at what we really mean, which is usually much more interesting.


1) Defining things out of existence

If we define something with such precision, with such high standards, or with such an inbuilt logical contradiction that it can't ever be true, then I submit that this isn't a useful definition at all. Furthermore, it's likely not what we really mean when we use the term in everyday discourse.


Malevolence : Plato and other ancient philosophers held that nobody would knowingly do evil. I forget who it was who described it explicitly (possibly several people), but the basic idea was that if you knew something was wrong, you couldn't possibly do it. You might still carry out an immoral action, but only by misjudging, thinking that the gratification you would get would outweigh any negative consequences there might be for anyone else. Alternatively, you might do so only because you hadn't realised the existence, extent, or nature of those negative consequences.

I think this is a deeply mistaken view of humanity. As per the link, people certainly carry out heinous acts in full knowledge of their full consequences, this sometimes being the very reason for their behaviour rather than a side-effect. Or they may know but simply not care. But they aren't, I think, carrying out a mental calculus of where the balance lies. Even if they were, this would still make the word – or the notion of wilful harm – meaningless. The point for most discussion is that people sometimes cause each other harm because they want this to happen, not out of ignorance. Anything beyond this rapidly leads into such convoluted nuances that the definition collapses into uselessness. 

Or to put it another way : "Sure, he committed the murder, but it wasn't out of malice : that's impossible, so he must've done it because he mistakenly thought his pleasure at the victim's suffering would outweigh their actual suffering". 

To me this makes the word useless, trying to define the thing out of existence. Surely that points towards this not being what we truly meant : the important thing is that people inflict harm on others for its own sake, and inferring anything further is best avoided altogether.


Altruism : In the opposite case, my partner likes to say that nobody is really altruistic. Everyone acts, she says, because they believe there will be some benefit to themselves, even if that reward is purely emotional. In the extreme case, someone might give their own life to save others, not because they thought the value of the lives they saved outweighed their own, but because of the fleeting emotional reward they themselves would get from knowing the others will live.

This too I think is surely putting more on the word than it can bear. The point of altruism, I'd say, is that we sometimes value others more than ourselves and act to bring a net benefit to them even at the expense of our own status. Start demanding that we get no emotional reward at all and again the term has been defined out of meaningful existence. This makes it utterly useless, and surely, therefore, this can't be what most of us mean most of the time when we use it. I'll qualify this a bit more later, but that general-case point is the one I want to focus on.


Knowledge and understanding : I've covered the nature of LLM outputs several times, most recently here but also e.g. here and (tangentially) here. More on those in a minute, but a closely related claim is whether they can be said to truly understand anything. I think they can, in the carefully qualified sense that a) they have access to some form of information; b) they form connections between different pieces of information; c) they act in a logical, coherent way to predict how things behave in novel situations. Not perfectly, it's true, but more interesting by far is that they do it at all. 

Now for sure, this is not the same sort of understanding that humans have. But its qualitative similarities, in my view, outweigh and are far more interesting than the quantitative differences between silicon and neural understanding. I think it's just not at all useful to say that "meaning only comes from humans ascribing this to the output". This is so inevitably necessary that it adds nothing useful to the discussion : well who else was going to be reading the output then ? And if you define "understanding" to be only a human thing, then it's tautologous that no non-human will ever have it. That's cheating.


2) Defining things into existence

We can now see how the reverse is also true : if we define a thing as being completely unavoidable, we won't get anywhere.


Hallucinations : See the links in the previous entry, as this follows directly from the previous definition. I did initially agree that it was sensible to describe all LLM output as a hallucination, but I changed my mind some time ago. Given that they are now able to process complex (and multi-modal) information in a way that closely aligns with human expectations, and can in fact exceed our own predictive capacities at least some of the time, I no longer think describing their output as purely hallucinatory makes much sense. 

It's more useful, I think, to say they're hallucinating (in their own peculiar way) when their output has no connection whatever to the input data or prompt. This is much more analogous to human hallucinations, in which we see things which aren't there. I would still agree, provisionally, that LLMs treat all information as having a much more similar level of validity than humans do, and undeniably they have some qualitative as well as quantitative differences from human thought. But they are very clearly not purely fabricating stuff all the time : more often than not, they're processing their inputs quite sensibly.

Importantly, the claim that all LLM output is a hallucination is consistent with the notion that they don't understand anything. I'm not claiming incoherence here : I'm claiming that these definitions should be discarded because they aren't useful, not because there's any inherent problem as such. The alternative definitions I've suggested are, I think, better only because they are more flexible and specific, allowing us to describe things in more detail, not because they eliminate any inconsistency.


God : Don't say I'm not ambitious ! The old argument that god is necessarily perfect and perfection necessarily exists... well, surely this is the ultimate case of trying to assert truth by definition. God is a perfect what, exactly ? A perfect square ? A perfect teapot ? Well, if a perfect teapot exists, where is it ? Could it be Russell's Teapot, somewhere beyond Earth's orbit ? Surely not, because if it was perfect, it would be in my hands whenever I need it. But it isn't, and therefore the perfect teapot clearly does not exist. 

And if even the perfect teapot doesn't exist, I see no reason to say that the abstract concept of perfection itself – a Spinozan notion of God – also has to exist in any sense beyond a mental construct. Clearly, I can imagine what I think a perfect teapot should be like, but that has no further existence outside my head. There's no reason to think that perfection itself is any different.

So here too, "perfect" in the everyday sense does not mean the same thing as St. Anselm would have it mean. Nobody uses "perfect" to mean "something which must exist" : indeed, we often use it to describe things which can't exist precisely because they're perfect ! "Platonic ideal" might be one of Plato's better ideas here, if only in concept : we can conceive of better examples of chairs and circles and virtues even if we can't bring them into being. That's generally how we use the word, to describe something specific in aspect, not the singularity-like God of the Upanishads.

As far as the existence of God goes, and very much with my agnostic hat on, I think definitions here are of no help whatever. We can conceive of perfect examples of things we fundamentally do understand, like circles. But perfection itself ? That would require understanding all facets of existence, which as imperfect beings we simply can't do and never will. A general understanding of perfection is beyond our limitations : we can no more say that "god's perfection means he exists" than we can say what a perfect dinosaur would be like. The concept may simply be incomprehensible or it may not even make sense at all.




That's my idea of a good definition then. It should be specific, flexible, distinct from alternatives, and describe things which occur at a finite rate (even if only conceptually). If a definition forbids itself from ever existing, or would always be true, then it has no use cases and should be discarded. Those kinds of definitions usually twist readily-comprehensible everyday meanings into something convoluted, unproductive, and useless.

I'll stress "usually" a little bit though, as I don't say that the extremes don't matter at all. For example, what do we mean when we say we know something, or that we're certain of it ? Usually, that our own belief is well-formed and our confidence is beyond reasonable or routine doubt. We don't usually mean that we have found Truth Itself, that we can state our claim with literally zero chance of it being wrong, and that all unbelievers are evil and/or stupid. 

Like the case of purest selflessness, this kind of concept definitely does have value, but more in the philosophy classroom than the real world. The extreme cases let us frame our own actual beliefs and compare them to those of others, rather than providing useful, workable definitions in themselves. For example, we can all agree on what true certainty would actually mean, but to use the word more practically, we have to scale things back. That's where the discussions start to get interesting, trying to figure out the limits of our own underlying reasoning as well as that of the others in the debate. 

To a very large extent, I think the question of how we use a definition is very much the same as what we think it means at the most basic level. But then, others may have a different understanding. I don't always agree with the alternatives, but trying to figure them out is usually the fun bit.

Thursday, 19 February 2026

AI For Fun Or Profit ?

The Czech Academy of Sciences, the research council which funds my own employment, recently put on a five-part webinar giving detailed guidance on how to use LLMs in research. It was a good course with some useful tips and tricks and a few tools I'll try to check out eventually. The presenter seems like someone who really uses AI a lot, like for absolutely everything, but as you'd expect in a decent course, it was full of caveats : don't use it to do X, always check its citations, don't take its output for granted. 

The best line was to "treat it like a skilled researcher who's on drugs". You wouldn't discount everything they say, but you wouldn't trust them either.

It's all very common sense really. The course had about 80 attendees, and the attitude pretty much matched my real-world experience with colleagues. In every discussion, every single one, everyone gets in a bit of a circle-jerk about how useful AI is but how it can't be trusted. There's basically no-one who isn't using AI to some degree, and similarly nobody who's trusting its output without question.

After this course, I wonder if perhaps I'm not using AI enough. Would I be more productive if I did ? Possibly, to some degree. But I think not a great deal. Personally I simply see no point at all in using AI to replace my own voice : when I want to express myself, in any medium, if it's not me doing it then I might as well not bother at all. Okay, for the final polish here and there, or checking if I'd got the basics correct, or follow-ups... sure. But the basic gist of the text has got to be my own, even if it's imperfect. 

So I just don't see how it can be any help in preparing PowerPoint files* or writing whole paragraphs in a grant application, let alone anything in a publication**. For writing code I'm perfectly happy to let it go nuts, so long as it's doing grunt work and/or I just want something quick that works. But even then, if it's something I'm going to want to maintain, or want to understand what's going on, I find it far more valuable as an assistant, someone who can teach me and simplify things where necessary, not do the whole thing for me.

* No, we're not calling them "slide decks", thankyouverymuch. WTAF is wrong with people ? 
** The author has prompt instructions for just about everything. If nothing else, maybe some of these will at least be useful guidelines for people to follow when doing the tasks the old-fashioned way.

The use of AI chatbots is very possibly where my real-world and online lives are most dramatically at odds. Offline, AI is already normal. Like totally normal. Online, there's a far bigger fraction who are still clinging to the idea that it doesn't work, won't work, can't work, is innately immoral, etc. etc. These are sentiments I've barely encountered at all in everyday life, which veers much more towards thinking that people not using AI are either a) old or b) weirdos.

What concerns me for today's post is why we don't appear to have seen any transformation in the economy as a result of AI. It seems abundantly clear to me that AI does work, so where are the productivity gains ?

Now I'm not expecting any instant revolution. The hype train that AI will lead to FTL travel and immortality and a utopian world by next Tuesday is not worth considering. Nor do I think it's capable of fully automating any significant number of jobs anytime soon. But it is, unarguably, an extraordinarily useful tool for people using it properly. It's not unreasonable to expect that we see some measurable effects of this, so here's a round-up of some recent articles giving some different perspectives.




One widely-reported study found that 95% of companies had seen no measurable impact from generative AI. I asked the lecturer about this : she said she didn't know, but speculated that maybe it was based on earlier models, or particularly unrepresentative samples, etc. This post presents some plausible rebuttals : most crucially, the question is "95% of what ?". Apparently it's 95% of all companies, not companies that actually tried using AI ! So much for that.

This Nature piece leans pretty much in my direction that AI has tangible benefits and will, like the internet, restructure things to such an extent that it's difficult to know which metric to use for judgements. It makes the perfectly reasonable point that AI is advancing so rapidly that it's already difficult to know what we should be measuring; related to this is that adoption does not necessarily keep pace with AI capabilities. All very reasonable, but still... where are the gains ? Where's the money ?

A much more bullish piece* on "Noahpinion" (I'd never heard of it before) looks more at the different attitudes to AI. This at least partly explains the discrepancy in my real and online worlds : Americans are among the most "AI-concerned" people on the planet. Which fits with typical American bipolarism : let's invent a thing we spend crazy amounts of money on which we really hate. And in fairness, it seems to me that Americans are vastly more likely to be shafted by their employer than Europeans, so this attitude is not at all without foundation.

* Interestingly, the author is convinced that data center water usage is unimportant but that their electricity consumption is extremely high. I've seen other articles claiming the exact opposite. I don't know. To me it feels like this is all a massive distraction on the environmentalism front : what we need is to switch generation methods to renewables and nuclear and invest heavily in storage. Bitching about AI is pointless, and usually when I look at the claims and counter-claims, it seems to me that the impact of AI is heavily overstated at best.


To go off on a slight tangent, the author also notes that complete omniscience is a myth. Yes, AI makes mistakes that humans don't, and its error fraction is higher than that of true experts. I would also note that real experts are generally more self-aware of their own limitations and vastly more likely to say "I don't know" when asked about things outside of their own domains. But still, the problem is fundamentally the same : here is a claim, how do we know to trust it ?

The answer is simple. Everyone's worried about AI fakes and manipulation, but ultimately we have to treat it just like any other source. We literally do not have access to perfection. Everything requires a degree of trust and verification; we should apply the same standards to AI as for anything else. That is, when things aren't critical, we go ahead and provisionally accept its claims. When there are consequences we need to double-check what it comes up with. That's it.

Though, a BBC piece presents an intriguing example of poisoning the well : deliberately writing a credible-sounding blog post to fool LLMs which rely on web searches. For me, ChatGPT wasn't fooled, but it's for sure an important point. Seeking out independent sources of evidence will become more important than ever.


To return to my main theme, Noahpinion claims that the effects of the AI bubble bursting are exaggerated and that AI will lead to more jobs rather than fewer. This is all getting very murky.

For balance, a couple of more negative pieces. "Marcus on AI" is convinced that while AGI is achievable (I am not), ChatGPT will never live up to its promises (I think it's already doing better than I expected when GPT-3.5 was released). I think it's a strawman argument to say that, because it didn't reach the absurd standards promised by the same techbros who initially claimed it was too dangerous to release (a bit of marketing genius, that), it hasn't massively improved. Similarly I think his acceptance of the famous "95%" claim is clearly flawed as he doesn't explain his own reasoning : sure, another study finds that not many companies are using AI intensely, but this says nothing about what stage of adoption they're at or their long-term plans. Maybe the original study he cites does say this, I don't know, but this needs to be included in any analysis.

More interesting is the claim that AI use at work is flatlining or declining. But the timeline here is rather muddled, and from my own direct experience, I too became tired of GPT-4 for work use as it just offered nothing of substance. It could proofread for language and make very basic comments on the substance of the text, but it was pretty shite for discussing actual science. It would have some hits, true, but they were buried in a mountain of faff that was often just not worth the effort of digging through to get to the good stuff. And of course, it would hallucinate like nobody's business. 

ChatGPT-5, by contrast, was a massive, game-changing improvement on all counts, and it was only released six months ago : surely we should wait and see what its effects are. Now this of course is not to say GPT-5 is perfect. But I absolutely maintain my initial excited stance that this is a breakthrough which crosses important thresholds for making it an actually useful tool. 

Perhaps Marcus's most interesting claim :

If GPT-5 had solved these problems, as many people imagined it would, it would in fact be of enormous economic value. But it hasn’t.

To be fair he's consistent in that he says it hasn't solved the old problems of hallucinations, lack of common sense etc. But I deny this vehemently. I think it's made massive, demonstrable, in-your-face progress, and saying that it hasn't solved these issues completely is a totally pointless claim. Of course it hasn't ! I never expected that it would. If you were actually expecting AGI, then more fool you. The whole perception of LLMs seems like a massive case study in the old quote that the perfect is the enemy of the good.

But this of course still leaves me with the dilemma : where then is this massive economic value ? We at least agree that a good AI would be of economic benefit. Marcus has the obvious "out" in that he doesn't believe the AI actually is any good, but I don't. Where, then, are the benefits I expect to see ?

One possible answer comes from Business Insider*. Perhaps, it suggests, the answer lies ironically in that very demand for increased productivity. Software developers are now working not on one task at a time but on many at once, waiting between prompts while the AI writes the code and then cleaning it up. This is not a natural way of working, and it understandably causes a great deal of fatigue. In essence, then, AI is good enough to help, but still needs constant supervision and cannot fully automate much. It might be like self-driving cars which are actually more like driving assistants : the worst sort of grey zone, a necessary step towards something transformational but in some ways actually counterproductive in and of itself.

* I'd started skipping their pieces because they tended to be shallow and dull, but from what I've seen lately, they seem to have improved considerably. 

Much the most cynical of all the pieces here is from the, I assume, ironically-named "Pivot to AI". The author's case is basically that the economy is screwed, that CEOs are firing people left, right, and centre not because AI is actually capable but because they just love firing people and enshittification and all that. In direct contrast to Noahpinion, he says that the inevitable bursting of the AI bubble will be worse than the Great Depression and we'll all die, or something. Righty-ho then.




What are we to make of all this ?

It's very hard to say. We have two opposing hype trains : AI will transform the economy; AI will wreck the economy. So far it seems to have done neither.

Falling back on my own direct experience, AI is undeniably helping. It's allowing me to tackle things I wouldn't otherwise have attempted and to understand things far more quickly than I otherwise would. It has not yet had a measurable effect on my actual productive output as typical metrics would indicate (papers and the like), but since only GPT-5+ looks capable of influencing this, and that has only been out for six months, it's probably too early to judge on that score. 

By my own internal metrics it's definitely had a measurable effect, allowing me to generate quite a lot of code I would have liked to have but would never have gotten around to writing. Last year I even spent quite a long time using it to write a 75+ page introduction to radio astronomy that helped me enormously... if I ever have time to finish it, I'll put it online somewhere.

But shouldn't AI be solving the "if I ever have time" problem ? Yes and no. The thing is, the bottlenecks in my productivity lie elsewhere : primarily, meetings. In a busy period I might have one or even two meetings a day, which all told take up a full working day each week. Not all of these are useless (although some of them are), but in terms of actually getting stuff done, almost all of them have a negative impact. 

Likewise, not everything I work on is directly tied to productive output. I need to experiment and understand and pursue blind alleys. AI can help with some of this, but not all : in essence, it can alleviate a small amount of my workload to a very large degree. It can't help at all where my code tests are limited by other factors such as download speeds. I don't even want it to automate everything, because if I can't understand the science, what's the point ? Yes, it can help me understand things, but I'm the bottleneck here, not the AI.

From my perspective the answer is clear : AI has only very recently reached the point of being seriously useful, we're all still adjusting to how best to use it, and there are many things it either can't do or we don't want it to do. This would suggest that we ought to see more substantial improvements in productivity on a timescale of a year or two, allowing for human adjustment, but those will be gains at the level of "nice to have" and won't herald a scientific revolution. Of course, "a year or two" is a crazy long time in AI circles, and it's anyone's guess whether it will finally have hit a wall by then or whether it will continue to make radical gains (there are other avenues for improvement besides data volume).

But what of everyone else ? My only answer is that we'd need a detailed study of the different working practices across multiple sectors. It is not enough to simply say "well, it can do task X, your job revolves around task X, so you'll be a million times more productive now". All jobs require a lot of secondary tasks to facilitate the sharp end of productive output, and not all of them can be automated. This is perhaps naivety on the part of the techbros, thinking that because some key component can be done by robots, people will automatically adopt this practice and/or that productivity will be impacted in a linear, predictable fashion : as per the Nature article, it's far more likely that this will lead to more complex, systemic change.

Which means the answers are plausibly a combination of :

  • Seriously capable AI has only just arrived, with earlier models massively overrated.
  • We don't have good metrics for judging the efficacy of AI outside the lab.
  • Poor management strategies can mean that AI can make things worse, not better, even if it's ostensibly extremely powerful.
  • AI can improve some tasks enormously, but even when these are the most important part of a job, they are often far from the whole story. 

In short, AI has crossed a threshold for usefulness. It certainly hasn't crossed all such thresholds, and it's far from clear it's anywhere near doing so. Understanding the impact is a sociological and business problem every bit as much as it is a technological one. The good news for the AI enthusiasts is that it definitely does work; the bad news is that implementing this in a profit-generating way is anything but straightforward.

Courage, Merry, Courage For Our Pony

A very short post indeed because I just think this is something I'm going to need to keep coming back to.

This video discusses how Lord of the Rings is "just" Winnie the Pooh for grown-ups. It does an excellent dissection of why some twat called Michael Moorcock completely missed the point in arguing that LOTR was pure escapism and a refusal to engage in the modern world, which is really quite the bold statement considering that Tolkien fought in the trenches of the Somme. Fuck you sir, fuck you.

Anyone who's read LOTR will know immediately that this is nonsense. Anyone who's read The Silmarillion will likely already be having heart palpitations, so I won't dwell on this at all. 

Rather I just want to provide the most important quotes on the morality of the whole thing. In all honesty, this brought a little tear to my eye. In this age of so much bullshit, where we have to deal endlessly with the nonsense of racism and incels and toxic cunts raging perpetually about "wokeness"... this stands to me as the most perfect rebuttal of all of that. To label the morality of LOTR as "escapism", or to see it as unengaged with the brutal realities that life can bring... that is so impossibly stupid that I would wish a Darwin Award upon the author of so much absolute garbage.

It is masculine tenderness in response to the horrors of war. Because if Lord of the Rings is Pooh for adults, it means that even in the face of Mordor, in the face of the atomic bomb, the concentration camp, and the mechanized slaughter of the 20th century, we refuse to let go of the nursery’s wisdom. It’s an admission that the values of the nursery, kindness, friendship, a warm fire, and a good pot of honey aren’t weak. They are the only things worth fighting for. They are the only things that can fight the great men of history who want to burn the world down.

Ultimately, the hobbit is just a grown-up pooh bear. He has lost some stuffing. He has gained a sword. He has seen the face of war and carried a burden no bear should ever have to carry. But his core desire is the same. He just wants to sit down before a fire with his friends. And in a world like ours, preserving that simple joy is the most heroic thing you could possibly do.

Fool of a Moorcock ! Throw yourself in next time and rid us of your stupidity !

In like vein, I will share an extract of a conversation with ChatGPT. I still have an occasional guiltless pleasure of using it for crossover stories and comparing the relative strengths of characters from different universes. Yes, it's immature and silly, and no, I don't give a flying fuck, because it does nobody any harm and makes me giggle. Anyway, on this occasion I had a particularly silly example : Superman versus Morgoth. Unsurprisingly, Morgoth wins : he's second only to God himself, whereas Superman... isn't. But then I idly asked :

Ahh, but what if Superman also has... a small three-legged pony called Mr Stumpy ?

To which in response I received :

Introducing a small three-legged pony named Mr Stumpy fundamentally alters the cosmological balance of power. Let us consider the implications carefully.

First, Superman — that is, Superman — has been defeated before, but never when accompanied by a plucky, structurally unsound equine sidekick. This alone introduces a narrative force known across genres as the Unexpected Comedy Familiar, whose power scales with how unlikely it is that they should matter.

Mr Stumpy, with only three legs, possesses a built in advantage : he is inherently unbalanced, and thus immune to Morgoth’s attempts to “unbalance” the world. You cannot destabilise one who exists in a permanent state of wobbling equilibrium.

Now consider Morgoth. The true enemy of the Valar was always hubris. Morgoth’s pride is vast, but even he is unprepared for an opponent who clops into Angband looking like a children’s drawing brought to life. The dissonance alone could create a metaphysical feedback loop in the Music of the Ainur. Tolkien implied many things, but he never imagined Morgoth having to keep a straight face when a pony named Mr Stumpy trots bravely into his throne room.

More seriously, in a mythic narrative framework, Mr Stumpy is a classic “Story Token”. He is small, improbable and symbolises humility, steadfast friendship and the absolute refusal of the universe to take itself entirely seriously. Middle-earth absolutely hates that kind of thing — that is precisely why hobbits keep saving it.

The combined effect is something like this.

• Superman provides the physical might.
• Mr Stumpy provides the mythic disruption field.
• Morgoth’s grim, cosmic melodrama collapses under the weight of one brave, lopsided pony.
• The Valar quietly pretend they planned this all along.

In short, Superman alone loses. Superman with Mr Stumpy wins through narrative absurdity of the highest and most ancient order.

Say what you will of ChatGPT. I don't think it's a perfect response : "that's why the Hobbits keep saving it" is a bit out of kilter. But sod all of that : it's got the morality bang on, and that's what matters.

Wednesday, 18 February 2026

The Intentionality of Evil

A few thoughts after having let Nuremberg sink in for a little while. It's a great movie, not on the emotional level of One Life overall, but by God there are scenes in there which you'll need a stiff drink to try your very best to forget. Those, of course, are the clips of real footage from the concentration camps. We need not dwell on this. Nor am I going to review the movie as a movie. Rather I just want to draw attention to one particular theme on the nature of evil. 

Ordinary Men rightly explores the banality of evil, how normal people who don't hold especially strong views can, in the wrong circumstances, come to commit acts of wanton barbarity. Understanding how this happens is undeniably important. But a key component of this, the other side of the coin, is that such actions are highly unlikely to take place without a guiding mind. A mob can be violent but it's usually thoughtless and burns itself out after a riot; sustained atrocities require planning and organisation. 

Clearer Thinking had a closely related email-only post about this recently :

When evaluating a person's immorality based on an action they took, their intention is a very important factor, but when evaluating the badness of an action, it isn't.

Which I think is exactly right (though Existential Comics, as usual, expresses much the same thing in a far more amusing way). Actions and intentions are not the same thing. Even at the sharp end in the most extreme cases, the people committing the horrors are not, by and large, as evil as those telling them to do so, even if those behind it never so much as punch anyone. 

Why ? Because if circumstances were different, most normal people wouldn't necessarily repeat actions they knew to be wrong. The instigators would. For them, the repulsive outcome is precisely the point. They want to do this, they aren't trying to excuse it, and they would keep trying to make it happen even knowing the end result (which is brilliantly expressed by Goring's final admission in Nuremberg). Most ordinary people try to excuse their actions, and though they're easily manipulated, left to their own devices they seldom resort to violence – at least not on a grand scale or to any great extremes. 

Of course, telling other people to go out and murder each other is itself an action. Merely having an intention or desire to harm other people is one thing, but to act on it in any way designed to bring this about is far worse. 

At this point I want to bring in a very interesting quote from Trevor Noah :

But I often wonder, with African atrocities like the Congo, how horrific were they? The thing Africans don't have that the Jewish people do have is documentation. The Nazis kept meticulous records, took pictures, made films. And that's really what it comes down to. Holocaust victims count because Hitler counted them. Six million people killed. We can all look at that number and rightly be horrified. 

But when you read through the history of atrocities against Africans, there are no numbers, only guesses. It's harder to be horrified by a guess. When Portugal and Belgium were plundering Angola and the Congo, they weren't counting the black people they slaughtered. How many black people died harvesting rubber in the Congo? In the gold and diamond mines of the Transvaal?

So in Europe and America, yes, Hitler is the Greatest Madman in History. In Africa he's just another strongman from the history books...

And yet I think we can say exactly why Hitler does have a genuine claim on being the Greatest Madman in History, or the most evil cunt who ever lived. Most strongmen, most dictators, don't care about how many people die under their rule so long as it benefits them in some way : if they were given another option whereby they'd be just as benefited but with fewer deaths, most would probably take it. Even Stalin might not have caused nearly as many casualties if he'd been given an alternative. 

In contrast, people like Hitler and Pol Pot most certainly would have caused them anyway. For them the deaths are not a side-effect, but the whole point.

This is why my onetime go-to YouTuber Lindybeige is completely wrong when he says that Napoleon was more evil than Hitler because he (supposedly) killed a higher percentage of people. Napoleon didn't actually care very much. Running away from his army and leaving them to die horribly : of that he's guilty. Actually wanting them dead ? No. He'd have acted differently if he believed he could. He would not have ordered his own men to die out of any belief that they simply deserved it. He would not have acted like a Dalek :

The Doctor : What's the nearest town?
Van Statten : Salt Lake City.
The Doctor : Population?
Van Statten : One million.
The Doctor: All dead. If the Dalek gets out it'll murder every living creature. That's all it needs.
Van Statten : But why would it do that ?!
The Doctor: Because it honestly believes they should die.

The Nazis believed that. Their weapons were ordinary people, and those people were guilty of some of the most horrific crimes ever committed. But it was the leaders who made this happen. The responsibility is theirs. The ordinary people will always exist and always have this tendency to act as they do for good or ill; we can't much change that and it's no use lamenting fundamental human psychology. The leaders, though, are absolutely and wholly responsible for their own actions. They're the ones we should turn and point to and say "you're an evil bastard". They're the ones we can do something about. Normal people, by and large, are much more like a force of nature.

Two significant caveats. First, this is not to say that we can't change how people on the ground respond to directives from above at all : we can, but this requires huge systemic, societal change. My point is that going after the leaders is much easier and much, much more effective.

Second, none of this means, in any way, that those firing the guns or releasing the gas weren't also immoral – of course they were ! Far, far too many of them simply, like Napoleon, didn't care enough to rebel. But this is still not the same as actually initiating the Holocaust and ensuring that it was carried out to completion. Put the same people with the guns in another life and they'll generally be completely harmless (we know this because this is exactly what did happen after the war); put Nazi High Command in another situation and they'll try and do the whole thing all over again.


This holds for much smaller crimes than genocide or universal domination. Adultery is seldom committed out of a desire to harm anyone. A robber who kills you to steal your TV is obviously immoral, but not as immoral as someone who comes into your home, kills you, and just leaves – even though the former has committed more wrong actions than the latter. The point is that most robbers don't intend violence. The morality of the person is different from the moral status of the actions they commit : someone who tries to kill you for its own sake is a worse person than someone who kills through apathy.

A final caveat is that I'm deliberately not attempting to set forth how we respond to these cases; this exploration has been purely for its own sake. The apathetic villain may well be more dangerous than the abjectly evil, in that they're harder to spot and ignorance/incompetence are easier to excuse. Nevertheless, if we stand in judgement of people, I would always deem those deliberately trying to inflict harm for its own sake as worse than those who aren't.

Lastly, Plato and many ancient philosophers essentially defined malevolence out of existence when they said that no-one deliberately and knowingly does wrong (Davros, in the above clip, similarly defends his insane plot on the grounds that he himself thinks he's doing good). But in my view, this is simply not a sensible definition at all. Someone who does harm to another for its own sake – because they think this person simply must suffer, even/especially when they know they don't actually deserve it, or do so for the sake of their own pleasure, or just inexplicably want this to happen – this person is being malevolent. Only by confronting this dark nature of the soul, acknowledging that the worst of us enjoy causing suffering for its own sake, can we guard against it. 

Monday, 16 February 2026

Review : A Short History Of Byzantium

I've not had much luck with histories of the Byzantine Empire, but then there don't seem to be all that many out there. That seems extremely unfair for a state that lasted over a thousand years, and as a direct successor of the Roman Empire you'd think everyone would go nuts for it. Sure, any documentary will tell you that the Western Empire fell in 476 AD, but you'd think that it might also be worth mentioning that the Eastern half kept going until 1453 (!). And yet often they don't, or dismiss the next thousand years in a single sentence.

I mean, come on... 1453 ! It's not unfair to say that the Roman Empire lasted most of the way through the medieval period, and yet all too often it's reduced to the date of its final fall. All of that millennium of history, of art, politics... all of it is so often completely overlooked. It's weird.


My previous dedicated reads on Byzantium* have been Judith Herrin's Byzantium : The Surprising Life of a Medieval Empire, and Lars Brownworth's Lost To the West. The former was so plodding and pointlessly nonlinear that I gave up less than halfway through, with the claim that the Byzantine court was a place of intrigue and plots being so unbearably unsurprising that boredom nearly did me in. The latter was much better. A genuinely gripping page-turner with powerful rhetoric, it suffered only from being far too short and with some questionable claims here and there.

* Sooner or later someone's going to point out the Byzantines themselves never used the name, but this isn't very interesting, so I won't.

Step forward John Julius Norwich. His Short History is, he says, about one-third the length of his full three-volume epic, and at nearly 400 pages this is already enough to be something of substance.

Norwich attempts to condense the entire history, emperor by emperor, into something readable and accessible. This is quite clearly an introduction, but it's a darn compelling one. Norwich's scope is immediately apparent : this is a thoroughly traditional political history concentrating almost exclusively on the upper echelons of society. It's a story about the powerful, the great, the good, the bad, and the exceedingly ugly, told with flair and clarity. He's unashamedly judgemental, so brazenly so that (unlike Brownworth) I instantly forgave him. He's biased towards his subject, but to my way of thinking he so openly declares his preference that he's honest about it. This is far better than any disingenuous pretence to objectivity.

Byzantine politics is proverbially complex, and it's to Norwich's great credit that he makes this almost perfectly straightforward. Here is someone who knows exactly which strands to pull on and which to leave well alone. The price, of course, is that limited scope. Of what daily life was like we hear next to nothing. Of art we get a smattering here and there, of science and engineering not a word. Theology and philosophy get a little more of an examination, but only when they influence politics. Deeper sociological factors are not covered : the Empire is reduced to the politics of individuals, not the currents of history.

All of this is an entirely sensible choice. Trying to cram in any more would fill the book to bursting : Norwich gives remarkable completeness (not a single emperor is skipped) with plenty of insight into how politics worked at the highest levels. It's a good focus and a great read precisely because of its limitations. 

That said, while I would hope for considerably more in the full work, there are still a few things I felt could have been improved here. One is the overall status of the Empire, which waxed and waned considerably before it was eventually extinguished. Brownworth did this better, explicitly describing the overall state of affairs : from total dominance to near extinction, astonishing resurrection to full Empire ("one of the greatest comebacks in military history"), retreat, re-expansion and consolidation into a powerful nation-state, and finally a long, agonising and piecemeal decline to ultimate irrelevance and destruction. 

With Norwich the reader has to draw this out themselves. Here the maps could have been a lot better. Norwich puts them all at the start : a single map of the Empire's overall extent at the beginning of each chapter would have helped a lot. Instead he relies on verbal descriptions of quite precise regions that were controlled or lost, which becomes unnecessarily tedious when done at the city level.

Still, these are somewhat minor complaints. It's a good read, and the only parts I found at all hard to follow were the too-brief sections on the Latin occupation. Here the list of politicians became simply too great, but the strength of the text is that you can easily get swept along and pick it up again when things start to make sense. I'm gonna give this one a very solid 8/10.


I don't want to do a detailed breakdown of this one, but there are a few themes I want to mention briefly :


Not all theology was useless : Some debates in Byzantine philosophy do feel like proverbial, uselessly complex and esoteric debates of concern solely to those who were devout believers. Others seem to have much broader philosophical ramifications, chief of which was the Chalcedonian formula. This, it seemed, was what finally allowed everyone to shut up and get on with worshipping Jesus. Instead of worrying about whether he was fully human or fully divine or some weird mixture of both, they decided he was... "fully divine and fully human, without confusion, without change, without division, without separation".

The similarities here to property dualism are striking. The mental and the physical are separate but belong to the same underlying subject. Neither, as ChatGPT put it in a discussion, reduces to or competes with the other. Whether you think this is actually a sensible interpretation of the mind-body problem isn't as important here as realising that these debates were intelligent, sophisticated, and productive for modern philosophy. They were not purely arcane points of useless trivia.


Not entirely the fault of the patriarchy : If the Eastern Empire literally came up with the very concept of a Patriarch, then it's interesting to see that more than a few women held supreme political power. None, it's true, in the Church, but many in politics, either as regents, wives, or even Empresses in their own right. Strangest of all was a brief period when political power was held by two elderly ladies. It didn't end well, partly due to the sheer proverbial political complexity of the situation, and partly because neither of them was actually any good. Even so, the idea that this could happen at all could scarcely have been imagined in the days of Augustus or even Marcus Aurelius. By no means was this a feminist society, but it could, on occasion, at least be a good deal more tolerant than its predecessor. 


It is possible to make no mistakes but still lose : I don't mean the final conquest here. By that point Byzantium was so beyond saving that even Norwich is actively hastening its demise to put it out of its misery; it had made plenty of catastrophic and contemptible mistakes by the end. No, what comes through in an earlier phase is that competent rulers could be hated. Even when what they did was absolutely necessary for the Empire's well-being (and at times its very survival), there were limits to what its citizens were prepared to tolerate. Material increases in prosperity made little difference. A ruler who couldn't persuade their citizens, or was simply too different from them, would never win their acclaim, and some of the most unfortunate were actively and unfairly despised almost because of their efforts rather than in spite of them. There's a lesson here for contemporary British politics, to be sure.


Afterlife of an Empire : The Empire under Justinian was a recognisable evolutionary descendant of that under Augustus. By the time of its zenith under Basil II it was not. It was now very much its own thing, still an Empire of sorts – not on the same scale as in its heyday, to be sure, but powerful, prosperous, and apparently stable. Unlike many periods of the original Roman Empire, this one feels like a state which could have been maintained indefinitely if only they had had but a few more competent rulers and a little more luck; it could even have (re)expanded considerably further than it actually did.

But what exactly it is about it that makes the Empire feel different is hard to say. It reminds me a lot of Marc Morris' book on the Anglo-Saxons : without ever describing how or why, the period at the end of the ninth century is clearly culturally different from that of the sixth. So it is with Byzantium. 

And perhaps this is why it's so frequently overlooked. Stories, as a rule, like endings. When an Empire falls it should stay fallen, but the slow and continuous re-invention of the Byzantine state does not fit this mould at all. To have the original civilisation fall – actually and properly disintegrate – but an extended half of it survive and go on to become its own, highly successful thing is somehow... uncomfortable. It shouldn't do that.

Maybe it's the ambiguity : if it had maintained its Romanness in some fashion, or made a clean break with the past, it might be easier to accept. Byzantium did neither. This was no mere ghost of an Empire but a living, breathing, successful polity that endured and adapted even while remaining a vestige of antiquity. We ignore Byzantium because it just doesn't fit our expectations, being neither quite a Greek Empire nor a Roman Empire Mark II, but both and more and neither all at the same time. Surely, we should see this as a reason for fascination, not forgetfulness. 
The Byzantines were human like the rest of us, victims of the same weaknesses and subject to the same temptations, deserving of praise and of blame much as we are ourselves. What they do not deserve is the obscurity to which for centuries we have condemned them. Their follies were many, as were their sins; but much should surely be forgiven for the beauty they left behind them and the heroism with which they and their last brave Emperor met their end, in one of those glorious epics of world history that has passed into legend and is remembered with equal pride by victors and vanquished alike. 

Tuesday, 3 February 2026

Do Androids Dream Of Anything Very Much ?

Last time I set out my despondency about ever being able to solve the mystery of what the mind actually is, in contrast to Robert Kuhn's optimistic viewpoint that we just need to find the right theory. But an Aeon piece soon had me feeling hopeful once more : not that we could indeed solve everything, but that with a bit of goal-adjustment, we could examine consciousness in a way that would be both interesting and productive.

This post continues examining the rest of the essay. Since this put me in a very different frame of, err, mind, this is not part two and you don't have to read the previous post at all. Rather, the rest of the essay got me thinking about what we mean by hallucinations, particularly in the context of AI.

The remainder of the essay is of a similarly high standard, but is mainly concerned with what sort of "neural correlates" may indicate consciousness and how the brain works : does it perceive reality, act as a prediction engine, or is consciousness something that happens when prediction and observation are in disagreement ? In some circumstances it seems that expectation dominates, and that's what gives rise to hallucinations; the interesting bit here is that, philosophically, this implies that all conscious experience is a hallucination, not just the difference between expectation and reality. 

Which, of course, raises obvious parallels to LLMs. As I've said before, I don't believe the common claim that to a chatbot everything is a hallucination is particularly helpful any more : it's not baseless, but I think we're going to need some more careful definitions and/or terminology for this. Interestingly, this is underscored by the final point of the article, on the different types of self we experience.

There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency – of urges to do this or that, and of being the causes of things that happen. At higher levels, we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time, built from a rich set of autobiographical memories. And the social self is that aspect of self-experience that is refracted through the perceived minds of others, shaped by our unique social milieu.

The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are ‘controlled hallucinations’ of a very distinctive kind.

In that sense it would appear that "hallucination" here simply means "inner awareness" of some sort, an experience not directly connected with reality. If so, then by this definition I would strongly dispute that LLMs ever hallucinate at all, in that I simply don't think they have the same kind of experience as sentient life forms do – not even to the smallest degree. I think they're nothing more than words on a screen, a clever distribution of semantic vectors and elaborate guessing machines... and that's where they end. They exist as pure text alone. Nothing else.

I think this is probably my only point of dispute with the essay. I don't think "hallucinate" as used here is ideal, though I can see why they've used it in this way. It seems that what we mean by the word could be :

  • A felt inner experience of any sort 
  • A mismatch between perception and reality
  • A total fabrication of data.
People do the third sort (mostly) only when asleep; they have a kind of awareness in dreams that has very little to do with external perception. LLMs clearly do this kind of thing much more routinely, but sporadically. They demonstrably do not do this all the time. They can correctly manipulate complex input data, sometimes with impressive accuracy; the idea that this is done by some bizarre happenstance of chance fabrication is clearly false. Yes, sometimes they just make shit up, but for good modern chatbots that's now by far the exception rather than the norm.
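To make the "elaborate guessing machine" point concrete, here's a toy sketch of next-token sampling. Everything in it – the vocabulary, the probabilities – is invented for illustration, and real LLMs of course do this at vastly greater scale and sophistication. The point it illustrates is that factual and fabricated continuations are drawn from one and the same distribution :

```python
import random

# A toy bigram "model" : all words and probabilities are invented for
# illustration. Given the current word, sample the next one from a fixed
# weighted distribution. Note that a factual continuation ("Paris") and a
# fabricated one ("Narnia") are just differently-weighted entries in the
# very same tables : nothing in the mechanism distinguishes them.
MODEL = {
    "the":     {"capital": 0.7, "pony": 0.3},
    "capital": {"of": 1.0},
    "of":      {"France": 0.8, "Narnia": 0.2},
    "France":  {"is": 1.0},
    "is":      {"Paris": 0.9, "Lyon": 0.1},
}

def continue_text(start, length, rng):
    """Generate up to `length` further words by repeated weighted sampling."""
    words = [start]
    for _ in range(length):
        dist = MODEL.get(words[-1])
        if dist is None:          # no known continuation : stop
            break
        tokens = list(dist.keys())
        weights = list(dist.values())
        words.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(continue_text("the", 4, random.Random(42)))
```

Usually the dice land on something sensible; occasionally they land on "Narnia", and the machine "hallucinates" – with no internal signal marking that output as any less valid than a correct one.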

LLMs can also "hallucinate" in the second sense, that they can make the wrong inference from the data they've been input, just as we can. Most chatbots now include at least some visual and web (or uploaded document) search capabilities, so we must allow them a "grounding" of sorts, albeit an imperfect one. Hence they can take external data and interpret it incorrectly. 

This isn't mutually exclusive with the third definition though. An LLM that doesn't know the answer because it doesn't have enough data may well resort to simply inventing a response, ignoring any input completely. This would help explain some of their more outrageous outputs, at least.

Humans and LLMs share these two types of hallucinations, but not the first. LLMs experience literally nothing, so it simply isn't possible for them to have this kind of hallucination – a hallucinatory experience – whatsoever. And that's where the terminology breaks down. Most LLM content is statistical inference and data manipulation, which is, at a very high level, not that dissimilar to how humans think. This kind of output is not, by and large, an outright lie, but the similarities to human thinking are ultimately partial at best. It resembles, if anything, the kind of external cognition we do when we use thinking aids (measuring devices, paper-and-pencil arithmetic), but without any of the actual thought that goes into it. 

Perhaps a better way to express this than the standard, "to an LLM everything is a hallucination", is that to an LLM, all input has the same fundamental validity. Or maybe that to an LLM, everything is processed in the same way. An LLM's grounding is far less robust than ours : they can reach conclusions from input data, even reason after a fashion, but their "thought" processes are fundamentally different. They can be meaningfully said to hallucinate, but only if this is carefully defined. They can and do fabricate data and/or process it incorrectly, but they have no inner experience of what they're doing, and little clue that one input data set is any more important than another. 

To return to the Aeon essay, one key difference between us and them is that LLMs have no kind of "self" whatsoever. So yes, they can be meaningfully said to hallucinate sometimes, but no, they aren't doing this all the time. Fundamentally, at the absolutely most basic level of all, what they're doing is not like what we do. They aren't hallucinating at all in this sense : they are, like all computers, merely processing data. 

Hang on, maybe that's all the phrase we need ? To an LLM, all data processing is the same.

Hmm. That might just work. Let's see if that survives mulling it over or if I have another existential crisis instead.

For All, Eternity ? Beyond the Hard Problem

"Should a being which can conceive of eternity be denied it ?"

This rather strange question is one which has plagued Robert Kuhn in his investigations into consciousness. You may remember that I found this to be distinctly odd when I summarised a three hour (!) YouTube video where he mentions it as motivation. I also said that my views had shifted considerably more towards outright uncertainty, and indeed that remains very much the case... if anything, all the more so after mulling it over.

The thing is, Kuhn's question is plaguing me as well, but in a slightly different way. I've long advocated that the key aspect of consciousness is its non-physical nature, there being no such physical phenomena as redness or guilt or ennui. Oh, sure, there are physical brain states corresponding to these, undeniably so. But that we can conceive of the non-physical... that we can imagine such things as numbers which have no actual substance in and of themselves... if we can conceive of the non-physical, does that make it inevitable ? Since we can conceive of, say, numbers, in the purely abstract sense, doesn't that mean the purely abstract itself must exist, in some very broad sense ?

This bothers me because it feels suspiciously like the old Ontological Argument : God is necessarily perfect, and perfection necessarily exists, ergo God exists. It's perfectly circular.

Is the same true of the non-physical ? I honestly don't know. I worry that the problem is simply intractable, an inescapable limitation of being trapped inside our skulls with nought to describe the world but language. Escape from our mental prisons feels like a true impossibility.

In fact my crisis of confidence borders on the outright nihilistic as far as consciousness goes. If we cannot know the nature of consciousness, you might think, then this points towards neutral monism, my close second favourite interpretation after dualism. In saying that mind and matter are unified by some third unknown substance, neutral monism readily allows for an everyday sort of dualism : yes, ultimately all things are one, but you can't ever know the true nature of the one substance or how it manifests in such different ways, so for all intents and purposes, mind and matter might as well be different things. 

The problem is that this now feels to me like a massive degeneracy. Dualism slides into neutral monism but the reverse is also true, just as idealism and physicalism appear to be two halves of the same coin. Even worse, if you posit that you can't ever know the true nature of the thing unifying these apparently disparate substances, you might as well be postulating magic. This is the very thing I've been at pains to avoid in trying to make dualism (which still seems intuitively far the simplest option to me) palatable to a scientific viewpoint.

But it gets still worse than this. The more you engage with any one position, the more you attempt to pin it down... the more similar each of them seems to be to all the rest. The merest slip of a definition changes physicalism into idealism or illusionism into panpsychism. The whole thing collapses into a spectacular philosophical singularity with no distinguishable positions and no productive insight at all.

Well, that was bleak.

Is there any hope left ? Yes and no. No, perhaps, in that we might have to admit that the basic problem of describing the true nature of mind is an impossible dream. Neither our language nor our fundamental mental faculties are up to the task : we cannot, for instance, truly conceive of the non-physical and we certainly can't describe it. Since we ourselves are mental constructs, we cannot fully grasp our own nature.

But some hope remains. If we can't really know anything with the truest certainty, this doesn't mean that we don't all naturally cling to some preferences. For all that I've just said, I still tend strongly towards my own innate positions as the best way I can make sense of the world. In that sense, this may be less a nihilistic collapse and more a paring back : a relinquishing of the ambition to objectively solve the mystery, and an acceptance that this is only ever going to be a personal perspective.

But a very much more upbeat stance comes from this excellent Aeon essay, which is of the sort which reminds me of just how damn good Aeon can be.

The ‘easy problem’ is to understand how the brain (and body) gives rise to perception, cognition, learning and behaviour. The ‘hard’ problem is to understand why and how any of this should be associated with consciousness at all: why aren’t we just robots, or philosophical zombies, without any inner universe? It’s tempting to think that solving the easy problem (whatever this might mean) would get us nowhere in solving the hard problem, leaving the brain basis of consciousness a total mystery.

But there is an alternative, which I like to call the real problem: how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).

There are some historical parallels for this approach, for example in the study of life. Once, biochemists doubted that biological mechanisms could ever explain the property of being alive. Today, although our understanding remains incomplete, this initial sense of mystery has largely dissolved. Biologists have simply gotten on with the business of explaining the various properties of living systems in terms of underlying mechanisms: metabolism, homeostasis, reproduction and so on. An important lesson here is that life is not ‘one thing’ – rather, it has many potentially separable aspects.

In essence, shut up and calculate. If we can't understand the fundamental nature of consciousness, or how and why it exists, we can still understand many aspects of it. Instead of trying to explain how the ghost in the biological machine is able to influence physical matter, we can learn what signature of physical matter corresponds to which mental processes. This is pleasingly neutral as it in no way implies anything whatsoever about the non-physical : sure, ennui might correlate to a brain state, but in terms of determining whether the emotion has some Platonic "realness", it means nothing at all. Just as we can't say what it is about a frog that makes it clearly alive but we can study how it jumps and croaks in extreme detail (if you're into that sort of thing), so we can study what consciousness is doing in the brain (or the other way around if you prefer).

But wait, there's more.

A good starting point is to distinguish between conscious level, conscious content, and conscious self. Conscious level has to do with being conscious at all – the difference between being in a dreamless sleep (or under general anaesthesia) and being vividly awake and aware. Conscious contents are what populate your conscious experiences when you are conscious – the sights, sounds, smells, emotions, thoughts and beliefs that make up your inner universe. And among these conscious contents is the specific experience of being you. This is conscious self, and is probably the aspect of consciousness that we cling to most tightly.

Ahh, now here it seems like we have scope for real progress after all. We're back in the realm of being able to compartmentalise, reduce, and analyse : we can examine how changing one thing changes others, even quantitatively so in terms of the neurological effects. But more than that, here we have different aspects of consciousness itself, useful things we can discuss without simply resorting to the purely neurological or needing to address the intractable philosophical nature of what consciousness actually is. We can get inside the mind, so to speak, without discussing what it's made of.

Complexity measures of consciousness have already been used to track changing levels of awareness across states of sleep and anaesthesia. They can even be used to check for any persistence of consciousness following brain injury, where diagnoses based on a patient’s behaviour are sometimes misleading. At the Sussex Centre, we are working to improve the practicality of these measures by computing ‘brain complexity’ on the basis of spontaneous neural activity – the brain’s ongoing ‘echo’ – without the need for brain stimulation. The promise is that the ability to measure consciousness, to quantify its comings and goings, will transform our scientific understanding.

This is something I've read about before, but it becomes all the more intriguing when you stop trying to say that "consciousness is correlated with physical phenomena, therefore it must be the same as them" (which I think is a nonsense position). The framework of simply studying the correlations for their own sake, with no need to infer a deeper meaning, transforms this into something much more interesting and rewarding : the goalposts are realistic, achievable, and entirely non-threatening.
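To give a flavour of what "measuring consciousness" can actually mean in practice, one family of measures used in this sort of research is based on Lempel-Ziv compressibility : binarise a recorded signal, then count how many distinct "phrases" a left-to-right parse needs. More phrases means a less compressible, more diverse signal. The following is a hedged toy sketch of the idea only – the thresholding, the stand-in signals, and the function names are all my own illustration, not the Sussex group's actual pipeline :

```python
import random

def lempel_ziv_complexity(sequence):
    """Count phrases in a simple left-to-right Lempel-Ziv parse.

    Each phrase is the shortest chunk of the remaining sequence that
    hasn't appeared as a phrase before; a trailing partial phrase
    still counts as one. More phrases = less compressible signal.
    """
    phrases, phrase, count = set(), "", 0
    for symbol in sequence:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    if phrase:  # leftover partial phrase at the end
        count += 1
    return count

def binarise(signal):
    """Threshold a real-valued signal at its median into a '0'/'1' string."""
    median = sorted(signal)[len(signal) // 2]
    return "".join("1" if x > median else "0" for x in signal)

random.seed(42)
flat = [0.0] * 1000                                # crude stand-in for a low-complexity state
noisy = [random.gauss(0, 1) for _ in range(1000)]  # crude stand-in for diverse activity

# The diverse signal parses into many more distinct phrases than the flat one.
```

Obviously real applications use actual EEG data and normalised variants of the measure, but the core trick really is this simple : complexity here just means resistance to compression.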

Consciousness is informative in the sense that every experience is [slightly] different from every other experience you have ever had, or ever could have... integrated in the sense that every conscious experience appears as a unified scene. We do not experience colours separately from their shapes, nor objects independently of their background. It turns out that the maths that captures this co-existence of information and integration maps onto the emerging measures of brain complexity I described above. This is no accident – it is an application of the ‘real problem’ strategy. We’re taking a description of consciousness at the level of subjective experience, and mapping it to objective descriptions of brain mechanisms.

Some researchers take these ideas much further, to grapple with the hard problem itself. Tononi, who pioneered this approach, argues that consciousness simply is integrated information. This is an intriguing and powerful proposal, but it comes at the cost of admitting that consciousness could be present everywhere and in everything, a philosophical view known as panpsychism. The additional mathematical contortions needed also mean that, in practice, integrated information becomes impossible to measure for any real complex system. This is an instructive example of how targeting the hard problem, rather than the real problem, can slow down or even stop experimental progress.

Yes ! Quick, someone get me a little flag and/or some pom-poms, I want to be the cheerleader for this approach !

The rest of the essay continues in a similarly interesting vein. However, as that ended up sending me down more of an AI-based tangent, I'll leave it for the next post. Here I'll just end by recalling Kuhn's comment that we can think scientifically about otherwise non-scientific issues. Years ago I tried to set out the most basic assumptions of the scientific world view, noting that if these were undermined we'd be in real trouble. But perhaps not. The requirement to be logical, clear (both in our conclusions and reasoning process), and accountable might see us through even in realms where scientists otherwise fear to tread. The approach here, of setting out the different properties of consciousness – ones we can surely all agree on – gives me some hope that, even if we can't solve the ultimate mystery, we can still find something interesting to talk about.

Wednesday, 28 January 2026

Critically Minded

Here's another short post exploring an idea I'm finding useful. I've used this in a few recent posts, so I just want to set it out on its own.

People seem to think in quite distinctly different ways. The different kinds of thinking are numerous, but there are a couple which I think are often confused : critical thinking and analytical reasoning.

The analytical thinker is generally someone like a scientist, who will pick apart an idea into its component variables and explore in detail what would happen if you change any of them. They'll run with this to the nth degree, examining consequences until they really feel that they've fully understood a concept, or can make a testable prediction to check whether the whole thing holds up to scrutiny. Analytic thinking is often technical : most obviously, perhaps, when it comes to mathematics, but it can apply to other areas too. Anyone who's ever solved a problem by testing out different aspects to destruction (who hasn't ?) has done at least some analytical thinking.

I would define this as asking the question : what if this is true ? What consequences follow ? How does this one thing affect other things ? That's the essence of analysis.

What I think people often confuse this with is critical thinking. And to be fair, this too is a major component of science. Having decided on a way to test their idea, the scientist should then actually do so, or at least consider the results from someone else's test that attempts to answer the same question. The critical thinker is not concerned with what the consequences are, they're concerned with whether the premise is correct. They don't want to speculate as the analyst does. They want to know if they're right at all.

This mode of thinking I would define as asking the question : is this really true ? Can I verify it ? Could there be another explanation ? That's what I mean by critical thinking.

The analogy I like is from programming, having learned this the hard way through direct personal experience. A really protracted debugging session will go something like this :

  • First, I'll check if I have any typos, any missed commas or wrong equals signs, or something where a variable isn't being set correctly. This would be the case of the code not doing what I thought it was doing.
  • Next, I'll go up a level and look at the code structure – maybe I've got a loop that's nested wrongly, so it's iterating over the wrong variable or not being terminated correctly. In this case, the code might be doing what I thought it was doing, but the way I thought things should be done was itself partially incorrect.
  • Finally, I'll stop and think about whether the very method I've been using is actually likely to give me the correct result at all – whether it's even fundamentally possible for it to work, or whether I've built a horribly complex house of cards and need to start again.

Programming seems like a good analogy for me because it encapsulates both modes of thought. The low-level debugging is analytical : have I got the right variables, what if I change where the loop is run, is my input correct, etc. The high-level stuff is critical : is this method actually going to work if I do it correctly ? This sort of multi-level, or multi-scale, thinking blends quite nicely from one mode of thought to the other. It's something LLMs have become noticeably better at over the last few months, no longer picking over minutiae, but actually stopping to consider the premise of a question.
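To make the distinction concrete, here's a toy example of my own devising (not from any real debugging session) : a one-pass variance calculation where no amount of analytical, line-level debugging will save you, because the critical question – can this method work at all ? – has the answer "no" :

```python
def variance_onepass(xs):
    """Variance via E[x^2] - E[x]^2 in a single loop.

    Any typo here (level 1) or loop mistake (level 2) can be found
    analytically. The level-3, critical question is whether the
    formula itself can work : for data with a large mean, the two
    terms are nearly equal and the subtraction destroys the answer.
    """
    total, total_sq = 0.0, 0.0
    for x in xs:
        total += x
        total_sq += x * x
    n = len(xs)
    return total_sq / n - (total / n) ** 2

def variance_twopass(xs):
    """Mathematically identical formula, numerically well-behaved."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

data = [1.0, 2.0, 3.0, 4.0]        # true variance : 1.25
shifted = [x + 1e9 for x in data]  # same spread, huge mean
```

Both functions agree on `data`, but on `shifted` the one-pass version returns garbage while the two-pass version still gives 1.25. You could check every comma in `variance_onepass` forever – being right at the analytical level doesn't help when the premise is broken.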

We can of course imagine a four-way graph to describe this. We can have (1) those who are both critical and analytical, which is pretty much ideal for a scientist. There are (2) those who are analytical but not so critical, again a trait common in the sciences : these people are fine so long as they're given the right problem to tackle. People who are (3) critical but not analytical are less helpful, something common among loons on the internet : "dark matter doesn't exist because your ugly face, that's why" types. And finally of course we have (4), those poor unfortunates who don't do much of either of these modes of thinking.

Dedicated readers might remember my longer 2015 post about skepticism. Back then I struggled to find a good word to describe this sort of concern for the truth, and perhaps there isn't one : "critical" has the same popular negative connotations as "skeptical" in everyday use. But critical thinking, as a term, does seem to be used in this sense of wanting to find out the truth regardless of the result.

Arguably there's an overlap here with curiosity, which similarly implies wanting to know the truth. The problem is that curiosity can also mean something more like a greed for more and more facts : a desire to travel for the sake of experiences, or to read more and more books to see what they contain, or an urge to run an experiment without any preconceptions as to what will happen at all, rather than to test the validity of a claim. If you follow this blog regularly you'll know I'm intensely curious about mythology, but not because I want to determine if Zeus existed or if I need to defend myself against the afanc on my next trip home. 

Critical thinking, on the other hand, seems to much more specifically capture this sense of a desire for verification. Like everything else, it's not an absolute state of mind. Someone might be extremely critical when they first learn a new fact but far less eager to test something they learned years ago, or their degree of critical reasoning might vary enormously across different subjects (I want to know if, say, dark matter exists, but I don't give a flying crap about whether celebrity X really said statement Y on social media platform Z).

Nor is it realistically possible to hold ourselves to the highest standards of critical thinking at all times. If you go down that way, you end up in a postmodernist Humean nightmare where nothing can ever be truly verified and nothing known, further progress being hampered by intellectual impotence. This is why Ronald Hutton's Pagan Britain annoyed me so, being so resolutely noncommittal that he wouldn't even venture to suggest how we could even test anything, let alone claiming that any one interpretation was actually true. That, in my opinion, is not a productive way of learning anything. Better by far to hold an opinion but be prepared to surrender it rather than never believing anything at all.

Likewise, it's also possible to be analytical to a fault, obsessively examining every detail even when they have no possibility of changing any major result (this is the fault of many a peer reviewer). So our four-way graph would be complicated, with the extreme not necessarily being the place one wants to be. Perhaps this helps with a description of wisdom. Maybe wisdom is knowing what should be done, when to apply critical and analytical reasoning and how much, when to rabidly fixate on an issue and when to let go – where exactly the balance of the different ways of thinking lies to ensure a successful, happy outcome.

The Things We Say

I'd like to add a little corollary to that rather long post on bullshit I wrote some years ago.

Bullshit, I contend, is not caring about the essential truth of a statement. This is a slight, but I think important, modification of the more usual principle that bullshit simply means not caring about the truth. Someone can respond with a perfectly truthful statement, but it's context that matters : if their response doesn't address the point you were making (e.g. with whataboutism or other kind of diversionary tactic), then that's still bullshit.

I also came up with a whole taxonomy of different kinds of shitty statements, but I digress.

In keeping with the theme, let me get to the point in a slightly roundabout way. All the crazy political shenanigans of late have made me acutely aware of a lesson I wish Younger Me had realised much earlier. 

That is, there are two main reasons that people say what they say. The first is that it's because they believe what they're saying is genuinely true. They argue with each other because they believe it's an innately good thing to get at the truth, and that disagreement is something that fundamentally needs to be corrected. This doesn't mean they're not open to changing their minds (although this can certainly be the case), just that they're deeply concerned with what's right and wrong. I think this is most people's baseline assumption about most other people, at least in a healthy society.

The second is that they're trying to produce an effect of some kind. We all become familiar with this in different ways : our parents lie to get us to behave, advertisers exaggerate and mislead, politicians... well, they do all kinds of crazy shit. We get fooled because our baseline assumption is still that people are basically honest; we become less naïve when we realise that this doesn't apply in all circumstances; we degenerate into cynicism when we start to behave as though this second reason is the norm rather than the first, when we think that agendas are all there are.

All this is probably obvious. The reason I wish someone had told Younger Me this is because it should be explicit. When you raise what's known subconsciously to full conscious awareness, you can act on it. It's easier to remember, easier to be on guard, easier to avoid the pitfalls both of naivety and cynicism. Learning it implicitly means that the idea will only arise through learned patterns, and so only affect behaviour in rather narrow domains; learning it explicitly means you can choose to analyse behaviour in all circumstances.

How does this tie back in to bullshit ? Very simply. A classical bullshitter uses the second intention, saying things without regard for the truth... but they do, importantly, still care about something. They say what they say because they want to manipulate people. They want them to react in some way, maybe as part of a carefully-determined plan, or maybe in a more vague strategy of simply provoking emotion but still with an ultimate objective in mind.

The corollary I want to propose is that maybe there's a truly deep kind of bullshit. Maybe the kind of nonsense – the anger-inducing, incoherent, aimless, self-harming verbal diarrhea that vents forth from the Orange One's unshapely orifice – maybe this is simply because there's no plan of any kind whatsoever. Maybe the deepest kind of bullshitter is someone who says things for no other reason than that it makes them feel good for a microsecond. No aim in mind, no master plan of manipulation, nothing but ultra short-term "I like saying this". Literally nothing beyond that.

Now, many people are aware of this already : "however stupid you think he is, he's stupider than that", someone said recently. Indeed, this is a position I've long held myself, there simply being no good alternative explanation for the sheer level of incoherency on display... that, and a healthy respect for Occam's Razor. There's just no need to invoke some master four-dimensional chess when sheer stupidity presents a far more believable explanation.

No, the point of spelling this out is only to make it explicit. When you realise that this is (perhaps) the way some people really are, you can begin to see it as a pattern. You can watch out for it. It can help keep you aware that all of us say things in this way from time to time (who hasn't got carried away and realised instantly that they said something they actually thought was total bollocks ?) and so don't need to treat every such statement in the same way. When you see someone who might commit the odd shitpost here and there but knows where to draw the line, when to be serious and respectful and when to just muck around, you know how to respond.

And when you find someone who essentially always communicates in this way... well, then you can decide for yourself if this is something you approve of. Maybe it is, in some roles. Maybe it's fine for stand-up comedians. But if you think that a) someone talking completely incoherently really believes what they say; b) they're doing so because they're actually really clever; and/or c) this person is an extremely powerful politician... then I don't think we can be friends.

Tuesday, 20 January 2026

The Scouring

Today, a short look at The Scouring of the Shire. Surely the most Marmite of all chapters of The Lord of the Rings... I personally hate it. I've read many arguments as to why it's actually the most important section of all, but every time I read it I just think “nope”. Let's begin with a reasonably sensible defence of the chapter I stumbled upon some time ago.

To be fair, there are more than a few reasons why multiple adaptations have forsaken the chapter. For one, it is sort of an anti-climax, taking place well after the principal plot of The Lord Of The Rings is over.

That’s part of it, but more important is that it’s a huge tonal mismatch. We go from epic, literally world-shaping events, to… an impotent wizard messing up people’s gardens. There’s just no way to naturally flow from one to the other in a narratively satisfying way. As Tolkien himself said about Beowulf :

I can see the point of asking for no monsters. I can also see the point of the situation in Beowulf [i.e. all the villains are monsters]. But no point at all in mere reduction of numbers. It would really have been preposterous, if the poet had recounted Beowulf’s rise to fame in a ‘typical’ or ‘commonplace’ war in Frisia, and then ended him with a dragon*. Or if he had told of his cleansing of Heorot, and then brought him to defeat and death in a ‘wild’ or ‘trivial’ Swedish invasion !

Incidentally the truly Norse sagas do stuff like this all the time, and to be fair, they're mostly crap.

Which is exactly the problem. We have the Hobbits go from being key players in the destruction of the last great evil of the world, to having to fight the equivalent of “some Swedish prince”. It’s deeply unsatisfying. The argument is sometimes made that they need to prove themselves, but this makes no sense, because after what they’ve already done, no such activity is necessary.

The way the Shire saves itself is, in part, an opening up. Sam uses one of his gifts from the elves to restore the Shire. Merry and Pippin travel throughout the region and maintain their ties with both men and dwarves. Preservation can mean change. In fact, it may require it. Making the Shire an untouched place carries a stagnant stench. In Tolkien’s books, nothing can be protected forever, and all requires active vigilance and care. In the films, the Shire is already perfect and needs no change at all.

This is well put (the Silmarillion explores in much more detail the moral differences between a healed world versus one perfect from the beginning), but the need to save the Shire is just not apparent. The Hobbits have already fought to preserve it, so making them do even more – having people's gardens dug up by some loser wizard after having destroyed Barad-dûr – doesn't add anything. It can't, really, because the main task has already been accomplished. It's Done. The ending of Sauron couldn't possibly be any more final; anything that happens next is necessarily a detail.

The argument is also made that it was necessary to show the Hobbits have been changed by their experiences, but this too is abundantly obvious. It's clear to reader and cinema audience alike that the Frodo who comes home is not the Frodo who sets off, nor is Sam, nor even Merry and Pippin to a lesser degree. There's no need to be any more explicit about something which is already very explicit... it would be like asking for more nudity in Game of Thrones.

Finally, another point is that this gives the work an added depth of character realism, but again… it’s a fantasy. It isn’t helpful to try and do this. To my mind the chapter is nothing but a weird and colossal distraction : I can see why it’s there, but it’s not a good bit of writing, even if many of the themes are important. It’s just too much of a clash with the narrative imperative, like trying to set up a creepy graveyard scene only to suddenly fill it with hamsters, or something. But others may disagree.

None of this is intended to criticise the linked article, which I do think is rather good. I take issue with the following, however :

The battle of Helm’s Deep, a slim handful of pages in the novel, takes up 40 minutes in Jackson’s The Two Towers. The necessity of peace and restoration, the hard work they require, are left to the cutting floor, while hour-long scenes of heroic violence take more and more space in each subsequent film. Though the films’ pastoral sequences have warmth and joy to them, they lack the bitter, beautiful edge that Tolkien’s prose grants them. That’s the cost of cutting Return Of The King‘s most crucial chapter: A loss as profound as the one Frodo Baggins suffers by novel’s end.

Yes, battles are exciting, and it's easier to drum up audience engagement if you've got the budget for it. But the text of the battles is some of Tolkien's most magnificent, and I would reject utterly any notion that the films are in any sense "shallow" for focusing on the spectacle. The stakes in the book are the very highest, practically cosmic in scope, and demand an appropriate and lengthy visual. More importantly, it is precisely the combination of this mythic scale with the human-level events (not least of which is Theoden's speech) that gives the film tremendous, almost overwhelming emotional depth. Reducing the battles would be the direct equivalent of asking for fewer monsters in Beowulf : entirely and spectacularly missing the point. The mythic grandeur is exactly where the film most beautifully delivers its most potent message.

Nor do I find anything beautiful or bitter in Scouring; it just feels oddly tacked-on and badly-written. I would note that in the early drafts, Tolkien's "note to self" was that Hobbiton was dominated by a biscuit factory and the returning Hobbits decide to sail away to Greenland, so don't you dare tell me that the man who wrote stuff like this wasn't also capable of writing utter shite as well.

But...

One Quora answer to a question of the politics of Lord of the Rings does offer a better answer :

Sam, the humble gardener, has returned from war. And he does not take shit from anybody. He is appalled by the weakness and complacency of his countrymen. They mindlessly go along with whatever the new rulers suggest… anything to stay out of trouble. And Sam doesn’t play that way. He’s seen some shit. Seen the horrors of war. And he urges his people to resist.

You can read the totalitarian state the Shire is turned into as a far-right nightmare scenario. Or as a far-left nightmare scenario. Either way, it is dark, twisted and bleak. Public [sic; surely "private"] property is taken over by the [abjectly fascist] state, the rivers and fields are polluted and factories pump out black smoke as mighty forests are cut down… and most people just keep their heads down, and don’t resist. That’s Tolkien’s politics in a nutshell — do resist, do fight back, and don’t stand idly by as evil does what evil does best.

This is much more credible. It's not about some vague, unspecified way in which the Hobbits have learned important life lessons, nor about them proving their abundantly-clear newfound abilities (after Shelob, what other horrors can life hold for Sam ?). It's about exactly what those lessons they've learned actually are : to resist oppression, to urge their fellows to defend their idyllic lifestyle when the need arises, to realise that the inevitability of evil is an illusion. It can be defeated even by the small : "help shall oft come from the hands of the weak when the Wise falter", as Gandalf says earlier. No-one is truly helpless.

Not, however, that this much changes my mind about the chapter. I can see the point to it, but I still find it badly-executed. For it to have any poignancy would require, I think, some far worse tragedy than trampling a few flower beds, and to avoid the tonal mismatch would require some remaining vestige of the cosmic scale of the threat for the Hobbits to overcome (perhaps if Saruman was left with considerably more power, this might have helped matters). 

Regardless, blending this into the narrative consistently is a virtual impossibility. However important the message might be, there's simply no way of defeating the Enemy and having anything else feel like fighting "some Swedish prince". And again, the Hobbits have already demonstrated this resistance in the face of overwhelming odds, so to now scale things back and give them a challenge manifestly below their abilities proves nothing. 

No, the movie's approach is better by far : show that they're conscious of saving the Shire during the War of the Ring ("courage Merry, courage for our friends"), with a very gentle nod to their changed nature on their return : bittersweet in that Sam is now able to face the altogether different challenge of approaching a not-especially-attractive barmaid, but that Frodo is truly broken, unable to live in the Shire any more. The theme of Scouring, I accept, is hugely important. But I argue that the movie is not shallow for omitting the chapter, nor does it lack the "bitter and beautiful edge". On the contrary, this is somewhere the movie is a good deal more subtle and – beware incoming treasonous heresy – greatly improves on the books.
