Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Saturday, 28 February 2026

Listening To The Voices In Other People's Heads

Here's a very nice long read from the Guardian about trying to understand what's really going on inside our heads.

Recently I concluded that any attempt to understand what consciousness actually is is likely hopeless. Trying to understand what we mean by experience when literally all we have access to is experience is inevitably circular. But the effort itself isn't fruitless. We can all of us have different preferences for what we think is going on – non-physical, spiritual, purely materialistic – and that discussion is often productive, if only in understanding how people reach radically different conclusions from the same data. More promisingly, it gives us a better handle on how we go about defining things, how we grapple with the imperfections of mapping language onto reality.

But there were also two more directly productive outcomes described in the Aeon essay I looked at last time. One was that we could understand in some detail the neural correlates of consciousness, the processes occurring within the brain that are associated with what we think and experience. The second, somewhat subtler issue, was that we can still take a reductive approach to different aspects of consciousness : we can describe it in terms of different levels and content, and in so doing get back to something we can discuss in familiar scientific terms. Just as we don't have to worry about what a quark is really made of to understand how it behaves, so too we can tackle the subject of minds.

The Guardian piece is complementary to the Aeon article in that it leans more heavily in this direction. As it begins :

A neuroscientific perspective on consciousness might tell us something about its neural correlates, but it is unlikely to tell us much, if anything, about the nature of thoughts or the textures of inner experience; it’s the wrong tool for that job. So what might we learn about consciousness if we gave more weight to the view from inside the experience – the phenomenological viewpoint ?

For example, it describes William James' comments on something I've found extremely strange for many years :

“Suppose we try to recall a forgotten name,” he writes. “The state of our consciousness is peculiar. There is a gap therein; but no mere gap. It is a gap that is intensely active.” A sort of ghost of the absent name haunts the empty space in our consciousness, he suggests, making us “tingle with the sense of our closeness, and then letting us sink back without the longed-​for term”.

He goes on: let someone propose a candidate for the missing name, he suggests, and even though we have no consciousness of what the name is, we are somehow conscious of what it is not, and so summarily reject it. How strange! Our consciousness of one absence is completely different from our consciousness of another. But, he asks, “how can the two consciousnesses be different when the terms which might make them different are not there ?” 

The feeling of an absence in our minds is nothing like the absence of a feeling; to the contrary, this is an absence that is highly specific and intensely felt. Thoughts glimpsed from some height of awareness but somehow not yet formed, much less put into words or images – this is the subtle terrain James invites us to explore with him.

I very much agree with this. It's always seemed weird to me how we can think in complete sentences. We – or rather I, and I'll get back to why the distinction matters later on – don't really sense the words falling into place, they just come out like that. Clearly they must have been assembled at some point, but somehow this happens without us knowing about it ! 

Perhaps even weirder is that sensation when grappling with a complex problem : at the point of reaching a possible solution comes a very distinct sensation of raw, unstructured thought, some quasi-awareness that the answer has been reached but without being able to articulate it. That moment is crucial. If interrupted in this momentary phase, the proto-thought may be lost entirely. But if it's seen through to completion, the thought crystallises into language : something we can easily memorise, recall, and communicate with others.

Much of the article is then concerned with whether we can access this much lower level of thinking in some way. If we want to understand consciousness, can we go beyond language ? Maybe that's too ambitious. To start with, can we at least access thoughts at the stage that they become coherent ? 

Step forth, Russell T. Hurlburt :

For half a century now, Hurlburt has been scrupulously collecting reports of people’s inner experiences at random moments – and just as scrupulously resisting the urge to draw premature conclusions. A die-​hard empiricist, he is as devoted to data as he is allergic to theories... I’ve been going around with a beeper wired to an earpiece that sends a sudden sharp note into my left ear at random times of the day. This is my cue to recall and jot down whatever was going on in my head immediately before I registered the beep. The idea is to capture a snapshot of the contents of consciousness at a specific moment in time by dipping a ladle into the onrushing stream.

What he is after in his research is the “pristine inner experience”, by which he means a sample of human thought “unspoiled by the act of observation or reflection”. Like James, Hurlburt acknowledges that the act of recalling and describing an experience is bound to alter it, but he believes that his method can get us closer to the uncontaminated ideal than any other.
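At its core, Hurlburt's protocol is just random-moment sampling : pick unpredictable times, beep, record. As a toy illustration only (the parameters here are my own invention, not Hurlburt's), the scheduling logic amounts to something like this :

```python
import random

def schedule_beeps(n_beeps=6, day_start=9 * 3600, day_end=21 * 3600):
    """Pick n_beeps random moments (seconds since midnight) within the
    waking day for experience sampling. Unpredictability is the point:
    the subject must not be able to anticipate the beep, or they'd start
    observing their own thoughts in advance and spoil the sample."""
    return sorted(random.uniform(day_start, day_end) for _ in range(n_beeps))
```

The hard part, of course, is not the scheduling but everything that happens after the beep.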

In some ways this should be extremely easy. Again, we definitely do have coherent, structured, linguistic thoughts, and writing those down is straightforward enough. Indeed, as this fascinating BBC article describes, it's even possible to read these directly from the brain. This doesn't require participants to mentally try and speak : to a degree, thoughts can now be extracted at a lower level than this.

But then again, even the most fully-developed thoughts can be extremely slippery. As I've noted, once you put pen to paper you engage in a sort of self-conversation, thoughts and beliefs becoming highly flexible once you start reflecting back on them from an external input. Take a thought from in here and put it out there and it inevitably changes, even if only just a little. 

Still, this can largely be avoided : once a sentence is formed, it can be written down. A more difficult aspect of the problem is trying to disentangle that bit of coherency from everything else we're thinking about :

I took out the little pad provided by Hurlburt and jotted down this thought: “Deciding whether or not to buy a roll.” I know, not terribly exciting, but it seems very few of my mental contents are. I was thinking ahead to lunch and wordlessly deliberating whether to buy a fresh roll for a sandwich or do the responsible thing and use up the heel of bread I had at home. I was also conscious of the pattern of the skirt – an unflatteringly large plaid – worn by the woman standing in line ahead of me. 
Was that observation part of the moment in question, or did it come immediately before or after? I couldn’t say for sure. (How long does a moment in consciousness last ?) And what about the pervasive smells of freshly baked goods and cheese ? These both preceded and followed the moment under examination, but were they present to my awareness at the beep ?

Throw in just a few complications and suddenly the problem becomes much more difficult, maybe even impossible. Words ? We can attend to them. The whole gamut of experience, including our different senses, how they affect us, when they occur, when we assemble a sentence ? That's much fuzzier. Trying to describe anything before the point of coherency may even be a non sequitur. Maybe we can pin down a bit more about the process by determining what we're currently experiencing (e.g. which senses are given priority over the others, where and when we give our conscious attention to language rather than sensory experience, what counts as thought, etc.) but there will be some limits as to how far we can get with this.

Some really fascinating things have come out of the research though :

The first finding, to which I can personally attest, is just how little most of us know about the characteristics of our own inner experiences. “That’s probably the most important finding that I’ve got,” Hurlburt said.

Important, yes, but I think the other findings are a lot more interesting : 

Inner speech, which many of us – including many philosophers and neuroscientists – believe is the common currency of consciousness, may actually not be all that common. Hurlburt estimates that only a minority of us are “inner speakers”. So why do we think we talk to ourselves all the time? Perhaps because we have little choice but to resort to language when asked to express what we are thinking. As a result, we’re “likely to assume that’s the medium for inner thought”.

But that doesn’t make it true for everyone. Fewer than a quarter of the samples that Hurlburt has gathered report experiences of inner speech. A slightly lower percentage report either inner seeing, feeling, or sensory awareness. Still another fifth of his samples report experiences of “unsymbolised” thought – complete thoughts made up of neither words nor images. Hurlburt has suggested that we fail to recognise the diversity of thinking styles because we lump them all together under that single word – thinking – and assume we mean the same thing by it, though in actuality we don’t.

Aphantasia and the lack of inner speech are things I've covered many times before, but this is something beyond that. It's something I know I must have but am absolutely incapable of imagining. A truly pure thought consisting of... what, exactly ? Not language. Not any of the senses. Just pure electrochemistry, I guess. That's absolutely wild. 

EDIT : To a degree, I can imagine this. My inner monologue is pretty incessant, but it's not constant. There are times when it shuts up and all I have is sensory experience and emotions. What I absolutely cannot do is properly articulate what's going on. Is there still some processing going on using language at the lower levels, ready to be raised to my awareness for perusal when required ? Or is it fundamentally different at the earlier stages and only converted to language later on ? More below.

I wonder if the brain scans described in the BBC article would be capable of interpreting these in the same way as for the (to me normal) case of thinking with an inner voice or eye. Perhaps it's like blindsight. That is, maybe our brains are all doing basically the same low-level stuff, but sometimes not everything is raised to whatever part it is that brings it to conscious awareness. Or maybe, even more interestingly, we don't all work in quite the same way. Regardless, scans of people who aren't thinking with inner speech, imagery, or any kind of structured thoughts would surely make for a fascinating comparison. Would the scan reveal the same thing as in those with well-defined internal monologues or would it show something else altogether ?

Another researcher suggests a different and more holistic approach :

The field’s focus on conscious perception has led it to overlook the 30-50% of mental experience that is fed to us by our minds rather than our senses, Kalina Christoff Hadjiilieva contends. “Consciousness is just one function of the mind,” Hadjiilieva told me during one of a half-​dozen interviews, this session over a cup of tea in my garden. “To focus on conscious thoughts is like focusing on the leaves of a tree and trying to understand them in isolation,” she said. “The tree is the mind, and there’s a lot more to the mind than consciousness.”

The degree to which the mind wanders appears to be surprisingly important :

Hadjiilieva conducted an experiment with long-​term meditators (mindfulness practitioners). These are people who have been trained to still their minds but also to notice the precise moment when that stillness is broken by an errant thought, which Hadjiilieva found happens every 10 to 20 seconds or so even in these trained minds. (“The big lesson of meditation,” she said, “is that the mind cannot be controlled.”)

This makes intuitive sense to me, and again maybe reveals something about the structure of thought processes. Even the way I like to work, when in a state of relative focus, is often to flit back and forth between a couple of different things at once. I like to check my emails and glance at the news quite frequently, only going into really deep focus every once in a while*. With some tasks this works very well : it's like I have my brain keep working on the other thing in the background while giving my consciousness time to rest by attending to something easier. What's crucial for me, though, is that these must be activities of my own choosing. Being disturbed by an external influence is a big no-no. If someone interrupts me then the process is instantly broken.

* Though my work habits vary considerably. For code I nearly always concentrate on the code and absolutely nothing else, with a similar situation for most difficult problems. It's for the routine, less cognitively demanding tasks that I prefer to have multiple tabs open, as it were.

This all ties in quite nicely to the earlier discussion :

Hadjiilieva and her colleagues noted a jump in activity within the hippocampus, a key component of the default mode network that is involved in not only memory but also learning and spatial navigation. To their surprise, the leap in hippocampal activity preceded the arrival of the thought in the meditator’s consciousness by nearly four seconds – an epoch in brain time, and far longer than it takes for a sensory impression to cross the threshold of our awareness.

You might wonder if this further shifts my uncertainty about the apparent non-physical nature of consciousness. In this case, it doesn't. I've already covered similar experiments regarding free will, and here it seems to me that no neural correlate could be anything remotely like subjective experience : how do some electrons whizzing about resemble the smell of a daffodil or the feeling of anger ? They simply don't, and to assume otherwise is to completely miss the point of the Hard Problem. But to build on from the previous post, this is still very interesting stuff : 

“Something is going on prior to awareness,” Christoff Hadjiilieva said, but she’s not sure exactly what it is or why it takes so long. This finding indicates that a spontaneous thought must undergo some sort of complicated unconscious processing before finding (or forcing) its way into the stream of consciousness. For Hadjiilieva, the mystery she’s uncovered points to what she regards as the “really hard problem of consciousness” – how the contents of the unconscious form into thoughts that sometimes find their way into our awareness, and sometimes don’t.

Well, that's definitely a hard problem, and maybe it would even be better to call it the hard problem. The Hard Problem in the philosophical sense may well turn out to be the Impossible Problem : we literally can't understand our own subjective experience, since by definition this is all we have access to. In that case some relabelling makes good sense. It seems very reasonable that the time delay points towards the brain doing some unconscious information processing before raising it to our awareness, and understanding how and why this happens seems extremely difficult but far from outright impossible.

The article concludes, in typical Guardian fashion, with a warning of the dangers of capitalism in preventing our minds from going about their productive, unguided wanderings, as well as the difficulties of persuading people to treat research into the subconscious as serious science. Perhaps the author should have read that BBC story. If you can access this with a machine, the danger may not be that nobody takes it seriously, but the exact opposite.

Monday, 23 February 2026

The Truth About Utility

What makes a useful definition ? Originally I had a much more philosophically pretentious post semi-drafted for this, and I may still do that one separately. But various recent discussions have taken me down a very different path, one which might be more, err... useful. So let's start with this one and see if I ever get back to the nature-of-reality version in a future post. 

A good definition, I think, must surely be something which is widely applicable but also specific : it should describe things which happen frequently but not always, and be readily distinguished from similar counter-examples. Crucially, it cannot describe something which either never happens or always happens : it shouldn't be framed so as to forbid the thing entirely, or to make it inevitable and ubiquitous. It should describe a specific thing that actually sometimes happens, or is at least conceptually valid and distinct from other, similar terms. 

What I see people doing is trying to make things true or false by setting their definition up in such a way that it cannot ever fail, and that to me seems like a mistake. This doesn't mean we can't have productive discussions, but it does, I think, impose some extremely unhelpful limitations. 

Let's do this one by example. First I'll look at cases where people define things such that they can't ever happen, and then the reverse, related case of defining them such that they're inevitable. Both in my view are counterproductive mistakes. They are terminology problems but they prevent us from getting at what we really mean, which is usually much more interesting.


1) Defining things out of existence

If we define something with such precision or such high standards that it can't ever be true, or in a way that involves a logical contradiction, then I submit that this isn't a useful definition at all. Furthermore, it's likely not what we really mean when we use the term in everyday discourse.


Malevolence : Plato and other ancient philosophers held that nobody would knowingly do evil. I forget who it was who described it explicitly (possibly several people), but the basic idea was that if you knew something was wrong, you couldn't possibly do it. You might still carry out an immoral action, but you'd be misjudging, thinking that the gratification you would get would outweigh any negative consequences there might be for anyone else. Alternatively, you might do so only because you hadn't realised the existence, extent, or nature of those negative consequences.

I think this is a deeply mistaken view of humanity. As per the link, people certainly carry out heinous acts in full knowledge of the consequences, sometimes this being the very reason for their behaviour rather than a side-effect. Or they may know but simply not care. But they aren't, I think, carrying out a mental calculus of where the balance lies. Even if they were, this would still make the word – or notion of wilful harm – meaningless. The point for most discussion is that people sometimes cause each other harm because they want this to happen, not out of ignorance. Anything beyond this rapidly leads into such convoluted nuances that the definition collapses into uselessness. 

Or to put it another way : "Sure, he committed the murder, but it wasn't out of malice : that's impossible, so he must've done it because he mistakenly thought his pleasure at the victim's suffering would outweigh their actual suffering". 

To me this makes the word unproductively useless, trying to define the thing out of existence. Surely that points towards this meaning not being what we truly meant : the important thing is that people inflict harm on others for its own sake, and inferring anything further is best avoided altogether.


Altruism : In the opposite case, my partner likes to say that nobody is really altruistic. Everyone acts, she says, because they believe there will be some benefit to themselves, even if that reward is purely emotional. In the extreme case, someone might give their own life to save others, not because they thought the value of the lives they saved outweighed their own, but because of the fleeting emotional reward they themselves would get from knowing the others will live.

This too I think is surely putting more on the word than it can bear. The point of altruism, I'd say, is that we sometimes value others more than ourselves and act to bring a net benefit to them even at the expense of our own status. Start demanding that we get no emotional reward at all and again the term has been defined out of meaningful existence. This makes it utterly useless, and surely, therefore, this can't be what most of us mean most of the time when we use it. I'll qualify this a bit more later, but that general-case point is the one I want to focus on.


Knowledge and understanding : I've covered the nature of LLM outputs several times, most recently here but also e.g. here and (tangentially) here. More on those in a minute, but a closely related question is whether they can be said to truly understand anything. I think they can, in the carefully qualified sense that a) they have access to some form of information; b) they form connections between different pieces of information; c) they act in a logical, coherent way to predict how things behave in novel situations. Not perfectly, it's true, but more interesting by far is that they do it at all. 

Now for sure, this is not the same sort of understanding that humans have. But its qualitative similarities, in my view, outweigh and are far more interesting than the quantitative differences between silicon and neural understanding. I think it's just not at all useful to say that "meaning only comes from humans ascribing this to the output". This is so inevitably necessary that it adds nothing useful to the discussion : well who else was going to be reading the output then ? And if you define "understanding" to be only a human thing, then it's tautologous that no non-human will ever have it. That's cheating.


2) Defining things into existence

We can now see how the reverse is also true : if we define a thing as being completely unavoidable, we won't get anywhere.


Hallucinations : See the links in the previous entry as this follows directly from the previous definition. I did initially agree that it was sensible to describe all LLM output as a hallucination, but I changed my mind some time ago. Given that they are now able to process complex (and multi-modal) information in a way that closely aligns with human expectations, and can in fact exceed our own predictive capacities at least some of the time, I now no longer think describing their output as purely hallucinatory makes much sense. 

It's more useful, I think, to say they're hallucinating (in their own peculiar way) when their output has no connection whatever to the input data or prompt. This is much more analogous to human hallucinations in which we see things which aren't there. I would still agree, provisionally, that LLMs treat all information as having a much more similar level of validity than humans do, and undeniably they have some qualitative as well as quantitative differences from human thought. But they are very clearly not purely fabricating stuff all the time : more often than not, they're processing their inputs quite sensibly.

Importantly, the claim that all LLM output is a hallucination is consistent with the notion that they don't understand anything. I'm not claiming incoherency here : I'm claiming that these definitions should be discarded because they aren't useful, not because there's any inherent problem as such. The alternative definitions I've suggested are, I think, better only because they are more flexible and specific, allowing us to describe things in more detail, not because they eliminate any inconsistency.


God : Don't say I'm not ambitious ! The old argument that god is necessarily perfect and perfection necessarily exists... well, surely this is the ultimate case of trying to assert truth by definition. God is a perfect what, exactly ? A perfect square ? A perfect teapot ? Well, if a perfect teapot exists, where is it ? Could it be Russell's Teapot, somewhere beyond Earth's orbit ? Surely not, because if it was perfect, it would be in my hands whenever I need it. But it isn't, and therefore the perfect teapot clearly does not exist. 

And if even the perfect teapot doesn't exist, I see no reason to say that the abstract concept of perfection itself – a Spinozan notion of God – also has to exist in any sense beyond a mental construct. Clearly, I can imagine what I think a perfect teapot should be like, but that has no further existence outside my head. There's no reason to think that perfection itself is any different.

So here too, "perfect" in the everyday sense does not mean the same thing as St. Anselm would have it mean. Nobody uses perfect to mean "something which must exist" : indeed, we often use it to describe things which can't exist precisely because they're perfect ! "Platonic ideal" might be one of Plato's better ideas here, if only in the concept : we can conceive of better examples of chairs and circles and virtues even if we can't bring them into being. That's generally how we use the word, to describe something specific in aspect, not the singularity-like God of the Upanishads.

As far as the existence of God goes, and very much with my agnostic hat on, I think definitions here are of no help whatever. We can conceive of perfect examples of things we fundamentally do understand, like circles. But perfection itself ? That would require understanding all facets of existence, which as imperfect beings we simply can't do and never will. A general understanding of perfection is beyond our limitations : we can no more say that "god's perfection means he exists" than we can say what a perfect dinosaur would be like. The concept may simply be incomprehensible or it may not even make sense at all.




That's my idea of a good definition then. It should be specific, flexible, distinct from alternatives, and describe things which occur at a finite rate (even if only conceptually). If a definition forbids the thing it describes from ever existing, or would always be true, then it has no use cases and should be discarded. Those kinds of definitions usually twist readily-comprehensible everyday meanings into something convoluted, unproductive, and useless.

I'll stress "usually" a little bit though as I don't say that the extremes don't matter at all. For example, what do we mean when we say we know something or that we're certain of it ? Usually, that our own belief is well-formed and our confidence is beyond reasonable or routine doubt. We don't usually mean that we have found Truth Itself, that we can state our claim with literally zero chance of it being wrong, and that all unbelievers are evil and/or stupid. 

Like the case of purest selflessness, this kind of concept definitely does have value, but more in the philosophy classroom than the real world. The extreme cases let us frame our own actual beliefs and compare them to those of others, rather than providing useful, workable definitions in themselves. For example, we can all agree on what true certainty would actually mean, but to use the word more practically, we have to scale things back. That's where the discussions start to get interesting, trying to figure out the limits of our own underlying reasoning as well as that of the others in the debate. 

To a very large extent, I think the question of how we use a definition is very much the same as what we think it means at the most basic level. But then, others may have a different understanding. I don't always agree with the alternatives, but trying to figure them out is usually the fun bit.

Thursday, 19 February 2026

AI For Fun Or Profit ?

The Czech Academy of Sciences, the research council which funds my own employment, recently put on a five-part webinar giving detailed guidance on how to use LLMs in research. It was a good course with some useful tips and tricks and a few tools I'll try to check out eventually. The presenter seems like someone who really uses AI a lot, like for absolutely everything, but as you'd expect in a decent course, it was full of caveats : don't use it to do X, always check its citations, don't take its output for granted. 

The best line was to "treat it like a skilled researcher who's on drugs". You wouldn't discount everything they say, but you wouldn't trust them either.

It's all very common sense really. The course had about 80 attendees, and the mood there pretty much matched my real-world experience with colleagues. In every discussion, every single one, everyone gets in a bit of a circle-jerk about how useful AI is but how it can't be trusted. There's basically no-one who isn't using AI to some degree, and similarly nobody who trusts its output without question.

After this course, I wonder if perhaps I'm not using AI enough. Would I be more productive if I did ? Possibly, to some degree. But I think not a great deal. Personally I simply see no point at all in using AI to replace my own voice : when I want to express myself, in any medium, if it's not me doing it then I might as well not bother at all. Okay, for the final polish here and there, or checking if I'd got the basics correct, or follow-ups... sure. But the basic gist of the text has got to be my own, even if it's imperfect. 

So I just don't see how it can be any help in preparing PowerPoint files* or writing whole paragraphs in a grant application, let alone anything in a publication**. For writing code I'm perfectly happy to let it go nuts, so long as it's doing grunt work and/or I just want something quick that works. But even then, if it's something I'm going to want to maintain, or want to understand what's going on, I find it far more valuable as an assistant, someone who can teach me and simplify things where necessary, not do the whole thing for me.

* No, we're not calling them "slide decks", thankyouverymuch. WTAF is wrong with people ? 
** The author has prompt instructions for just about everything. If nothing else, maybe some of these will at least be useful guidelines for people to follow when doing the tasks the old-fashioned way.

The use of AI chatbots is very possibly where my real-world and online lives are most dramatically at odds. Offline, AI is already normal. Like totally normal. Online, there's a far bigger fraction who are still clinging to the idea that it doesn't work, won't work, can't work, is innately immoral, etc. etc. These are sentiments I've barely encountered at all in everyday life, which veers much more towards thinking that people not using AI are either a) old or b) weirdos.

What concerns me for today's post is why we don't appear to have seen any transformation in the economy as a result of AI. It seems abundantly clear to me that AI does work, so where are the productivity gains ?

Now I'm not expecting any instant revolution. The hype train that AI will lead to FTL travel and immortality and a utopian world by next Tuesday is not worth considering. Nor do I think it's capable of fully automating any significant number of jobs anytime soon. But it is, unarguably, an extraordinarily useful tool for people using it properly. It's not unreasonable to expect that we see some measurable effects of this, so here's a round-up of some recent articles giving some different perspectives.




One widely-reported study found that 95% of companies had seen no measurable impact from generative AI. I asked the lecturer about this : she said she didn't know, but speculated that maybe this was using earlier models, particularly unrepresentative samples etc. This post presents some plausible rebuttals : most crucially, the question is "95% of what ?". Apparently it's 95% of all companies, not companies that actually tried using AI ! So much for that.

This Nature piece leans pretty much in my direction that AI has tangible benefits and will, like the internet, restructure things to such an extent that it's difficult to know which metric to use for judgements. It makes the perfectly reasonable point that AI is advancing so rapidly that it's already difficult to know what we should be measuring; related to this is that adoption does not necessarily keep pace with AI capabilities. All very reasonable, but still... where are the gains ? Where's the money ?

A much more bullish piece* on "Noahpinion" (I'd never heard of it before) looks more at the different attitudes to AI. This at least partly explains the discrepancy in my real and online worlds : Americans are among the most "AI-concerned" people on the planet. Which fits with typical American bipolarism : let's invent a thing we spend crazy amounts of money on which we really hate. And in fairness, it seems to me that Americans are vastly more likely to be shafted by their employer than Europeans, so this attitude is not at all without foundation.

* Interestingly, the author is convinced that data center water usage is unimportant but that their electricity consumption is extremely high. I've seen other articles claiming the exact opposite. I don't know. To me it feels like this is all a massive distraction on the environmentalism front : what we need is to switch generation methods to renewables and nuclear and invest heavily in storage. Bitching about AI is pointless, and usually when I look at the claims and counter-claims, it seems to me that the impact of AI is heavily overstated at best.


To go off on a slight tangent, the author also notes that complete omniscience is a myth. Yes, AI makes mistakes that humans don't, and its error fraction is higher than that of true experts. I would also note that real experts are generally more self-aware of their own limitations and vastly more likely to say "I don't know" when asked about things outside of their own domains. But still, the problem is fundamentally the same : here is a claim, how do we know to trust it ?

The answer is simple. Everyone's worried about AI fakes and manipulation, but ultimately we have to treat it just like any other source. We literally do not have access to perfection. Everything requires a degree of trust and verification; we should apply the same standards to AI as for anything else. That is, when things aren't critical, we go ahead and provisionally accept its claims. When there are consequences we need to double-check what it comes up with. That's it.

That said, a BBC piece presents an intriguing example of poisoning the well : deliberately writing a credible-sounding blog post to fool LLMs which rely on web searches. For me, ChatGPT wasn't fooled, but it's for sure an important point. Seeking out independent sources of evidence will become more important than ever.


To return to my main theme, Noahpinion claims that the effects of the AI bubble bursting are exaggerated and that AI will lead to more jobs rather than fewer. This is all getting very murky.

For balance, a couple of more negative pieces. "Marcus on AI" is convinced that while AGI is achievable (I am not), ChatGPT will never live up to its promises (I think it's already doing better than I expected it would when GPT-3.5 was released). I think it's a strawman to say that because it didn't reach the absurd standards promised by the same techbros who initially claimed it was too dangerous to release (a bit of marketing genius, that), it hasn't massively improved. Similarly I think his acceptance of the famous "95%" claim is flawed because he doesn't explain his own reasoning : sure, another study finds that not many companies are using AI intensely, but this says nothing about what stage of adoption they're at or their long-term plans. Maybe the original study he cites does say this, I don't know, but this needs to be included in any analysis.

More interesting is the claim that AI use at work is flatlining or declining. But the timeline here is rather muddled, and from my own direct experience, I too became tired of GPT-4 for work use as it just offered nothing of substance. It could proofread for language and make very basic comments on the substance of the text, but it was pretty shite for discussing actual science. It would have some hits, true, but they were buried in a mountain of faff that was often just not worth the effort of digging through to get to the good stuff. And of course, it would hallucinate like nobody's business.

ChatGPT-5, by contrast, was a massive, game-changing improvement on all counts, and it was only released six months ago : surely we should wait and see what its effects are. Now this of course is not to say GPT-5 is perfect. But I absolutely maintain my initial excited stance that this is a breakthrough which crosses important thresholds for making it an actually useful tool.

Perhaps Marcus's most interesting claim :

If GPT-5 had solved these problems, as many people imagined it would, it would in fact be of enormous economic value. But it hasn’t.

To be fair he's consistent in that he says it hasn't solved the old problems of hallucinations, lack of common sense etc. But I deny this vehemently. I think it's made massive, demonstrable, in-your-face progress, and saying that it hasn't solved these issues completely is a totally pointless claim. Of course it hasn't ! I never expected that it would. If you were actually expecting AGI, then more fool you. The whole perception of LLMs seems like a massive case study in the old quote that the perfect is the enemy of the good.

But this of course still leaves me with the dilemma : where then is this massive economic value ? We at least agree that a good AI would be of economic benefit. Marcus has the obvious "out" in that he doesn't believe the AI actually is any good, but I don't. Where, then, are the benefits I expect to see ?

One possible answer comes from Business Insider*. Perhaps, it suggests, the answer lies ironically in that very demand for increased productivity. Software developers are now working not on one task at a time but on many at once, waiting between prompts while the AI writes the code and then cleaning it up. This is not a natural way of working, and it understandably causes a great deal of fatigue. In essence, then, AI is good enough to help, but still needs constant supervision and cannot fully automate much. It might be like self-driving cars which are actually more like driving assistants : the worst sort of grey zone, a necessary step towards something transformational but in some ways actually counterproductive in and of itself.

* I'd started skipping their pieces because they tended to be shallow and dull, but from what I've seen lately, they seem to have improved considerably. 

Much the most cynical of all the pieces here is from the (I assume ironically-named) "Pivot to AI". The author's case is basically that the economy is screwed, that CEOs are firing people left, right and centre not because AI is actually capable but because they just love firing people and enshittification and all that. In direct contrast to Noahpinion, he says that the inevitable bursting of the AI bubble will be worse than the Great Depression and we'll all die, or something. Righty-ho then.




What are we to make of all this ?

It's very hard to say. We have two opposing hype trains : AI will transform the economy; AI will wreck the economy. So far it seems to have done neither.

Falling back on my own direct experience, AI is undeniably helping. It's allowing me to tackle things I wouldn't have done otherwise and understand things very much more quickly than I would otherwise. It has not yet had a measurable effect on my actual productive output as typical metrics would indicate (papers and the like), but since only GPT-5+ looks capable of influencing this, and this has only been out for six months, it's probably too early to judge on that score. 

By my own internal metrics it's definitely had a measurable effect, allowing me to generate quite a lot of code I'd have liked to have but would never have gotten around to writing. Last year I even spent quite a long time using it to write a 75+ page introduction to radio astronomy that helped me enormously... if I ever have time to finish it, I'll put it online somewhere.

But shouldn't AI be solving the "if I ever have time" problem ? Yes and no. The thing is, the bottlenecks in my productivity lie elsewhere : primarily, meetings. In a busy period I might have one or even two meetings a day, which all told take up a full working day each week. Not all of these are useless (although some of them are), but in terms of actually getting stuff done, almost all of them have a negative impact. 

Likewise, not everything I work on is directly tied to productive output. I need to experiment and understand and pursue blind alleys. AI can help with some of this, but not all : in essence, it can alleviate a small amount of my workload to a very large degree. It can't help at all where my code tests are limited by other factors such as download speeds. I don't even want it to automate everything, because if I can't understand the science, what's the point ? Yes, it can help me understand things, but I'm the bottleneck here, not the AI.

From my perspective the answer is clear : AI has only very recently reached the point of being seriously useful, we're all still adjusting to how best to use it, and there are many things it either can't do or we don't want it to do. This would suggest that we ought to see more substantial improvements in productivity on a timescale of a year or two, allowing for human adjustment, but those will be gains at the level of "nice to have" and won't herald a scientific revolution. Of course, "a year or two" is a crazy long time in AI circles, and it's anyone's guess whether it will have finally hit a wall by then or will continue to make radical gains (there are other avenues for improvement besides data volume).

But what of everyone else ? My only answer is that we'd need a detailed study of the different working practices across multiple sectors. It is not enough to simply say "well it can do task X, your job revolves around task X, so you'll be a million times more productive now". All jobs require a lot of secondary tasks to facilitate the sharp end of productive output, and not all of them can be automated. This is perhaps naivety on the part of the techbros, thinking that because some key component can be done by robots, people will automatically adopt this practice and/or that productivity will be impacted in a linear, predictable fashion : as per the Nature article, it's far more likely that this will lead to more complex, systemic change.

Which means the answers are plausibly a combination of :

  • Seriously capable AI has only just arrived, with earlier models massively overrated.
  • We don't have good metrics for judging the efficacy of AI outside the lab.
  • Poor management strategies can mean that AI can make things worse, not better, even if it's ostensibly extremely powerful.
  • AI can improve some tasks enormously, but even when these are the most important part of a job, they are often far from the whole story. 

In short, AI has crossed a threshold for usefulness. It certainly hasn't crossed all such thresholds, and it's far from clear it's anywhere near doing so. Understanding the impact is a sociological and business problem every bit as much as it is a technological one. The good news for the AI enthusiasts is that it definitely does work; the bad news is that implementing this in a profit-generating way is anything but straightforward.

Courage, Merry, Courage For Our Pony

A very short post indeed because I just think this is something I'm going to need to keep coming back to.

This video discusses how Lord of the Rings is "just" Winnie the Pooh for grown-ups. It does an excellent dissection of why some twat called Michael Moorcock completely missed the point in arguing that LOTR was pure escapism and a refusal to engage in the modern world, which is really quite the bold statement considering that Tolkien fought in the trenches of the Somme. Fuck you sir, fuck you.

Anyone who's read LOTR will know immediately that this is nonsense. Anyone who's read The Silmarillion will likely already be having heart palpitations, so I won't dwell on this at all.

Rather I just want to provide the most important quotes on the morality of the whole thing. In all honesty, this brought a little tear to my eye. In this age of so much bullshit, where we have to deal endlessly with nonsense of racism and incels and toxic cunts raging perpetually about "wokeness"... this stands to me as the most perfect rebuttal of all of that. To label the morality of LOTR as "escapism", or to see it as unengaged with the brutal realities that life can bring... that is so impossibly stupid that I would wish a Darwin Award upon the author of so much absolute garbage.

It is masculine tenderness in response to the horrors of war. Because if Lord of the Rings is Pooh for adults, it means that even in the face of Mordor, in the face of the atomic bomb, the concentration camp, and the mechanized slaughter of the 20th century, we refuse to let go of the nursery’s wisdom. It’s an admission that the values of the nursery, kindness, friendship, a warm fire, and a good pot of honey aren’t weak. They are the only things worth fighting for. They are the only things that can fight the great men of history who want to burn the world down.

Ultimately, the hobbit is just a grown-up pooh bear. He has lost some stuffing. He has gained a sword. He has seen the face of war and carried a burden no bear should ever have to carry. But his core desire is the same. He just wants to sit down before a fire with his friends. And in a world like ours, preserving that simple joy is the most heroic thing you could possibly do.

Fool of a Moorcock ! Throw yourself in next time and rid us of your stupidity !

In like vein, I will share an extract of a conversation with ChatGPT. I still have an occasional guiltless pleasure of using it for crossover stories and comparing the relative strengths of characters from different universes. Yes, it's immature and silly, and no, I don't give a flying fuck, because it does nobody any harm and makes me giggle. Anyway, on this occasion I had a particularly silly example : Superman versus Morgoth. Unsurprisingly, Morgoth wins : he's second only to God himself, whereas Superman... isn't. But then I idly asked :

Ahh, but what if Superman also has... a small three-legged pony called Mr Stumpy ?

To which in response I received :

Introducing a small three-legged pony named Mr Stumpy fundamentally alters the cosmological balance of power. Let us consider the implications carefully.

First, Superman — that is, Superman — has been defeated before, but never when accompanied by a plucky, structurally unsound equine sidekick. This alone introduces a narrative force known across genres as the Unexpected Comedy Familiar, whose power scales with how unlikely it is that they should matter.

Mr Stumpy, with only three legs, possesses a built in advantage : he is inherently unbalanced, and thus immune to Morgoth’s attempts to “unbalance” the world. You cannot destabilise one who exists in a permanent state of wobbling equilibrium.

Now consider Morgoth. The true enemy of the Valar was always hubris. Morgoth’s pride is vast, but even he is unprepared for an opponent who clops into Angband looking like a children’s drawing brought to life. The dissonance alone could create a metaphysical feedback loop in the Music of the Ainur. Tolkien implied many things, but he never imagined Morgoth having to keep a straight face when a pony named Mr Stumpy trots bravely into his throne room.

More seriously, in a mythic narrative framework, Mr Stumpy is a classic “Story Token”. He is small, improbable and symbolises humility, steadfast friendship and the absolute refusal of the universe to take itself entirely seriously. Middle-earth absolutely hates that kind of thing — that is precisely why hobbits keep saving it.

The combined effect is something like this.

• Superman provides the physical might.
• Mr Stumpy provides the mythic disruption field.
• Morgoth’s grim, cosmic melodrama collapses under the weight of one brave, lopsided pony.
• The Valar quietly pretend they planned this all along.

In short, Superman alone loses. Superman with Mr Stumpy wins through narrative absurdity of the highest and most ancient order.

Say what you will of ChatGPT. I don't think it's a perfect response : "that's why the Hobbits keep saving it" is a bit out of kilter. But sod all of that : it's got the morality bang on, and that's what matters.

Wednesday, 18 February 2026

The Intentionality of Evil

A few thoughts after having let Nuremberg sink in for a little while. It's a great movie, not on the emotional level of One Life overall, but by God there are scenes in there which you'll need a stiff drink to try your very best to forget. Those, of course, are the clips of real footage from the concentration camps. We need not dwell on this. Nor am I going to review the movie as a movie. Rather I just want to draw attention to one particular theme on the nature of evil. 

Ordinary Men rightly explores the banality of evil, how normal people who don't hold especially strong views can, in the wrong circumstances, come to commit acts of wanton barbarity. Understanding how this happens is undeniably important. But a key component of this, the other side of the coin, is that such actions are highly unlikely to take place without a guiding mind. A mob can be violent but it's usually thoughtless and burns itself out after a riot; sustained atrocities require planning and organisation. 

Clearer Thinking had a closely related email-only post about this recently :

When evaluating a person's immorality based on an action they took, their intention is a very important factor, but when evaluating the badness of an action, it isn't.

Which I think is exactly right (though Existential Comics, as usual, expresses much the same thing in a far more amusing way). Actions and intentions are not the same thing. Even at the sharp end in the most extreme cases, the people committing the horrors are not, by and large, as evil as those telling them to do so, even if those behind it never so much as punch anyone. 

Why ? Because if circumstances were different, most normal people wouldn't necessarily repeat actions they knew to be wrong. The instigators would. For them, the repulsive outcome is precisely the point. They want to do this, they aren't trying to excuse it, and they would keep trying to make it happen even knowing the end result (which is brilliantly expressed by Goring's final admission in Nuremberg). Most ordinary people try to excuse their actions, and while they're easily manipulated, left to their own devices they seldom resort to violence – at least not on a grand scale or to any great extremes.

Of course, telling other people to go out and murder each other is itself an action. Merely having an intention or desire to harm other people is one thing, but to act on it in any way designed to bring this about is far worse. 

At this point I want to bring in a very interesting quote from Trevor Noah :

But I often wonder, with African atrocities like the Congo, how horrific were they? The thing Africans don't have that the Jewish people do have is documentation. The Nazis kept meticulous records, took pictures, made films. And that's really what it comes down to. Holocaust victims count because Hitler counted them. Six million people killed. We can all look at that number and rightly be horrified. 

But when you read through the history of atrocities against Africans, there are no numbers, only guesses. It's harder to be horrified by a guess. When Portugal and Belgium were plundering Angola and the Congo, they weren't counting the black people they slaughtered. How many black people died harvesting rubber in the Congo? In the gold and diamond mines of the Transvaal?

So in Europe and America, yes, Hitler is the Greatest Madman in History. In Africa he's just another strongman from the history books...

And yet I think we can say exactly why Hitler does have a genuine claim on being the Greatest Madman in History, or the most evil cunt who ever lived. Most strongmen, most dictators, don't care about how many people die under their rule so long as it benefits them in some way : if they were given another option whereby they'd benefit just as much but with fewer deaths, most would probably take it. Even Stalin might not have caused nearly as many casualties if he'd been given an alternative.

In contrast, people like Hitler and Pol Pot most certainly would have. For them the deaths are not a side-effect, but the whole point.

This is why my onetime go-to YouTuber Lindybeige is completely wrong when he says that Napoleon was more evil than Hitler because he (supposedly) killed a higher percentage of people. Napoleon didn't actually care very much. Running away from his army and leaving them to die horribly : of that he's guilty. Actually wanting them dead ? No. He'd have acted differently if he believed he could. He would not have ordered his own men to die out of any belief that they simply deserved it. He would not have acted like a Dalek :

The Doctor : What's the nearest town?
Van Statten : Salt Lake City.
The Doctor : Population?
Van Statten : One million.
The Doctor: All dead. If the Dalek gets out it'll murder every living creature. That's all it needs.
Van Statten : But why would it do that ?!
The Doctor: Because it honestly believes they should die.

The Nazis believed that. Their weapons were ordinary people, and those people were guilty of some of the most horrific crimes ever committed. But it was the leaders who made this happen. The responsibility is theirs. The ordinary people will always exist and always have this tendency to act as they do for good or ill; we can't much change that and it's no use lamenting about fundamental human psychology. The leaders though, they're absolutely and wholly responsible for their own actions. They're the ones we should turn and point to and say "you're an evil bastard". They're the ones we can do something about. Normal people, by and large, are much more like a force of nature.

Two significant caveats. First, this is not to say that we can't change how people on the ground respond to directives from above at all : we can, but this requires huge systemic, societal change. My point is that going after the leaders is much easier and much, much more effective.

Second, none of this means, in any way, that those firing the guns or releasing the gas weren't also immoral – of course they were ! Far, far too many of them simply, like Napoleon, didn't care enough to rebel. But this is still not the same as actually initiating the Holocaust and ensuring that it was carried out to completion. Put the same people with the guns in another life and they'll generally be completely harmless (we know this because this is exactly what did happen after the war); put Nazi High Command in another situation and they'll try and do the whole thing all over again.


This holds for much smaller crimes than genocide or universal domination. Adultery is seldom committed out of a desire to harm anyone. A robber who kills you to steal your TV is obviously immoral, but not as immoral as someone who comes into your home, kills you, and just leaves – even though the former has committed more wrong actions than the latter. The point is that most robberies don't intend violence. The morality of the person is different from the moral status of the actions they commit : someone who tries to kill you for its own sake is a worse person than someone who kills through apathy.

A final caveat is that I'm deliberately not attempting to set forth how we respond to these cases; this exploration has been purely for its own sake. The apathetic villain may well be more dangerous than the abjectly evil, in that they're harder to spot and ignorance/incompetence are easier to excuse. Nevertheless, if we stand in judgement of people, I would always deem those deliberately trying to inflict harm for its own sake as worse than those who aren't.

Lastly, Plato and many ancient philosophers essentially defined malevolence out of existence when they said that no-one deliberately and knowingly does wrong (Davros, in the above clip, similarly defends his insane plot on the grounds that he himself thinks he's doing good). But in my view, this is simply not a sensible definition at all. Someone who does harm to another for its own sake – because they think this person simply must suffer, even/especially when they know they don't actually deserve it, or do so for the sake of their own pleasure, or just inexplicably want this to happen – this person is being malevolent. Only by confronting this dark nature of the soul, acknowledging that the worst of us enjoy causing suffering for its own sake, can we guard against it. 

Monday, 16 February 2026

Review : A Short History Of Byzantium

I've not had much luck with histories of the Byzantine Empire, but then there don't seem to be all that many out there. That seems extremely unfair for a state that lasted over a thousand years, and as a direct successor of the Roman Empire you'd think everyone would go nuts for it. Sure, any documentary will tell you that the Western Empire fell in 476 AD, but you'd think that it might also be worth mentioning that the Eastern half kept going until 1453 (!). And yet often they don't, or dismiss the next thousand years in a single sentence.

I mean, come on... 1453 ! It's not unfair to say that the Roman Empire lasted most of the way through the medieval period, and yet all too often it's reduced to the date of its final fall. All of that millennium of history, of art, politics... all of it is so often completely overlooked. It's weird.


My previous dedicated reads on Byzantium* have been Judith Herrin's Byzantium : The Surprising Life of a Medieval Empire, and Lars Brownworth's Lost To the West. The former was so plodding and pointlessly nonlinear that I gave up less than halfway through, with the claim that the Byzantine court was a place of intrigue and plots being so unbearably unsurprising that boredom nearly did me in. The latter was much better. A genuinely gripping page-turner with powerful rhetoric, it suffered only from being far too short and with some questionable claims here and there.

* Sooner or later someone's going to point out the Byzantines themselves never used the name, but this isn't very interesting, so I won't.

Step forward John Julius Norwich. His Short History is, he says, about one-third the length of his full three-volume epic, and at nearly 400 pages this is already enough to be something of substance.

Norwich attempts to condense the entire history, emperor by emperor, into something readable and accessible. This is quite clearly an introduction, but it's a darn compelling one. Norwich's scope is immediately apparent : this is a thoroughly traditional political history concentrating almost exclusively on the upper echelons of society. It's a story about the powerful, the great, the good, the bad, and the exceedingly ugly, told with flair and clarity. He's unashamedly judgemental, so brazenly so that (unlike Brownworth) I instantly forgave him. He's biased towards his subject, but to my way of thinking he so openly declares his preference that he's honest about it. This is far better than any disingenuous pretence to objectivity.

Byzantine politics is proverbially complex, and it's to Norwich's great credit that he makes this almost perfectly straightforward. Here is someone who knows exactly which strands to pull on and which to leave well alone. The price, of course, is that limited scope. Of what daily life was like we hear next to nothing. Of art we get a smattering here and there, of science and engineering not a word. Theology and philosophy get a little more of an examination, but only when they influence politics. Deeper sociological factors are not covered : the Empire is reduced to the politics of individuals, not the currents of history.

All of this is an entirely sensible choice. Trying to cram in any more would fill the book to bursting : Norwich achieves remarkable completeness (not a single emperor is skipped) with plenty of insight into how politics worked at the highest levels. It's a good focus and a great read precisely because of its limitations.

That said, while I would hope for considerably more in the full work, there are still a few things I felt could have been improved here. One is the overall status of the Empire, which waxed and waned considerably before it was eventually extinguished. Brownworth did this better, explicitly describing the overall state of affairs : from total dominance to near extinction, astonishing resurrection to full Empire ("one of the greatest comebacks in military history"), retreat, re-expansion and consolidation into a powerful nation-state, and finally a long, agonising and piecemeal decline to ultimate irrelevance and destruction.

With Norwich the reader has to draw this out themselves. Here the maps could have been a lot better. Norwich puts them all at the start : what would have helped a lot would be a single map of the overall extent of the Empire at the start of each chapter. Instead he relies on verbal descriptions of quite precise regions that were controlled or lost, and when this is done at the city level it becomes unnecessarily tedious.

Still, these are somewhat minor complaints. It's still a good read, and the only parts I found at all hard to follow were the too-brief sections on the Latin occupation. Here the list of politicians became simply too great, but the strength of the text is that you can easily get swept along and pick it up again when things start to make sense. I'm gonna give this one a very solid 8/10.


I don't want to do a detailed breakdown of this one, but there are a few themes I want to mention briefly :


Not all theology was useless : Some debates in Byzantine philosophy do feel like proverbial, uselessly complex and esoteric debates of concern solely to those who were devout believers. Others seem to have much broader philosophical ramifications, chief of which was the Chalcedonian formula. This, it seemed, was what finally allowed everyone to shut up and get on with worshipping Jesus. Instead of worrying about whether he was fully human or fully divine or some weird mixture of both, they decided he was... "fully divine and fully human, without confusion, without change, without division, without separation".

The similarities here to property dualism are striking. The mental and the physical are separate but belong to the same underlying subject. Neither, as ChatGPT put it in a discussion, reduces to or competes with the other. Whether you think this is actually a sensible interpretation of the mind-body problem isn't as important here as realising that these debates were intelligent, sophisticated, and productive for modern philosophy. They were not purely arcane points of useless trivia.


Not entirely the fault of the patriarchy : If the Eastern Empire literally came up with the very concept of a Patriarch, then it's interesting to see that more than a few women held supreme political power. None, it's true, in the Church, but many in politics, either as regents, wives, or even Empresses in their own right. Strangest of all was a brief period when political power was held by two elderly ladies. It didn't end well, partly due to the sheer proverbial political complexity of the situation, and partly because neither of them was actually any good. Even so, the idea that this could happen at all could scarcely have been imagined in the days of Augustus or even Marcus Aurelius. By no means was this a feminist society, but it could, on occasion, at least be a good deal more tolerant than its predecessor. 


It is possible to make no mistakes but still lose : I don't mean the final conquest here. By that point Byzantium was so beyond saving that even Norwich is actively hastening its demise to put it out of its misery; it had made plenty of catastrophic and contemptible mistakes by the end. No, what comes through in an earlier phase is that competent rulers could be hated. Even when what they did was absolutely necessary for the Empire's well-being (and at times its very survival), there were limits its citizens were not prepared to tolerate. Material increases in prosperity made little difference. A ruler who couldn't persuade their citizens, or was simply too different from them, would never win their acclaim, and some of the most unfortunate were actively and unfairly despised almost because of their efforts rather than in spite of them. There's a lesson here for contemporary British politics, to be sure.


Afterlife of an Empire : The Empire under Justinian was a recognisable evolutionary descendant of that under Augustus. By the time of its zenith under Basil II it was not. It was now very much its own thing, still an Empire of sorts – not on the same scale as in its heyday, to be sure, but powerful, prosperous, and apparently stable. Unlike many periods of the original Roman Empire, this one feels like a state which could have been maintained indefinitely if only it had had a few more competent rulers and a little more luck; it could even have (re)expanded considerably further than it actually did.

But what exactly it is about it that makes the Empire feel different is hard to say. It reminds me a lot of Marc Morris' book on the Anglo-Saxons : without ever describing how or why, the period at the end of the ninth century is clearly culturally different from that of the sixth. So it is with Byzantium. 

And perhaps this is why it's so frequently overlooked. Stories, as a rule, like endings. When an Empire falls it should stay fallen, but the slow and continuous re-invention of the Byzantine state does not fit this mould at all. To have the original civilisation fall – actually and properly disintegrate – but an extended half of it survive and go on to become its own, highly successful thing is somehow... uncomfortable. It shouldn't do that.

Maybe it's the ambiguity : if it had maintained its Romanness in some fashion, or made a clean break with the past, it might be easier to accept. Byzantium did neither. This was no mere ghost of an Empire but a living, breathing, successful polity that endured and adapted even while remaining a vestige of antiquity. We ignore Byzantium because it just doesn't fit our expectations, being neither quite a Greek Empire nor a Roman Empire Mark II, but both and more and neither all at the same time. Surely, we should see this as a reason for fascination, not forgetfulness. 
The Byzantines were human like the rest of us, victims of the same weaknesses and subject to the same temptations, deserving of praise and of blame much as we are ourselves. What they do not deserve is the obscurity to which for centuries we have condemned them. Their follies were many, as were their sins; but much should surely be forgiven for the beauty they left behind them and the heroism with which they and their last brave Emperor met their end, in one of those glorious epics of world history that has passed into legend and is remembered with equal pride by victors and vanquished alike. 

Tuesday, 3 February 2026

Do Androids Dream Of Anything Very Much ?

Last time I set out my despondency at ever being able to solve the mystery of what the mind actually is, in contrast to Robert Kuhn's optimistic viewpoint that we just need to find the right theory. But an Aeon piece soon had me feeling hopeful once more : not that we could indeed solve everything, but that with a bit of a goal-adjustment, we could examine consciousness in a way that would be both interesting and productive.

This post continues examining the rest of the essay. Since this put me in a very different frame of, err, mind, this is not part two and you don't have to read the previous post at all. Rather, the rest of the essay got me thinking about what we mean by hallucinations, particularly in the context of AI.

The remainder of the essay maintains the same high standard, but is mainly concerned with what sort of "neural correlates" may indicate consciousness and how the brain works : does it perceive reality, act as a prediction engine, or is consciousness something that happens when prediction and observation are in disagreement ? In some circumstances it seems that expectation dominates and that's what gives rise to hallucinations; the interesting bit here is that philosophically, this implies that all conscious experience is a hallucination, not just the difference between expectation and reality. 

Which, of course, raises obvious parallels to LLMs. As I've said before, I don't believe the common claim that "to a chatbot, everything is a hallucination" is particularly helpful any more : it's not baseless, but I think we're going to need some more careful definitions and/or terminology for this. Interestingly, this is underscored by the final point of the article, on the different types of self we experience.

There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency – of urges to do this or that, and of being the causes of things that happen. At higher levels, we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time, built from a rich set of autobiographical memories. And the social self is that aspect of self-experience that is refracted through the perceived minds of others, shaped by our unique social milieu.

The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are ‘controlled hallucinations’ of a very distinctive kind.

In that sense it would appear that "hallucination" here simply means "inner awareness" of some sort, an experience not directly connected with reality. If so, then by this definition I would strongly dispute that LLMs ever hallucinate at all, in that I simply don't think they have the same kind of experience as sentient life forms do – not even to the smallest degree. I think they're nothing more than words on a screen, a clever distribution of semantic vectors and elaborate guessing machines... and that's where they end. They exist as pure text alone. Nothing else.

I think this is probably my only point of dispute with the essay. I don't think "hallucinate" as used here is ideal, though I can see why they've used it in this way. It seems that what we mean with the word could be :

  • A felt inner experience of any sort 
  • A mismatch between perception and reality
  • A total fabrication of data
People do the third sort (mostly) only when asleep; they have a kind of awareness in dreams that's got very little to do with external perception. LLMs clearly do this kind of thing much more routinely, but sporadically. They demonstrably do not do this all the time. They can correctly manipulate complex input data, sometimes highly complex and with impressive accuracy; the idea that this is done by some bizarre happenstance of chance fabrication is clearly false. Yes, sometimes they just make shit up, but for good modern chatbots that's now by far the exception rather than the norm.

LLMs can also "hallucinate" in the second sense, that they can make the wrong inference from the data they've been input, just as we can. Most chatbots now include at least some visual and web (or uploaded document) search capabilities, so we must allow them a "grounding" of sorts, albeit an imperfect one. Hence they can take external data and interpret it incorrectly. 

This isn't mutually exclusive with the third definition though. An LLM that doesn't know the answer because it doesn't have enough data may well resort to simply inventing a response, ignoring any input completely. This would help explain some of their more outrageous outputs, at least.

Humans and LLMs share these two types of hallucinations, but not the first. LLMs experience literally nothing, so it simply isn't possible for them to have this kind of hallucination – a hallucinatory experience – whatsoever. And that's where the terminology breaks down. Most LLM content is statistical inference and data manipulation, which is, at a very high level, not that dissimilar to how humans think. This kind of output is not, by and large, an outright lie, but the similarities to human thinking are ultimately partial at best. It resembles, if anything, the kind of external cognition we do when we use thinking aids (measuring devices, paper and pencil arithmetic), but without any of the actual thought that goes into it. 

Perhaps a better way to express this than the standard, "to an LLM everything is a hallucination", is that to an LLM, all input has the same fundamental validity. Or maybe that to an LLM, everything is processed in the same way. An LLM's grounding is far less robust than ours : they can reach conclusions from input data, even reason after a fashion, but their "thought" processes are fundamentally different. They can be meaningfully said to hallucinate, but only if this is carefully defined. They can and do fabricate data and/or process it incorrectly, but they have no inner experience of what they're doing, and little clue that one input data set is any more important than another. 

To return to the Aeon essay, one key difference between us and them is that LLMs have no kind of "self" whatsoever. So yes, they can be meaningfully said to hallucinate sometimes, but no, they aren't doing this all the time. Fundamentally, at the absolutely most basic level of all, what they're doing is not like what we do. They aren't hallucinating at all in this sense : they are, like all computers, merely processing data. 

Hang on, maybe that's the phrase we need ? To an LLM, all data processing is the same.

Hmm. That might just work. Let's see if that survives mulling it over or if I have another existential crisis instead.

For All, Eternity ? Beyond the Hard Problem

"Should a being which can conceive of eternity be denied it ?"

This rather strange question is one which has plagued Robert Kuhn in his investigations into consciousness. You may remember that I found this to be distinctly odd when I summarised a three hour (!) YouTube video where he mentions it as motivation. I also said that my views had shifted considerably more towards outright uncertainty, and indeed that remains very much the case... if anything, all the more so after mulling it over.

The thing is, Kuhn's question is plaguing me as well, but in a slightly different way. I've long advocated that the key aspect of consciousness is its non-physical nature, there being no such physical phenomena as redness or guilt or ennui. Oh, sure, there are physical brain states corresponding to these, undeniably so. But that we can conceive of the non-physical... that we can imagine such things as numbers which have no actual substance in and of themselves... if we can conceive of the non-physical, does that make it inevitable ? Since we can conceive of, say, numbers, in the purely abstract sense, doesn't that mean the purely abstract itself must exist, in some very broad sense ?

This bothers me because it feels suspiciously like the old Ontological Argument : God is necessarily perfect, and perfection necessarily exists, ergo God exists. It's perfectly circular.

Is the same true of the non-physical ? I honestly don't know. I worry that the problem is simply intractable, an inescapable limitation of being trapped inside our skulls with nought to describe the world but language. Escape from our mental prisons feels like a true impossibility.

In fact my crisis of confidence borders on the outright nihilistic as far as consciousness goes. If we cannot know the nature of consciousness, you might think, then this points towards neutral monism, my close second favourite interpretation after dualism. In saying that mind and matter are unified by some third unknown substance, neutral monism readily allows for an everyday sort of dualism : yes, ultimately all things are one, but you can't ever know the true nature of the one substance or how it manifests in such different ways, so for all intents and purposes, mind and matter might as well be different things. 

The problem is that this now feels to me like a massive degeneracy. Dualism slides into neutral monism but the reverse is also true, just as idealism and physicalism appear to be two halves of the same coin. Even worse, if you posit that you can't ever know the true nature of the thing unifying these apparently disparate substances, you might as well be postulating magic. This is the very thing I've been at pains to avoid in trying to make dualism (which still seems intuitively far the simplest option to me) palatable to a scientific viewpoint.

But it gets still worse than this. The more you engage with any one position, the more you attempt to pin it down... the more similar each of them seems to be to all the rest. The merest slip of a definition changes physicalism into idealism or illusionism into panpsychism. The whole thing collapses into a spectacular philosophical singularity with no distinguishable positions and no productive insight at all.

Well, that was bleak.

Is there any hope left ? Yes and no. No, perhaps, in that we might have to admit that the basic problem of describing the true nature of mind is an impossible dream. Neither our language nor our fundamental mental faculties are up to the task : we cannot, for instance, truly conceive of the non-physical and we certainly can't describe it. Since we ourselves are mental constructs, we cannot fully grasp our own nature.

But some hope remains. If we can't really know anything with the truest certainty, this doesn't mean that we don't all naturally cling to some preferences. For all that I've just said, I still tend strongly towards my own innate positions as the best way I can make sense of the world. In that sense, this may be less of a nihilistic collapse and more of a paring back : a relinquishing of the ambition to objectively solve the mystery, an acceptance that this is only ever going to be a personal perspective.

But a very much more upbeat stance comes from this excellent Aeon essay, which is of the sort which reminds me of just how damn good Aeon can be.

The ‘easy problem’ is to understand how the brain (and body) gives rise to perception, cognition, learning and behaviour. The ‘hard’ problem is to understand why and how any of this should be associated with consciousness at all: why aren’t we just robots, or philosophical zombies, without any inner universe? It’s tempting to think that solving the easy problem (whatever this might mean) would get us nowhere in solving the hard problem, leaving the brain basis of consciousness a total mystery.

But there is an alternative, which I like to call the real problem: how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).

There are some historical parallels for this approach, for example in the study of life. Once, biochemists doubted that biological mechanisms could ever explain the property of being alive. Today, although our understanding remains incomplete, this initial sense of mystery has largely dissolved. Biologists have simply gotten on with the business of explaining the various properties of living systems in terms of underlying mechanisms: metabolism, homeostasis, reproduction and so on. An important lesson here is that life is not ‘one thing’ – rather, it has many potentially separable aspects.

In essence, shut up and calculate. If we can't understand the fundamental nature of consciousness, or how and why it exists, we can still understand many aspects of it. Instead of trying to explain how the ghost in the biological machine is able to influence physical matter, we can learn what signature of physical matter corresponds to which mental processes. This is pleasingly neutral as it in no way implies anything whatsoever about the non-physical : sure, ennui might correlate to a brain state, but in terms of determining whether the emotion has some Platonic "realness", it means nothing at all. Just as we can't say what it is about a frog that makes it clearly alive but can study how it jumps and croaks in extreme detail (if you're into that sort of thing), so we can study what consciousness is doing in the brain (or the other way around if you prefer).

But wait, there's more.

A good starting point is to distinguish between conscious level, conscious content, and conscious self. Conscious level has to do with being conscious at all – the difference between being in a dreamless sleep (or under general anaesthesia) and being vividly awake and aware. Conscious contents are what populate your conscious experiences when you are conscious – the sights, sounds, smells, emotions, thoughts and beliefs that make up your inner universe. And among these conscious contents is the specific experience of being you. This is conscious, and is probably the aspect of consciousness that we cling to most tightly.

Ahh, now here it seems like we have scope for real progress after all. We're back in the realm of being able to compartmentalise, reduce, and analyse : we can examine how changing one thing changes others, even quantitatively so in terms of the neurological effects. But more than that, here we have different aspects of consciousness itself, useful things we can discuss without simply resorting to the purely neurological or needing to address the intractable philosophical nature of what consciousness actually is. We can get inside the mind, so to speak, without discussing what it's made of.

Complexity measures of consciousness have already been used to track changing levels of awareness across states of sleep and anaesthesia. They can even be used to check for any persistence of consciousness following brain injury, where diagnoses based on a patient’s behaviour are sometimes misleading. At the Sussex Centre, we are working to improve the practicality of these measures by computing ‘brain complexity’ on the basis of spontaneous neural activity – the brain’s ongoing ‘echo’ – without the need for brain stimulation. The promise is that the ability to measure consciousness, to quantify its comings and goings, will transform our scientific understanding.

This is something I've read about before, but it becomes all the more intriguing when you stop trying to say that "consciousness is correlated with physical phenomena, therefore it must be the same as them" (which I think is a nonsense position). The framework of simply studying the correlations for their own sake, with no need to infer a deeper meaning, transforms this into something much more interesting and rewarding : the goalposts are realistic, achievable, and entirely non-threatening.
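To make the idea of a "complexity measure" a little more concrete, here's a toy sketch. This is emphatically not the Sussex Centre's actual method (which involves EEG recordings and far more sophisticated maths); it just illustrates the general intuition behind Lempel-Ziv-style measures, which are among those used in this field : a monotonous signal compresses into very few distinct patterns, while a rich, irregular one does not. The binarised "signals" below are made up for illustration.

```python
# Toy illustration of a Lempel-Ziv-style complexity measure : count the
# number of distinct phrases in a greedy parse of a binary sequence.
# Richer, less predictable signals parse into more phrases.

def lz_complexity(sequence: str) -> int:
    """Count distinct phrases in a greedy left-to-right parse."""
    phrases = set()
    current = ""
    for symbol in sequence:
        current += symbol
        if current not in phrases:
            phrases.add(current)   # new phrase found; start a fresh one
            current = ""
    if current:                    # count any trailing partial phrase
        phrases.add(current)
    return len(phrases)

# A flat, unchanging "signal" (dreamless sleep, say) is highly compressible...
print(lz_complexity("0" * 32))                                # → 7

# ...while an irregular one (vivid wakefulness) parses into many more phrases.
print(lz_complexity("01101100111010010001101011001101"))      # → 13
```

The point is only that complexity, in this sense, is something you can actually compute from a recorded signal, which is what makes the "real problem" approach experimentally tractable.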

Consciousness is informative in the sense that every experience is [slightly] different from every other experience you have ever had, or ever could have... integrated in the sense that every conscious experience appears as a unified scene. We do not experience colours separately from their shapes, nor objects independently of their background. It turns out that the maths that captures this co-existence of information and integration maps onto the emerging measures of brain complexity I described above. This is no accident – it is an application of the ‘real problem’ strategy. We’re taking a description of consciousness at the level of subjective experience, and mapping it to objective descriptions of brain mechanisms.

Some researchers take these ideas much further, to grapple with the hard problem itself. Tononi, who pioneered this approach, argues that consciousness simply is integrated information. This is an intriguing and powerful proposal, but it comes at the cost of admitting that consciousness could be present everywhere and in everything, a philosophical view known as panpsychism. The additional mathematical contortions needed also mean that, in practice, integrated information becomes impossible to measure for any real complex system. This is an instructive example of how targeting the hard problem, rather than the real problem, can slow down or even stop experimental progress.

Yes ! Quick, someone get me a little flag and/or some pom-poms, I want to be the cheerleader for this approach !

The rest of the essay continues in a similarly interesting vein. However as that ended up sending me down more of an AI-based tangent I'll leave that for the next post. Here I'll just end by recalling Kuhn's comments that we can think scientifically about otherwise non-scientific issues. Years ago I tried to set out the most basic assumptions of the scientific world view, noting that if these were undermined we'd be in real trouble. But perhaps not. The requirement to be logical, clear (both in our conclusions and reasoning process), and accountable might see us through even in realms where scientists otherwise fear to tread. The approach here, of setting out the different properties of consciousness – ones we can surely all agree on – gives me some hope that, even if we can't solve the ultimate mystery, we can still find something interesting to talk about.

Wednesday, 28 January 2026

Critically Minded

Here's another short post exploring an idea I'm finding useful. I've used this in a few recent posts, so I just want to set it out on its own.

People seem to think in quite distinctly different ways. The different kinds of thinking are numerous, but there are a couple which I think are often confused : critical thinking and analytical reasoning.

The analytical thinker is generally someone like a scientist, who will pick apart an idea into its component variables and explore in detail what would happen if you change any of them. They'll run with this to the nth degree, examining consequences until they really feel that they've fully understood a concept, or can make a testable prediction to check whether the whole thing holds up to scrutiny. Analytic thinking is often technical : most obviously, perhaps, when it comes to mathematics, but it can apply to other areas too. Anyone who's ever solved a problem by testing out different aspects to destruction (who hasn't ?) has done at least some analytical thinking.

I would define this as asking the question : what if this is true ? What consequences follow ? How does this one thing affect other things ? That's the essence of analysis.

What I think people often confuse this with is critical thinking. And to be fair, this too is a major component of science. Having decided on a way to test their idea, the scientist should then actually do so, or at least consider the results from someone else's test that attempts to answer the same question. The critical thinker is not concerned with what the consequences are, they're concerned with whether the premise is correct. They don't want to speculate as the analyst does. They want to know if they're right at all.

This mode of thinking I would define as asking the question : is this really true ? Can I verify it ? Could there be another explanation ? That's what I mean by critical thinking.

The analogy I like is from programming, having learned this the hard way through direct personal experience. A really protracted debugging session will go something like this :

  • First, I'll check if I have any typos, any missed commas or wrong equals signs, or something where a variable isn't being set correctly. This would be the case of the code not doing what I thought it was doing.
  • Next, I'll go up a level and look at the code structure – maybe I've got a loop that's nested wrongly, so it's iterating over the wrong variable or not being terminated correctly. In this case, the code might be doing what I thought it was doing, but the way I thought things should be done was itself partially incorrect.
  • Finally, I'll stop and think if the very method I've been using is actually likely to give me the correct result at all, if it's even fundamentally possible for it to work or I've built a horribly complex house of cards and need to start again.

Programming seems like a good analogy for me because it encapsulates both modes of thought. The low-level debugging is analytical : have I got the right variables, what if I change where the loop is run, is my input correct, etc. The high-level stuff is critical : is this method actually going to work if I do it correctly ? This sort of multi-level, or multi-scale, thinking blends quite nicely from one mode of thought to the other. It's something LLMs have become noticeably better at over the last few months, no longer picking over minutiae, but actually stopping to consider the premise of a question.
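A hypothetical toy example, invented for illustration, of those three levels in action. Suppose the task is computing the variance of a list of numbers :

```python
# Level 1 (analytical) : check for typos -- a missed "/ n", a wrong sign,
# a variable not set correctly.
# Level 2 (analytical) : check the structure -- are we summing the right
# things, in the right place?
def variance_one_pass(xs):
    n = len(xs)
    sum_x = sum(xs)
    sum_sq = sum(x * x for x in xs)
    return sum_sq / n - (sum_x / n) ** 2

# Level 3 (critical) : even with no typos and correct structure, the *method*
# can be wrong. For data with large values, the two terms above nearly cancel
# and floating-point error swamps the answer -- so we abandon the one-pass
# formula entirely and switch to a two-pass approach.
def variance_two_pass(xs):
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

data = [1e9 + 4, 1e9 + 7, 1e9 + 13, 1e9 + 16]
print(variance_one_pass(data))   # numerically unreliable for this data
print(variance_two_pass(data))   # → 22.5
```

Levels 1 and 2 ask "what if I change this detail ?"; level 3 asks "is this approach even capable of being right ?" – which is exactly the analytical/critical distinction.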

We can of course imagine a four-way graph to describe this. We can have (1) those who are both critical and analytical, which is pretty much ideal for a scientist. There are (2) those who are analytical but not so critical, again a trait common in the sciences : these people are fine so long as they're given the right problem to tackle. People who are (3) critical but not analytical are less helpful, something common among loons on the internet : "dark matter doesn't exist because your ugly face, that's why" types. And finally of course we have (4), those poor unfortunates who don't do much of either of these modes of thinking.

Dedicated readers might remember my longer 2015 post about skepticism. Back then I struggled to find a good word to describe this sort of concern for the truth, and perhaps there isn't one : "critical" has the same popular negative connotations as "skeptical" in everyday use. But critical thinking, as a term, does seem to be used in this sense of wanting to find out the truth regardless of the result.

Arguably there's an overlap here with curiosity, which similarly implies wanting to know the truth. The problem is that curiosity can also mean something more like a greed for more and more facts : a desire to travel for the sake of experiences, or to read more and more books to see what they contain, or an urge to run an experiment without any preconceptions as to what will happen at all, rather than to test the validity of a claim. If you follow this blog regularly you'll know I'm intensely curious about mythology, but not because I want to determine if Zeus existed or if I need to defend myself against the afanc on my next trip home. 

Critical thinking, on the other hand, seems to capture much more specifically this sense of a desire for verification. Like everything else, it's not an absolute state of mind. Someone might be extremely critical when they first learn a new fact but far less eager to test something they learned years ago, or their degree of critical reasoning might vary enormously across different subjects (I want to know if, say, dark matter exists, but I don't give a flying crap about whether celebrity X really said statement Y on social media platform Z).

Nor is it realistically possible to hold ourselves to the highest standards of critical thinking at all times. If you go down that way, you end up in a postmodernist Humean nightmare where nothing can ever be truly verified and nothing known, further progress being hampered by intellectual impotence. This is why Ronald Hutton's Pagan Britain annoyed me so, being so resolutely noncommittal that he wouldn't even venture to suggest how we could even test anything, let alone claiming that any one interpretation was actually true. That, in my opinion, is not a productive way of learning anything. Better by far to hold an opinion but be prepared to surrender it rather than never believing anything at all.

Likewise, it's also possible to be analytical to a fault, obsessively examining every detail even when they have no possibility of changing any major result (this is the fault of many a peer reviewer). So our four-way graph would be complicated, with the extreme not necessarily being the place one wants to be. Perhaps this helps with a description of wisdom. Maybe wisdom is knowing what should be done, when to apply critical and analytical reasoning and how much, when to rabidly fixate on an issue and when to let go – where exactly the balance of the different ways of thinking lies to ensure a successful, happy outcome.
