Tuesday, 3 February 2026

Do Androids Dream Of Anything Very Much ?

Last time I set out my despondency at ever being able to solve the mystery of what the mind actually is, in contrast to Robert Kuhn's optimistic viewpoint that we just need to find the right theory. But an Aeon piece soon had me feeling hopeful once more : not that we could indeed solve everything, but that with a bit of goal-adjustment, we could examine consciousness in a way that would be both interesting and productive.

This post examines the rest of that essay. Since it put me in a very different frame of, err, mind, this isn't really part two and you don't have to read the previous post at all. Rather, the rest of the essay got me thinking about what we mean by hallucinations, particularly in the context of AI.

So the remainder of the essay is of a similarly high standard to the first part, but is mainly concerned with what sort of "neural correlates" may indicate consciousness and how the brain works : does it perceive reality, act as a prediction engine, or is consciousness something that happens when prediction and observation are in disagreement ? In some circumstances it seems that expectation dominates, and that's what gives rise to hallucinations; the interesting bit here is that, philosophically, this implies that all conscious experience is a hallucination, not just the cases where expectation and reality come apart.

Which, of course, raises obvious parallels to LLMs. As I've said before, I don't believe the common claim that "to a chatbot, everything is a hallucination" is particularly helpful any more : it's not baseless, but I think we're going to need some more careful definitions and/or terminology for this. Interestingly, this is underscored by the final point of the article, on the different types of self we experience.

There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency – of urges to do this or that, and of being the causes of things that happen. At higher levels, we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time, built from a rich set of autobiographical memories. And the social self is that aspect of self-experience that is refracted through the perceived minds of others, shaped by our unique social milieu.

The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are ‘controlled hallucinations’ of a very distinctive kind.

In that sense it would appear that "hallucination" here simply means "inner awareness" of some sort, an experience not directly connected with reality. If so, then by this definition I would strongly dispute that LLMs ever hallucinate at all, in that I simply don't think they have the same kind of experience as sentient life forms do – not even to the smallest degree. I think they're nothing more than words on a screen, a clever distribution of semantic vectors and elaborate guessing machines... and that's where they end. They exist as pure text alone. Nothing else.
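
To make concrete what I mean by "elaborate guessing machines", here is a deliberately toy sketch of next-token generation. Everything in it – the vocabulary, the probabilities, the prompt – is invented purely for illustration; real models derive such distributions from learned vectors over tens of thousands of tokens. But the principle is the same : weighted guessing of the next word, with nothing anywhere that experiences the result.

```python
# Toy illustration only : a "next-word guesser" with made-up probabilities.
# Real LLMs compute these distributions from learned vectors, but the output
# is still just sampled text, not an experience of anything.
import random

# Hypothetical conditional probabilities P(next word | previous word)
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.3, "moon": 0.3},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "moon": {"rose": 1.0},
}

def generate(prompt_word, length=3):
    """Repeatedly sample a continuation : no perception, no self, just weighted guessing."""
    words = [prompt_word]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:  # nothing known to follow this word
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat" : plausible text, nothing behind it
```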

I think this is probably my only point of dispute with the essay. I don't think "hallucinate" as used here is ideal, though I can see why they've used it in this way. It seems that what we mean by the word could be :

  • A felt inner experience of any sort
  • A mismatch between perception and reality
  • A total fabrication of data

People do the third sort (mostly) only when asleep; they have awareness (of a sort) in dreams, but it's got very little to do with perception. LLMs clearly do this kind of thing far more routinely than we do, but still only sporadically : they demonstrably do not do it all the time. They can correctly manipulate input data, sometimes highly complex data and with impressive accuracy; the idea that this is done by some bizarre happenstance of chance fabrication is clearly false. Yes, sometimes they just make shit up, but for good modern chatbots that's now by far the exception rather than the norm.

LLMs can also "hallucinate" in the second sense, in that they can make the wrong inference from the data they've been given, just as we can. Most chatbots now include at least some visual and web (or uploaded document) search capabilities, so we must allow them a "grounding" of sorts, albeit an imperfect one. This isn't mutually exclusive with the third definition, though. An LLM that doesn't know the answer because it doesn't have enough data may well resort to simply inventing a response, ignoring any input completely. This would help explain some of their more outrageous outputs, at least.

Humans and LLMs share these two types of hallucination, but not the first. LLMs experience literally nothing, so it simply isn't possible for them to have this kind of hallucination – a hallucinatory experience – whatsoever. And that's where the terminology breaks down. Most LLM content is statistical inference and data manipulation, which is, at a very high level, not that dissimilar to how humans think. It is not, by and large, outright lying, but the similarities to human thinking are ultimately partial at best. It resembles, if anything, the kind of external cognition we do when we use thinking aids (measuring devices, paper-and-pencil arithmetic), but without any of the actual thought that goes into it.

Perhaps a better way to express this than the standard "to an LLM, everything is a hallucination" is that to an LLM, all input has the same fundamental validity. Or maybe that to an LLM, everything is processed in the same way. An LLM's grounding is far less robust than ours : they can reach conclusions from input data, even reason after a fashion, but their "thought" processes are fundamentally different. They can be meaningfully said to hallucinate, but only if this is carefully defined. They can and do fabricate data and/or process it incorrectly, but they have no experience of what they're doing, and little clue that one input data set is any more important than another.

To return to the Aeon essay, one key difference between us and them is that LLMs have no kind of "self" whatsoever. So yes, they can be meaningfully said to hallucinate at some rate, but no, they aren't doing this all the time. Fundamentally, at the most basic level of all, what they're doing is not like what we do. They aren't hallucinating at all in this sense : they are, like all computers, merely processing data. Hang on, maybe that's the phrase we need ? To an LLM, all data processing is the same.

Hmm. That might just work. Let's see if that survives mulling it over or if I have another existential crisis instead.
