
Tuesday, 3 February 2026

Do Androids Dream Of Anything Very Much ?

Last time I set out my despondency at ever being able to solve the mystery of what the mind actually is, in contrast to Robert Kuhn's optimistic viewpoint that we just need to find the right theory. But an Aeon piece soon had me feeling hopeful once more : not that we could indeed solve everything, but that with a bit of a goal-adjustment, we could examine consciousness in a way that would be both interesting and productive.

This post examines the rest of that essay. Since it put me in a very different frame of, err, mind, it isn't really part two, and you don't have to read the previous post at all. Rather, the remainder of the essay got me thinking about what we mean by hallucinations, particularly in the context of AI.

The rest of the essay maintains the same high standard, but is mainly concerned with what sort of "neural correlates" may indicate consciousness and how the brain works : does it perceive reality directly, act as a prediction engine, or is consciousness something that happens when prediction and observation disagree ? In some circumstances it seems that expectation dominates, and that's what gives rise to hallucinations; the interesting bit is that, philosophically, this implies that all conscious experience is a hallucination, not just the mismatches between expectation and reality.

Which, of course, raises obvious parallels to LLMs. As I've said before, I no longer believe the common claim that "to a chatbot, everything is a hallucination" is particularly helpful : it's not baseless, but I think we're going to need some more careful definitions and/or terminology for this. Interestingly, this is underscored by the final point of the article, on the different types of self we experience.

There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency – of urges to do this or that, and of being the causes of things that happen. At higher levels, we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time, built from a rich set of autobiographical memories. And the social self is that aspect of self-experience that is refracted through the perceived minds of others, shaped by our unique social milieu.

The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are ‘controlled hallucinations’ of a very distinctive kind.

In that sense it would appear that "hallucination" here simply means "inner awareness" of some sort, an experience not directly connected with reality. If so, then by this definition I would strongly dispute that LLMs ever hallucinate at all, in that I simply don't think they have the same kind of experience as sentient life forms do – not even to the smallest degree. I think they're nothing more than words on a screen, a clever distribution of semantic vectors and elaborate guessing machines... and that's where they end. They exist as pure text alone. Nothing else.

I think this is probably my only point of dispute with the essay. I don't think "hallucinate" as used here is ideal, though I can see why they've used it in this way. It seems that what we mean by the word could be :

  • A felt inner experience of any sort
  • A mismatch between perception and reality
  • A total fabrication of data

People do the third sort (mostly) only when asleep; they have awareness (of a sort) in dreams, but it's got very little to do with perception. LLMs clearly do this kind of thing far more routinely than we do, but still only sporadically : they demonstrably do not do it all the time. They can correctly manipulate complex input data, sometimes highly complex and with impressive accuracy; the idea that this happens by some bizarre happenstance of chance fabrication is clearly false. Yes, sometimes they just make shit up, but for good modern chatbots that's now very much the exception rather than the norm.

LLMs can also "hallucinate" in the second sense, in that they can draw the wrong inference from the data they've been given, just as we can. Most chatbots now include at least some visual and web (or uploaded document) search capabilities, so we must allow them a "grounding" of sorts, albeit an imperfect one. This isn't mutually exclusive with the third definition, though. An LLM that doesn't know the answer because it doesn't have enough data may well resort to simply inventing a response, ignoring its input completely. This would help explain some of their more outrageous outputs, at least.

Humans and LLMs share these two types of hallucination, but not the first. LLMs experience literally nothing, so it simply isn't possible for them to have this kind of hallucination – a hallucinatory experience – at all. And that's where the terminology breaks down. Most LLM output is statistical inference and data manipulation, which is, at a very high level, not that dissimilar to how humans think. It is not, by and large, outright lying, but the similarities to human thinking are ultimately partial at best. It resembles, if anything, the kind of external cognition we do when we use thinking aids (measuring devices, paper-and-pencil arithmetic), but without any of the actual thought that goes into it.
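
To put some flesh on "statistical inference", here's a minimal sketch of what a single step of text generation amounts to. Everything in it is invented for illustration – a real model produces its scores from billions of learned parameters – but the final step genuinely is just sampling the next word from a probability distribution.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and made-up scores for the word following
# "The sky is" -- in a real LLM the scores come from the network.
vocab  = ["blue", "falling", "green", "sentient"]
logits = [4.0, 1.5, 0.5, -2.0]

probs = softmax(logits)
print({w: round(p, 3) for w, p in zip(vocab, probs)})

# Generation is weighted guessing, repeated over and over :
print(random.choices(vocab, weights=probs, k=5))
```

Whether a sampled continuation happens to be true is, at this level, simply not a property the machinery deals in.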

Perhaps a better way to express this than the standard "to an LLM, everything is a hallucination" is that to an LLM, all input has the same fundamental validity. Or maybe that to an LLM, everything is processed in the same way. An LLM's grounding is far less robust than ours : they can reach conclusions from input data, even reason after a fashion, but their "thought" processes are fundamentally different. They can be meaningfully said to hallucinate, but only if this is carefully defined. They can and do fabricate data and/or process it incorrectly, but they have no experience of what they're doing, and little clue that one input data set is any more important than another.
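
A sketch of what "everything is processed in the same way" might look like in practice. The role labels and the flattening template below are hypothetical stand-ins (real chat systems use model-specific special tokens), but the gist is real : instructions, questions and retrieved documents are all serialised into one undifferentiated sequence before the model sees any of it.

```python
# Hypothetical illustration : whatever its origin, every input ends up
# flattened into a single stream. The model applies exactly the same
# arithmetic to all of it; nothing arrives marked "more real" or
# "more trustworthy" than anything else.
messages = [
    ("system",   "You are a helpful assistant."),
    ("user",     "What is the boiling point of lead ?"),
    ("document", "Lead boils at 1749 degrees Celsius."),  # e.g. a search result
]

def flatten(messages):
    """Serialise every message, regardless of source, identically."""
    return "".join(f"<{role}>{text}</{role}>" for role, text in messages)

print(flatten(messages))
```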

To return to the Aeon essay, one key difference between us and them is that LLMs have no kind of "self" whatsoever. So yes, they can be meaningfully said to hallucinate at some rate, but no, they aren't doing this all the time. Fundamentally, at the most basic level of all, what they're doing is not like what we do. They aren't hallucinating at all in this sense : they are, like all computers, merely processing data. Hang on, maybe that's the phrase we need ? To an LLM, all data processing is the same.

Hmm. That might just work. Let's see if that survives mulling it over or if I have another existential crisis instead.

For All, Eternity ? Beyond the Hard Problem

"Should a being which can conceive of eternity be denied it ?"

This rather strange question is one which has plagued Robert Kuhn in his investigations into consciousness. You may remember that I found this to be distinctly odd when I summarised a three-hour (!) YouTube video where he mentions it as motivation. I also said that my views had shifted considerably more towards outright uncertainty, and indeed that remains very much the case... if anything, all the more so after mulling it over.

The thing is, Kuhn's question is plaguing me as well, but in a slightly different way. I've long advocated that the key aspect of consciousness is its non-physical nature, there being no such physical phenomena as redness or guilt or ennui. Oh, sure, there are physical brain states corresponding to these, undeniably so. But that we can conceive of the non-physical... that we can imagine such things as numbers which have no actual substance in and of themselves... if we can conceive of the non-physical, does that make it inevitable ? Since we can conceive of, say, numbers, in the purely abstract sense, doesn't that mean the purely abstract itself must exist, in some very broad sense ?

This bothers me because it feels suspiciously like the old Ontological Argument : God is necessarily perfect, existence is a perfection, ergo God exists. It's perfectly circular.

Is the same true of the non-physical ? I honestly don't know. I worry that the problem is simply intractable, an inescapable limitation of being trapped inside our skulls with nought to describe the world but language. Escape from our mental prisons feels like a true impossibility.

In fact my crisis of confidence borders on the outright nihilistic as far as consciousness goes. If we cannot know the nature of consciousness, you might think, then this points towards neutral monism, my close second favourite interpretation after dualism. In saying that mind and matter are unified by some third unknown substance, neutral monism readily allows for an everyday sort of dualism : yes, ultimately all things are one, but you can't ever know the true nature of the one substance or how it manifests in such different ways, so for all intents and purposes, mind and matter might as well be different things.

The problem is that this now feels to me like a massive degeneracy. Dualism slides into neutral monism but the reverse is also true, just as idealism and physicalism appear to be two halves of the same coin. Even worse, if you posit that you can't ever know the true nature of the thing unifying these apparently disparate substances, you might as well be postulating magic. This is the very thing I've been at pains to avoid in trying to make dualism (which still seems intuitively by far the simplest option to me) palatable to a scientific viewpoint.

But it gets still worse than this. The more you engage with any one position, the more you attempt to pin it down... the more similar each of them seems to be to all the rest. The merest slip of a definition changes physicalism into idealism or illusionism into panpsychism. The whole thing collapses into a spectacular philosophical singularity with no distinguishable positions and no productive insight at all.

Well, that was bleak.

Is there any hope left ? Yes and no. No, perhaps, in that we might have to admit that the basic problem of describing the true nature of mind is an impossible dream. Neither our language nor our fundamental mental faculties are up to the task : we cannot, for instance, truly conceive of the non-physical and we certainly can't describe it. Since we ourselves are mental constructs, we cannot fully grasp our own nature.

But some hope remains. If we can't really know anything with the truest certainty, that doesn't mean we don't all naturally cling to some preferences. For all that I've just said, I still tend strongly towards my own innate positions as the best way I can make sense of the world. In that sense, this may be less a nihilistic collapse and more a paring back : a relinquishing of the ambition to objectively solve the mystery, and an acceptance that this is only ever going to be a personal perspective.

But a very much more upbeat stance comes from this excellent Aeon essay, of the sort that reminds me of just how damn good Aeon can be.

The ‘easy problem’ is to understand how the brain (and body) gives rise to perception, cognition, learning and behaviour. The ‘hard’ problem is to understand why and how any of this should be associated with consciousness at all: why aren’t we just robots, or philosophical zombies, without any inner universe? It’s tempting to think that solving the easy problem (whatever this might mean) would get us nowhere in solving the hard problem, leaving the brain basis of consciousness a total mystery.

But there is an alternative, which I like to call the real problem: how to account for the various properties of consciousness in terms of biological mechanisms; without pretending it doesn’t exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem).

There are some historical parallels for this approach, for example in the study of life. Once, biochemists doubted that biological mechanisms could ever explain the property of being alive. Today, although our understanding remains incomplete, this initial sense of mystery has largely dissolved. Biologists have simply gotten on with the business of explaining the various properties of living systems in terms of underlying mechanisms: metabolism, homeostasis, reproduction and so on. An important lesson here is that life is not ‘one thing’ – rather, it has many potentially separable aspects.

In essence, shut up and calculate. If we can't understand the fundamental nature of consciousness, or how and why it exists, we can still understand many aspects of it. Instead of trying to explain how the ghost in the biological machine is able to influence physical matter, we can learn what signature of physical matter corresponds to which mental processes. This is pleasingly neutral as it in no way implies anything whatsoever about the non-physical : sure, ennui might correlate to a brain state, but in terms of determining whether the emotion has some Platonic "realness", it means nothing at all. Just as we can't say what it is about a frog that makes it clearly alive, yet can study how it jumps and croaks in extreme detail (if you're into that sort of thing), so we can study what consciousness is doing in the brain (or the other way around if you prefer).

But wait, there's more.

A good starting point is to distinguish between conscious level, conscious content, and conscious self. Conscious level has to do with being conscious at all – the difference between being in a dreamless sleep (or under general anaesthesia) and being vividly awake and aware. Conscious contents are what populate your conscious experiences when you are conscious – the sights, sounds, smells, emotions, thoughts and beliefs that make up your inner universe. And among these conscious contents is the specific experience of being you. This is conscious selfhood, and is probably the aspect of consciousness that we cling to most tightly.

Ahh, now here it seems like we have scope for real progress after all. We're back in the realm of being able to compartmentalise, reduce, and analyse : we can examine how changing one thing changes others, even quantitatively so in terms of the neurological effects. But more than that, here we have different aspects of consciousness itself, useful things we can discuss without simply resorting to the purely neurological or needing to address the intractable philosophical nature of what consciousness actually is. We can get inside the mind, so to speak, without discussing what it's made of.

Complexity measures of consciousness have already been used to track changing levels of awareness across states of sleep and anaesthesia. They can even be used to check for any persistence of consciousness following brain injury, where diagnoses based on a patient’s behaviour are sometimes misleading. At the Sussex Centre, we are working to improve the practicality of these measures by computing ‘brain complexity’ on the basis of spontaneous neural activity – the brain’s ongoing ‘echo’ – without the need for brain stimulation. The promise is that the ability to measure consciousness, to quantify its comings and goings, will transform our scientific understanding.

This is something I've read about before, but it becomes all the more intriguing when you stop trying to say that "consciousness is correlated with physical phenomena, therefore it must be the same as them" (which I think is a nonsense position). The framework of simply studying the correlations for their own sake, with no need to infer a deeper meaning, transforms this into something much more interesting and rewarding : the goalposts are realistic, achievable, and entirely non-threatening.
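
For a flavour of what these measures actually compute : work like the perturbational complexity index compresses binarised brain signals, on the logic that conscious-level activity is neither rigidly regular nor pure noise. Below is a toy version, assuming a binary string as a stand-in for a thresholded neural recording, and using a simple LZ78-style parse rather than the LZ76 variant and normalisation used in the literature.

```python
import random

def lz_complexity(bits):
    """Count the phrases in an LZ78-style parse of a binary string.
    Predictable strings parse into few phrases; incompressible ones
    into many."""
    phrases, current, count = set(), "", 0
    for b in bits:
        current += b
        if current not in phrases:   # new phrase : record it, start afresh
            phrases.add(current)
            count += 1
            current = ""
    return count

random.seed(0)
regular = "01" * 500                                          # rigidly periodic
noisy   = "".join(random.choice("01") for _ in range(1000))   # pure noise

print("periodic:", lz_complexity(regular))   # low : compresses well
print("random:  ", lz_complexity(noisy))     # higher : resists compression
```

The working hypothesis is that wakeful, conscious brain activity sits towards the complex end of that scale, and dreamless sleep or deep anaesthesia towards the compressible end.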

Consciousness is informative in the sense that every experience is [slightly] different from every other experience you have ever had, or ever could have... integrated in the sense that every conscious experience appears as a unified scene. We do not experience colours separately from their shapes, nor objects independently of their background. It turns out that the maths that captures this co-existence of information and integration maps onto the emerging measures of brain complexity I described above. This is no accident – it is an application of the ‘real problem’ strategy. We’re taking a description of consciousness at the level of subjective experience, and mapping it to objective descriptions of brain mechanisms.

Some researchers take these ideas much further, to grapple with the hard problem itself. Tononi, who pioneered this approach, argues that consciousness simply is integrated information. This is an intriguing and powerful proposal, but it comes at the cost of admitting that consciousness could be present everywhere and in everything, a philosophical view known as panpsychism. The additional mathematical contortions needed also mean that, in practice, integrated information becomes impossible to measure for any real complex system. This is an instructive example of how targeting the hard problem, rather than the real problem, can slow down or even stop experimental progress.
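
The intractability point can be made concrete with nothing but arithmetic. Computing exact integrated information means, among other contortions, searching across ways of cutting the system apart to find the cut that destroys the least information. Even counting only two-way cuts of n binary units – a drastic simplification of the real definition – the numbers run away almost immediately :

```python
# Two-way cuts of n units : (2**n - 2) / 2 = 2**(n-1) - 1 distinct splits,
# and evaluating each one needs the joint distribution over 2**n states.
for n in [5, 10, 50, 300]:
    cuts   = 2 ** (n - 1) - 1
    states = 2 ** n
    print(f"n = {n:>3}: {cuts:.2e} cuts to check, {states:.2e} states each")
```

A few hundred units is already beyond any conceivable computation, and a brain has tens of billions of neurons – which is presumably why empirical work falls back on tractable proxies like the compression measures above.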

Yes ! Quick, someone get me a little flag and/or some pom-poms, I want to be the cheerleader for this approach !

The rest of the essay continues in a similarly interesting vein. However, as that ended up sending me down more of an AI-based tangent, I'll leave it for the next post. Here I'll just end by recalling Kuhn's comment that we can think scientifically about otherwise non-scientific issues. Years ago I tried to set out the most basic assumptions of the scientific world view, noting that if these were undermined we'd be in real trouble. But perhaps not. The requirement to be logical, clear (both in our conclusions and reasoning process), and accountable might see us through even in realms where scientists otherwise fear to tread. The approach here, of setting out the different properties of consciousness – ones we can surely all agree on – gives me some hope that, even if we can't solve the ultimate mystery, we can still find something interesting to talk about.
