Last time I set out my despondency about ever being able to solve the mystery of what the mind actually is, in contrast to Robert Kuhn's optimistic view that we just need to find the right theory. But an Aeon piece soon had me feeling hopeful once more: not that we could indeed solve everything, but that with a bit of goal-adjustment, we could examine consciousness in a way that would be both interesting and productive.
This post examines the rest of that essay. Since it put me in a very different frame of, err, mind, this isn't really part two, and you don't have to read the previous post at all. Rather, the rest of the essay got me thinking about what we mean by hallucinations, particularly in the context of AI.
The remainder of the essay is of a similarly high standard to the first part, but is mainly concerned with what sort of "neural correlates" may indicate consciousness and with how the brain works: does it perceive reality directly, act as a prediction engine, or is consciousness something that happens when prediction and observation disagree? In some circumstances expectation seems to dominate, and that's what gives rise to hallucinations; the interesting bit is the philosophical implication that all conscious experience is then a kind of hallucination, not merely the mismatch between expectation and reality.
Which, of course, raises obvious parallels to LLMs. As I've said before, I no longer find the common claim that, to a chatbot, everything is a hallucination particularly helpful: it's not baseless, but I think we're going to need some more careful definitions and/or terminology. Interestingly, this is underscored by the final point of the article, on the different types of self we experience.
> There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency – of urges to do this or that, and of being the causes of things that happen. At higher levels, we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time, built from a rich set of autobiographical memories. And the social self is that aspect of self-experience that is refracted through the perceived minds of others, shaped by our unique social milieu.
>
> The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are ‘controlled hallucinations’ of a very distinctive kind.
In that sense it would appear that "hallucination" here simply means "inner awareness" of some sort: an experience not directly connected with reality. If so, then by this definition I would strongly dispute that LLMs ever hallucinate at all, because I simply don't think they have the same kind of experience as sentient life forms do – not even to the smallest degree. I think they're nothing more than words on a screen, a clever arrangement of semantic vectors driving an elaborate guessing machine... and that's where they end. They exist as pure text alone. Nothing else.
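(If "elaborate guessing machine" sounds vague, here's a toy Python sketch of what I mean. It's entirely my own illustration, with a made-up three-word vocabulary and invented scores, not anything from the essay or any real model: the core loop of a language model is just to score candidate next tokens, turn the scores into probabilities, and sample one.)

```python
import math
import random

# Toy illustration: at each step a language model assigns a score ("logit")
# to every candidate next token. Vocabulary and scores here are made up.
logits = {"cat": 2.0, "dog": 1.5, "the": 0.2}

# Softmax: exponentiate and normalise so the scores sum to 1,
# giving a probability distribution over the vocabulary.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Sample the next token in proportion to its probability. A real model
# repeats this once per generated token, feeding each choice back in.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)
```

That's the whole trick, repeated billions of times: there's no inner observation being compared against an expectation, just a distribution and a draw from it.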
I think this is probably my only point of dispute with the essay. I don't think "hallucinate" as used here is ideal, though I can see why they've used it in this way. It seems that what we mean by the word could be any of:
- A felt inner experience of any sort
- A mismatch between perception and reality
- A total fabrication of data