
Tuesday 30 May 2023

The meaningfully meaningless meaning of meaning

I wanted to call this post simply, "Is", but I resisted. It'd make it too difficult to search for in the future, anyway.

All this AI stuff has provoked a lot (1) of discussion (2) as to whether AI can really be said to be "understanding" anything. Does it truly extract the meaning of anything, or is it only a glorified abacus, shuffling the text around in impressive yet ultimately vacuous ways ?

As befits this blog, my stance on this is conflicted. Since I believe very strongly that we have an inner awareness which arises from something that can't be reduced to mere words, in some ways I'm inclined towards the latter. Just rearranging words in the correct order does not constitute a mind of any sort. Likewise, playing all the notes even in the right order doesn't indicate the presence of emotions or intent.

On the other hand, I've long speculated that a purely text-based machine could be said to have a form of understanding, even if necessarily different from our own. It wouldn't necessarily be conscious, but it could still be important - indeed, maybe even preferable to recreating a truly living machine, which would have all kinds of no-doubt horrific emotions and be utterly unfathomable. A ruthlessly unbiased, objective truth engine is in some ways considerably more appealing, though there are of course tremendous difficulties with the very notion.

What is it that we mean when we say "meaning" or "understanding" ? In one recent discussion (2), two possibilities were given, which I quote verbatim :

  1. That evocation of sensory-emotional-cognitive content elicited by a bit of language within a language user.
  2. Having an effective representation of one’s little corner of the world. By effective it is meant being competent in using one’s learned model to interact with the world. Meaning is as meaning does.
And a third, closely related question :
  3. Has a painting or other work of art ever had a powerful effect on you? A natural vista? We can legitimately say that these interacted meaningfully with you though these have no purported understanding of language.

I've done a piece looking at this before, but in a moral context rather than a cognitive one. To quote myself :

For a thing to be meaningful it has to have a connection to something else. The more or stronger the connections, the more meaningful it is, and vice-versa. An utterly meaningless activity would be something that doesn't affect anything else in any way whatsoever, like a subatomic particle which emerges from the quantum foam for a picosecond and then goes away again (or one of Ricky Gervais' Flanimals, which "does absolutely nothing and dies"). ... If, for some strange reason, he [some random dude] achieves some mental well-being as a result of his pointless polishing, then it's not really pointless or meaningless at all. It's not necessarily deeply meaningful, but it has some meaning.

This could apply just as well to severing or weakening connections as to creating or strengthening them; losses can be some of the most meaningful experiences of all. And where I've speculated* about meaning in the sense of understanding*, I go along similar lines :

* Both of those links are worth re-reading, containing skepticism of GPT3 from 2020 which now looks utterly ridiculous. In two or three years the output has gone from mainly dribble to mainly coherent - borderline mainly correct. The authors of the pieces I quoted in the links pronounced judgement far, far too soon ! See my follow-up from shortly afterwards, where I first got to play with GPT3 myself.

My working definition is that understanding is knowledge of the connections between a thing and other things. The more complete our knowledge of how things interact with other things, the better we can say we understand them... I noted that this isn't a perfect definition, since my knowledge of mathematical symbols, for example, does not enable me to really understand - let alone solve - an equation.

Sometimes, I can understand all the individual components of a system but not how the whole lot interact. [I might be able to do] each individual calculation myself (just more slowly), but not the whole bunch all together. That suggests there's some numerical limit on how many connections I can hold at once. I could fully understand each one, but not how they all interact at the same time.

Likewise, electronic circuits : in undergraduate studies they gave us a circuit diagram to solve which wasn't all that complex, just four diodes linked in a weird way, but my brain said, "nope, that's completely non-linear, can't do that". I just can't handle more than a few variables in my head simultaneously, which imposes quite hard limits on what I can understand. Processing speed might also be a factor.
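If it helps to pin down the intuition, here's a toy sketch of that "web of connections" picture - entirely my own illustrative construction, not a serious model of cognition. It treats a system as a set of connections, scores understanding as the fraction of those connections actually known, and adds a crude cap on how many can be held in mind at once :

```python
# Toy formalisation of "understanding as knowledge of connections".
# All names and numbers here are made up purely for illustration.

# The system's actual web of connections (edges between components).
system_connections = {
    ("diode_1", "diode_2"), ("diode_2", "diode_3"),
    ("diode_3", "diode_4"), ("diode_4", "diode_1"),
    ("diode_1", "diode_3"),   # the awkward cross-link
}

# The connections I happen to know about.
known_connections = {
    ("diode_1", "diode_2"), ("diode_2", "diode_3"), ("diode_3", "diode_4"),
}

WORKING_MEMORY_LIMIT = 4      # how many connections I can juggle at once

def understanding_score(known, system):
    """Fraction of the system's connections that are known."""
    return len(known & system) / len(system)

def can_grasp_whole(system):
    """Can the entire web be held in mind simultaneously?"""
    return len(system) <= WORKING_MEMORY_LIMIT

print(understanding_score(known_connections, system_connections))  # 0.6
print(can_grasp_whole(system_connections))                         # False
```

On this crude picture I can "understand" every individual connection and still fail at the whole, simply because the whole exceeds the limit - which is pretty much how those diodes felt.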

This is all well and good, but I got hung up on the limitations of this approach, particularly in regard to knowledge and the base units within the web of facts that could construct an understanding :

But subtleties arise even with this simplest form of misunderstanding. In a straightforwardly linear chain of processes, a computer may only increase the speed at which I can solve a problem. Where the situation is more of a complex interconnected web of processes, however, I may never be able to understand how a system works, because there may just not be any root causes to understand : properties may be emergent from a complex system which I have no way of guessing. The real difficulty is when we don't understand a single, irreducible fact. We can know that fact, but that's not the same as understanding it, not really. How do we program such a state in software ?

Maybe this is too ambitious, however. Perhaps it's even impossible : perhaps there exists some basic atomic unit of knowledge which simply can't be reduced any further (this is very Locke).

My thinking today, though, is a bit different. If we simply take it for granted that understanding is knowing how a thing relates to other things, perhaps we can unify all this. Now this is of course a paradox because it's self-referential : knowing is understanding, understanding is knowing, ad infinitum... so I throw myself on the reader's mercy and ask you to accept this most basic limitation. I need you to grant some much deeper aspect to "knowledge" which I cannot tackle here : we must proceed without attempting a more reductive approach or we will get nowhere.

Under this system, the two definitions given above can maybe be reconciled. The first would be that "meaning" is knowing how an external thing relates to our internal perception of it. Very similarly, "understanding" in this sense would mean knowledge of how words, text, and language in general relate to qualia, emotions, and mental concepts (ultimately expressions of the deepest mental processes, electrochemical activities and so on).

In this version, it makes no sense at all to speak of an AI as having any sort of "understanding" because, as stated in thread (1) :

Their encodings do not represent any real "meaning", even abstractly. The "meaning" is in us, not in the LLM.

A language model cannot possibly encode meaning in the sense of our inner awareness because it simply doesn't have one. On this definition, to say an AI "understands" something is completely ludicrous.

The second definition is more about competence, knowing which thing relates to which other things in order to produce useful results. In this sense, there's no problem at all postulating that a language model can be said to have understanding, because knowledge of how verbs and adjectives relate to each other is clearly part of this, even if only a small part of human cognition.
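For a very crude picture of what "connections between words" can mean mechanically, here's a toy sketch - my own illustration, nothing remotely like a real language model's internals. It just counts which words occur near which others in a scrap of made-up text, and treats overlap in those co-occurrence patterns as a measure of how strongly two words relate :

```python
from collections import Counter, defaultdict
from math import sqrt

# Made-up scrap of text; the words and numbers are purely illustrative.
text = ("the green salad contains fresh lettuce and crisp lettuce "
        "the reactor contains hot plasma and dense plasma").split()

# Count which words appear within two places of which others.
window = 2
cooc = defaultdict(Counter)
for i, word in enumerate(text):
    for j in range(max(0, i - window), min(len(text), i + window + 1)):
        if i != j:
            cooc[word][text[j]] += 1

def relatedness(a, b):
    """Cosine similarity of two words' co-occurrence patterns."""
    va, vb = cooc[a], cooc[b]
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(relatedness("lettuce", "plasma"))   # ~0.46 : they occupy similar slots
print(relatedness("lettuce", "reactor"))  # ~0.29 : fewer shared contexts
```

Even at this trivial level, "lettuce" comes out as more strongly related to "plasma" than to "reactor", purely because the two sit in similar textual slots. That kind of connection is exactly the sort of thing I mean : useful, structural, and entirely about the words themselves.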

To me all of these seem like different expressions of the same thing. The "moral" sense of meaning, as we might get from a painting or a landscape or a speech, relates to how something affects our own internal processes. Something invokes "meaning" if it affects more of those internal connections ("wow, this picture of a windmill changes my view of both impressionist art and 16th century agriculture !") or a few of them more strongly ("this destroys my theory of impressionist art altogether, though my views on farming haven't changed"). By contrast, the "competence" model is just about different connections, the ones based primarily on the stuff "out there" that has little direct subjective effect on our emotional states. Both, I suggest, are fundamentally about how things relate to other things.

And both of these seem completely reasonable to me. As I discuss at some length in (1), I think it's entirely valid to say that large language models can be said to have a type of understanding, though it definitely isn't the human sort. They can operate very successfully, though not perfectly, on complex text; they can deal with the connections between words of different types. As our moral type of understanding relates to connections in our own internal awareness, and our knowledge type of understanding concerns connections between external objects*, so does an LLM's understanding consist of connections between elements of text.

* Or, if you insist : our own internal representations of external objects.

This to me seems perfectly straightforward, useful, and self-consistent. I don't think the suggestion in (1) that I'm applying different standards is correct :

You say you do not understand higher maths - You can’t apply it in novel ways, you don’t know why you use it like you’re supposed to use it, it’s just rote learning. From this your internal-use definition of “understanding” would seem to mean “understanding why something works to the point of being able to apply it in novel situations”.

But for ChatGPT, you’re applying something much less as necessitating understanding. ChatGPT is literally just generating words according to pattern prediction. The situations are novel, but only with respect to the topic - it is not novel in terms of word patterns or subject (if you find a subject that the bot has no training material for, it will not do well).

Just like midjourney is mimicking pencil strokes or brush strokes, so are the chat bots mimicking known text. That is something very far from the idea of understanding you use for yourself.

I take the point, but this is not my definition of understanding at all. Because to me this is all about different connections, I see no difficulty in counting "chatbots make connections between elements of text" as a type of understanding, with human understanding being something which involves connections to our own internal models. It's all about different types of connections, which are of fundamentally different natures in humans and chatbots : but in that crucial way, also fundamentally similar. The strands may be of silk or silicon, but the structure can be similar.

Understanding in the broadest sense doesn't need a connection to a subjective awareness. And when a human misunderstands something, they fail to make the necessary connections. When a chatbot gets an answer wrong, it fails to make the necessary connections. Sure, it can't apply itself to novel situations, but humans stumble here as well. When we see an observation that doesn't fit the existing paradigm, we run into a horrible mess before we figure it out. A bot's version of understanding is strictly linguistic : it is not reasoning in the sense of coming up with wholly new concepts. I claim nothing more than that it understands text itself, not the subject matter. I don't doubt that a bot trained only on text about lettuce would come up with some... interesting stuff if asked about nuclear fusion*.

* Somebody please do this.
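In the spirit of that footnote, here's a deliberately silly sketch of "pattern prediction" at its absolute crudest - a toy of my own, nowhere near what a real LLM does, but the family resemblance (predict the next word from observed patterns) is the point. Train a bigram model on nothing but lettuce text, then prompt it about fusion :

```python
import random
from collections import defaultdict

# Made-up "training corpus" consisting entirely of lettuce trivia.
corpus = ("lettuce grows best in cool weather and lettuce needs moist soil "
          "crisp lettuce is picked early and washed in cool water").split()

# Record which words have been observed to follow which.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def babble(seed, length=10):
    """Generate text by repeatedly picking a word that has followed the last one."""
    out = [seed]
    for _ in range(length):
        options = next_words.get(out[-1])
        if not options:                    # never seen this word before ("fusion") :
            options = list(next_words)     # fall back to any word it does know
        out.append(random.choice(options))
    return " ".join(out)

random.seed(0)
print(babble("fusion"))   # fluent-ish lettuce babble, regardless of the prompt
```

Ask it about fusion and it wanders straight back into lettuce territory, stringing together words that have genuinely followed one another before. Whatever "understanding" it has is entirely a matter of which word tends to follow which - which is all I'm claiming for it.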

The weird thing here is that people apparently think even this is anthropomorphising. It isn't, and if anything it's the opposite. It's simply broadening the definition of understanding to something I think is self-consistent with existing use. Human understanding just becomes a subset of this, but is not diminished in any way, nor are chatbots elevated to have something they clearly do not have (though this is quite fun).

Here at last is where I think we have problems with the word "is". Materialism says that consciousness literally is something physical, that the flow of electrons in the brain (say) literally is thought itself. Physicalism says it's not an actual material substance, but still something physical, e.g. that electrical fields are thoughts. Functionalism is somewhat broader but still boils down to "consciousness is whatever the brain is doing", that you can literally define things by their purpose... a chair is that which is for sitting; in sociology, society is defined by its goals; thinking is that which arises from the brain. Or in another thread :

Functionalism is just another way of saying that things exist at various levels of abstraction. And still be “real” at each level of abstraction. For example an algorithm is real even though that’s just a functional abstraction or any of many ways to implement the algorithm. Some people say the implementation is real and the abstraction isn’t. Word games.

I think that things do exist at different levels of abstraction, but they cannot be equally or equivalently "real" (the quotes are vital !) at each level. The kind of existence of the concept of a chair is wholly different to the wooden kind you can actually sit on. Even if there was a direct one-to-one relationship between brain waves and mental states, I think it would make not a lick of sense to say that they are the same thing. I think there is an unbridgeable gulf between the two. A thing is not its function; a function can only describe a thing in part, the two cannot literally be equal. To say understanding is the deeper processes within the brain (and only those) seems curiously limited to me.

In like vein, I find it a bit weird to define the supernatural out of even conceptual existence. Saying anything that occurs is natural by definition is to my way of thinking missing the point entirely. Defining understanding to be something only the mind or brain can do similarly doesn't seem useful. It just seems unnecessarily limiting. Grand unified theories seem to be a dream of philosophers as much as physicists. 

As to the problem of knowledge = understanding, perhaps, as in the last link, this is also something that just cannot be reduced to anything fundamental, or maybe we can only define things in relation to other things. I don't know. This I will have to leave for another day. Or maybe I'll just ask a chatbot to figure it out for me instead.


EDIT : This post generated a lively discussion here (3). Especially together with (1), I think we can at least properly pin down the point of dispute even if we can't fully resolve it.

The interpretation I favour is that understanding is a process, specifically of forming connections. I actively deny that the type of understanding an AI has is the same as the human sort, but at the same time, I think it perfectly valid to say a pure textual understanding, even if using nonsense words, is a type of understanding. Text can be said to "mean" something to an AI in that (and only in that) it knows how it relates to other text.

The major alternative seems to be that understanding must account not just for the process but for what's being processed. Human understanding relates language to both external sensory stimuli and internal, non-linguistic mental processes. In that sense I fully agree that no AI could be said to have anything remotely comparable to human ideas of meaning. This latter sense of the word has the major advantage that it opens an avenue to what we mean when we say we understand an irreducible fact, allowing connections to that internal, non-computational reality.

Both of these definitions have value. The point of contention is whether "understanding is processing connections" is too broad a definition, and if in fact "understanding is the processing of connections between language and mental states" (say) is already a fait accompli. To me the process is more important than what's being processed; whether one is boiling water or boiling milk, the important thing is that something is boiling. 

