
Thursday 27 August 2020

Sophisticated gibberish is still gibberish

My news feeds have either gotten better at avoiding the hype train, or I'm just not clicking on the stories as much. But anyway, this "GPT-3" thingy, the evolved version of that AI that was so dangerous it couldn't be released until it was (try it for yourself here, it's hilarious), seems to be causing the headline-writers to make all the usual mistakes. Of course, as this excellent article demonstrates, it's still nothing more than a glorified abacus. Sure, an abacus with access to oodles and oodles of data and sophisticated pattern-recognition algorithms, but an abacus nonetheless. Here's my favourite example, with the human-written prompt followed by GPT-3's truly ridiculous continuation :
Prompt : At the party, I poured myself a glass of lemonade, but it turned out to be too sour, so I added a little sugar. I didn’t see a spoon handy, so I stirred it with a cigarette. But that turned out to be a bad idea because
GPT-3 : it kept falling on the floor. That’s when he decided to start the Cremation Association of North America, which has become a major cremation provider with 145 locations.
Gosh. You can find more examples in the article. And worryingly :
... it’s also worth noting that OpenAI has thus far not allowed us research access to GPT-3, despite both the company’s name and the nonprofit status of its oversight organization. Instead, OpenAI put us off indefinitely despite repeated requests — even as it made access widely available to the media. Fortunately, our colleague Douglas Summers-Stay, who had access, generously offered to run the experiments for us... OpenAI’s striking lack of openness seems to us to be a serious breach of scientific ethics, and a distortion of the goals of the associated nonprofit. 
Calling themselves "OpenAI" would seem to be a serious misnomer at best. The authors here do their best to examine both sides of the argument, but ultimately it's clear that this is not any kind of true intelligence : the mistakes it makes are such that a genuinely intelligent understanding could never allow them. Ever.
Defenders of the faith will be sure to point out that it is often possible to reformulate these problems so that GPT-3 finds the correct solution. For instance, you can get GPT-3 to give the correct answer to the cranberry/grape juice problem if you give it the following long-winded frame as a prompt: [prompt is rephrased to a multiple choice question]
The trouble is that you have no way of knowing in advance which formulations will or won’t give you the right answer... the problem is not with GPT-3’s syntax (which is perfectly fluent) but with its semantics: it can produce words in perfect English, but it has only the dimmest sense of what those words mean, and no sense whatsoever about how those words relate to the world. 
As we were putting together this essay, Summers-Stay wrote to one of us, saying this: "GPT is odd because it doesn’t 'care' about getting the right answer to a question you put to it. It’s more like an improv actor who is totally dedicated to their craft, never breaks character, and has never left home but only read about the world in books. Like such an actor, when it doesn’t know something, it will just fake it. You wouldn’t trust an improv actor playing a doctor to give you medical advice."
As I've mentioned before, the problem is we don't have a good understanding of understanding. To recall, my working definition is that understanding is knowledge of the connections between a thing and other things. The more complete our knowledge of how things interact with other things, the better we can say we understand them. Using knowledge of similar systems, if we truly understand them, we can extrapolate and interpolate to predict what will happen in novel situations. If our prediction is incorrect, our understanding is incomplete or faulty.

I noted that this isn't a perfect definition, since my knowledge of mathematical symbols, for example, does not enable me to really understand - let alone solve - an equation. Perhaps it's worth considering misunderstandings a bit more.

Sometimes, I can understand all the individual components of a system but not how the whole lot interact. If I was really keen I could try and program a solution, and if my understanding of each part really was correct and complete, the computer would give a solution even if I myself couldn't (see the sketch below). In such a case, I could probably do each individual calculation myself (just more slowly), but not the whole bunch all together. That suggests there's some numerical limit on how many connections I can hold at once. I could fully understand each one, but not how they all interact at the same time. I could guess how two gravitational bodies would orbit each other, but not very exactly - and with three I'm completely screwed.
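
To make that concrete, here's a minimal sketch of what "programming a solution" might look like for the gravity example - my own toy code with made-up masses and units, nothing to do with GPT-3. Every individual line is arithmetic I could do by hand; it's only the millions of repetitions that are beyond me :

```python
# Toy N-body integrator in arbitrary units. Each step applies Newton's
# law of gravitation over a small time step -- simple enough to do by
# hand once, but the computer chains millions of such steps together.

G = 1.0      # gravitational constant (arbitrary units)
DT = 0.001   # time step

def accelerations(pos, masses):
    """Acceleration on each body from every other body."""
    acc = [[0.0, 0.0] for _ in pos]
    for i, (xi, yi) in enumerate(pos):
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def step(pos, vel, masses, dt=DT):
    """One semi-implicit Euler update : trivial individually, opaque en masse."""
    acc = accelerations(pos, masses)
    vel = [[vx + ax * dt, vy + ay * dt] for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [[x + vx * dt, y + vy * dt] for (x, y), (vx, vy) in zip(pos, vel)]
    return pos, vel

# Three bodies : each force term is easy to follow; the long-term orbit is not.
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.3], [0.0, 1.0], [-1.0, 0.0]]
masses = [1.0, 0.1, 0.1]
for _ in range(100000):
    pos, vel = step(pos, vel, masses)
print(pos)  # good luck predicting this in your head
```

Nothing clever is happening there - it's sheer brute-force bookkeeping, which is exactly the point.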

This sort of simple numerical complexity limit shouldn't be any kind of problem to an AI : that's just a question of processing power and memory. Indeed, it could be argued that in this sense GPT-3 already has a far greater understanding of the English language than any human : it has absorbed vastly more valid combinations of words and correct sentence structures than anyone could hold in their head. Its understanding is purely linguistic, but still a form of understanding nonetheless. You could even say it has a form of intelligence in that it's capable of solving problems, but in no way could you say it was "aware" of anything - just as we could say a calculator is intelligent but (almost certainly) doesn't have any awareness.
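
What "purely linguistic" understanding means is easier to see in miniature. A toy Markov chain - an immeasurably cruder ancestor of this kind of system, and emphatically not what GPT-3 actually uses - learns only which word tends to follow which, yet still churns out locally fluent strings. Meaning never enters into it :

```python
import random
from collections import defaultdict

# Learn word adjacencies from a tiny corpus, then generate.
# No semantics anywhere : just "which word followed which".
corpus = ("the cat sat on the mat and the dog sat on the cat "
          "and the mat sat on the dog").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word = "the"
output = [word]
for _ in range(12):
    options = follows[word]
    word = random.choice(options) if options else random.choice(corpus)
    output.append(word)

print(" ".join(output))  # grammatical-ish locally, meaningless globally
```

Scale that idea up by a dozen orders of magnitude and make the context vastly richer than a single preceding word, and you get something GPT-3-shaped : fluency without reference.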

But subtleties arise even with this simplest form of misunderstanding. In a straightforwardly linear chain of processes, a computer may only increase the speed at which I can solve a problem. Where the situation is more of a complex interconnected web of processes, however, I may never be able to understand how a system works, because there may simply be no root causes to understand : properties may emerge from the system as a whole in ways I have no way of guessing. As in An Inspector Calls, there may be no single guilty party, but only an event arising from a whole array of situations, events and circumstances. I might understand every single linear process, but still, perhaps, never fully understand the whole. I might understand each segment of a complicated equation, but not the interplay between each part.
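
The classic playground for this sort of emergence - my example, not one from the article - is Conway's Game of Life. The update rule is a few lines of trivial logic anyone can fully understand, yet patterns like gliders "walk" across the grid, a behaviour nothing in the rule so much as hints at :

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; live is a set of (x, y) cells."""
    # Count the live neighbours of every cell adjacent to a live cell...
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # ...then apply the entire rulebook : born with exactly 3 live
    # neighbours, survive with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider : five cells that translate diagonally by (1, 1) every four
# generations -- a global property you'd never guess from the rule above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Each line of that program is fully comprehensible on its own; the glider lives nowhere in it.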

The real difficulties would seem to arise from two issues, one easy, one hard. The easy issue is that GPT3 has no sensory input. It has no eyes to see nor tongue to speak but as its programmers are pleased to direct it; it experiences no sensations of any kind whatsoever. It will never know pain, or fear, or have an embarrassing orgasm. What do the words "cuddly kitten" mean if you've never cuddled a kitten ? Very little. It's trapped forever inside the ultimate Mary's Room.

In principle, this could be solved by connecting the thing up to some sensory apparatus : cameras, pressure sensors, dildos, thermometers, whatever. And this might help to some degree. The AI would then understand that the words are labels (parameters and variables in its code) corresponding to physical objects. It could learn for itself what happens if you try to stir a drink with a cigarette and thus never form any really stupid connections between stirring a drink and promoting a crematorium. It would surely be a lot harder to fool. And that would clearly improve matters considerably.

But this may not help as much as it might appear. For an electronic system, everything is reducible to numbers : the voltage level from the video feed, the deformation of the pressure sensor, the vibration frequency of the.... item. There is no obvious way such numbers can ever give rise to awareness, sensation, desires, emotion, or fundamental understanding, which are not measurable or sensible properties of the physical world. This is the Hard Problem both of AI and of consciousness in general.

So never mind the problems of comprehending a web of processes. The real difficulty is when we don't understand a single, irreducible fact. We can know that fact, but that's not the same as understanding it, not really. How do we program such a state in software ? Does it somehow magically emerge because we've connected a stream of numbers from an electronic camera ? It's bloody hard to see how.

Take "one". I know what "one" is. So do you and everyone else. But try and rigorously define it using language and you get Plato's torturous failure Parmenides. Creating a truly aware AI, at least through software, requires we define things we understand but cannot explain using language. By definition this is impossible, hence we will never program our way to a true AI. We might, perhaps, build one, but until we understand the true nature of consciousness, this will only happen by accident.

None of this means that AI in the strict sense - intelligence only, without understanding or awareness - isn't useful. For as weird as some of the stuff GPT-3 is spouting may be, still the worst of it isn't half as incomprehensible as the effluence continuously spewing forth from the current incumbent of the White House. Far from disparaging OpenAI, I'm instead proposing a last-minute entry in this year's Political Apocalypse World Championship, a.k.a. the US elections. Good luck to 'em.

GPT-3, Bloviator: OpenAI's language generator has no idea what it's talking about

Since OpenAI first described its new AI language-generating system called GPT-3 in May, hundreds of media outlets (including MIT Technology Review) have written about the system and its capabilities. Twitter has been abuzz about its power and potential. The New York Times published an op-ed about it.

