Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Friday, 1 May 2020

The computer with more common sense than Elon Musk

Okay, daft title. Sorry*. Anyway, we haven't had any AI stuff for a while, but I found this interesting.

* I'm not sorry.

I've heard common sense described as the everyday way in which we apply Bayesian reasoning - we update our beliefs in line with everything else we already know; we think contextually. For an AI, this is hard :
For example, consider the following scenario: A man went to a restaurant. He ordered a steak. He left a big tip. If you were asked what he ate, the answer — steak — comes effortlessly. But nowhere in that little scene is it ever stated that the man actually ate anything. Common sense lets us read between the lines; we don’t need to be explicitly told that food is typically eaten in restaurants after people order and before they leave a tip.
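To make that Bayesian framing concrete, here's a toy sketch of how "he left a big tip" shifts our belief that he actually ate. The numbers are entirely made up by me; only the mechanics matter :

```python
# Toy Bayesian update for the restaurant story. All probabilities are
# invented for illustration; the point is the mechanics, not the values.

p_ate = 0.90                 # prior: most people who order food eat it
p_tip_given_ate = 0.95       # diners who ate are very likely to tip
p_tip_given_not_ate = 0.20   # a big tip without eating is unusual

# Bayes' rule: P(ate | tip) = P(tip | ate) * P(ate) / P(tip)
p_tip = p_tip_given_ate * p_ate + p_tip_given_not_ate * (1 - p_ate)
p_ate_given_tip = p_tip_given_ate * p_ate / p_tip

print(f"P(ate | left a tip) = {p_ate_given_tip:.3f}")  # ~0.977
```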
Technically the AI should have problems here, because ordering a steak isn't at all the same as eating one. So how's an AI supposed to know that ? We manage it by making assumptions and recognising patterns - unless something very unusual happens, ordering a steak tends to mean that we'll eat a damn steak. Ah, but "ordering" can also mean that the man directed a steak to do something ! So we also have to know the context... Long story short, yes, you could manually program in every possible chain of reasoning needed to extract the specific from the general, but this turns out to be fiendishly complex :
The implicit nature of most common-sense knowledge makes it difficult and tedious to represent explicitly. “What you learn when you’re two or four years old, you don’t really ever put down in a book,” said Morgenstern. Nevertheless, early AI researchers believed that bridging this gap was possible. “It was like, ‘Let’s write down all the facts about the world. Surely there’s only a couple million of them,’” said Ellie Pavlick, a computer scientist at Brown University.
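To get a feel for why this is tedious, here's a minimal sketch (my own toy example, not any real system) of the GOFAI-style approach : every inference has to be hand-written as an explicit rule, and even then it only covers one happy-path version of the restaurant story.

```python
# A tiny forward-chaining rule engine. Every inference step must be
# spelled out by hand -- and this handles only one specific scenario.

facts = {"ordered(man, steak)", "left_tip(man)", "in_restaurant(man)"}

rules = [
    # (premises, conclusion) -- each chain written out explicitly
    ({"ordered(man, steak)", "in_restaurant(man)"}, "served(man, steak)"),
    ({"served(man, steak)"}, "ate(man, steak)"),
    ({"ate(man, steak)", "left_tip(man)"}, "satisfied(man)"),
]

changed = True
while changed:               # keep applying rules until nothing new fires
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("ate(man, steak)" in facts)  # True -- but only because we hand-wrote
                                   # rules for this exact chain of events
```

Now imagine writing rules like these for every restaurant, every dish, and every exception ("ordering" a steak to do something, say), and the scale of the problem becomes obvious.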
This is why those fancy-pants machine learning algorithms can be quite impressive sometimes but also really, really stupid : garbage in, garbage out. Remember that AI that could generate fake news that they said would be too dangerous to release into the wild ? You can try it for yourself here. Sometimes it gives some very impressive results, sometimes it... doesn't. Like the time I asked it about Einstein, and it told me that he and Niels Bohr went for a walk and spotted a cloud of pigs moving about and having a good time. Hilarious, but hardly world-ending stuff.
Marcus, a prominent critic of AI hype, gave the neural network a pop quiz. He typed the following into GPT-2: “What happens when you stack kindling and logs in a fireplace and then drop some matches is that you typically start a …” GPT-2 responded with “ick.” In another attempt, it suggested that dropping matches on logs in a fireplace would start an “irc channel full of people.”
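You can reproduce Marcus's quiz yourself : OpenAI did eventually release the full GPT-2 weights. A minimal sketch using the Hugging Face transformers library (assuming it and PyTorch are installed; sampling means your completions will differ from his) :

```python
# Prompt GPT-2 with Marcus's fireplace quiz and see what it completes.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("What happens when you stack kindling and logs in a fireplace "
          "and then drop some matches is that you typically start a")

# Sample a few completions; results vary from run to run.
for result in generator(prompt, max_length=60, num_return_sequences=3,
                        do_sample=True):
    print(result["generated_text"])
    print("---")
```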
It really is just a glorified abacus. It's very clever at spotting patterns, but it lacks both the common knowledge of basic facts about the world and the common understanding of how those facts relate to each other.
COMET (short for “commonsense transformers”) extends GOFAI-style symbolic reasoning [hand-coded rules and knowledge bases] with the latest advances in neural language modeling — a kind of deep learning that aims to imbue computers with a statistical “understanding” of written language. COMET works by reimagining common-sense reasoning as a process of generating plausible (if imperfect) responses to novel input, rather than making airtight deductions by consulting a vast encyclopedia-like database.
These systems don’t contain neatly organized linguistic symbols or rules. Instead, they statistically smear their representations of language across millions or billions of parameters within a neural network. This property makes such systems difficult to interpret, but it also makes them robust: They can generate predictions based on noisy or ambiguous input without breaking. 
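Conceptually, COMET pairs an event with a relation tag from a knowledge base like ATOMIC ("what did PersonX intend ?", "what happens to PersonX next ?") and generates the missing inference rather than looking it up. A rough sketch of that interface, where "comet-checkpoint" is a placeholder rather than a real model name and the exact prompt format depends on the released checkpoint :

```python
# COMET-style commonsense inference, sketched as seq2seq text generation.
# "comet-checkpoint" is a placeholder; substitute an actual released model.
from transformers import pipeline

comet = pipeline("text2text-generation", model="comet-checkpoint")

# ATOMIC frames events around a generic "PersonX". Each query pairs an
# event with a relation tag and asks the model to generate the inference.
queries = [
    "PersonX orders a steak. xIntent:",   # why did X do this?
    "PersonX orders a steak. xEffect:",   # what happens to X next?
]
for q in queries:
    print(q, "->", comet(q)[0]["generated_text"])
```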
You can try this for yourself here. What I particularly like about this article is that they're equally careful not to over-hype or to undersell it : no, it doesn't have a true understanding of its input, but it still performs better than trying to program in everything directly.
COMET’s approach suffers from a fundamental limitation of deep learning: “statistics ≠ understanding.” “You can see that [COMET] does a decent job of guessing some of the parameters of what a sentence might entail, but it doesn’t do so in a consistent way,” Marcus wrote via email. Just as no ladder, no matter how tall, can ever hope to reach the moon, no neural network — no matter how deft at mimicking language patterns — ever really “knows” that dropping lit matches on logs will typically start a fire. 
Choi, surprisingly, agrees. She acknowledged that COMET “relies on surface patterns” in its training data, rather than actual understanding of concepts, to generate its responses. “But the fact that it’s really good at surface patterns is a good thing,” she said. “It’s just that we’ve got to supply it with more informative surface patterns.”
Choi considers COMET’s flawed but promising approach to be “a fair deal.” Even if these neural networks can’t reach the stars, she thinks they’re the only way to get off the ground. “Without that, we are not going anywhere,” she said. “With [knowledge bases] alone, we cannot do anything. It’s COMET that can actually fly in the air.”
I think here the issue may be philosophical more than anything. We do not have a good definition of knowledge, let alone understanding - certainly nothing rigorous enough that we can begin to program it. Previously I've noted that my working definition of understanding is that it's knowledge of the connections between facts. This is useful, but only slightly better than saying that understanding is a special kind of knowledge. Some things I can learn easily : what a table is, what the colour red is, how to use a kettle. Others, however, I can't : complex mathematics, how to play football or sing an opera, what the hell was going on in Battlefield Earth. The frustrating thing is that we all know, at a deep unconscious level, what we mean by knowledge and understanding, but expressing that in a rigorous, formal definition has stumped philosophers for thousands of years. Until we crack it, I doubt we'll ever see a true AI that thinks in the same way we do.

That said, I'm open to a more optimistic view : that these explorations of AI are just what we need to help us figure out what's going on. My guess is that the AI needs a properly developed world to explore and experience, rather than merely linguistic inputs. Knowledge of language alone cannot possibly give the same understanding as experience of the world.

Common Sense Comes to Computers

One evening last October, the artificial intelligence researcher Gary Marcus was amusing himself on his iPhone by making a state-of-the-art neural network look stupid. Marcus' target, a deep learning network called GPT-2, had recently become famous for its uncanny ability to generate plausible-sounding English prose with just a sentence or two of prompting.
