Anyway, back then naive younger me thought that maybe (naive younger me wasn't stupid) if you could construct a suitably complex web of deductions, you'd get something like proper causal reasoning. Yes, this would have to be extremely elaborate in order to do anything useful, but human reasoning is doubtless fantastically complex. Often we rely strongly on unconscious assumptions we're barely even aware of : only when a light switch fails to work do we dig into its operation. In that sense our reasoning could be described as Bayesian, with our conclusions and thought processes being dependent on our priors. We assume by default that the light switch will work and therefore neglect completely how it works, unless it fails.
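Just to make that concrete, here's a toy Bayesian update for the light-switch case. All the probabilities are invented purely for illustration, not drawn from anywhere :

```python
# Toy Bayesian update for the light-switch example.
# All the probabilities here are invented for illustration.

def posterior_works(prior, p_on_if_works, p_on_if_broken, light_came_on):
    """Update P(switch works) after observing whether the light came on."""
    if light_came_on:
        like_works, like_broken = p_on_if_works, p_on_if_broken
    else:
        like_works, like_broken = 1 - p_on_if_works, 1 - p_on_if_broken
    evidence = prior * like_works + (1 - prior) * like_broken
    return prior * like_works / evidence

# With a strong prior that the switch works, a success teaches us almost nothing...
print(posterior_works(0.9, 0.95, 0.01, light_came_on=True))   # ~0.999
# ...but a single failure forces a sharp revision, and we start digging.
print(posterior_works(0.9, 0.95, 0.01, light_came_on=False))  # ~0.31
```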
I wondered if, with enough input to such a network, a true intelligence would eventually emerge. I imagined that if it did, you wouldn't get a human-like intelligence, but a pure logic engine. It could see patterns that you'd otherwise miss. It would need some rock-solid axioms from which to proceed, but it would be incapable of emotional bias and the other fallacies that plague us. You could give it information and it would be able to evaluate its truth (and its implications) with ruthless logic.
Since then I've realised that such a truth engine might be fundamentally impossible, especially for an A.I. bound inside a silicon shell. We do not yet have a foolproof method of determining the absolute truth of anything. An A.I. will therefore be strongly dependent on what people tell it, and if two statements are in blunt contradiction, what is it to do ? In order to get anywhere, it's going to need some biases : some priors against which to evaluate information, but which can themselves be altered. Even if it could go and check the results first-hand, it could still be biased by its observational techniques, imperfect statistics, or an incomplete grasp of logic.
That's not to say that an A.I. with a fundamentally different understanding of the world from ours is impossible, or that it couldn't be free of some of the flaws we have to endure. Nor does it mean that an A.I. couldn't be better than us at evaluating information : only that a perfect truth engine will probably never be a thing. We might well still be able to create something very useful though. This article describes a much more elaborate and sophisticated program than poor old Web Hal. So far it shows no signs of doing anything more sinister than commenting on Shakespeare, which is pretty impressive :
Winston and his team decided to call their machine Genesis. They started to think about commonsense rules it would need to function. The first rule they created was deduction—the ability to derive a conclusion by reasoning. “We knew about deduction but didn’t have anything else until we tried to create Genesis,” Winston told me. “So far we have learned we need seven kinds of rules to handle the stories.” For example, Genesis needs something they call the “censor rule” that means: if something is true, then something else can’t be true. For instance, if a character is dead, the person cannot become happy.

I'd like to know what the other rules are, but they don't say.
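Still, here's a guess at what the two rules they do mention might look like, with a completely invented triple-based representation (the real Genesis data structures are surely nothing like this) :

```python
# Guesswork at deduction and "censor" rules, the only two of Genesis's seven
# rule types the article names. The triple representation is invented.

facts = {("Duncan", "is", "dead")}

deduction_rules = [
    # if X is dead, then X cannot act
    (("?x", "is", "dead"), ("?x", "cannot", "act")),
]
censor_rules = [
    # if X is dead, then "X becomes happy" can't be true
    (("?x", "is", "dead"), ("?x", "becomes", "happy")),
]

def bind(pattern, fact):
    """Match a pattern like ('?x', 'is', 'dead') against a fact, returning
    variable bindings, or None if they don't match."""
    bindings = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def substitute(pattern, bindings):
    return tuple(bindings.get(p, p) for p in pattern)

# Deduction: derive new facts from old ones.
for condition, conclusion in deduction_rules:
    for fact in list(facts):
        b = bind(condition, fact)
        if b is not None:
            facts.add(substitute(conclusion, b))

def allowed(candidate):
    """Censor: reject a statement contradicted by what we already know."""
    for condition, blocked in censor_rules:
        for fact in facts:
            b = bind(condition, fact)
            if b is not None and substitute(blocked, b) == candidate:
                return False
    return True

print(facts)                                    # now includes ('Duncan', 'cannot', 'act')
print(allowed(("Duncan", "becomes", "happy")))  # False: the censor rule fires
```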
When given a story, Genesis creates what is called a representational foundation: a graph that breaks the story down and connects its pieces through classification threads and case frames and expresses properties like relations, actions, and sequences. Then Genesis uses a simple search function to identify concept patterns that emerge from causal connections, in a sense reflecting on its first reading. Based on this process and the seven rule types, the program starts identifying themes and concepts that aren’t explicitly stated in the text of the story.
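The article doesn't describe the actual data structures, but the general shape of the idea, story elements joined into a graph by causal connections which can then be re-walked, might be something like this sketch. Everything here is my invention :

```python
# A crude sketch of a "representational foundation": story events as nodes,
# causal connections as edges. The real Genesis representation is far richer;
# all the structures and links here are invented.

from collections import defaultdict

events = [
    ("Macbeth", "murders", "Duncan"),
    ("Macduff", "kills", "Macbeth"),
]

# Causal edges, as might be inferred by rules during a first reading.
causes = defaultdict(list)
causes[("Macbeth", "murders", "Duncan")].append(("Macduff", "kills", "Macbeth"))

def consequences(event):
    """Walk the causal edges from one event: a crude 'reflection' on the
    connections that the first reading laid down."""
    frontier, found = [event], []
    while frontier:
        current = frontier.pop()
        for effect in causes[current]:
            if effect not in found:
                found.append(effect)
                frontier.append(effect)
    return found

print(consequences(("Macbeth", "murders", "Duncan")))
# [('Macduff', 'kills', 'Macbeth')]
```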
He typed a sentence into a text window in the program: “A bird flew to a tree.” Below the text window I saw case frames listed. Genesis had identified the actor of the story as the bird, the action as fly, and the destination as tree. There was even a “trajectory” frame illustrating the sequence of action pictorially by showing an arrow hitting a vertical line. Then Winston changed the description to “A bird flew toward a tree.” Now the arrow stopped short of the line.
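That case-frame output is simple enough to imitate in a few lines. This is just a hand-rolled pattern, nothing like whatever parser Genesis actually uses, but it shows the shape of the result :

```python
import re

def case_frame(sentence):
    """Crude imitation of the case-frame analysis for the bird sentences.
    A real system would use a proper English parser; this is a toy pattern."""
    m = re.match(r"A (\w+) flew (to|toward) a (\w+)\.?", sentence)
    if m is None:
        return None
    actor, preposition, destination = m.groups()
    return {
        "actor": actor,
        "action": "fly",
        "destination": destination,
        # "to" implies the trajectory reaches the tree; "toward" does not.
        "reaches_destination": preposition == "to",
    }

print(case_frame("A bird flew to a tree."))      # the arrow hits the line
print(case_frame("A bird flew toward a tree."))  # the arrow stops short
```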
“Now let’s try Macbeth,” Winston said. He opened up a written version of Macbeth, translated from Shakespearean language to simple English. Gone were the quotations and metaphors; the summarised storyline had been shrunk to about 100 sentences and included only the character types and the sequence of events. In just a few seconds Genesis read the summary and then presented us with a visualisation of the story. Winston calls such visualisations “elaboration graphs.” At the top were some 20 boxes containing information such as “Lady Macbeth is Macbeth’s wife” and “Macbeth murders Duncan.” Below that were lines connecting to other boxes, connecting explicit and inferred elements of the story. What did Genesis think Macbeth was about? “Pyrrhic victory and revenge,” it told us. None of these words appeared in the text of the story.

100 sentences is shorter even than anything the Reduced Shakespeare Company produces. Still, I wonder if it could do the reverse : ask it for a story about revenge and have it construct something.
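For what it's worth, here's a guess at how a concept pattern like "revenge" might be matched against such a graph : one harm event causally leading to the victim harming the original aggressor. The pattern definition is mine, not Genesis's :

```python
# A guessed "revenge" concept pattern: a harm event causally leading to the
# victim harming the original aggressor. Pattern and events are invented.

HARM_VERBS = {"murders", "kills", "harms"}

def find_revenge(events, leads_to):
    """events: (actor, verb, target) triples; leads_to(a, b) says whether
    event a causally leads to event b in the story graph."""
    matches = []
    for first in events:
        for second in events:
            if (first[1] in HARM_VERBS and second[1] in HARM_VERBS
                    and first[2] == second[0]   # the victim strikes back...
                    and first[0] == second[2]   # ...at the original aggressor
                    and leads_to(first, second)):
                matches.append((first, second))
    return matches

events = [("Macbeth", "harms", "Macduff"),
          ("Macduff", "kills", "Macbeth")]
# Stand-in for real causal links: just use narrative order.
order = {e: i for i, e in enumerate(events)}
print(find_revenge(events, lambda a, b: order[a] < order[b]))
```

Note that the word "revenge" never appears in the events themselves, which is presumably the whole point.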
The use of stories is similar to a description in the first Science of Discworld book of humans as the "storytelling ape". There's also evidence that animals have episodic memories, so, as I've said before, animal intelligence might be fundamentally different from ours, or it might be that we're basically the same but with a greater degree of complexity. And while storytelling may be an important component of intelligence, it doesn't strike me as necessary for being self-aware, having a will or desires, or other aspects of sentience. The property of a "story" here is interesting too :
Their idea is that humans were the only species who evolved the cognitive ability to do something called “Merge.” This linguistic “operation” is when a person takes two elements from a conceptual system—say “ate” and “apples”—and merges them into a single new object, which can then be merged with another object—say “Patrick,” to form “Patrick ate apples”—and so on in an almost endlessly complex nesting of hierarchical concepts. This, they believe, is the central and universal characteristic of human language, present in almost everything we do.

Finally, this brings me back to the notion of intelligence. Defining knowledge is hard enough, but what do we mean when we say we understand something ? My working definition is that understanding is knowledge of how a thing relates to other things. The more knowledge of this we have, the greater our understanding. So in that sense the Genesis program can be said to have a sort of understanding, albeit at a purely linguistic level. It has no knowledge of sensation or perception, so its understanding is incomplete.
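Going back to the Merge idea for a moment : as a bare data structure it's almost trivially simple, just pairing two objects into a new object that can itself be paired again. This is only the binary-combination skeleton, not the linguistic theory :

```python
# "Merge" reduced to its bare combinatorial skeleton: two elements become
# one new object, which can itself be merged again. The linguistics is
# obviously far more than this.

def merge(a, b):
    return (a, b)

vp = merge("ate", "apples")        # ('ate', 'apples')
sentence = merge("Patrick", vp)    # ('Patrick', ('ate', 'apples'))
print(sentence)

# And the nesting can continue indefinitely:
print(merge("said", merge("that", sentence)))
```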
But this is hardly a perfect definition of understanding. In my undergraduate days I found that once the maths exceeded a certain level, then even given complete knowledge up to that point, I simply wasn't able to understand anything further. Telling me a sufficiently complex mathematical formula would impart no further knowledge or understanding to me whatsoever. And I can have full, complete knowledge of a cube, but I won't necessarily understand all its interactions. Does that mean I don't understand the cube, or the external systems ?
I don't know. In short, there are tonnes of interesting things in the world and too little time to study them all.
The Storytelling Computer - Issue 75: Story - Nautilus
What is it exactly that makes humans so smart? In his seminal 1950 paper, "Computing Machinery and Intelligence," Alan Turing argued human intelligence was the result of complex symbolic reasoning. Marvin Minsky, cofounder of the artificial intelligence lab at the Massachusetts Institute of Technology, also maintained that reasoning, the ability to think in a multiplicity of ways that are hierarchical, was what made humans human.