Norman is an algorithm trained to understand pictures but, like its namesake, Hitchcock's Norman Bates, it does not have an optimistic view of the world. The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from "the dark corners of the net" would do to its world view.
The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit. Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.
When a "normal" algorithm generated by artificial intelligence is asked what it sees in an abstract shape it chooses something cheery: "A group of birds sitting on top of a tree branch." And where "normal" AI sees a couple of people standing next to each other, Norman sees a man jumping from a window... Regular AI saw, "a person is holding an umbrella in the air." Norman saw, "man is shot dead in front of his screaming wife." Norman's view was unremittingly bleak - it saw dead bodies, blood and destruction in every image.
The fact that Norman's responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT's Media Lab which developed Norman. "Data matters more than the algorithm. It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
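Rahwan's point that "data matters more than the algorithm" can be shown with a toy sketch. This is not the MIT model (which was an image-captioning network); it is a deliberately trivial word-frequency "describer" whose corpora are invented here for illustration. The identical code, fed different data, describes the world in very different terms:

```python
# Toy illustration (not the MIT model): the same algorithm trained on
# different data ends up with a very different "world view".
from collections import Counter

def train(captions):
    """Build a 'model' that is nothing but word frequencies."""
    words = Counter()
    for caption in captions:
        words.update(caption.lower().split())
    return words

def describe(model, n=3):
    """'Describe' the world using the model's most frequent words."""
    return [word for word, _ in model.most_common(n)]

# Invented example corpora, echoing the article's inkblot responses.
cheery = ["birds sitting on a branch",
          "a person holding an umbrella",
          "a couple standing together"]
grim = ["man shot dead",
        "man jumps from window",
        "dead man on the ground"]

normal_ai = train(cheery)
norman = train(grim)

print(describe(normal_ai))
print(describe(norman))
```

Nothing about the algorithm changed between the two runs; only the training data did, yet "Norman" can now only talk about men and death.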
Two things worth remembering for any article about AI:
1) AI has no more understanding of what it's doing than an abacus. It is only processing information. You can say it forms conclusions, but not that it has any beliefs or convictions, because they melt away in the face of new evidence. It does little (if anything) in the way of evaluating the context of information, because it hasn't got the capacity to understand the context.
2) The importance of AI is governed much more by how it's used than what it is. It doesn't really matter if the AI has no true understanding or opinions - these are not necessary for it to be useful and deployed on a large scale.
We do not yet have a good understanding of what intelligence is, or for that matter consciousness, understanding, awareness, critical thinking, wisdom or rationality - or at least how they arise in humans. These concepts may be related but they're not necessarily always the same.
I think it's philosophically interesting to ask whether an intelligence requires bias and conviction. After all, a human who instantly believed any new information and automatically discarded their existing conclusions wouldn't be seen as a great thinker but as an imbecile. That's much closer to what AI currently is.
The ideal AI for me would be a device capable of processing information and objectively assessing its probability of being true, given its existing knowledge. In practice it would also have to be capable of altering its existing knowledge based on new information, so it has to have some basis for weighting probabilities independently of what it already thinks to be true. And I don't think anyone has much of a clue how to do that in a fair way yet. Maybe AI, if and when it happens, will end up being just as subjective as everyone else.
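What's described here, weighing how likely new information is to be true given existing knowledge, then updating that knowledge in turn, is essentially Bayesian updating. A minimal sketch follows; the prior and the likelihood figures are invented purely for illustration, and the hard problem the paragraph raises (where those likelihood weights come from in a fair way) is exactly the part this code takes as given:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability a claim is true, after seeing one piece of evidence.

    The two likelihoods are the 'independent weighting' the text asks for:
    how probable the evidence is if the claim is true vs. if it is false,
    assessed separately from the current belief.
    """
    numerator = p_evidence_if_true * prior
    total = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / total

# Existing knowledge: the claim initially seems unlikely (10%).
belief = 0.10

# Three pieces of supporting evidence arrive, each four times likelier
# to be seen if the claim is true (0.8) than if it is false (0.2).
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.2)

print(round(belief, 3))
```

Note that the existing conclusion isn't discarded wholesale (the imbecile above) nor clung to regardless of evidence: each update shifts the belief by an amount governed by both the prior and the evidence weights.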
http://www.bbc.com/news/technology-44040008
Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby
Yep. AI is blind-shuffling symbols in a formal system devoid of semantics or comprehension. Other than this potentially reflecting back catastrophically upon all of our narcissistic self-beliefs as conscious little snowflakes of unique self-importance (i.e. that a superficial resemblance of consciousness and sentience is possibly all that these states actually are - get out the handkerchiefs and Prozac !), it clearly indicates that, as with pet dogs, kindness and positivity fed into the black box tends to return the same. Whether or not we understand what's going on inside the box, or whether the box possesses the contextual semantics required to understand the ultimate consequences of its own decisions, doesn't (perhaps) matter: a probabilistic biasing towards benevolent outcomes seems only sensible as a wager in this context.
It does make me wonder if we are just hugely elaborate mechanisms for comparing data, if our "understanding" is no more elaborate than Norman's, just more complex.
Rhys Taylor yes. My point precisely. To stretch it further, the algorithmic complexity embodied within our psychic, patterned energy and information matrices may have developed the apparent experience of self and sentience for pure reasons of self-propagation (of complexity, of information and of systems). Stretch it even further, and the information systems going on inside us and experienced as self and sentience may be symbiotic and interdependent with cultural awareness and participatory incentives of various kinds: notionally external, shared or consensus psychic (information, not woo-woo psychic) systems adopted and adapted from the broader human world.
Or not.
😆
Yeah... I keep wondering how on Earth an electrochemical reaction becomes a thought, sensation, or seemingly unlimited mental space. Does a calculator have some crude version of experience (with consciousness being emergent from a suitably elaborate network of internal and external senses, data processing and memory) or is there something much more fundamentally different going on - something of the mystical woo variety or otherwise ?
Also (related), I started watching Westworld on the weekend, which is very good. ;)
Mystical Woo was the guy who taught me Feng Shui and Acupuncture ! But seriously... as a matter of indefinite recursive extensibility, I think the materialist/idealist opposition and ongoing ontological oscillation is not necessary; or is in other senses and as a consequence of synthetic convergence and systems-theoretical sophistication - redundant. There's a lot to unpack there, I know... but it's been a long day here in my part of the planet.
Irreducible mystery bordering on the mystical ? Perhaps some things can only ever be understood in such ways. #entanglement
Westworld was excellent. Highly recommended. There are some nice plot twists...
😉