Very interesting though extremely long article (and I say that without fear of hypocrisy) with a lot of thought-provoking content. Via Dan Weese.
Cognitive science addresses philosophical questions—What is a mind? What is the mind’s relationship to the body? How do we perceive and make sense of the outside world?—but through empirical research rather than through reasoning alone.
Where does the mind end and the world begin? Is the mind locked inside its skull, sealed in with skin, or does it expand outward, merging with things and places and other minds that it thinks with? What if there are objects outside—a pen and paper, a phone—that serve the same function as parts of the brain, enabling it to calculate or remember? You might say that those are obviously not part of the mind, because they aren’t in the head, but that would be to beg the question. So are they or aren’t they?
Andy Clark, a philosopher and cognitive scientist at the University of Edinburgh, believes that the mind extends into the world and is regularly entangled with a whole range of devices. Clark rejects the idea that a person is complete in himself, shut in against the outside, in no need of help.
Clark started musing about the ways in which even adult thought was often scaffolded by things outside the head. There were many kinds of thinking that weren’t possible without a pen and paper, or the digital equivalent—complex mathematical calculations, for instance. They were, in fact, integral components of certain kinds of thought. And so, if thinking extended outside the brain, then the mind did, too.
I, for one, am crap at mental arithmetic. Can't do it, never been able to get much of a grip on it. I'm not much good at numerical maths with a pen and paper either, but I can understand the concept of a differential equation or a geodesic in curved space, say. Which raises the question of just how far these mental tricks can take us. Do we actually need to understand trigonometry, for example, or is it enough to remember that it's a way of relating the sides and angles of a triangle, and let a computer handle the details? This is how machines are traditionally used, though anything seen as reducing the mental burden has long been accused of dumbing down (in a way that physical manipulation tools haven't). "What if you find yourself without a calculator?", they said. Never going to happen. Maybe we should embrace this reality and focus on the things we really do have to remember, rather than trying to turn ourselves into better adding machines when we already carry fantastically advanced computing equipment more or less everywhere we go. Maybe we'd come up with better, more sophisticated ideas if we dedicated ourselves to that instead of spending years learning to crunch numbers. Or perhaps not, I don't know.
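To make the offloading concrete, here's a toy sketch in Python (my own illustration, nothing from the article): you remember only that trigonometry relates the sides and angles of a triangle, and let the machine remember the law of cosines.

```python
import math

def angle_from_sides(a, b, c):
    """Angle (in degrees) opposite side c, via the law of cosines.

    The human remembers *that* sides determine angles; the
    machine remembers *how*: cos(C) = (a^2 + b^2 - c^2) / (2ab).
    """
    cos_c = (a**2 + b**2 - c**2) / (2 * a * b)
    return math.degrees(math.acos(cos_c))

# A 3-4-5 triangle is right-angled, so the angle opposite
# the longest side should come out as 90 degrees.
print(angle_from_sides(3, 4, 5))  # 90.0
```

Whether the pair of me-plus-script "understands" trigonometry any less than someone who does it longhand is, I suppose, exactly Clark's question.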
He thinks we are all cyborgs, in the most natural way. Without the stimulus of the world, an infant could not learn to hear or see, and a brain develops and rewires itself in response to its environment throughout its life. Any human who uses language to think with has already incorporated an external device into his most intimate self, and the connections only proliferate from there. In Clark’s opinion, this is an excellent thing. The more devices and objects there are available to foster better ways of thinking, the happier he is.
How is it that human thought is so deeply different from that of other animals, even though our brains can be quite similar? The difference is due, he believes, to our heightened ability to incorporate props and tools into our thinking, to use them to think thoughts we could never have otherwise. If we do not see this, he writes, it is only because we are in the grip of a prejudice—“that whatever matters about my mind must depend solely on what goes on inside my own biological skin-bag, inside the ancient fortress of skin and skull.”
A friend of mine is pretty convinced that it's due to language, which is perhaps the ultimate expression of mental tool use. I'm not so sure. I find it increasingly unlikely that animals don't have a comparable language, albeit quantitatively and perhaps qualitatively different. Or, as written in a Backreaction post (http://backreaction.blogspot.cz/2016/03/can-we-get-some-sympathy-for-nerdy.html):
Still, there is a stage of research that remains lonely. That phase in which you don’t really know just what you know, when you have an idea but you can’t put it into words, a problem so diffuse you’re not sure what the problem is... You will feel stupid and you will feel lonely and you will feel like nobody can understand you – because nobody can understand you.
There is an interesting phase of thought which cannot be expressed linguistically: a feeling that the key mental leaps have been made, but the capacity to articulate the results verbally has not yet been achieved. This sensation is by definition impossible to describe, but for me it's most common when thinking about something complicated. The nearest thing to it is that feeling when a memory is on the tip of the tongue but the words won't come out. So something deeper than linguistic processing may be going on, though another interpretation is that these mental tools are being utilised at a subconscious level and it's only the leap to the conscious level that's hard to make (rather than the leap from some deeper mode of thought into the crystallised form of language). To return to the New Yorker article:
When the paper first circulated, in 1995, many found it outlandish. But, as the years passed, and better devices became available, and people started relying on their smartphones to bolster or replace more and more mental functions, Clark noticed that the idea of an extended mind had come to seem almost obvious.
There's much to be said for the CTRL+F search function. User interfaces should not be taken lightly.
He came to believe that if you were going to figure out how intelligence worked you had always to remember the particular tasks for which it had evolved in the first place: running away from predators and toward mates and food. A mind’s first task, in other words, was to control a body. The idea of pure thought was biologically incoherent: cognition was always embodied. Cognition was a network of partly independent tricks and strategies that had evolved one by one to address various bodily needs. The line between action and thought was more blurry than it seemed. A creature didn’t think in order to move: it just moved, and by moving it discovered the world that then formed the content of its thoughts.
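The "move first, think later" picture can be caricatured in a few lines of Python (a toy of my own devising, not anything from Clark or the article): an agent that wanders blindly, whose "knowledge" of the world is nothing more than the record of what its body has bumped into.

```python
import random

def explore(world, start, steps, seed=0):
    """A creature that doesn't think in order to move: it just
    moves, and what it encounters along the way becomes the
    content of its 'thoughts'.

    `world` maps grid positions to features (food, predators...);
    the agent's knowledge is simply the subset its random
    wandering has revealed.
    """
    rng = random.Random(seed)
    x, y = start
    known = {}
    for _ in range(steps):
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x, y = x + dx, y + dy
        if (x, y) in world:
            known[(x, y)] = world[(x, y)]  # perception follows action
    return known

world = {(1, 0): "food", (0, 2): "predator", (3, 3): "mate"}
print(explore(world, start=(0, 0), steps=50))
```

Everything this creature "knows" was generated by movement; there was never a disembodied map to consult first.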
It seems reasonable to suppose that you can't have pure intelligence devoid of anything else: you have to have something for it to be intelligent about. But why should that have to be an external environment? Why not just pure data? And if, after some development, you construct a mechanism that acts intelligently and then strip away its information, does it cease to be intelligent (and if so, why)? Or can you then just copy it endlessly and produce unlimited intelligent devices? Is the external data necessary to the development of an intellect, to its actual functioning, or both?
Perception did not, then, simply work from the bottom up; it worked first from the top down. What you saw was not just a signal from the eye, say, but a combination of that signal and the brain’s own ideas about what it expected to see, and sometimes the brain’s expectations took over altogether. Perception, then, was not passive and objective but active and subjective. It was, in a way, a brain-generated hallucination: one influenced by reality, but a hallucination nonetheless. This top-down account of perception had, in fact, been around for more than two hundred years. Immanuel Kant suggested that the mind made sense of the complicated sensory world by means of innate mental concepts.
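This top-down account has a standard toy formalisation in the predictive-processing literature, which I'll sketch here in Python (my illustration, not the article's): treat the percept as a precision-weighted blend of the brain's prior expectation and the incoming signal. When the signal is trusted, perception tracks the eye; when it's noisy, the expectation "takes over", exactly as described above.

```python
def perceive(prior_mean, prior_var, signal, signal_var):
    """Gaussian fusion of a top-down expectation and a
    bottom-up signal: each source is weighted by its
    precision (inverse variance)."""
    w_prior = 1.0 / prior_var
    w_signal = 1.0 / signal_var
    return (w_prior * prior_mean + w_signal * signal) / (w_prior + w_signal)

# Trustworthy signal, vague expectation: the percept follows the eye.
print(perceive(prior_mean=0.0, prior_var=100.0, signal=5.0, signal_var=1.0))
# Noisy signal, confident expectation: the expectation takes over.
print(perceive(prior_mean=0.0, prior_var=1.0, signal=5.0, signal_var=100.0))
```

On this picture the "brain-generated hallucination" is just the prior; reality gets a vote in proportion to how reliable the senses are.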
I'll have more on this later, if only people will stop bombarding me with interesting links.
He had come far enough that he had now to confront a question: If cognition was a deeply animal business, then how far could artificial intelligence go? “There’s something very interesting about life,” Clark says, “which is that we do seem to be built of system upon system upon system. The smallest systems are the individual cells, which have an awful lot of their own little intelligence, if you like—they take care of themselves, they have their own things to do. Maybe there’s a great flexibility in being built out of all these little bits of stuff that have their own capacities to protect and organize themselves. I’ve become more and more open to the idea that some of the fundamental features of life really are important to understanding how our mind is possible. I didn’t use to think that. I used to think that you could start about halfway up and get everything you needed.”
As I wrote elsewhere recently, I would have expected the goal of an AI to be human-like only in the loose sense of being able to analyse and understand data logically. There's little or no point in producing a human that shares all our emotional biases and perceptual difficulties but happens to be made of metal, the major exception being transhumanism. I think a robo-Vulcan would be far more useful than a robo-human: a device which you can feed data, which will analyse it, check for incompleteness and underlying assumptions, do a whole bunch of other processing, and eventually spit out a conclusion with caveats. A device that you can feed data but which refuses to work until it has another cup of tea, and complains that it's bored, and oh by the way isn't the weather today terrible, seems less appealing. But it's fascinating to consider that maybe all these underlying quirks aren't a side-effect of intelligence, but actually necessary for its construction.
Originally shared by Emmanuel Florac
There were many kinds of thinking that weren’t possible without a pen and paper, or the digital equivalent—complex mathematical calculations, for instance. Writing prose was usually a matter of looping back and forth between screen or paper and mind: writing something down, reading it over, thinking again, writing again. The process of drawing a picture was similar. The more he thought about these examples, the more it seemed to him that to call such external devices “scaffolding” was to underestimate their importance. They were, in fact, integral components of certain kinds of thought. And so, if thinking extended outside the brain, then the mind did, too.
https://www.newyorker.com/magazine/2018/04/02/the-mind-expanding-ideas-of-andy-clark
Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby