Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Friday, 6 July 2018

The gorilla test : why what's obvious is not what's obvious

Aeon may have the occasional duff article about Aztecs being a wonderful and charming people with a group morality we should emulate, but it still has just about the highest ratio of good/bad articles out there. This is definitely one of the good ones.

I've been thinking for a while that, contrary to what's often reported, the vast majority of the time human behaviour is actually very rational and logical. If it weren't, people would be constantly stabbing themselves in the eye and walking into walls. They don't. Problems with logic mostly occur in processes of higher reasoning, and while it's interesting to see that they can happen at a more basic level, the importance of this is prone to exaggeration. It would also, as the article points out, perhaps be more useful to study under what conditions we perceive things more directly.

It’s a fact that most people who watch the clip miss the gorilla. But it does not necessarily follow that this illustrates – as both the study’s authors and Kahneman argue – that humans are ‘blind to the obvious’. A completely different interpretation of the gorilla experiment is possible.

Imagine you were asked to watch the clip again, but this time without receiving any instructions. You might report that you saw two teams passing a basketball. You are very likely to have observed the gorilla. But having noticed these things, you are unlikely to have simultaneously recorded any number of other things. The clip features a large number of other obvious things that one could potentially pay attention to and report: the total number of basketball passes, the overall gender or racial composition of the individuals passing the ball, the number of steps taken by the participants.

In short, the list of obvious things in the gorilla clip is extremely long. And that’s the problem: we might call it the fallacy of obviousness. There’s a fallacy of obviousness because all kinds of things are readily evident in the clip. But missing any one of these things isn’t a basis for saying that humans are blind. The experiment is set up in such a way that people miss the gorilla because they are distracted by counting basketball passes. Preoccupied with the task of counting, missing the gorilla is hardly surprising. In retrospect, the gorilla is prominent and obvious.

But the very notion of visual prominence or obviousness is extremely tricky to define scientifically, as one needs to consider relevance or, to put it differently, obviousness to whom and for what purpose?

The alternative interpretation says that what people are looking for – rather than what people are merely looking at – determines what is obvious. Obviousness is not self-evident. Or as Sherlock Holmes said: ‘There is nothing more deceptive than an obvious fact.’

In other words, there is no neutral observation. The world doesn’t tell us what is relevant. Instead, it responds to questions. When looking and observing, we are usually directed toward something, toward answering specific questions or satisfying some curiosities or problems. ‘All observation must be for or against a point of view,’ is how Charles Darwin put it in 1861. Similarly, the art historian Ernst Gombrich in 1956 emphasised the role of the ‘beholder’s share’ in observation and perception.

Quite simply, this human-centric and question-driven view of perception cannot be reconciled with Kahneman’s presumptions concerning obviousness, which make it a function of the actual characteristics of the things in front of us (their size, contrast or colour) without acknowledging the Suchbild, the cognitive orientation or nature of the perceiver.

In essence, Kahneman deems the gorilla to be obvious, but doesn't provide any reason why it should be any more obvious than any other aspect of the scene. I think he's right to point out that we don't realise our biases, however. But how you interpret data is, as I make it a point to tell students, something that happens in your brain and nowhere else. Data doesn't speak for itself.

The problem, as Sherlock Holmes put it, ‘lay in the fact of there being too much evidence. What was vital was overlaid and hidden by what was irrelevant.’ So, given the problem of too much evidence – again, think of all the things that are evident in the gorilla clip – humans try to hone in on what might be relevant for answering particular questions. We attend to what might be meaningful and useful.

Knowing what to observe, what might be relevant and what data to gather in the first place is not a computational task – it’s a human one. The present AI orthodoxy neglects the question- and theory-driven nature of observation and perception. The scientific method illustrates this well. And so does the history of science. After all, many of the most significant scientific discoveries resulted not from reams of data or large amounts of computational power, but from a question or theory.

Any number of apples, and other objects for that matter, had undoubtedly been observed to have fallen before Newton’s observation. But it was only with Newton’s question and theory that this mundane observation took on new relevance and meaning... Similarly, theories of a heliocentric Universe caused the ‘obvious’ observations of the Sun circling the Earth – or the retrograde loops of planets (as observed from Earth) – to take on completely new meaning and relevance.

Highlighting bias and blindness is certainly catchy and fun. And the argument that humans are blind to the obvious is admittedly far more memorable than an interpretation that simply says that humans respond to questions. But scientists’ own preoccupation with blindness risks driving the type of experiments scientists construct, and what they then observe, look for, focus on, and say. And looking for validations of blindness and bias, they are sure to find them.

Intelligence and rationality are more than just calculation or computation, and have more to do with the human ability to attend to and identify what is most relevant... The human ability to ask new questions, to generate hypotheses, and to identify and find novelty is unique and not programmable. No statistical procedure allows one to somehow see a mundane, taken-for-granted observation in a radically different and new way. That’s where humans come in.

Oh, I bet you could program a computer to generate new hypotheses, but not to invent concepts. Maybe you could do this one day if we understood how we do it ourselves, but at present it's a case of "show me one atom of justice", as a wise man once put it. I don't believe computers will be capable of that kind of thought for a long time yet, at best.

https://aeon.co/essays/are-humans-really-blind-to-the-gorilla-on-the-basketball-court?utm_medium=feed&utm_source=atom-feed

18 comments:

  1. I saw the "Gorilla in our Midst" video: the gorilla suit guy was the most obvious thing in it. I suppose this puts me in a very small percentage of people. Most legerdemain tricks and close-up magic tricks don't work on me either.

    I have come to believe this Consciousness Thingummy is nothing more than an elaborate set of bandpass filters and routing mechanisms. We're learning a great deal about these filters and routers, not by studying the brain but rather the sensory nervous system, especially the eye, though we're learning a hell of a lot from the ear as well.

    Form follows function: there's a whole layer in the retina dedicated to movement. Your eyes face forward because you're a predator; a prey animal's eyes face sideways. A predator trades a wide field of vision for enhanced stereoscopic vision. There are hundreds, thousands of these little adaptations throughout our bodies.

    But this much seems certain: humans are capable of rational, logical thinking. But that requires another set of filters and routers, which aren't exercised very often, if at all, in most people - and only for about as long as anyone can hold their breath. I'm not one of those tiresome contrarians who adjust their spectacles and loudly enunciate "You are wrong", but honestly, to think people are rational requires more faith than St. Augustine trying to squeeze the glass slipper of Reason onto the calloused old foot of Belief. It's not going to happen.

    It could happen, okay - I'm one of those people for whom the Gorilla in our Midst dropout phenomenon doesn't work. And I do believe machines are capable of hypothesis construction: every neural network composes one. And furthermore, if that hypothesis is overtrained, it will refuse to learn anything more, another irritating trait of actual human beings.
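
    A minimal sketch of that "overtrained hypothesis" point, using a polynomial fit as a stand-in for a network (the names and numbers here are purely illustrative): the over-flexible model drives its training error towards zero while typically generalising worse than the simpler one.

    # Toy illustration only: a curve fit standing in for a trained network.
    # The over-flexible model tends to memorise the training noise
    # ("overtraining") and to do worse on unseen data than the simpler model.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        x = rng.uniform(-1, 1, n)
        y = np.sin(3 * x) + rng.normal(0, 0.3, n)   # true signal plus noise
        return x, y

    x_train, y_train = make_data(20)
    x_test, y_test = make_data(200)

    for degree in (3, 15):                           # modest vs over-flexible hypothesis
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")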

  2. In Terry Pratchett's Discworld novels, he often mentions how the inhabitants do not see what is in front of them because they know it is impossible.

    The anthropomorphic personification of Death and his step-granddaughter Susan Sto Helit are practically invisible.

    Except to wizards. They get special training to see what is actually there.

  3. We filter the insignificant. We have to, we don't have the capacity to process all stimuli equally, at least not in real-time.

    The question being what, at any given moment, is given significance.

  4. The various costs of information and cognition, of both model formation and abandonment, and what Mark Blyth has noted as the power of even bad models (his domain is economics) to produce predictable behaviours, are all factors I find under-accounted for.

    Chris C: another large part of our ongoing ramble.

    John Hummel: of possible interest.

  5. Dan Weese I'm one of the people who didn't spot the gorilla and was extremely confused by a second viewing. :)

    There's routine behaviour and then there's higher reasoning. People seem to be almost entirely logical and rational about the former, otherwise they'd quickly die from trying to inhale forks or something, but much less so about the latter. Exceptions to both are interesting. Personally though, I'd like to hear a lot more about the conditions under which people think rationally rather than the cases where the brain can be fooled.

    In terms of raw perception about the world, it seems to me that in almost all instances the brain does a good job. If it didn't, we'd be constantly trying to avoid walls that weren't there and running in terror from invisible vampire badgers and so on. Our senses and perception can't be all that bad, assorted optical illusions and absent gorillas notwithstanding.

    Higher reasoning, now that's where things get interesting. That's where logic more frequently goes haywire and people start believing that wotsit Hitler is going to bring back beautiful clean coal and suchlike. And we seem to have plenty of research on the exceptions to our logical everyday behaviour/perception, which are cool and all, but precious little about the rare exceptions where we make logical decisions about complex issues. That would be an interesting study indeed. A correlation (or lack thereof) between susceptibility to optical illusions/gorilla spotting and complex reasoning would be rather fun. I've long wondered if and how visual pattern recognition is connected to logical reasoning.


    I'll grant that computers can formulate hypotheses but not that they can form concepts. Show me a jumped-up abacus that understands "justice, duty, mercy, that sort of thing..." :)

  6. Give it time. The concepts are straightforward and technology is in exponential advancement mode.

  7. Given my experience with the inaptly named "justice system", I would have preferred an AI on the bench.

  8. Thanks for the heads-up, Edward Morbius. This is indeed right up my alley.

    I have a few comments, for what they're worth:

    From the article:

    "The present AI orthodoxy neglects the question- and theory-driven nature of observation and perception."

    That's for damned sure, and as the author suggests, they do so at their great peril. Without knowing it, AI is participating in a very old debate in science and philosophy between nativism and empiricism, and it is coming down very squarely on the side of empiricism, with all the good and bad that implies.

    And then, "After all, many of the most significant scientific discoveries resulted not from reams of data or large amounts of computational power, but from a question or theory."

    Yes!

    "Intelligence and rationality are more than just calculation or computation, and have more to do with the human ability to attend to and identify what is most relevant..."

    This is the second time the author has said that bias is non-computational. I think I understand what (s)he is getting at, but I also think it's a mistake. Biases come from constraints and goals, and the exploitation of the former in the service of the latter is decidedly a computational exercise.

    "The human ability to ask new questions, to generate hypotheses, and to identify and find novelty is unique and not programmable."

    This is just wrong. However, "No statistical procedure allows one to somehow see a mundane, taken-for-granted observation in a radically different and new way," is absolutely right.

    Conclusion: Human computations are not merely statistical. (This goes back to the deep, if unrecognized, commitment to hard-core empiricism.)

    And then from Rhys Taylor:

    "Oh, I bet you could program a computer to generate new hypothesis, but not to invent concepts."

    I agree on the hypothesis generation, not on the "no new concepts". Humans do it, so unless it's magic, it can be done computationally.

    "... but at present it's a case of "show me one atom of justice", as a wise man once put it."

    I'm working on it. I think Rhys and the author are right that the secret lies in stepping away from purely statistical approaches to learning, into explicitly relational approaches.

    "I don't believe computers will be capable of that kind of thought for a long time yet, at best."

    Not as long as AI remains enamored with purely statistical learning algorithms, they won't. But in AI's "defense", it took natural selection billions of years to wean itself off those as well.

    I take a kind of sad comfort in knowing that AI, at least as most commonly practiced, is still nowhere near developing anything as destructively creative as SkyNet.

  9. John Hummel Based on what people are like, it isn't magical. If it was, then it would be a really weird, shitty kind of magic and a waste of perfectly decent mystical woo.

    On the other hand, it doesn't seem to be a purely mathematical process either, and the only safe conclusion is that consciousness is annoying.

  10. Cognition is difficult to characterize mathematically at present, Rhys Taylor, but I think that's just because we haven't yet figured out what the algorithms are.

    And you're right about consciousness. Cognition must ultimately lie within the purview of science; but "consciousness" (whatever the hell that is, if anything at all), not so much.

  11. Rhys Taylor John H. Holland had notions on creativity as largely modular recombination: duplicating, inserting, or deleting existing conceptual blocks. That's potentially automatable (see the toy sketch below).

    https://fs.blog/2013/01/building-blocks-and-innovation/

    https://en.wikipedia.org/wiki/John_Henry_Holland

    See also J. Doyne Farmer, now at Oxford.
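
    To make the recombination idea concrete, a toy sketch (the block names and operators are purely illustrative, not Holland's own formalism): candidate "concepts" are just sequences of existing building blocks, and novelty comes from duplicating, inserting, or deleting blocks at random.

    # Toy sketch of modular recombination: new candidate "concepts" are produced
    # by duplicating, inserting, or deleting blocks drawn from an existing repertoire.
    import random

    repertoire = ["lens", "lever", "spring", "wheel", "mirror", "pendulum"]

    def mutate(concept):
        """Return a new candidate concept derived from an existing one."""
        concept = list(concept)
        op = random.choice(["duplicate", "insert", "delete"])
        if op == "duplicate" and concept:
            i = random.randrange(len(concept))
            concept.insert(i, concept[i])          # duplicate an existing block
        elif op == "insert":
            concept.insert(random.randrange(len(concept) + 1),
                           random.choice(repertoire))   # splice in a block from the repertoire
        elif op == "delete" and len(concept) > 1:
            concept.pop(random.randrange(len(concept)))  # drop a block
        return concept

    random.seed(42)
    concept = ["lens", "spring"]
    for step in range(5):
        concept = mutate(concept)
        print(step, concept)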

  12. Well, I'm pretty sure a form of general AI will emerge that's extremely useful and will have profound effects in the not too distant future. But I'm not quite enough of a reductionist to suppose that everything done by an organic brain (human or animal) can be eventually approximated by an algorithm. We'll see.

  13. Rhys Taylor I tend to share your view.

    Holland and what I like to call the "Santa Fe Mafia" -- humorously and with deep respect -- are showing a possible path to an algorithmic creativity. Mind, they're not even claiming it as such, only trying to clearly answer the question of "what is creativity".

    Holland, Farmer, W. Brian Arthur, Geoffrey West, Sander van der Leeuw, David Krakauer, others. An immensely impressive group. See: http://www.santafe.edu

    The process is quite similar to evolution, has direct analogues, and in fact the processes and mechanics of genetic mutation and selection are explicitly referenced. The key hurdle, to use another apt metaphor, is in leaping past local minima to find some yet lower minimum, whether global or regional (sketched below).

    Human intelligence, modelling, experimentation, research, and trial-and-error processes all offer mechanisms for avoiding the stickiness of local minima. As Krakauer posits: "Intelligence is search". I'm not fully convinced of this, but it's quite an intriguing notion.

    http://nautil.us/issue/23/dominoes/ingenious-david-krakauer

    And it suggests that reserving such creativity to the human rather than the algorithmic domain may be inaccurate.
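
    A toy sketch of the local-minima point (the landscape and step sizes are arbitrary, chosen only for illustration): a greedy descent gets stuck in the nearest dip of a bumpy function, while the same descent with occasional large random "mutations" has a good chance of landing in a deeper basin.

    # Toy sketch: greedy descent vs. descent with rare large random jumps,
    # on a bumpy 1-D landscape with many local minima.
    import math
    import random

    def f(x):
        return x * x + 3.0 * math.sin(5.0 * x)   # global minimum near x ~ -0.3

    def descend(x, steps, jump_prob=0.0):
        for _ in range(steps):
            step = random.uniform(-0.05, 0.05)
            if random.random() < jump_prob:
                step = random.uniform(-2.0, 2.0)  # rare large "mutation"
            candidate = x + step
            if f(candidate) < f(x):               # accept only improvements
                x = candidate
        return x

    random.seed(1)
    start = 2.5
    print("greedy only:", round(descend(start, 5000), 3))
    print("with jumps :", round(descend(start, 5000, jump_prob=0.05), 3))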

  14. Rhys Taylor My own personal horror is where we don't bother smartening up a computer and just start wetwiring animals.

  15. On the other hand, if we had that level of technology, it might well be an avenue for human life extension, or at least enhancement.

  16. Life is full of irrelevant information. We can identify some of the most relevant "signals" only by removing "the noise". Unfortunately the brain will do this without consulting our higher thought processes. Martin Gardner called this the "Default Assumption".

    ========================

    A variation of the "invisible gorilla" experiment would be to replace the gorilla with a man pulling out a shotgun and shooting one of the basketball players (with a blank). I would be curious about when the observer registered "the man", "the gun", or "the shot". Because at some point the relevancy of the "background" event ceases to be "noise" and becomes a signal important to survival.

    And another variation would be to just pull the shotgun out and use it as a cane versus adopting a more threatening pose.

    Other variations could be designed to test cultural bias, personal bias, genetic bias. I.e., show that our subconscious will only interrupt higher thought if there's something worth interrupting it with.

  17. Great article, great comment thread. Welcome to Plato's Cave; "Abandon hope all ye who enter here." Bias and saturation are not the same things; curiosity is for those of us who don't take perceptions lightly. I didn't see the clip; didn't need to. I wanted to know the trick even before I would have had to sit through the gimmick. I have crippling neurological challenges that only enhance my impatience with puzzles, tests, symbolic language codes and pretenses of educated authority in general. The "Everyone has blind spots and everyone's blind spots are different" paradox (mine) defeats the expectation of a procedural memory process. Intelligence is not limited to IQ alone, but in fact is just as often manifest in the cultivation of perceivable subtlety.

    Bias is most often only a result of the extremely limited reference frames that we as individuals experience in life. That need to blame others for what it is that they don't see is called a Reverse Referential Index in NeuroLinguistics. (It takes one to know one.) I've always hated magicians and I find them creepy bottom feeders. But then we do live in a modern society where petty things masquerade as real mystique. Inquiry is a cultivated appetite, and the hardest muscle to exercise is patience.

    Maybe because I do see the world backwards, upside down and through a pinhole, I've learned not to waste my time pretending that I can see much of anything clearly at all. Ever. In Gestalt we perceive the difference between the experiential/eidetic awareness and the narrative/categorical analytic. Were we to decipher the actual biases of perception that render blind spots, we might discover that subtle distinctions between actual biases are often obscured by mere saturation, which can render the behaviour of even the most intelligent amongst us quite dull. (Personally I've always thought Plato was very conflicted about the issues of self-awareness and autonomy.)

  18. The consciousness of creativity historically has syncretism as its coalescing product. Organic super-algorithms are too fast to be reduced to mere code languages crunching through modern and soon-to-be-outdated technologies.

    ppireader.blogspot.com - Everyone has blindspots BIAS IS POLARITY


Due to a small but consistent influx of spam, comments will now be checked before publishing. Only egregious spam/illegal/racist crap will be disapproved, everything else will be published.
