This first article is - spoiler alert - shite. Absolutely shite. It's self-inconsistent to a degree seldom found outside of presidential tweets, and I'm more than a little skeptical of some of its claims. I'm pretty confident I did more background reading to write this post than the authors did to write theirs, because I actually read the links they cite in support of their claims, and I don't believe they did even that much.
What's the big problem ? Well, the article says not merely that implicit bias is not much of a thing, a notion I'd be prepared to at least entertain, but even that our explicit biases don't matter very much. And that's just... odd. First, the idea that implicit bias isn't much of a thing :
Contrary to what unconscious bias training programs would suggest, people are largely aware of their biases, attitudes, and beliefs, particularly when they concern stereotypes and prejudices. Such biases are an integral part of their self and social identity... Generally speaking, people are not just conscious of their biases, but also quite proud of them. They have nurtured these beliefs through many years, often starting in childhood, when their parents, family, friends, and other well-meaning adults socialized the dominant cultural stereotypes into them. We are what we believe, and our identity and self-concept are ingrained in our deepest personal biases. (The euphemism for these is core values.)

Well, surely everyone is aware of their explicit bias by definition. But the whole point of implicit bias is that it's unconscious. It doesn't arise out of choice, or an active desire to discriminate. This is why people can have implicit biases even against their own social groups. So I don't think the first sentence makes much sense : nothing is "contrary", the two biases are wholly different. And the article has a subheading claiming that "most biases are conscious rather than unconscious", but nothing is offered to support this claim (how would you even measure how many unconscious biases someone has anyway ?). Not a great start.
Contrary to popular belief, our beliefs and attitudes are not strongly related to our behaviours. Psychologists have known this for over a century, but businesses seem largely unaware of it. Organizations care a great deal about employee attitudes both good and bad. That’s only because they assume attitudes are strong predictors of actual behaviors, notably job performance.
However, there is rarely more than 16% overlap (correlation of r = 0.4) between attitudes and behavior, and even lower for engagement and performance, or prejudice and discrimination. This means that the majority of racist or sexist behaviors that take place at work would not have been predicted from a person’s attitudes or beliefs. The majority of employees and managers who hold prejudiced beliefs, including racist and sexist views, will never engage in discriminatory behaviors at work. In fact, the overlap between unconscious attitudes and behavior is even smaller (merely 4%). Accordingly, even if we succeeded in changing people’s views—conscious or not—there is no reason to expect that to change their behavior.

Wait... what ? Doesn't this flatly contradict the first passage that "we are what we believe" ? This all feels highly suspicious. In fact, on reflection I think it's a deliberate attempt to sow confusion.
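(A quick aside on what "overlap" means here : it's just the correlation coefficient squared, i.e. the fraction of the variance in one quantity accounted for by the other. Their own numbers make this explicit :)

```latex
r = 0.4 \;\Rightarrow\; r^2 = 0.16 \;\text{(the ``16\% overlap'')}, \qquad
r = 0.2 \;\Rightarrow\; r^2 = 0.04 \;\text{(the ``4\% overlap'')}
```

So their "4% overlap" for unconscious attitudes corresponds to a correlation of r = 0.2.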
Intuitively, the stronger a belief someone holds, the more likely they are to engage in the corresponding behaviour. But that doesn't mean we should therefore expect the correlation to be extremely strong, because lord knows there are a bunch of things I'd like to do but physically can't*. I firmly believe that flying like Superman would be a jolly good thing, but I don't even enjoy air travel. Hell, I find driving a terrifying experience and would go to extreme lengths to avoid ever having to do it again. Does that stop me wanting a jetpack ? No. And I'd quite like to be an astronaut, but my belief that it would be worthwhile isn't enough to motivate me to drop everything and switch career. That'd be silly.
* Also, on statistical grounds, causation doesn't necessarily equal correlation : a genuine causal link can still show up as only a weak correlation when other factors are at work.
Some beliefs are pure fantasy : there's a genuine difference between belief, desire, and behaviour. Sometimes we just can't act on our beliefs, either because they're physically impossible or because we're subject to other forces like social pressure. We might not want to smoke but give in to peer pressure*, or vice-versa. We use metadata of who believes what just as much as we do raw evidence, and such is the power of this metadata-based reasoning** that people even insert their genitals into people with whom their own sexual orientation is in direct disagreement. The blunt-force manifestation of this is the law, preventing us from doing (or wanting to do) things we might otherwise be inclined to attempt. Law not only constrains our actions directly but also culturally restrains us from even having certain desires*** in the first place.
* I'm hoping this example is hugely dated and no longer much of a thing.
** Not sure what else to call it. "Groupthink" has the more specific meaning of a false consensus, "peer pressure" is too direct, and "thinking in groups" just doesn't cut it. See these posts for an overview.
*** Hugely simplifying. I'm not saying we'd all become murderers without laws, but these footnotes are already excessive.
Then there are conflicting beliefs. People may believe, say, that all ginger people are evil, but also that it's important to be respectful, in which case it should be no surprise that their behaviour doesn't always reflect one of those beliefs. If, on the other hand, they believe ginger people are evil and that they should always express themselves honestly, then it should come as no surprise to find them saying nasty things about redheads. Personality plays a huge role.
All this undermines the direct implication of the above quote that we should expect a very strong correlation between beliefs and behaviours. Even if beliefs do drive behaviours, other constraints are also at work. Conversely, this also lends some support to the idea that those who do engage in discriminatory behaviours do not necessarily hold prejudiced views : "I was only following orders" and all that. You don't even necessarily need to dislike them to be convinced you need to murder them.
But I think it's a complete nonsense (literally, it does not make sense) to go that extra step and suggest that changing beliefs won't change behaviour. As the old saying goes, "not everyone who voted for Brexit was a racist, but everyone who was a racist voted for Brexit" : things aren't always neatly reversible (granted it's more subtle than that). Changing beliefs can change behaviour either directly or by changing the pervading culture of laws, rules, and social norms.
So the fact that there's only a modest correlation between belief and behaviour in no way whatsoever implies, in my view, that changing beliefs won't change behaviour. Just because belief isn't the dominant driver of behaviour doesn't mean it isn't a driver at all. Indeed, it could well be the single biggest individual contributing factor, just smaller than the sum of all the rest. As we'll see, there's some evidence for that.
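To make that concrete, here's a toy simulation. The numbers are entirely made up, chosen only so that the headline correlation comes out at the Fast Company figure of r = 0.4 : belief is the single biggest driver of behaviour, yet the other factors together still swamp it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Made-up weights : belief contributes 16% of the variance in behaviour,
# six other factors (social pressure, law, personality...) 14% each.
belief = rng.standard_normal(n)
others = rng.standard_normal((6, n))
behaviour = 0.4 * belief + np.sqrt(0.14) * others.sum(axis=0)

# Belief is the largest single contributor (16% vs 14% apiece), but the
# six others sum to 84% - and the correlation duly comes out at ~0.4.
print(np.corrcoef(belief, behaviour)[0, 1])
```

Note that in this toy model, changing the belief term still changes the behaviour, despite the "weak" correlation.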
I'm also not at all convinced by the claim that most racist behaviour couldn't have been predicted from people's attitudes - the weak correlation alone doesn't feel like good evidence for this. Yes, this could happen to a degree, due to social forces etc. But at some level, someone consciously engaging in discriminatory practices (unlike implicit bias) must by definition hold discriminatory beliefs. And how are they defining "racist or sexist" behaviour here anyway ? This matters. I'll quote myself here because I think the concept is useful :
The "angry people in pubs", on on the internet, are often what we might call armchair bigots. They won't, unless strongly pressured, ever take physical action against people they dislike. But they are all too willing to vote on policies which harm other people. They care just enough about the truth to have the decency to be embarrassed (even if only unconsciously) by their opinions, but they're all too happy to absolve themselves of responsibility and let other people do their dirty work. This is a very dangerous aspect of democracy, in that it makes villainy much easier by making it far less personal.So this claim about how attitudes and behaviour correlate in a strange way is just too weird to reduce it to a few casual sentences. More explanation is required.
The article links three papers in this passage. The first, which they cite in support of the "weak" correlation, says in the very first sentence of its abstract that it's social pressures which weaken or moderate the belief/behaviour trend, as already discussed. It also notes that a correlation of 0.4 is pretty strong by the standards of psychological research. It's a meta-study of almost 800 investigations, and doesn't mention racism or sexism (which is what the Fast Company article is concerned with). It explicitly says that beliefs predict some behaviours better than others, which is hardly unexpected.
So I call foul on the Fast Company article : it uses a general study to support specific claims the original research doesn't back up, and that research does not say the trend is weak in all cases. Sure, social pressure is important, but it's completely wrong to say this means beliefs don't matter.
The second paper is also concerned with measuring the correlation but has nothing to do with prejudice. It's a bit odd to even mention it. The third paper, from 1996, is concerned with prejudice and discrimination and does indeed find a weaker correlation than the general case (r=0.3) - provisionally.
It's a good read. It's another meta-study, and it describes the limitations and inhomogeneities of the various samples - but straight away even the Fast Company claim of a weaker correlation looks to be on shaky ground. Correcting for the different sample sizes, the paper shows the correlation rises from 0.286 to 0.364, barely lower than the general average. And exactly which parameters are used affects this further, with the highest average (sample-size corrected) correlation being 0.416 when examining support for racial segregation. Some individual studies reported even higher correlations, in excess of 0.5 or even 0.6. Overall, correlations were stronger for survey data than for experimental data, perhaps because - they suggest - people don't want to openly appear to be prejudiced.
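For anyone wondering what "correcting for sample size" actually involves, at its simplest it's a weighted average : each study's correlation counts in proportion to how many people it measured. A minimal sketch, with invented numbers rather than the paper's actual values :

```python
import numpy as np

# Invented per-study results - the real values are in the 1996 meta-study.
r = np.array([0.15, 0.30, 0.45, 0.55])  # correlation reported by each study
n = np.array([50, 120, 400, 800])       # sample size of each study

print("unweighted mean r :", r.mean())                  # ~0.36
print("n-weighted mean r :", np.average(r, weights=n))  # ~0.48
```

(Real meta-analyses typically apply a Fisher z-transformation to the correlations before averaging, but the principle is the same : bigger studies count for more.)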
(What I'm not clear about is what the authors mean by "discriminatory intention". They define prejudice as merely holding views, whereas discrimination is putting those views into practice. Discriminatory intention, as far as I can tell, is another type of prejudice.)
Two of their concluding points deserve to be quoted :
Though the prejudice-discrimination relationship is not very strong in general, it varies, to a considerable degree, across specific behavioral categories, and it seems that the relationship is stronger in those cases where the behavior is under volitional control of the subjects.
In sum, we conclude: only rarely is prejudice a valid predictor for social discrimination, but there are only very few other candidates, and all of these are less useful than prejudice.

I call bollocks on the Fast Company article. They twist the conclusions to mean something very different from what the original authors said.
What of their claim that there's only a "4% overlap" between unconscious attitudes and behaviour ? For this they cite a Guardian article. But this is cherry-picking in extremis : the article doesn't say implicit bias is weak (far from it - it quotes a number of other studies showing it's a strong effect), it only notes the flaws in certain tests. Fast Company continue with their willful stupidity :
Furthermore, if the test tells you what you already knew, then what is the point of measuring your implicit or unconscious biases? And if it tells you something you didn’t know, and do not agree with, what next? Suppose you see yourself as open-minded (non-racist and nonsexist), but the test determines that you are prejudiced. What should you do? For instance, estimates for the race IAT suggest that 50% of black respondents come up as racially biased against blacks.

That's the whole frickin' point of implicit bias : it's different from explicit bias. Why is this complicated ?
While their conclusion that "bias doesn't matter" is clearly rubbish, they are right to point out that overcoming this bias is difficult. What they don't offer, despite their clickbaity headline, is any hint at all of what better method is available to overcome it. Although they do conclude that bias is a problem and that we should do more (or do something differently) to improve how we deal with it, this frankly feels schizophrenic after they've just spent the whole article veering wildly back and forth between "bias is how we identify ourselves" and "bias is totally unimportant".
This is a daft article. It doesn't make any sense. It mangles the conclusions of academic papers in order to support a conclusion it doesn't believe in. Aaargh.
Take-home message : bias does matter. That's what the evidence says, very clearly, despite numerous and interesting complications.
Time for the second article. Don't worry, this one can be much shorter, because it's self-consistent and sensible. It fully accepts that implicit bias is important and doesn't say anything stupid about explicit bias being something we can ignore.
That particular implicit bias, the one involving black-white race, shows up in about 70 percent to 75 percent of all Americans who try the test. It shows up more strongly in white Americans and Asian Americans than in mixed-race or African Americans. African Americans, you’d think, might show just the reverse effect — that it would be easy for them to put African American together with pleasant and white American together with unpleasant. But no, African Americans show, on average, neither direction of bias on that task. Most people have multiple implicit biases they aren’t aware of. It is much more widespread than is generally assumed.

However, the author of this piece is also skeptical as to whether implicit bias training is actually improving things or not.
I’m at the moment very skeptical about most of what’s offered under the label of implicit bias training, because the methods being used have not been tested scientifically to indicate that they are effective. And they’re using it without trying to assess whether the training they do is achieving the desired results.
I see most implicit bias training as window dressing that looks good both internally to an organization and externally, as if you’re concerned and trying to do something. But it can be deployed without actually achieving anything, which makes it in fact counterproductive. After 10 years of doing this stuff and nobody reporting data, I think the logical conclusion is that if it was working, we would have heard about it.

Training methods apparently either don't work or their effects are extremely short-lived :
One is exposure to counter-stereotypic examples, like seeing examples of admirable scientists or entertainers or others who are African American alongside examples of whites who are mass murderers. And that produces an immediate effect. You can show that it will actually affect a test result if you measure it within about a half-hour. But it was recently found that when people started to do these tests with longer delays, a day or more, any beneficial effect appears to be gone... It’s surprising to me that making people aware of their bias doesn’t do anything to mitigate it.

I'm not surprised at all. Dedicated training sessions take people out of their usual information network and temporarily embed them in a new one. Once the session ends, they're assaulted once again by their old sources of bias. Maintaining awareness from one particular session, especially if its conclusions run counter to everyday sources, is not at all easy. Raising issues to awareness does help you take control, but i) it only helps and ii) you need to maintain that awareness. Still, the author does come up with a concrete plan that might address this :
Once you know what’s happening, the next step is what I call discretion elimination. This can be applied when people are making decisions that involve subjective judgment about a person. This could be police officers, employers making hiring or promotion decisions, doctors deciding on a patient’s treatment, or teachers making decisions about students’ performance. When those decisions are made with discretion, they are likely to result in unintended disparities. But when those decisions are made based on predetermined, objective criteria that are rigorously applied, they are much less likely to produce disparities.

As noted, the effect of continually asking, "Am I being racist ?" can be counter-productive. But asking instead, "Am I being fair ?" may be better. It forces you to focus on the key criteria of how you're making your decision. It stops you from focusing on the very issue you want to avoid - prejudice - and lets you deal with the problem at hand fairly.
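If it helps to see what "predetermined, objective criteria, rigorously applied" might look like in practice, here's a minimal sketch. The criteria, weights, and candidates are all hypothetical - the point is simply that nothing outside the fixed rubric ever enters the decision.

```python
# Hypothetical rubric : every candidate is scored on the same fixed,
# pre-agreed criteria, and nothing else.
CRITERIA = {"years_experience": 0.3, "test_score": 0.5, "publications": 0.2}

def rubric_score(candidate: dict) -> float:
    """Weighted sum over the fixed rubric; no other fields are consulted."""
    return sum(w * candidate[key] for key, w in CRITERIA.items())

candidates = [
    {"name": "A", "years_experience": 5, "test_score": 8, "publications": 2},
    {"name": "B", "years_experience": 3, "test_score": 9, "publications": 4},
]

# Rank purely on the rubric - names, faces, and gut feelings never enter.
for c in sorted(candidates, key=rubric_score, reverse=True):
    print(c["name"], round(rubric_score(c), 2))
```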
What we know comes from the rare occasions in which the effects of discretion elimination have been recorded and reported. The classic example of this is when major symphony orchestras in the United States started using blind auditions in the 1970s... as auditions started to be made behind screens so the performer could not be seen, the share of women hired as instrumentalists in major symphony orchestras rose from around 10 percent or 20 percent before 1970 to about 40 percent.

Also telescope proposals. Forcing people to focus on what matters, while at the same time excluding what doesn't matter, actually works. People are actually capable of being fair and rational under certain conditions. Hurrah !
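The blind-audition trick generalises : strip the identifying information before the reviewers ever see the submission. A toy version, with made-up field names :

```python
# Made-up field names for illustration.
IDENTIFYING = {"name", "gender", "institution", "photo_url"}

def blind(submission: dict) -> dict:
    """Return a copy with every identifying field removed."""
    return {k: v for k, v in submission.items() if k not in IDENTIFYING}

proposal = {"name": "Dr X", "gender": "F", "institution": "Somewhere U.",
            "abstract": "We request 40 hours on the telescope to...",
            "requested_hours": 40}

print(blind(proposal))  # the reviewers only ever see this version
```

Simple, mechanical, and - as the orchestra numbers show - surprisingly effective.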