Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Friday, 7 December 2018

Controlling information : part III

Part three. On the spread of ideas in groups and how to stop them, the backfire effect, and why this is much more complicated than the idealised case of arguing with someone one-on-one. I look at why, if you want to stop an idea from spreading, it's sometimes better to talk to people than to censor them, but also why sometimes shutting them down is absolutely the best option. Some problems we can tackle as individuals. Some we can't : if you want to stop, say, social media from whipping people into a frenzy, you have to treat it as a network problem, and as individuals there's only so much we can do about that. The main thing is not to give up, nor simply to try and be nicer to everyone, because that probably won't work. Yes, this is a long summary, but it's an even longer post, so I can (maybe) finally stop blabbing on about speech regulations for a good long while.

(The original, very long post can be found here.)


A single source of an idea, it turns out, is not much good : we've all got that one friend who believes in magical pixies or whatnot, but we don't go around being so silly ourselves. If multiple people [around 25% of those we know, it seems] tell us the same thing, though, we're far more likely to be persuaded.

The obvious route to stopping an idea would be to cut the links, and in this model that would indeed work - but adding more connections can have the same effect. The key is not the absolute amount of supporting information people receive, but the fraction of the sources they're exposed to that support the idea.

So long as it makes up only a small fraction of the information flow (10% or so, as we've seen), anything that contradicts trusted information will itself be distrusted. We seldom if ever evaluate evidence solely on its own merits, and for good reason : to the brain, things that trusted people say are at least as good as facts, if not treated as facts outright. Which means a source can become distrusted, and its argument backfire, simply by saying something radically different to what everyone else is saying.
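For the morbidly curious, here's roughly what such a model looks like in code. This is a minimal sketch in Python, assuming a random contact network : the ~25% adoption threshold and ~10% distrust floor are the figures quoted above, but the network, the update rule and every other number are mine, purely for illustration.

```python
import random

# A minimal sketch of a fractional-threshold ("complex contagion") model of
# idea spread on a random contact network. The 25% adoption threshold and
# 10% distrust floor are the figures quoted in the text; everything else
# (network size, wiring, update rule) is invented for illustration.

N = 200                 # number of people
K = 10                  # contacts per person
ADOPT_AT = 0.25         # adopt the idea once >= 25% of your contacts hold it
DISTRUST_BELOW = 0.10   # drop the idea once < 10% of your contacts hold it

random.seed(1)
contacts = {i: random.sample([j for j in range(N) if j != i], K) for i in range(N)}
believes = {i: i < 20 for i in range(N)}   # seed the idea in 10% of people

def update_round() -> bool:
    """Everyone re-evaluates the idea against their contacts; True if anything changed."""
    changed = False
    for person, friends in contacts.items():
        frac = sum(believes[f] for f in friends) / len(friends)
        if not believes[person] and frac >= ADOPT_AT:
            believes[person] = True    # enough independent sources : persuaded
            changed = True
        elif believes[person] and frac < DISTRUST_BELOW:
            believes[person] = False   # the idea is now fringe here : dropped
            changed = True
    return changed

for _ in range(100):    # cap the rounds in case the dynamics oscillate
    if not update_round():
        break
print(f"Final believers : {sum(believes.values())} / {N}")
```

The point of the sketch is that giving everyone extra non-believing contacts lowers the believing fraction each person sees, which is exactly why adding connections can stop an idea just as effectively as cutting links.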

In real life, we have multiple sources competing for trust, multiple answers to choose from, and some issues are discussed frequently (cognitive ease again !) while others rarely get a mention. It's this highly complex mix of multiple options, competing sources, varying trust levels, and varying rates of discussion that might explain, I think, why the backfire effect sometimes happens very easily and sometimes not at all. So yes, it does help to use the persuasive techniques discussed last time - but complex network effects can override them.

Educating people to be more critical and whatnot is fine, but in terms of changing already established ideas there's a major problem :

If ideology [here meaning methodological reasoning as well as the moral sort] is shaped by environment at all, the only real arena for changing it is in schools and other educational institutions : the one common environment we all pass through with the expectation of learning analytical and critical methods. Beyond that, there are very few venues indeed where the entire populace expect and accept that they're going to be taught how to think.

And yet... changing the system can and does happen. The paradox of environment is that it appears both strong and weak. One way to square that particular circle - if fucking with geometry is your thing - might be that our social institutions exploit the relative, recent nature of our default comparisons. They do have a role in making change genuinely difficult, and therefore (whatever Kennedy might say) undesirable, but mainly they make change appear more difficult than it actually is. They make the battles harder to fight, but more importantly, they influence which battles we fight at all. If we can break through that, there are other factors we can exploit : social norms are not the only thing shaping our ideas. Don't get me wrong here : major social change is extraordinarily difficult, otherwise every nutcase under the Sun would be remaking society every five minutes. Cultural norms do change, but some institutions are astonishingly resilient.

And on groupthink...

...if you tell someone in such a situation not to believe in UFOs [or whatever], you're not merely asking them to give up a single cherished belief, which would be bad enough. You're also asking them to admit that their trust in a large number of people has been misplaced, that their whole basis for evaluating information is flawed, that nothing they thought they could trust is correct. You're inevitably not just talking about the issue, but about their faith in and friendships with other people, and their own capacity for rationality.

We can't break this through actions as individuals on social media. We have to do something much more radical.

In a new environment, people are forced to make entirely new connections, and are praised and (socially) punished for entirely different actions. If that environment favours different views to those they were used to, we can expect at least some of their ideas to change (this is in part how rehabilitation works, after all, though obviously there are some major caveats to that). Not all of them, though, because strongly contrasting viewpoints can persist in [but, crucially, not spread through] even extremely hostile environments. Indeed, we could expect some individuals to hold even more strongly to some of their beliefs. But most people, the theory suggests, ought to change many of their ideas as they become integrated into the new setting.

Just throwing a bunch of diehard vegans together with a bunch of fox-hunting enthusiasts is hardly likely to result in anything other than a bloodbath... Yes, we can succeed in changing people's minds if we make more connections and give them enough different information that contradicts their view, but no, we definitely can't do this just by whacking 'em together or bombarding them with different arguments. As with the other parameters, flow rate may backfire above a certain threshold : if someone never shuts up about the same boring issue, we stop listening, and if the issue doesn't actually affect our own beliefs directly, we may well come to see them as biased.

Thresholds, I think, are key. I thought about using the phrase "non-monotonic behavioural responses", but I resisted.
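For those who like the jargon anyway, here's a toy illustration of what it would mean : a purely made-up response curve in which persuasion rises with exposure up to some threshold and then backfires. The shape and the numbers are invented, not taken from any study.

```python
# A toy, purely illustrative non-monotonic response : persuasion rises with
# exposure up to a threshold, then backfires. The shape and the 0.5
# threshold are invented; this only shows what "non-monotonic" means here.

def persuasion(exposure: float, backfire_at: float = 0.5) -> float:
    """Chance of being persuaded as a function of exposure rate (both 0-1)."""
    if exposure <= backfire_at:
        return exposure / backfire_at                      # more exposure helps...
    return max(0.0, 1.0 - 2.0 * (exposure - backfire_at))  # ...until it doesn't

for e in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"exposure {e:.1f} -> persuasion {persuasion(e):.2f}")
```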

An idea held by few people may be harder to spread because its low acceptance rate causes us to label it as fringe : anyone believing it is weird, and we're clearly not weird, so we shouldn't believe it either. Whereas if lots of people believe it, well, that's mainstream, socially acceptable, so there's less of a psychological barrier to acceptance. So while techniques that strengthen belief - praise and shame - might not be able to change someone's stance, they might be able to maintain it by keeping everyone in agreement. Sustaining hearts and minds, if you like.

Most fake news isn't driven by a belief in "alternative facts" at all; it's not a misguided attempt at enlightenment of the kind made by people who genuinely believe in Bigfoot. Rather, it is an attempt to confuse and sow mistrust : not to convince viewers that anything in particular is true so much as to persuade them that they can't trust particular sources (or worse, that they can't trust particular claims, and therefore any source making them). It aims to replace dispassionate facts, those pesky, emotionless bits of data that can't be bent, with emotion-driven ideology, which can. It thrives on the very polarisation it's designed to promote, as well as on the erosion of both critical and analytical thinking that it exacerbates. Removing it is likely, in the long term, to do far more good than harm. Fake news isn't about encouraging rational debate; it's about shutting it down.

Sometimes people claim that things like fake news or certain so-called politicians are symptoms, not causes. But just as with a disease, a symptom can itself be a direct cause : after all, the very reason viruses give us symptoms is to spread themselves around. If you had a perfect cure for the common cold but couldn't give it to everyone at once, you'd never eliminate the virus, because it would continue to re-infect people. Similarly, even if you could devise a perfect way to disinfect the victims of propaganda, you'd still have to stop the propaganda itself, both to prevent its continued spread and to limit the damage that makes victims harder to treat. Cures, treatments and vaccinations are different things, and must be applied differently.
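To make the epidemic analogy concrete, here's a toy SIS-style sketch. All the numbers are invented, but it shows why curing individuals without cutting transmission never clears an infection, whereas cutting transmission does.

```python
# A toy SIS-style model : a fraction of people are "infected", the idea
# spreads to the susceptible, and a cure removes some each step. All the
# rates are invented purely to illustrate the cure-vs-transmission point.

def run(transmission: float, cure_rate: float, steps: int = 200) -> float:
    infected = 0.1                                       # start with 10% infected
    for _ in range(steps):
        new = transmission * infected * (1 - infected)   # spread to the susceptible
        infected = min(1.0, max(0.0, infected + new - cure_rate * infected))
    return infected

print(f"cure only          : {run(transmission=0.5, cure_rate=0.2):.2f} still infected")
print(f"cure + less spread : {run(transmission=0.1, cure_rate=0.2):.2f} still infected")
```

With the cure alone, the infection settles at a stable level (re-infection balances the cure); only pushing transmission below the cure rate drives it to zero.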

And if you're not convinced by fake news yourself, you might still believe that other social groups are. What it's doing here is handing people ready-made straw men : arguments far more absurd than anything anyone really believes (yes, some people believe in higher taxation; no, no-one believes the rich should be hunted down and eaten). The fact that they're easy to debunk actually becomes a very powerful asset. Instead of discussing the details of opposing but equally sophisticated (and boring) fiscal policies, we end up seeing the other side as believing in exciting but incredibly dumb things like flying squirrel monsters from Uranus, or whatever. It wrecks the credibility of the other side and makes them appear far, far stupider than most of them really are. They become self-evidently pantomime villains, and therefore anyone agreeing with them is obviously stupid... again, as we've seen previously, this is a route to crude, absolute thinking, with the polarisation that fake news creates also driving its own spread.

Finally, there are some general conditions under which regulations controlling information (I use this in the broadest sense; I'm not distinguishing lies from truth here) might work. There are some situations where they definitely won't, and in those cases it would be foolish to even try. But there are others where success is much more plausible, depending on both the nature of the information and the manner in which it's restricted :

According to what we think we know, it follows that a ban will in general be effective if :

The information is :
- subjective, overtly promotional, and already disliked by a large majority of people;
- hard to understand and difficult to arrive at independently;
- apparently in contradiction with other knowledge;
- not such as to excite curiosity (especially if the gist of it is well-known and only specific details are lacking).

The ban is applied such that :
- it covers all media outlets uniformly (or better yet, is enforced by them), not just one or two;
- the resulting inaccessibility of the information is discouraging rather than challenging;
- the lack of reporting itself goes unreported;
- the restrictions are not so harsh that they cause a dislike of whoever instigated the ban (e.g. minor infractions are tolerated and only the more flagrant violations punished);
- corrective measures, tailored to people's personal situations, are used to persuade those who still believe in the idea;
- and any attempt at replacing the banned information or source is dealt with in the same way.

We as individuals don't have full control over the networks we find ourselves in... but we can damn well talk to those who do. We can and should give them advice and tell them what isn't working. We can also demand that ethics training be taken seriously and made mandatory for company executives, not just a totally uninteresting lecture that the grunts have to endure. Moral philosophy can be explored intelligently and engagingly; it cannot be something that executives of companies that thrive on information are allowed to brush under the carpet.

And that, I hope, is the last I'll be saying about this until at least the end of the year. :)

6 comments:

  1. Well argued. This feels like the kind of article that could be more widely published, if you're so inclined.

  2. Well, I'd love to, but I have no idea how to go about doing that.

  3. Well written, and well said. Excellent.

  4. Rhys Taylor I don't have direct experience either, but I've bookmarked a number of comparable articles (in my mind, anyway :-) ) at home and work, and some of those sites might well take it up. Things like Forbes, The Atlantic, The New Yorker, Washington Post, etc., come to mind...not that some of them are likely to pay.

    Anyway, let me gather some links and email 'em to you, then you can decide if it's worth the bother.

  5. I appreciate the vote of confidence, if nothing else. :)

  6. It seems to me that as long as we are yelling across each other no one hears anything -- but that is exactly what we like to do.


