
Thursday 16 April 2020

Troll hunting

This is a very interesting and informative video on how misinformation works on Reddit. It's part of a series looking at other platforms, but this is the only one I've watched so far. Here's my summary if you prefer to read things or just don't have time for a 20-minute video.

Reddit's Chief Technology Officer begins by noting that the basic unit of Reddit is the community, not the site itself. Moderators have control over communities and it's up to them to set and enforce their own rules. The company itself is only the last line of defence against spam, misinformation, and copyright violations, and it's pretty unusual for them to get involved. Misinformation does sometimes end up on the front page (and so accumulates enormous numbers of views), and it's a struggle to deal with it.

The Stanford Internet Observatory note that this devolved approach to moderation leads to highly variable standards, but it has some benefits. No-one in a cat community complains of censorship if their dog picture is removed. And because they have a local feel for what's out of place, community moderators are potentially better placed to identify suspicious accounts.

The iDrama Lab is an organisation trying to examine the internet holistically. They say that there is unambiguously large-scale, coordinated, inauthentic activity on Reddit. Some people are indeed just weird (and/or inexperienced) but some are absolutely state-sponsored trolls participating in deliberate campaigns of misinformation. Similar tactics are being used today as they were a few years ago, because they still work.

SIO note that, unlike Facebook, when Reddit suspends an account it doesn't delete it. This has the advantage that you can then go and study its behaviour (though I have to wonder what happens to the threads such accounts create - can users still interact there and access the misinformation?). SED says that they display a clear pattern of behaviour: they alternate between posting viral content (cute cat pictures and the like) and deliberately starting arguments on both sides of a debate. They are trying to make everyone fight each other, not win a debate. They also spend some time simply trying to convince people that they're authentic.

One of the really interesting findings comes from the Oxford Internet Institute. They've analysed Reddit threads and can quantify a discussion's cognitive complexity (the toleration of different viewpoints), identity attacks (the use of irrelevant ad hominem arguments), and toxicity (aggressive, confrontational behaviour that causes users to leave the thread). Following a single comment by a troll, they see an immediate and sustained loss of cognitive complexity (far in excess of the baseline variation), an immediate and sustained increase in toxicity, and an immediate but temporary rise in identity attacks.
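To make that idea a bit more concrete, here's a rough sketch of the general approach (my own illustration, not the OII's actual method): score each comment in a thread, then compare the average before and after a suspected troll comment. The `toxicity_score` function below is a hypothetical placeholder using a toy word list; real analyses use trained classifiers, and they compare the shift against baseline variation in comparable threads, which this sketch omits.

```python
from statistics import mean

def toxicity_score(comment: str) -> float:
    """Placeholder scorer: a real study would use a trained toxicity
    classifier, not this toy word-list heuristic."""
    hostile_words = {"idiot", "idiots", "liar", "shill", "shills"}
    words = comment.lower().split()
    return sum(w.strip(".,!?") in hostile_words for w in words) / max(len(words), 1)

def shift_after(comments: list[str], troll_index: int) -> float:
    """Mean toxicity of comments after the suspect comment minus the mean
    before it. A sustained positive shift, well beyond normal variation,
    is the kind of signal described in the video."""
    before = [toxicity_score(c) for c in comments[:troll_index]]
    after = [toxicity_score(c) for c in comments[troll_index + 1:]]
    return mean(after) - mean(before)

# Toy usage: a time-ordered thread where the suspect comment is at index 3.
thread = [
    "Interesting article, thanks for sharing.",
    "I'm not sure the data supports that conclusion.",
    "Fair point, here's another source.",
    "Only an idiot would believe this, you're all shills.",  # suspect comment
    "Wow, calm down. No need to call people idiots.",
    "This thread has gone downhill, I'm out.",
]
print(f"toxicity shift: {shift_after(thread, troll_index=3):+.3f}")
```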

SED goes on to describe what I found the most surprising result of all: just how carefully prepared these attacks are. Troll activity rose to a sustained level, then diminished until just before the 2016 US election. And the activity varied not only over time, but in type. The 2015 threads were mainly about amassing a good reputation and followers, posting viral and utterly inoffensive content (e.g. cats, science gifs). Only in 2016 did the trolls start to abuse their position to start divisive arguments.

The iDrama Lab say that the goal is chaos : to drive people apart, to prevent them from having common objectives, to increase polarisation. Worryingly, no-one has a good map of the true interconnectivity between the major social media sites. You can't win a war without a map.

Reddit's CTO says that if you see something suspicious, you should absolutely report it. He'd rather have the neighbourhood watch model than a system of mass surveillance (it would have been interesting to hear more about these approaches).

SED finishes off by noting that, contrary to his expectations, Reddit is extremely pro-active at taking down accounts: they've removed six times more accounts than were actually reported to them. And he notes that the presence of trolls makes users paranoid that other users are trolls, which is exactly what the trolls want: first they make you hate the other, then they make you think the other is a troll. Reasonable discourse becomes impossible in such a situation.
He suggests fighting back by trying to do the opposite of what the trolls attempt. If they reduce cognitive complexity, add nuance to the discussion to make things less black and white (though I wonder here about the "merchants of doubt" problem). To fight identity attacks, remember that everyone is a real person and call it out - but kindly, with compassion. Make the rhetoric less aggressive.

