Instead of focusing on isolated words and phrases, they taught machine learning software to spot hate speech by learning how members of hateful communities speak. They trained their system on a data dump that contains most of the posts made to Reddit between 2006 and 2016. They focused on three groups who are often the target of abuse: African Americans, overweight people and women. For each of these, they chose the most active support and abuse groups on Reddit to train their software. They also took comments from Voat -- a forum site similar to Reddit -- as well as individual websites dedicated to hate speech.
The team found that their approach produced fewer false positives than a keyword-based detector. For example, it was able to flag comments that contained no offensive keyword, such as 'I don't see the problem here. Animals attack other animals all the time,' in which the term 'animals' was being used as a racist slur.
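The article doesn't include the team's code, but the core idea is easy to sketch: instead of matching a keyword list, label comments by the kind of community they were posted in and let a classifier learn the language of hateful communities. Here's a minimal sketch of that labelling-by-community idea, assuming scikit-learn; the example comments and pipeline details are illustrative stand-ins, not the researchers' actual setup.

```python
# Rough sketch (not the researchers' pipeline): comments are labelled by
# where they were posted, not by whether they contain a banned word.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical examples standing in for comments scraped from a support
# subreddit (label 0) and from a known hate community (label 1).
comments = [
    "Welcome to the group, it's good to have you here",    # support
    "Does anyone have advice on dealing with this?",        # support
    "Animals attack other animals all the time",            # hate community
    "They always behave like this, it's in their nature",   # hate community
]
labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(comments, labels)

# A new comment with no slur in it can still score high if it reads like
# the language of the hate communities.
score = model.predict_proba(["I don't see the problem here."])[0][1]
print(f"hate-community likelihood: {score:.2f}")
```

The point of the sketch is that the training signal comes from where a comment was posted rather than which words it contains, which is what lets the real system catch coded uses of 'animals' that a keyword filter would pass.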
Interesting. But can it differentiate between hatred and momentary outrage? What about hate that's been provoked and/or is justified, e.g. hating Nazis? What about in countries where violent political action might actually be required, e.g. those under the rule of a brutal dictator?
I'm no free speech absolutist - totally free speech is a fundamental impossibility - but only a fool would pretend that the situation isn't horribly messy.
https://www.newscientist.com/article/2149562-this-ai-can-tell-true-hate-speech-from-harmless-banter/
What about sarcasm?
Here's where I think this is going: Facebook have thousands of Indians and Filipinos on the job, doing content moderation. It's serious business already.
wired.com - The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed | WIRED
Betteridge's Law of Headlines here: all the situation needs is a better gold sluice box. Just filter off the non-problematic posts and comments; the humans can handle the rest.
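The 'gold sluice box' idea maps onto a simple triage rule: a rough sketch, assuming a classifier that returns a 0-1 score (the thresholds and the hate_score function are hypothetical, not anything from the article).

```python
# Triage sketch: auto-approve clearly fine posts, auto-flag clearly bad ones,
# and route the uncertain middle to human moderators.
# `hate_score` is a hypothetical callable returning a score between 0 and 1.
def triage(post, hate_score, low=0.2, high=0.8):
    score = hate_score(post)
    if score < low:
        return "auto-approve"
    if score > high:
        return "auto-flag"
    return "human review"  # the messy middle still goes to people

# Example: triage("some comment", lambda p: 0.5) returns "human review"
```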
There's always a danger of human bias in creating the training dataset in situations like this.
Being firm is NOT being rude (it's telling you what others are afraid to tell you straight away)
#carbonemissions #climatechange today in South Africa