Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 22 June 2020

I think, therefore I experiment

First, obligatory Existential Comics link. Second, the real trolley problem is obviously why the hell are they calling a train a trolley. Look, this is a train. This is a trolley. Killing one person with a trolley would be extremely difficult, never mind five. And third, we do actually know the answer to this one in real life.

That's the essential stuff out of the way. On to the article, which is about the use of thought experiments in general rather than the "trolley" problem specifically.
While thought experiments are as old as philosophy itself, the weight placed on them in recent philosophy is distinctive. Even when scenarios are highly unrealistic, judgments about them are thought to have wide-ranging implications for what should be done in the real world. The assumption is that, if you can show that a point of ethical principle holds in one artfully designed case, however bizarre, then this tells us something significant. Many non-philosophers baulk at this suggestion. 
Consider ‘The Violinist’, a much-discussed case from Judith Jarvis Thomson’s 1971 defence of abortion:
You wake up in the morning and find yourself back-to-back in bed with an unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you: ‘Look, we’re sorry the Society of Music Lovers did this to you – we would never have permitted it if we had known. But still, they did it, and the violinist now is plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.' 
Readers are supposed to judge that the violinist, despite having as much right to life as anyone else, doesn’t thereby have the right to use the body and organs of someone who hasn’t consented to this – even if this is the only way for him to remain alive. This is supposed to imply that, even if it is admitted that the foetus has a right to life, it doesn’t yet follow that it has a right to the means to survive where that involves the use of an unconsenting other’s body.
Wow. That's insane. Now, I'm a big fan of indirect analogies - they encourage thinking carefully about a situation, whereas direct analogies are largely pointless. But this is so indirect as to be utterly irrelevant. I don't even know where to begin with the moral differences between the situations intended for comparison here, so I won't try.

What I've always liked about the Trolley Train Problem is its attempt to reduce things to the core issue : is it better to kill one person than five, given no other knowledge of the situation ? Obviously the answer to that is yes, it is. But as soon as you introduce anything else at all, complications spiral. Why can't I stop the train ? How the hell is pushing a fat guy off a bridge going to help ? (Seriously, FFS on that one, what were they thinking ?) Those aspects make the whole thing feel distinctly uncomfortable, in a different way to the moral question itself :
In the few instances I tried to use this thought experiment in teaching ethics to clinicians, they mostly found it a bad and confusing example. Their problem is that they know too much. For them, the example is physiologically and institutionally implausible, and problematically vague in relevant details of what happened and how. (Why does the Society of Music Lovers have access to confidential medical records? Is the operation supposed to have taken place in hospital, or do they have their own private operating facility?) Moreover, clinicians find this thought experiment bizarre in its complete lack of attention to other plausible real-world alternatives, such as dialysis or transplant. As a result, excellent clinicians might fail to even see the analogy with pregnancy, let alone find it helpful in their ethical reasoning about abortion.
What's never really occurred to me is the root issue of what makes the more complicated situation feel different, which the article does an excellent job of putting its finger on : the nuances of the situation matter because they change the ethical problem. Say there was some way of stopping the train instead of diverting it but with some risk - that's a different moral question. Someone having to endure being tied to a violinist experiences different moral questions to one wrestling with abortion. The detailed aspects of each situation allow for different options, and ignoring those is unavoidably immoral. Fundamentally, sometimes you can't reduce it to a simple case. The details are essential and cannot be separated from the moral issue at hand. So the question, "is it better to kill one person than five ?" may have literally no context-independent meaning at all - it would be better in some situations but not in others. The key to addressing a moral dilemma by analogy is to construct a relevant analogy in which the answer already seems clearer.
Faced with people who don’t ‘get’ a thought experiment, the temptation for philosophers is to say that these people aren’t sufficiently good at isolating what is ethically relevant. Obviously, such a response risks being self-serving, and tends to gloss over an important question: how should we determine what are the ethically relevant features of a situation? Why, for example, should a philosopher sitting in an armchair be in a better position to determine the ethically relevant features of ‘The Violinist’ than someone who’s worked with thousands of patients?
 All this makes reasoning about thought experiments strikingly unlike good ethical reasoning about real-life cases. In real life, the skill and creativity in ethical thinking about complex cases are in finding the right way of framing the problem. Imaginative ethical thinkers look beyond the small menu of obvious options to uncover novel approaches that better allow competing values to be reconciled. The more contextual knowledge and experience a thinker has, the more they have to draw on in coming to a wise decision.
The greater one’s contextual expertise, the more likely one is to suffer the problem of ‘too much knowledge’ when faced with thought experiments stipulating facts and circumstances that make little sense given one’s domain-specific experience. So, while philosophers tend to assume that they make ethical choices clearer and more rigorous by moving them on to abstract and context-free territory, such gains are likely to be experienced as losses in clarity by those with relevant situational expertise.
Clearly thought experiments do have value. By attacking the problem in different ways, they help us identify the similarities and differences, looking for what's extraneous (the colour of the train, say) and what's really important (who the people are). In this way we try and seek out the appropriate principle on which to act. Quite often I find myself profoundly disagreeing with the conclusion the author intended, and it's in those disagreements that it becomes possible to learn something.

The Aeon piece then goes into a lengthy discussion about whether thought experiments are literally experiments or merely appeals to the imagination. It's decent enough, but to be honest I'm not really sure what the point of it is : it seems clear that they're not literally experiments but "intuition pumps", as the author calls them, for getting to the heart of the matter. This is not at all easy. Clearly it's very difficult to illuminate the underlying ethical principles, but that there are many unique and complicating factors to every situation doesn't mean that underlying principles don't exist. We just have to keep examining and revising in accordance with new evidence. Much like science, really... which means we've come full circle and thought experiments are experiments after all. Whoops.

What is the problem with ethical trolley problems? - James Wilson | Aeon Essays

Much recent work in analytic philosophy pins its hopes on learning from imaginary cases. Starting from seminal contributions by philosophers such as Robert Nozick and Derek Parfit, this work champions the use of thought experiments - short hypothetical scenarios designed to probe or persuade on a point of ethical principle.

Wednesday, 17 June 2020

Apparently, some people don't believe bias affects behaviour

Two very different takes on the nature of bias and how to overcome it - in particular, implicit racial bias. Trying to raise issues to consciousness is important and worthwhile, but how much influence this really has is subject to a host of caveats. My feeling is that it does help, but awareness alone rarely bestows instantaneous control. Being aware that you should or shouldn't do something isn't enough by itself to stop you from wanting to do it (or not do it).


This first article is - spoiler alert - shite. Absolutely shite. It's self-inconsistent to a degree seldom found outside of presidential tweets, and I'm more than a little skeptical of some of its claims. I'm pretty confident I did a lot more background reading to write this post than the authors did, because I actually read the links they cite in support of their claims and I don't believe they even did that much.

What's the big problem ? Well, the article says not merely that implicit bias is not much of a thing, a notion I'd be prepared to at least entertain, but even that our explicit biases don't matter very much. And that's just... odd. First, the idea that implicit bias isn't much of a thing :
Contrary to what unconscious bias training programs would suggest, people are largely aware of their biases, attitudes, and beliefs, particularly when they concern stereotypes and prejudices. Such biases are an integral part of their self and social identity... Generally speaking, people are not just conscious of their biases, but also quite proud of them. They have nurtured these beliefs through many years, often starting in childhood, when their parents, family, friends, and other well-meaning adults socialized the dominant cultural stereotypes into them. We are what we believe, and our identity and self-concept are ingrained in our deepest personal biases. (The euphemism for these is core values.)
Well, surely everyone is aware of their explicit bias by definition. But the whole point of implicit bias is that it's unconscious. It doesn't arise out of choice, or an active desire to discriminate. This is why people can have implicit biases even against their own social groups. So I don't think the first sentence makes much sense : nothing is "contrary", the two biases are wholly different. And the article has a subheading that "most biases are conscious rather than unconscious", but nothing is offered to support this claim (how would you even measure how many unconscious biases someone has anyway ?). Not a great start.
Contrary to popular belief, our beliefs and attitudes are not strongly related to our behaviours. Psychologists have known this for over a century, but businesses seem largely unaware of it. Organizations care a great deal about employee attitudes both good and bad. That’s only because they assume attitudes are strong predictors of actual behaviors, notably job performance. 
However, there is rarely more than 16% overlap (correlation of r = 0.4) between attitudes and behavior, and even lower for engagement and performance, or prejudice and discrimination. This means that the majority of racist or sexist behaviors that take place at work would not have been predicted from a person’s attitudes or beliefs. The majority of employees and managers who hold prejudiced beliefs, including racist and sexist views, will never engage in discriminatory behaviors at work. In fact, the overlap between unconscious attitudes and behavior is even smaller (merely 4%). Accordingly, even if we succeeded in changing people’s views—conscious or not—there is no reason to expect that to change their behavior.
Wait... what ? Doesn't this flatly contradict the first passage that "we are what we believe" ? This all feels highly suspicious. In fact, on reflection I think it's a deliberate attempt to sow confusion.

Intuitively, the stronger a belief someone holds, the more likely they are to engage in the corresponding behaviour. But that doesn't mean we therefore expect the correlation to be extremely strong, because lord knows there are a bunch of things I'd like to do but physically can't*. I firmly believe that flying like Superman would be a jolly good thing, but I don't even enjoy air travel. Hell, I find driving a terrifying experience and would go to extreme lengths to avoid ever having to do it again. Does that stop me wanting a jetpack ? No. And I'd quite like to be an astronaut, but my belief that it would be worthwhile isn't enough to motivate me to drop everything and switch career. That'd be silly.

* Also, on statistical grounds, causation doesn't necessarily imply a strong correlation.

Some beliefs are pure fantasy : there's a genuine difference between belief, desire, and behaviour. Sometimes we just can't act on our beliefs, either because they're physically impossible or because we're subject to other forces like social pressure. We might not want to smoke but give in to peer pressure*, or vice-versa. We use metadata of who believes what just as much as we do raw evidence, and such is the power of this metadata-based reasoning** that people even insert their genitals into people with whom their own sexual orientation is in direct disagreement. The blunt-force manifestation of this is the law, preventing us from doing (or wanting to do) things we might otherwise be inclined to attempt. Law not only constrains our actions directly but also, at the cultural level, discourages us from even having certain desires*** in the first place.

* I'm hoping this example is hugely dated and no longer much of a thing.
** Not sure what else to call it. "Groupthink" has a more specific meaning, implying a false consensus, "peer pressure" is too direct, and "thinking in groups" just doesn't cut it. See these posts for an overview.
*** Hugely simplifying. I'm not saying we'd all become murderers without laws, but these footnotes are already excessive.

Then there are conflicting beliefs. People may believe, say, that all ginger people are evil, but also that it's important to be respectful, in which case it should be no surprise that their behaviour doesn't always reflect one of those beliefs. If, on the other hand, they believe ginger people are evil and that they should always express themselves honestly, then it should come as no surprise to find them saying nasty things about redheads. Personality plays a huge role.

All this undermines the direct implication of the above quote that we should expect a very strong correlation between beliefs and behaviours. Even if beliefs do drive behaviours, other constraints are also at work. Conversely, this also lends some support to the idea that those who do engage in discriminatory behaviours do not necessarily hold prejudiced views : "I was only following orders" and all that. You don't even necessarily need to dislike them to be convinced you need to murder them.

But I think it's a complete nonsense (literally, it does not make sense) to go that extra step and suggest that changing beliefs won't change behaviour. As the old saying goes, "not everyone who voted for Brexit was a racist, but everyone who was a racist voted for Brexit" : things aren't always neatly reversible (granted it's more subtle than that). Changing beliefs can change behaviour either directly or by changing the pervading culture of laws, rules, and social norms.

So that there's only a modest correlation between belief and behaviour, in my view, in no way whatsoever implies that changing beliefs won't change behaviour. Just because belief isn't the dominant driver of behaviour doesn't imply that it isn't a driver at all. Indeed, it could well be the single biggest individual contributing factor, just smaller than the sum of all the rest. As we'll see, there's some evidence for that.
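As an aside, the "16%" and "4%" overlap figures in the quote are presumably just the squared correlation coefficient - the fraction of variance in behaviour that attitudes statistically account for (r = 0.4 gives r² = 0.16, and the 4% figure corresponds to r ≈ 0.2). A quick sketch of that arithmetic, purely for illustration :

```python
# "Overlap" between two correlated quantities = shared variance,
# i.e. the square of the Pearson correlation coefficient.
def shared_variance(r):
    return r ** 2

for r in (0.4, 0.3, 0.2):
    print(f"r = {r:.1f}  ->  overlap = {shared_variance(r):.0%}")
# r = 0.4  ->  overlap = 16%
# r = 0.3  ->  overlap = 9%
# r = 0.2  ->  overlap = 4%
```

Which is exactly why a "modest" r = 0.4 can still be one of the stronger effects in a field where most correlations are smaller.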

I'm also not at all convinced by the claim that most racist behaviour couldn't have been predicted from the perpetrator's attitudes - the weak correlation alone doesn't feel like good evidence for this. Yes, this could happen to a degree, due to social forces etc. But at some level, someone consciously engaging in discriminatory practices (unlike implicit bias) must by definition hold discriminatory beliefs. And how are they defining "racist or sexist" behaviour here anyway ? This matters. I'll quote myself here because I think the concept is useful :
The "angry people in pubs", on on the internet, are often what we might call armchair bigots. They won't, unless strongly pressured, ever take physical action against people they dislike. But they are all too willing to vote on policies which harm other people. They care just enough about the truth to have the decency to be embarrassed (even if only unconsciously) by their opinions, but they're all too happy to absolve themselves of responsibility and let other people do their dirty work. This is a very dangerous aspect of democracy, in that it makes villainy much easier by making it far less personal.
So this claim about how attitudes and behaviour correlate in a strange way is just too weird to reduce to a few casual sentences. More explanation is required.

The article links three papers in this passage. The first, which they cite in support of the "weak" correlation, says in the first sentence of its abstract that it's social pressures which weaken or moderate the belief/behaviour trend, as already discussed. It also notes that the correlation value of 0.4 is pretty strong by other psychological standards. It's a meta study of almost 800 investigations, and doesn't mention racism or sexism (which is what the Fast Company article is concerned with). It explicitly says that beliefs predict some behaviours better than others, which is hardly unexpected.

So I call foul on the Fast Company article, which is using a general study to support specific claims not backed up by the original research, which does not say that the trend is weak in all cases. Sure, social pressure is important, but it's completely wrong to say this means beliefs don't matter.

The second paper is also concerned with measuring the correlation but has nothing to do with prejudice. It's a bit odd to even mention it. The third paper, from 1996, is concerned with prejudice and discrimination and does indeed find a weaker correlation than the general case (r=0.3) - provisionally.

It's a good read. It's another meta study and describes the limitations and inhomogeneities of the various samples, but straightaway even the Fast Company claim for a weaker correlation looks to be on shaky ground. Correcting for the different sample sizes, the paper shows the correlation rises from 0.286 to 0.364, barely lower than the average. And exactly which parameters are used affects this further, with the highest average (sample-size corrected) correlation being 0.416 when examining support for racial segregation. Some individual studies reported even higher correlations, in excess of 0.5 or even 0.6. Overall, correlations were stronger when looking at survey data than experimental data, perhaps because - they suggest - people don't want to openly appear to be prejudiced.
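(For what it's worth, "correcting for sample size" in a meta-analysis usually means weighting each study's correlation by how many people it measured, often via the Fisher z-transform, rather than taking a naive average. A rough sketch of the idea - the study values below are invented for illustration and are not taken from the paper :)

```python
import math

def weighted_mean_correlation(studies):
    """Combine per-study (r, n) pairs, weighting larger samples more heavily.
    Uses the Fisher z-transform with weights (n - 3), a common
    meta-analytic convention, then transforms back to r."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

# Invented (correlation, sample size) pairs, purely illustrative.
studies = [(0.15, 40), (0.45, 600), (0.30, 250)]
naive = sum(r for r, _ in studies) / len(studies)
print(f"naive mean:    {naive:.3f}")                               # 0.300
print(f"weighted mean: {weighted_mean_correlation(studies):.3f}")  # ~0.398
```

The point being that when the bigger, better-sampled studies happen to report the higher correlations, the corrected average comes out noticeably above the naive one - which is the direction the paper's correction moved in.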

(What I'm not clear about is what the authors mean by "discriminatory intention". They define prejudice as merely holding views, whereas discrimination is putting those views into practice. Discriminatory intention, as far as I can tell, is another type of prejudice.)

Two of their concluding points deserve to be quoted :
Though the prejudice-discrimination relationship is not very strong in general, it varies, to a considerable degree, across specific behavioral categories, and it seems that the relationship is stronger in those cases where the behavior is under volitional control of the subjects. 
In sum, we conclude: only rarely is prejudice a valid predictor for social discrimination, but there are only very few other candidates, and all of these are less useful than prejudice.
I call bollocks on the Fast Company article. They twist the conclusions to mean something very different from what the original authors said.

What of their claim that there's only a "4% overlap" between unconscious attitudes and behaviour ? For this they cite a Guardian article. But this is cherry picking in extremis - the article doesn't say implicit bias is weak (far from it, quoting a number of other studies showing it's a strong effect), it only notes the flaws in certain testing. Fast Company continue with their willful stupidity :
Furthermore, if the test tells you what you already knew, then what is the point of measuring your implicit or unconscious biases? And if it tells you something you didn’t know, and do not agree with, what next? Suppose you see yourself as open-minded (non-racist and nonsexist), but the test determines that you are prejudiced. What should you do? For instance, estimates for the race IAT suggest that 50% of black respondents come up as racially biased against blacks.
That's the whole frickin' point of implicit bias : it's different from explicit bias. Why is this complicated ?

While their conclusion that "bias doesn't matter" is clearly rubbish, they are right to point out that overcoming this bias is difficult. What they don't offer, despite their clickbaity headline, is any hint at all as to what better method is available to overcome it. Although they do conclude that bias is a problem and we should do more (or something different) to improve how we deal with it, this frankly feels schizophrenic after they just spent the whole article veering wildly back and forth between "bias being how we identify ourselves" and "bias being totally unimportant".

This is a daft article. It doesn't make any sense. It mangles the conclusions of academic papers in order to support a conclusion it doesn't believe in. Aaargh.

Take-home message : bias does matter. That's what the evidence says, very clearly, despite numerous and interesting complications.


Time for the second article. Don't worry, this one can be much shorter, because it's self-consistent and sensible. It fully accepts that implicit bias is important and doesn't say anything stupid about explicit bias being something we can ignore.
That particular implicit bias, the one involving black-white race, shows up in about 70 percent to 75 percent of all Americans who try the test. It shows up more strongly in white Americans and Asian Americans than in mixed-race or African Americans. African Americans, you’d think, might show just the reverse effect — that it would be easy for them to put African American together with pleasant and white American together with unpleasant. But no, African Americans show, on average, neither direction of bias on that task. Most people have multiple implicit biases they aren’t aware of. It is much more widespread than is generally assumed.
However, the author of this piece is also skeptical as to whether implicit bias training is actually improving things or not.
I’m at the moment very skeptical about most of what’s offered under the label of implicit bias training, because the methods being used have not been tested scientifically to indicate that they are effective. And they’re using it without trying to assess whether the training they do is achieving the desired results. 
I see most implicit bias training as window dressing that looks good both internally to an organization and externally, as if you’re concerned and trying to do something. But it can be deployed without actually achieving anything, which makes it in fact counterproductive. After 10 years of doing this stuff and nobody reporting data, I think the logical conclusion is that if it was working, we would have heard about it.
Training methods apparently either don't work or their effects are extremely short-lived :
One is exposure to counter-stereotypic examples, like seeing examples of admirable scientists or entertainers or others who are African American alongside examples of whites who are mass murderers. And that produces an immediate effect. You can show that it will actually affect a test result if you measure it within about a half-hour. But it was recently found that when people started to do these tests with longer delays, a day or more, any beneficial effect appears to be gone... It’s surprising to me that making people aware of their bias doesn’t do anything to mitigate it.
I'm not surprised at all. Dedicated training sessions take people out of their usual information network and temporarily embed them in a new one. Once the session ends, they're assaulted once again by their old sources of bias. Maintaining awareness from one particular session, especially if its conclusions run counter to everyday sources, is not at all easy. Raising issues to awareness does help you take control, but i) it only helps and ii) you need to maintain that awareness. Still, the author does come up with a concrete plan that might address this :
Once you know what’s happening, the next step is what I call discretion elimination. This can be applied when people are making decisions that involve subjective judgment about a person. This could be police officers, employers making hiring or promotion decisions, doctors deciding on a patient’s treatment, or teachers making decisions about students’ performance. When those decisions are made with discretion, they are likely to result in unintended disparities. But when those decisions are made based on predetermined, objective criteria that are rigorously applied, they are much less likely to produce disparities.
As noted, the effect of continually asking, "Am I being racist ?" can be counter-productive. But asking instead, "Am I being fair ?" may be better. It forces you to focus on the key criteria of how you're making your decision. It stops you from focusing on the very issue you want to avoid - prejudice - and lets you deal with the problem at hand fairly.
What we know comes from the rare occasions in which the effects of discretion elimination have been recorded and reported. The classic example of this is when major symphony orchestras in the United States started using blind auditions in the 1970s... as auditions started to be made behind screens so the performer could not be seen, the share of women hired as instrumentalists in major symphony orchestras rose from around 10 percent or 20 percent before 1970 to about 40 percent.
Also telescope proposals. Forcing people to focus on what matters, while at the same time excluding what doesn't matter, actually works. People are actually capable of being fair and rational under certain conditions. Hurrah !

Monday, 15 June 2020

Leave the damn statues alone

... at least, until you can come up with a general principle of which ones you object to. Cue angry rant, in no small part repeating the previous post.

I think we can certainly all agree on two extremes. First, that we shouldn't be building any new statues to people whose every aspect is offensive to modern eyes, except perhaps statues intended to denigrate them. No-one wants a statue of Pol Pot staring at them in a public space, or wants passing schoolchildren to see a huge statue of Stalin depicted as a happy jolly fellow with a big beaming smile while he crushes opponents with a literal iron fist*. Second, existing statues of people who led blameless lives should certainly stay.

* Or maybe they do. Sounds kinda cool, actually.

The trouble is that the middle ground between these two extremes is an absolute minefield of ghastly complications.

As far as I can tell, protestors are objecting mainly to the racist attitudes of historical figures, considering it no longer appropriate to glorify them with a public statue. No-one, as far as I know, wants to forget the past : indeed, there are calls for more history lessons, not fewer. And quite rightly so. As I've said before, British high school education was absolutely shite. It was a plodding, humdrum affair that utterly neglected both the spectacular advances and the atrocities of the not-so-distant British past. It was, I'd go so far as to say, shamefully bad.

So the protestors don't want to censor or forget anything. Far from it. But they seem awfully confused about the artistic nature of statues and the different moral attitudes that prevailed in previous eras.

First, does a statue in general automatically glorify its subject ? Clearly not. There is no more artistic necessity for a sculpture to glorify its subject than for a painting. Does the painting of Kronos eating his children glorify cannibalism ? Hardly !

What, though, if the statue was in an open public space ? Again I have to say no. The Prague TV tower is covered in giant deformed babies with barcodes for faces, and I doubt very much this is intended to glorify mutilation. Or what about this bizarre collection ? Did the artist intend to encourage child beating ? I don't think so*.

* As to what he was encouraging, that is left to an exercise for the reader. Drugs, presumably.

A statue, to my way of thinking, does not automatically glorify or even honour its subject. The one thing it unavoidably does do is draw attention to them. Anything else is context dependent : it can glorify their whole character, or it can commemorate only their specific achievement(s) - then again it can even go the other way and raise awareness of what horrible people they were. Sculptures of the Greek myths don't automatically tell us to adopt "What Would Zeus Do ?" as a motto, because the answer would be, "turn into a swan and rape people", which is pretty much the worst advice possible in any situation*. Static visual art can play the same function as a play, inviting the audience only to consider its subject, not attempting to persuade them of anything. Art can, and should, invite its audience to think.

* One million internet points for anyone who comes up with a situation in which this would actually be an appropriate thing to do.

The intent of the artist does not matter all that much to the perception of the audience. For example, consider that time an old lady "restored" a painting of Jesus into "a very hairy monkey in an ill-fitting tunic". The moral message intended by the artist, if any, doesn't necessarily have any impact on how the art is actually perceived (e.g. a Ted Hughes poem about a hawk that was bizarrely interpreted to be about the Nazis). It's not that the interpretations are wrong, it's that they're subjective. You see a statue glorifying a racist; I see a statue honouring the man who saved Britain from the Nazis. Both are true, in no small part because human beings are fantastically complicated beasts.

Historical figures exemplify how perception changes, especially as pain fades with time. We no longer view statues of Caesar as honouring the man who boasted that he'd killed a million Gauls and enslaved a million more : we can look at these distant atrocities dispassionately as interesting historical events. Similar arguments could be applied to a myriad of historical figures. Their crimes were as bad or worse than any of the more recent figures, yet tearing down Trajan's Column would be criminally insane. If we uncover a statue of an Aztec emperor, we say, "how interesting !" despite their acts of bloody slaughter being as bad as any others in human history. An Aztec atrocity just doesn't have the same emotional resonance as more recent, closer horrors. The statues and monuments no longer have the same impact or meaning as they once did, just as castles are now seen as fun places to visit and not terrifying instruments of oppression and torture. So even when the intent of the artist was to glorify their subject, this doesn't always or permanently succeed.

No-one has a monopoly on what an artwork means. Even at the moment a statue is created,  what to one person can feel horrendous can to others, rightly or wrongly, seem worthy of celebration. How do we decide whose enjoyment outweighs whose offence, and vice-versa ?

Frankly I've no idea. More fundamentally, no-one but no-one is perfect. As the protestors point out, Churchill was indeed a racist - pretty much everyone was back then - but he was hardly a white supremacist. Martin Luther was an extreme anti-Semite. Many suffragettes were eugenicists; so was H. G. Wells (though the concept didn't have the racial overtones it does today). Aneurin Bevan, founder of the National Health Service, called the Tories "lower than vermin". The ancient Athenian founders of democracy were proud to call themselves warlike and thought nothing amiss in owning slaves. Richard the Lionheart, Henry V and Boudicca - some of Britain's most notable military commanders - all committed what today would be considered war crimes. Thomas Jefferson owned slaves. Reality has few true heroes and villains, but it has an abundance of people capable of heroic and villainous acts alike.

If you want to tear down a statue, you should be able to explain four things. First, you need to justify what's special about this particular figure as opposed to a plethora of other, deeply flawed yet accomplished historical figures who held similar or identical views. Second, you need to explain what it is about that particular trait that makes it different from other offensive characteristics. Third, you also need to explain how this applies uniquely to a statue and not some other form of artwork. If a statue is so offensive, why not books ? Why not TV shows ? Fourth, you should explain why removal is the best action rather than alternative measures.

Don't misunderstand - this isn't a rhetorical point. It's absolutely possible to answer these issues. For example, toppling statues of Saddam Hussein made complete sense : those were erected by a violent dictator for the sole purpose of his own aggrandizement, and no-one thought they had any other cultural value; there was a highly limited scope for interpretation. Similarly, Confederate flags or Swastikas - in virtually all instances - are open declarations of hostility, not talking points for polite discourse. I don't mean to say for a moment that because a statue isn't necessarily aggressive, it follows that no statue ever is - that would clearly be very stupid.

But historical figures of the distant past, to my mind, are in general altogether more complex than figures of living history. Thomas Jefferson was a pro-emancipation slave-owning racist, a juxtaposition which simply doesn't make sense to modern ideologies. Christopher Columbus, despite a number of previous attempts by others, was the one who made the New World really matter to the Old, but, like Vasco da Gama, was tremendously violent towards the people he encountered. Should we tear down his statues, or can we use them to mark the undeniable magnitude of his achievement without glorifying the associated death and destruction ? Surely if we can recognise the debt owed to Churchill without endorsing his bigotry, we can do the same for other, much older figures.

Protestors are not beholden to produce a detailed list of every statue or other artwork they want removed. Nor are they under any obligation to produce a set of criteria that is perfect on the first iteration and subject to no further changes : this can be a conversation, in which we revise and adjust the criteria according to the arguments set forth. But protestors do need to at least try and establish some broad principle behind their actions, otherwise their efforts are arbitrary, potentially hypocritical, and counter-productive. Many of the figures targeted are not even especially famous, so attacks on their statues only inflame curiosity. Never mind that a few days ago this was not an issue at all, except in a few cases, which gives the whole thing a distinct whiff of, at best, stemming from different underlying factors, and at worst of being completely manufactured. How many people really feel such bitterness towards centuries-old statues of figures most people have never heard of ? Oh, and they've been there for 200 years but only became offensive this week ? Hmm.

When it is clear that a statue glorifies and is perceived to glorify an injustice, there can be grounds for removal. Alternative actions include adding explanatory plaques, which are far more educational than removal, or counter-statues. Sure, sometimes taking them down is the best option. But scrawling "was a racist" on Winston Churchill ? Nope. That's an action born of pure anger, not any righteous or informed sense of tackling injustice. Calling for the removal of statues of people based on their violent actions or bigoted views ? Nope - that would lead to the removal of practically everyone, even those perceived as heroes to those most fiercely opposed to injustice. People are just so much more complicated than that. So too is their art.

Thursday, 11 June 2020

Great and terrible things

Or, "The Renaissance : A Warning From History".

A very interesting but overly-long read, despite the author's protestations that it needs to be a book. Oh, I don't doubt that a book would be worthwhile, but for the sake of getting to the main point, this can be condensed a great deal. I shall attempt to do so below.

I was attracted by the mention of COVID and of what might happen next according to history. Sometimes shock events lead to massive systemic change, sometimes they don't. But the history of how the Renaissance came to be distinguished from the Middle Ages, despite there being no clear pan-European marker to demarcate the boundary, is fascinating in itself. In fact COVID barely gets a mention, because there are more interesting and broader lessons from the history of history : namely, that the Renaissance wasn't especially golden compared to the previous era, and its astonishing achievements weren't fostered by what you might think :
If we read treatises, orations, dedicatory prefaces, writings on art or courtly conduct, we see what Jacob Burckhardt described as a self-conscious golden age bursting with culture, art, discovery, and vying with the ancients for the title of Europe’s most glorious age. If instead we read the private letters which flew back and forth between Machiavelli and his correspondents, we see terror, invasion, plague deaths, a desperate man scrambling to even keep track of the ever-moving threats which hem his fragile homeland in from every side, as friends and family beg for frequent letters, since every patch of silence makes them fear the loved one might be dead.
In Italy, average life expectancies in the solidly Medieval 1200s were 35-40, while by the year 1500 (definitely Renaissance) life expectancy in Italian city states had dropped to 18. Kids died more in the Renaissance, adults died more, men died more, we have the numbers, but I find it telling how often people who hear these numbers try to discredit them, search for a loophole, because these facts rub against our expectations.  We didn’t want a wretched golden age. [See original article for breakdown of the numbers]
Why did life expectancy drop?  Counter-intuitively the answer is, largely, progress. 
War got worse, for one. While both the Middle Ages and Renaissance had lots of wars, Renaissance wars were larger and deadlier, involving more troops and claiming more lives, military and civilian—this wasn’t a sudden change, it was a gradual one, but it made a difference. 
Economic growth also made the life expectancy go down. Europe was becoming more interconnected, trade increasing. This let merchants grow rich, prosperity for some, but when people move around more, diseases move more too. Cities were also growing denser, more manufacturing jobs and urban employment drawing people to crowd inside tight city walls, and urban spaces always have higher mortality rates than rural.  Malaria, typhoid, dysentery, deadly influenza, measles, the classic pox, these old constants of Medieval life grew fiercer in the Renaissance, with more frequent outbreaks claiming more lives... While the 1348 pandemic was Medieval, most of the Middle Ages did not have the plague—it’s the Renaissance which has the plague every single day as an apocalyptic lived reality.
Economic growth also made non-military violence worse. In Italy especially, new avenues for economic growth (banking and mercenary work) quickly made families grow wealthy enough to raise forces far larger than the governments of their little city states, which made states powerless to stop the violence, and vulnerable to frequent, bloody coups... In the 1400s most cities in Italy saw at least four violent regime changes, some of them as many as ten or twelve, commixed with bloody civil wars and factional massacres.
While the Medieval Inquisition started in 1184, it didn’t ramp up its book burnings, censorship, and executions to a massive scale until the Spanish Inquisition in the 1470s and then the printing press and Martin Luther in the 1500s (Renaissance); similarly witchcraft persecution surges to scales unseen in the Middle Ages after the publication of the Malleus Maleficarum in 1486 (Renaissance); and the variety of ingenious tortures being used in prisons increased, rather than decreasing, over time. If you want corrupt popes, they too can be more terrible as they get richer. 
When I try to articulate the real difference between Renaissance and Medieval, I find myself thinking of the humorous story “Ever-So-Much-More-So” from Centerburg Tales (1951). A traveling peddler comes to town selling a powder called Ever-So-Much-More-So. If you sprinkle it on something, it enhances all its qualities good and bad. Sprinkle it on a comfy mattress and you get mattress paradise, but if it had a squeaky spring you’ll never sleep again for the noise. Sprinkle it on a radio and you’ll get better reception, but agonizing squeals when signal flares. Sprinkle it on the Middle Ages and you get the Renaissance. 
I'm going to digress slightly, so feel free to skip to the next quote section if you want.

I can't help but add that this range of extremes applies not just to societies but to individuals as well. There was a widely-disputed paper a while back claiming that there are more male geniuses because there are also more male idiots, with something about the Y-chromosome causing more variation in attributes. I find that idea to be pretty stupid, but what's undeniable is that individuals are capable of both great and terrible things. Martin Luther was a raging anti-Semite. So was Richard Wagner. Florence Nightingale was a eugenicist. Winston Churchill was a racist. Go looking for genuine heroes and you will find few indeed. Far more often though, you'll find people who committed heroic acts - and villainous ones too. Individuals and societies alike are capable of extremes; they can't be pigeonholed into neat categories.

Right now there's a lot of anger sloshing around, the current focus of which is statues. Now, there are obviously extremes on which we can (or should) all agree : greedy, unrepentant slave traders are probably not the sort who deserve public statues, while inoffensive geologists are probably okay. But where exactly do we draw the line ? I don't mean this as a rhetorical question but an actual one : clearly the line can be drawn somewhere, but where ? What is the general principle at work here ? Presumably no-one wants to tear down historical statues of Caesar or Alexander or Napoleon, but all of these leaders committed atrocities. They were great and terrible indeed.

For my part I think statues can venerate highly specific, context-dependent accomplishments, not the whole character of an individual. They have to, for two reasons : one is that great achievements usually require a good deal of luck, and the second is that everyone is flawed and imperfect. Racism is an incredibly common flaw (but full-on hatred is not); sexism perhaps even more so. Churchill was a racist, but no-one celebrates that aspect : we celebrate defeating the Nazis. Wagner hated the Jews, but humming along to Ride of the Valkyries doesn't make one a white supremacist. We no longer celebrate Caesar's invasion of Gaul, but we can be interested in what he did. Owning a bust of Napoleon probably doesn't indicate that the owner harbours the desire to lead half a million men into the Russian winter for shits and giggles. Having a statue of Boudicca outside Parliament doesn't mean that the British government secretly wants to massacre the Italians. Well, probably... my point is that if we're going to remove statues, as we should, we ought to think very carefully about each and every case. The great figures of history are often also the terrible ones.

This neatly brings us back to the main narrative. If the Renaissance was the Middle Ages on steroids, why is it perceived as something distinct and better ? One answer is that this extreme turbulence led to extreme achievements, good and bad. There were indeed amazing achievements in the Renaissance, it's just that the reason for them isn't what we usually think.
The radical oversimplification is that, when times get desperate, those in power pour money into art, architecture, grandeur, even science, because such things can provide legitimacy and thus aid stability. Intimidating palaces, grand oratory, epics about the great deeds of a conqueror, expensive tutors so the prince and princess have rare skills like Greek and music, even a chemical treatise whose dedication praises the Duke of Such-and-such, these were all investments in legitimacy, not fruits of peace but symptoms of a desperate time.
Culture is a form of political competition—if war is politics by other means, culture is too, but lower risk.  This too happened throughout the Middle Ages, but the Renaissance was ever-so-much-more-so in comparison, and whenever you get a combination of increasing wealth and increasing instability, that’s a recipe for increasing art and innovation, not because people are at peace and have the leisure to do art, but because they’re desperate after three consecutive civil wars and hope they can avoid a fourth one if they can shore up the regime with a display of cultural grandeur. The fruits fill our museums and libraries, but they aren’t relics of an age of prosperous peace, they’re relics of a lived experience which was, as I said, terrible but great.
The thing about golden ages—and this is precisely what Petrarch and Bruni tapped into—is that they’re incredibly useful to later regimes and peoples who want to make glorifying claims about themselves. If you present yourself, your movement, your epoch, as similar to a golden age, as the return of a golden age, as the successor to a golden age, those claims are immensely effective in making you seem important, powerful, trustworthy. Legitimate.
Any place (past or present) that calls itself a new Jerusalem, new Rome, or new Athens is doing this, usually accompanied by a narrative about how the original has been ruined by something: “Greece today is stifled by [insert flaw here: conquest, superstition, socialism, lack of socialism, a backwards Church, whatever], but the true spirit of Plato, Socrates and the Examined Life flourish in [Whateverplace]!” The same is true of claiming Renaissance.  If you can make a claim about what made the Renaissance a golden age, and claim that you are the true successor of that feature of the Renaissance, then you can claim the Renaissance as a whole.  This is made easier by the fact that “the Renaissance” is incredibly vague.
After the Renaissance, in the period vaguely from 1700 to 1850, everyone in Europe agreed the Renaissance had been a golden age of art, music, and literature specifically.  Any nation that wanted to be seen as powerful had to have a national gallery showing off Renaissance (mainly Italian) art treasures... Renaissance art treasures were protected and preserved more than Medieval ones—if you’re valorizing the Renaissance you’re usually criticizing the Middle Ages in contrast. The Renaissance became a self-fulfilling source base: go to a museum today and you see much more splendid Renaissance art than Medieval.
So there's a survivorship bias that makes "Renaissance = good, Medieval = bad" a self-fulfilling story. But did the Black Death have any transformative effects at all ? It would seem not, since no transformation occurred. Oh, there was evolution, yes, but that was driven by deeper factors than a transitory disease. There was no one "X factor" that "caused" the Renaissance - the author lists capitalism, secularism, nationalism and proto-democracy as all having been claimed as possible factors, when in reality there is no clear difference between the Renaissance and medieval eras at all.
To this day, every time someone proposes a new X-Factor for the Renaissance—even if it’s a well-researched and plausible suggestion—it immediately gets appropriated by a wave of people & powers who want to claim they are the torch-bearers of that great light that makes the human spirit modern. And every time someone invokes a Renaissance X-Factor, the corresponding myth of the bad Middle Ages becomes newly useful as a way to smear rivals and enemies.  As a result, for 160 years and counting, an endless stream of people, kingdoms, political parties, art movements, tech firms, banks, all sorts of powers have gained legitimacy by retelling the myth of the bad Middle Ages and golden Renaissance, with their preferred X-Factor glittering at its heart.
You see the problems with the question now: the Black Death didn’t cause the Renaissance, not by itself, and the Renaissance was not a golden age, at least not the kind that you would want to live in, or to see your children live in. But I do think that both Black Death and Renaissance are useful for us to look at now, not as a window on what will happen if we sit back and let the gears of history grind, but as a window on how vital action is.
What the Black Death really caused was change. It caused regime changes, instability letting some monarchies or oligarchies rise, or fall. It caused policy and legal changes, some oppressive, some liberating. And it caused economic changes, some regions or markets collapsing, and others growing.
Is there anything we can learn from the plague to inform us about COVID-19 ? Yes, but it certainly doesn't herald a golden age or an apocalypse.
That’s what we’ll see with COVID: collapse and growth, busts for one industry, booms for another, sudden wealth collecting in some hands, while elsewhere whole communities collapse, like Flint Michigan, and Viking Greenland, and the many disasters in human history which made survivors abandon homes and villages, and move elsewhere.  A lot of families and communities will lose their livelihoods, their homes, their everythings, and face the devastating need to start again.  And as that happens, we’ll see different places enact different laws and policies to deal with it, just like after the Black Death.  Some places/regimes/policies will increase wealth and freedom, while others will reduce it, and the complicated world will go on being complicated. 
That’s why I say we should aim to do better than the Renaissance. Because we can. We have so much they didn’t. We know so much.
The stakes are higher.  Unlike in 1348 we have a lot of knowledge, answers, options, concepts we could try like safety nets, or UBI, or radical free markets, many very different things.  Which means that acting now, demanding now, voting, pushing, proposing change, we’re shaping policies that will affect our big historical trajectory more than normal—a great chance to address and finally change systemic inequalities, or to let them entrench.  There is no predetermined outcome of pandemic; pandemic is a hallway leading to a room where something big is going to be decided—human beings decide.

Black Death, COVID, and Why We Keep Telling the Myth of a Renaissance Golden Age and Bad Middle Ages

"If the Black Death caused the Renaissance, will COVID also create a golden age?" Versions of this question have been going around as people, trying to understand the present crisis, reach for history's most famous pandemic.

Tuesday, 9 June 2020

Woe the humanities ?

A few years ago I wrote a piece describing how valuable humanities courses are for teaching critical thinking, here broadly in the sense of "getting at the real truth of a statement". At high school, you can't do that when teaching trigonometry. You can't have kids wondering if the sine of an angle is really the ratio of the opposite side to the hypotenuse, because it just is. Science at that level deals with hard facts. But language and literature can leap beyond that to the world of "what does this really mean ?" To be aware that a poem conveys information beyond its surface content, which is often barely intelligible, is an important step in realising how emotions manipulate you. If you can disentangle Shakespeare, then understanding the effects of advertising and political slogans becomes a whole lot easier.

So I'm 100% in favour of teaching humanities. Not just because critical thinking is a valuable spin-off, but for their own sake. It is a fundamentally good thing to explore the irrational side of humanity. This means I'm sympathetic to the author of the embedded piece when she says :
Most of these disciplines aren’t quantifiable, scientific, or precise. They are messy and complicated. And when you try to straighten out the tangle, you may find that you lose far more than you gain... The tools of hard science have a part to play, but they are far from the whole story. Forget the qualitative, unquantifiable and irreducible elements, and you are left with so much junk.
Sometimes, there is no easy approach to studying the intricate vagaries that are the human mind and human behavior. Sometimes, we have to be okay with qualitative questions and approaches that, while reliable and valid and experimentally sound, do not lend themselves to an easy linear narrative—or a narrative that has a base in hard science or concrete math and statistics. Psychology is not a natural science. It’s a social science. And it shouldn’t try to be what it’s not.
But what I don't have an answer to is where, specifically, to draw the line. Humanities subjects have irreducible, unquantifiable aspects to them, but they also do have quantitative elements. Under what conditions, if any, should we use the quantitative to inform about the qualitative ?

You might remember my agent-based project to model talent versus luck* (there's a stripped-down sketch of the sort of thing I mean below). I found this a very interesting way to illustrate and explore different possible scenarios. It was helpful for showing how the data could be incredibly misleading and how correlation and causation could be confused in both directions. In exploring the data I thought about the concept of a meritocracy in a deeper and different way than I otherwise would have. The process had value. But at no point was I under any illusion that the models had any direct bearing on the real world : they could never be used to make predictions or inform policy; their use was more oblique than that.

* I basically abandoned the project when I reached a satisfactory point. I even wrote it up in a more academic-paper style format, but I got too bored to submit it.
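For anyone curious what such a model looks like, here's a very stripped-down sketch of the usual setup (roughly along the lines of the Pluchino et al. "Talent versus Luck" toy model, not my actual code) : each agent gets a fixed, normally-distributed talent, random lucky and unlucky events arrive over their careers, and only the lucky events are conditional on talent.

```python
import random

def run_careers(n_agents=1000, n_steps=80, p_event=0.1, seed=42):
    """Toy talent-vs-luck simulation. Capital doubles when a lucky event is
    successfully exploited (probability = talent) and halves on unlucky ones."""
    rng = random.Random(seed)
    talent = [min(max(rng.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(n_agents)]
    capital = [10.0] * n_agents
    for _ in range(n_steps):
        for i in range(n_agents):
            if rng.random() < p_event:               # an event hits this agent
                if rng.random() < 0.5:               # it's a lucky one...
                    if rng.random() < talent[i]:     # ...exploited only if talented enough
                        capital[i] *= 2
                else:                                # unlucky event
                    capital[i] /= 2
    return list(zip(talent, capital))

results = run_careers()
best = max(results, key=lambda tc: tc[1])
print(f"most successful agent: talent {best[0]:.2f}, final capital {best[1]:.1f}")
```

In runs of this kind of model the most successful agent is typically lucky rather than maximally talented, which is exactly the sort of thing that makes the raw data misleading about what success "measures".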

So models can have value beyond making direct quantitative predictions. But is it fair to criticise those humanities models which do claim to be able to make predictions or understand specific situations ?
Witness the rise of Cliodynamics : the use of scientific methodology (nonlinear mathematics, computer simulations, large-N statistical analyses, information technologies) to illuminate historical events – and, presumably, be able to predict when future “cycles” will occur. Sure, there might be some insights gained. Economist Herbert Gintis calls the benefit analogous to an airplane’s black box: you can’t predict future plane crashes, but at least you can analyze what went wrong in the past. But when it comes to historical events—not nearly as defined or tangible or precise as a plane crash—so many things can easily prevent even that benefit from being realized.
To be of equal use, each quantitative analysis must rely on comparable data – but historical records are spotty and available proxies differ from event to event, issues that don’t plague something like a plane crash. What’s more, each conclusion, each analysis, each input and output must be justified and qualified (same root as qualitative; coincidence?) by a historian who knows—really knows—what he’s doing. But can't you just see the models taking on a life of their own, being used to make political statements and flashy headlines? It's happened before. Time and time again. And what does history do, according to the cliodynamists, if not repeat itself?
The cliodynamists, just like everyone else, will only know which cyclical predictions were accurate after the fact. Forgotten will be all of those that were totally wrong. And the analysts of myths only wait for the hits to make their point—but how many narratives that are obviously not based in reality have similar patterns? And whose reality are we dealing with, anyway? We’re not living in Isaac Asimov’s Foundation, with its psychohistorical trends and aspirations—as much as it would be easier if we were.
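Just to make that post-hoc worry concrete, here's a toy demonstration of my own - nothing to do with how cliodynamicists actually work. Go hunting for the "dominant cycle" in a detrended random walk and you will find one every single time; it just won't be the same one twice.

```python
# Toy illustration (mine, not cliodynamics) : a "dominant cycle" can always be
# found in pure noise. Each fake "historical" series is a random walk with its
# linear trend removed; the periodogram always has a peak somewhere.
import numpy as np

rng = np.random.default_rng()
for run in range(3):
    walk = np.cumsum(rng.normal(size=512))            # fake historical series
    t = np.arange(walk.size)
    detrended = walk - np.polyval(np.polyfit(t, walk, 1), t)
    power = np.abs(np.fft.rfft(detrended))**2
    freqs = np.fft.rfftfreq(walk.size)
    peak = np.argmax(power[1:]) + 1                   # skip the zero-frequency term
    print(f"Run {run} : 'dominant cycle' of period ~{1/freqs[peak]:.0f} time units")
```

Every run confidently reports a cycle; none of them mean anything. That doesn't prove the cliodynamicists are doing this, only that the burden of proof for claimed historical cycles is a heavy one.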
The author may be right, I don't know. There's a deep, unanswered and perhaps unanswerable philosophical question here : can the unquantifiable affect the quantifiable ? Is human psychology simply impossible to predict ? (Obligatory other mentions : my own abysmal history of predictions; the concept of deliberately seeking out or creating wrong predictions as an extreme form of the straw man fallacy)

I lean cautiously towards "no" on that last question. My lingering suspicion is that individual humans are far too complicated to model accurately in a given situation, but that the statistical behaviour of a group (or an individual over time) is easier. Every group has its anomalies and oddballs who defy the major trend, but you can still often pick that trend out. And the map is not the territory, but it's often good enough. You cannot quantify justice, but you can, maybe, quantify how people feel about it.

Now, only an insane person would claim that modelling the behaviour of a society is simple enough that we can expect predictive-level modelling of political issues anytime soon. But it seems to me a mistake not to try. It might be that some things literally cannot be quantified and are utterly impossible to model in any way whatsoever, but it could also be that things are just fantastically complicated and we don't have or understand all the variables yet. See, humans are often quite easy to predict - they're creatures of mindless habit. The difficult bit is the specifics : when will they deviate from their habits, and which event will be the straw that breaks the camel's back ?
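
Here's a trivial illustration of the asymmetry I mean - purely made-up numbers, nothing calibrated to real behaviour : simulate a crowd of noisy individuals, each a creature of habit plus a big dose of randomness, and compare how well you can predict one person versus the group average.

```python
# Toy illustration : individuals are noisy and hard to predict, but the group
# average is tame. All numbers here are invented for the sake of the demo.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_days = 1000, 200
habit = rng.normal(loc=8.0, scale=1.0, size=n_people)         # each person's baseline
behaviour = habit[:, None] + rng.normal(scale=3.0, size=(n_people, n_days))

# Predict the final day from the history : per person, and for the group mean.
individual_error = np.abs(behaviour[:, -1] - behaviour[:, :-1].mean(axis=1)).mean()
group_error = np.abs(behaviour[:, -1].mean() - behaviour[:, :-1].mean())

print(f"Typical error predicting one person  : {individual_error:.2f}")
print(f"Error predicting the group's average : {group_error:.2f}")
```

The per-person error stays stubbornly large no matter how much history you feed it, while the group average is boringly predictable. That's my lingering suspicion in two numbers - though real people are obviously not Gaussian noise wrapped around a habit.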

My take-home message would then be : the humanities aren't a science, but they have scientific aspects. Try to model them, but don't expect the same kind of results as in the hard sciences. If a model can describe a scenario directly, then great; if not, the less predictive aspects aren't necessarily less valuable.

Humanities aren't a science. Stop treating them like one.

There's a certain allure to the elegance of mathematics, the precision of the hard sciences. That much is undeniable. But does the appeal mean that quantitative approaches are always germane? Hardly - and I doubt anyone would argue the contrary.

Sunday, 7 June 2020

Taking the disaster out of the disaster movie

What happens to Garfield if you take Garfield out of the comic ? It gets funnier. Or at least it becomes a different sort of funny.

The disaster-movie version of this is called Twister. Re-watching this for the billionth time, I realised that it has a very different structure to other natural disaster epics. Normally, the whole point is to prevent a catastrophe, or overcome one that's already happened. Not with Twister. We get some property damage, cows, and a handful of fatalities... and that's it. Sure, the tornadoes cause a spot of bother, but not really anything you could call a disaster.

Don't get me wrong : cheesy disaster flicks are not a guilty pleasure for me, they're just a pleasure. Actually, movies of all sorts from the 1980s and '90s are my particular favourite. Are they the best ? No, but they're my favourite. The world back then had a naive silliness to it that movie makers weren't afraid to exaggerate - especially in the movie-in-a-movie world of Last Action Hero, which shows this perfectly. Movies of this era perfected the art of pure entertainment. Believability ? Realism ? They can go hang. Instead they aimed at popcorn-consuming fun, and they got it.

Twister is as entertaining as anything else from this golden age. But instead of stopping an asteroid or killing terrorists, the goal for the protagonists is... collecting data. That's it. Once they've got their data, they survive a tornado and all go home. Oh, sure, the data is useful in some abstract, off-screen future for developing meteorological models, but the in-movie extent of the narrative is collecting data and absolutely nothing else.

What if other movies followed this formula ? Here are a few thoughts.
  • Deep Impact : a daring crew venture onto a deadly comet and take some really accurate readings of its orbital trajectory so they can find out just how screwed the planet is. Then they go home, and everybody dies.
  • Volcano : a great big volcano erupts in Los Angeles, but Tommy Lee Jones is on hand to get some high-precision readings of how hot the lava is. He's invited to give a prestigious guest lecture in a geology conference. Meanwhile, the city is destroyed.
  • Independence Day : a huge alien fleet descends towards Earth, but plucky astronomers are able to work out whether they're carbon or silicon-based. Then the aliens wipe out the world.
  • Armageddon : a drilling crew sent to an asteroid work out ways to stop asteroids in general, but unfortunately not this particular one, so everyone dies.
  • The Day After Tomorrow : brave meteorologists make astonishingly accurate graphs of how cold it's getting. Everyone is very impressed and then dies.
  • The Core : an elite team of geologists are sent to the centre of the planet, where they take some amazing readings of the magnetic field and then go home to die horribly.
  • Jurassic Park : a bunch of dinosaurs go on the rampage and eat a load of people, but paleontologists are able to settle several long-running academic disputes shortly before being eaten themselves.
All things considered, realistic disaster movies are probably a terrible idea.

Thursday, 4 June 2020

All glory to the non-violent Hypno Toad

I'm against free speech and in favour of censorship. There, that should annoy a lot of people and provoke suitable outrage.

Regular readers will know that I don't mean this in any unqualified sense, of course. That'd be silly. I just don't believe in the unlimited right to say whatever you want in any circumstance and get away with it scot-free. Like, if you were to sell toothpaste containing plutonium and not list it in the ingredients, you should go to prison as a result. Or if you were to declare that you were planning to murder someone, you should face a criminal investigation. Or if you were to say, "I hate rice pudding !" at the Annual General Meeting of the Rice Pudding Fan Club, you deserve pretty much whatever angry reaction you get.

Of course, the details get messy. Sometimes, contrary to popular memes, the government does have a right to intervene; sometimes it only means accepting that you don't have an automatic right to either speak to or be heard by anyone you like just because you want to. Can you imagine if you did ? You'd be able to walk into the local Department of Landscape Architecture and declare in a booming voice, "Righto, chaps ! Here's a nice fresh corpse. Put those gardening tools down, for today I shall demonstrate the best way to conduct an autopsy !".

If people had such insane freedom to, they'd lose all freedom from. It'd be crazy. We would, quite literally, not have anything we could honestly call civilization. We'd have no freedom at all. We'd have barbarism.

Without censorship - be that in the form of law or by personal choice to look away - we'd have no rule of law. The right to shut people up is absolutely vital to a cohesive society. So I believe in freedom of speech in the same way I believe in freedom of action : under law.

The essence of this is that different regulations apply in different venues. I can say whatever I like among friends, no matter how hateful. I can be racist. I can be homophobic. I can have a vitriolic hatred of ginger cats or a genocidal mania against hedgehogs - anything. If I have evidence for my arguments, I can even present them at scientific conferences. But I don't expect all venues to be consequence-free. I absolutely can't start phoning up random people to say, "we need to KILL all the homophobic ginger hedgehogs !" and expect not to be sectioned. I can't yell homophobic things on the street at passers-by without expecting them to shout back. I can't shout, "all women are sluts !" in a job interview for a teaching position in a school for teenage girls and expect to be hired. If I was free to be a monster but no-one else was free to fight back, life would be unbearable.

(Frankly, the idea that I should be able to go around saying whatever I want in all situations is so utterly outrageous that you'd be forgiven for thinking I'm making a straw man. Alas, it isn't. People really do believe this is what "free speech" means.)

Now law is not an absolute. Its standards vary. If I say, "I think Nigel Farage looks like a toad", I can expect little or no response; if I say he should be eaten by piranhas I can expect this to be treated as hyperbole; if I orchestrate a campaign of abuse against him I can expect harsher sanctions; if I conspire to actually try and murder him I should expect imprisonment. Laws and regulations are flexible to the circumstances, not tyrannical absolutes.

Obviously this is messy. It has to be, that's life. But a slippery slope of ever-greater censorship ? Hardly. If anything, the progress of Western civilization over the last few centuries shows a tendency towards ever more freedom*, not less.

* More accurately, there are certainly conditions under which censorship does become a slippery slope. The mistake I want to point out is in thinking that any limitation of any speech automatically starts us going down that road.

It's a peculiarity that so many are able to readily accept this when it comes to other matters but view speech as something which Must Not Be Touched, believing that any interference at all will inevitably lead to a Ministry of Truth or some such. As though there were some better "golden age" of now-lost freedoms. As though the freedom to make racial slurs in Edwardian England was more than fair compensation for being denied the right to advocate for gay rights. It's total bollocks - an arse-backwards view of history that very specifically targets whatever particular opinion happens to be relevant while ignoring the wider context; all too often, it's used to justify the freedom to hate people who've done the haters no harm, while denying those same people the very freedom the haters disingenuously claim to cherish.

It's absurd. If we can regulate behaviour directly, we can damn well regulate speech as well. And we do - for the most part, successfully. For as deeply flawed as the world is, it could be oh so very much worse.

But still, this is messy.

Take Twitter's policy against violence. Who was Twitter made for ? Ordinary people having conversations. Consequently, they allow for hyperbole. Presumably if I were to tweet that I think certain politicians ought to be shot into the Sun from a great big cannon, they'd understand that that wasn't a direct appeal to Elon Musk to start working on said cannon. Even if I say, "build that cannon, Elon, or there WILL be consequences !", they'll know I don't mean it. But if I start saying something which sounds more like a genuine threat, they'll step in.

This policy is largely sensible. In the online world, where we don't know anyone nearly as well as in real life, extra safeguards need to be taken : if I directly threaten someone, it makes sense to presume a sincere intent unless it can be clearly demonstrated that I didn't mean it. And the target audience matters. A public post is, roughly, equivalent to yelling through a megaphone on a public street, and if you were to yell, "I'M GONNA TEAR YOU A NEW ONE, SONNY !" on the street, you'd certainly not question it if an officer were to, at the very least, take you aside for a quiet word.

If, on the other hand, you yelled something similar in a boxing ring, or during a hockey match, all you'd get is - at most - a referee telling you to calm down. People make threats largely to prevent violence, to issue a warning - not as a prelude to actual violence. It takes a lot to transform anger into action.

The venue matters a great deal to how we interpret the meaning of the words. Indeed, social media itself seems to amplify the tendency towards even reaching the threat stage*. But the sincerity of the threat is not really the issue : if someone says something online that would be considered unacceptable in real life, then the emotional consequences at the other end are similar. To not act against such abuses would be blatantly unfair.

* Whether it goes beyond this is another matter entirely. Plausibly people need greater levels of hyperbole to make the same emotional impact in text as they could in person, due to the sheer amount of information lost. So it might just make them sound angrier but not actually be angrier.

Part of the difficulty is that there's no exact analogy for social media as a venue. A public post has no guarantee of being seen by everyone in the whole world, but this is technically possible. Its potential for reaching a large audience is vastly greater than yelling in the street. A direct private message is most similar to a private phone call, but still significantly easier to reshare. Yes, you can record phone calls and play them back to people, but this takes a great deal of effort - which is why you don't expect it to happen. With digital media, you don't have the right to expect the same degree of privacy simply due to the very nature of the technology, just as if you were to yell at someone on the bus, you shouldn't expect everyone else to cover their ears so you could have a private argument. It is inherently more public, and relies on you trusting the other guy not to spread it around. And posts to online communities are somewhat like conversations overheard in pubs or private clubs, but the differences are profound - the invitation and admissions process alone are wholly different. Social media, in the end, is a medium unto itself.

All this is important for what follows, but I'm in danger of digressing. The wider problem of surveillance - whether the messaging service has a right to examine your data - isn't really what I want to look at here, it's that specific problem of violent messages.

The heart of it is that there is a legitimate use for violence and for the glorification of violence. Think of Eowyn killing the Witch-King, or the terrorist attacks of the suffragettes, or the slave uprising of Spartacus, or the American War of Independence, or the battle of Marathon, or the destruction of the Death Star. Fiction and reality alike glorify violence because sometimes it is glorious, or at least necessary. Without a violent response to violent injustice, often that injustice would only persist. Turning the other cheek is entirely laudable, but it doesn't always work. This is particularly problematic when the law permits injustice and peaceful attempts at reform have failed.

So our history and fiction are soaked in it. Many of our now-basic rights were only won - could only have been won - through violence.

Much of it, of course, was not in the least bit glorious but utterly horrific. The genocide of the Khmer Rouge, the tortures of the Spanish Inquisition, the burning of witches, the slave trade, and countless other episodes are sorry affairs indeed. The point is that violence in itself cannot be objectively said to be glorious or horrific. It depends on context.

Twitter's specific policy, however, is largely sensible. It is concerned with specific, credible threats against real individuals, not against vaguer ideals. It doesn't say much specifically about the glorification of violence, but presumably the same basic rules apply : Lord of the Rings gifs should be okay, posting pictures of statues of ancient Roman generals is fine, and even hyperbole about hated public figures is probably okay too.

What, though, if you want to organise a violent revolution ? What if you feel you have no choice but to fight violence with violence ?

What bothered me about the recent infamous tweet about looting and shooting (the origin of the phrase being something I was utterly unaware of) was that surely the President of the US is sometimes going to find advocating and even glorifying violence unavoidable. Would "shock and awe" have been banned under Twitter's rules ? If it came from the source, then I assume so. And it struck me as a bit odd if official policy couldn't be discussed on Twitter : that would reduce the ability to protest.

This is where Twitter breaks down, for the simple reason that it's not a news service. Twitter can take action against individuals for making threats against each other; it can't take action against government policy. My simple conclusion is that this makes it a really, really stupid place to announce government policy. Discussing existing policy ? Sure. Making it the official source for the announcement itself ? No, because sometimes its own rules would prohibit it from doing so.

Therefore the answer to the "President's" threats and would-be revolutionaries is the same : don't do it on Twitter.

Again, the venue matters. Just as digital media is clearly unsuitable for discussions that need total privacy, so too is it a mistake to use Twitter for official policy announcements. Its whole structure makes it no more sensible as a vehicle for official announcements than releasing messages in little tubes strapped to thousands of carrier pigeons or a bunch of very sleepy frogs. Yes, you could do that. You certainly have the right to do that. But it wouldn't be a good idea.

What, though, if the "President" just wants to tell us his private "thoughts" ? Again, a ridiculously small character limit makes this manifestly a poor choice. It's fine for posting pictures of lolcats or commenting on celebrity hairstyles, but when absolute clarity is required - as it is when we need to know if a Dear Leader is being serious or not - it's as daft as using the Wingdings font. You can't reduce moral issues or official policies to the simplicity which is Twitter's main feature.

(That said, the "President" himself seems to think you can reduce complex moral issues to unintelligible gibberish, but that's another story.)

I've stayed well away from setting out the conditions under which I'd consider violence to be acceptable, only noting that such conditions do exist, because I doubt I could do it here anyway. For that I'd need books and conversation, not a blog. Quite likely my own initial views would shift on hearing what others had to say, no matter how carefully I thought about it beforehand. But Twitter ? Ridiculous. People do use it to generate threads of intelligent statements, and some of them are even very good, but... why ? Fer gods' sake, why ? Write a blog ! Write a Facebook post ! Just use a suitable frickin' format instead of something designed for meme-length quotes ! Link to something suitable on Twitter by all means, but don't do it directly on Twitter itself - that's ludicrous.

Accepting that official announcements should not be made via Twitter, there remains the tricky problem of whether existing policy should be discussed there. Clearly Twitter ought, if it's not going to impose double standards, to take a much harsher line against the "President", but what should it do when official US policy violates its terms of use ? Isn't it also a valuable tool for exposing injustice ?

I believe its official policy could be interpreted thusly : discussion or simple regurgitation of an official policy would not violate its terms of use. If you quote someone else saying they're going to enact violence, you are not necessarily yourself participating in it (fake news is different, and needs to be removed entirely). The violence is thereby exposed for people to make their own judgement on it. A politician would then be able to, say, declare war in some other media and link to it on Twitter, providing they don't include any promotion of violence within the text of the tweet itself. Members of the public would then be able to express support for the policy, but not be able to tweet violent slogans or suchlike. They could set out why they agreed with the policy, but not engage in the actual promotion of the policy. They could say, for example, that "this war will keep our country safe", but not, "let's kill a bunch of people".
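
If you wanted to caricature that reading as a decision procedure - and this is purely my own toy encoding of my own interpretation, nothing to do with Twitter's actual rules or systems, with flags that are entirely hypothetical labels - it might look something like this :

```python
# A toy encoding of MY interpretation of the policy sketched above.
# The flags are hypothetical labels for illustration, not anything Twitter exposes.
def moderate(tweet):
    if tweet.get("fake_news"):
        return "remove"    # fabricated claims are removed entirely
    if tweet.get("own_text_promotes_violence"):
        return "remove"    # "let's kill a bunch of people" in the tweet itself
    if tweet.get("quotes_or_links_violent_policy"):
        return "allow"     # reporting or discussing a policy is not promoting it
    if tweet.get("supports_policy_with_reasons"):
        return "allow"     # "this war will keep our country safe"
    return "allow"         # everything else : ordinary conversation

print(moderate({"quotes_or_links_violent_policy": True}))   # allow
print(moderate({"own_text_promotes_violence": True}))       # remove
```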

As I said, it's messy. I don't claim that it's a definitive answer.

It's important to have safe spaces where people can discuss whatever the hell they like. But it's also important that not all spaces are safe : that the Rice Pudding Fan Club isn't inundated internally with messages about the latest advancements in lithium battery technology, but that outside of their circle they can expect criticism for their strange obsession with an unpleasant dessert. So the President can, for better or worse, set violent policy - he just shouldn't do it directly on Twitter. And let's face it, this particular idiot should have been blocked there long ago if Twitter isn't to impose double standards.

Finally, given the difficulties of how we view private social media companies as analogies to traditional forums, it's not at all obvious whether we should expect them to be completely unrestricted or heavily monitored. We don't expect local pubs or community centres to host any old group who wants to use their facilities, so why should we expect privately-run social media companies to allow online organised gatherings ? There's no particular reason to expect this. Though it's a double-edged sword : sometimes we need a violent response to correct an injustice. Whether you want to start or stop a revolution, then, Twitter might not be the best place to go.

Tuesday, 2 June 2020

Utopia Down Under ?

Although I've been concentrating on other matters lately, my idea about how to construct a political system that actually works is still ticking over in the back of my mind. Long version here, short version here, ranty follow-up here.

The essence of it is that we concentrate on the decision-making process instead of who we elect to power. The bitter contest between left and right is part of the problem : simply trying to get one side to win will never work. Instead, we abandon the winner-takes-all approach and adopt the system of competitive collaboration used in science; we give some power to everyone. Through a system of anonymous peer review, competing groups of scientists reach conclusions while agreeing to disagree. Oh, they can say whatever they like in conferences, but not in papers. Anonymity usually prevents rivalry from escalating into outright hostility, and mistakes are learning experiences, not grounds for dismissal. A triumvirate system of author, reviewer, and editor avoids the need (at the level of individual studies) for an infinite chain of referees. When this works well, it fulfills the critical function of having author and critic address the salient points, and not go off on an invective-laden diatribe about the other side's fat ugly face or some other pointless distraction, as usually happens in political "debates".

On the larger scale, wider community review permits an emergent consensus.  No-one is "in charge". No-one says, "this is Truth, all unbelievers in the Truth shall burn". Almost nothing and no-one is held to be beyond question. In this way an efficient consensus rapidly emerges that's close to the optimum, most rational conclusion possible given the evidence and methodology of the day.
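
As an aside, the emergence bit isn't mystical. Here's a standard toy opinion-dynamics model - purely illustrative, and emphatically not a simulation of the system I'm proposing - showing that repeated pairwise "reviews" can pull a group towards consensus with no-one in charge :

```python
# Toy bounded-confidence opinion model : consensus emerges from repeated
# pairwise exchanges with no central authority. A textbook-style illustration,
# not a model of the political system proposed above.
import random

random.seed(0)
N, THRESHOLD, PULL, ROUNDS = 200, 0.5, 0.3, 50000
opinions = [random.random() for _ in range(N)]        # positions on a 0-1 spectrum

for _ in range(ROUNDS):
    i, j = random.sample(range(N), 2)                 # an anonymous pairwise review
    if abs(opinions[i] - opinions[j]) < THRESHOLD:    # engage only if not too far apart
        shift = PULL * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift

print(f"Initial spread of opinions ~1.0, final spread : {max(opinions) - min(opinions):.2f}")
```

In typical runs the spread collapses towards a single value : no referee-in-chief required, just lots of local exchanges. Real peer review adds the crucial extra ingredient that the pull is towards the evidence rather than merely towards each other, which is why the consensus it produces is worth having.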

There's no reason to suppose that adapting such a proven system into the political sphere is either impossible or wouldn't work. Science does work, and was contemplating the nature of time when the prevailing moral view was that a woman's place was in the home. Does it work perfectly ? Of course not, but it works a damn sight better than politics.

What worries me is the "too good to be true" factor (leaving aside the tremendous difficulties of persuading people to adopt such a system). Of course, there are many important questions of detail about how to modify the scientific system into a political one, some of which are discussed in the previous links, but before even contemplating any sort of revolution, we should be damn sure that it's actually going to work... or at least, will work better and for longer than the current omnishambles.

One problem I've encountered is that undue criticism tends to lead to bullshitting. Permanent extreme hostility by the media is at least a contributing factor as to why politicians frequently convey no meaningful information at all, since literally any answer they give will be treated as definitive proof that they're worse than Hitler. And recently I tried a Clearer Thinking app that claims to help you re-evaluate your beliefs, but all I did was end up modifying my original statement to make it more rigorous - I didn't change my mind at all, I just made things more precise.

But this, I think, points to a wider issue rather than a fundamental flaw in the proposed method. As the primary means for conveying information between politicians and the people, the functioning of the media is critical, but this in itself doesn't undermine a political system based on competitive collaboration : it only underscores the further need for media reform. And the anonymous refereeing system and the obligation for cross-party involvement are specifically designed to deal with overly-hostile criticism. So I think the idea is safe enough on that score; I don't think it will lead to people rationalising their pre-existing biases.

The article below raises a more serious potential problem : the nature of scientific versus political truths. Scientific truth is largely objective : it can be measured and quantified to a point beyond reasonable doubt. Yes, yes, things get much more messy when dealing with the nature of models and theories, but that doesn't invalidate the main proposition. For example, whether we view gravity as a force or the curvature of spacetime or something else entirely, it is an undeniable fact that gravity exists. Which is usually what matters if you're nearing a cliff edge, be it physical or political.

Politics has plenty of blunt facts and even objective models, but it also has something science simply doesn't : moral principle. Science is supposed to be a search for the truth no matter how unpleasant it might be, but politics often doesn't have that luxury. The subjective needs of individuals are often in stark contradiction, and what seems virtuous and good to one can seem dark and brutal to another. Thus political decisions simply cannot be based on evidence and fact alone. In order to make choices, the political system needs guiding principles. For example :
“In 1948, New Zealand’s first professor of political science, Leslie Lipson, wrote that if New Zealanders chose to erect a statue like the Statue of Liberty, embodying the nation’s political outlook, it would probably be a Statue of Equality,” he writes. “This reflected New Zealanders’ view that equality (rather than freedom) was the most important political value and the most compelling goal for the society to strive for and protect.”
It's not necessary to have a specific end goal in mind, some mythical Utopia to consciously strive for. But it is necessary to have the basic principles established firmly enough so as to act. In this example, freedom and equality. In some ways I like very much this blunt notion of saying, "to hell with freedom, it's equality we should aim for !" It's a lot more appealing than the complexities of trying to balance the two, and perhaps aiming for equality leads to freedom as a pleasant side-benefit. After all, New Zealand is clearly not a tyrannical dystopia... but on the other hand, the Soviet Union certainly was.

At best, then, this preference for equality over freedom is an incomplete explanation of how New Zealand gets it right. One possibility is simply that New Zealanders are sensible people who elect good leaders; it's been claimed that countries with female leaders are doing much better at handling the pandemic (would that Theresa May were still in charge to test this !). And I should emphasise that most systems probably work well enough - even authoritarian monarchies ! - if you happen to have sensible, well-meaning people in charge; difficulties occur only when you don't. So this explanation would only shift the problem : what keeps the New Zealand electorate sensible ?

Knowing next to nothing about New Zealand, I'm marking this one down for further research. Maybe they've got some secret sauce, maybe right now they're just lucky. After all, plenty of Western democracies have produced great leaders in the past.

To return to the main point : freedom and equality are not always in conflict, but sometimes they are. And in that situation only an ideological belief allows you to choose between them, since no objective measurement of which is better is possible. Or you might prefer fairness and justice to either of those, saying that both can be good or bad depending on context, but not aiming to create a society which was more free or equal for its own sake. Regardless, you need an ideological, moral principle on which to make your decision, which is damned hard to define objectively.

For instance, we can all agree that freedom is nice when it lets us choose between fun things like going for a swim or eating eighteen kilos of chocolate cake, and we can similarly agree that it's less desirable when it lets a serial killer do as they will. Likewise, equality is nice when everyone is rewarded in direct proportion to their efforts, but it's not nice at all if we all get the same pay regardless of hours worked.

Or put it like this. Suppose you define freedom as the ultimate moral principle. Then if you give everyone the freedom to eat as much chocolate cake as they want, and half the population gorge themselves to a sticky death, by your definition you've done a good thing. And no-one has acted unfairly at any stage either. Yet allowing this to happen is clearly not right ! It would seem that there is no single simple principle by which we can define morality, yet we need moral principles in order to make many political choices.

So this worries me for any reformation to political systems. Unlike science, the state has to judge not just whether something is correct or not but also whether it's moral : sure, nuking a city would stop a riot, but that doesn't mean you should do it. And sometimes states do have to make the choice to use extreme violence. How should it do this ? Will the cross-party deliberations be equally effective here, or will they just reach an impasse ? If one side wants to go to war but needs the consent of those who don't, how would this ever work ? You can settle scientific arguments objectively. Moral ones ? Not so much.

This is not something I have an easy answer to. In fairness I aimed to modify the political system, not society itself : I don't claim to know what an ideal society would look like.* And it should be noted that science is the epitome of the "marketplace of ideas" not because it's one amazing system, but because it blends multiple systems together : in conferences the market is almost entirely free and unregulated, whereas in papers regulations are rather strict; researchers in individual institutions and in global collaborations are organised, and connect and collaborate, in radically different ways. So too with politics - there's space for votes, elections, sortition... but also for more authoritarian methods. The blending matters.

* Who am I kidding, that is obviously for a future blog post...

Perhaps this wouldn't matter much. Just as scientific truth is emergent rather than established through diktat, perhaps society's view of morality can emerge through conversation - which is exactly what this system encourages. Perhaps it's not necessary to establish overarching moral principles ahead of time; maybe the existing diversity will be an asset in forging a moral consensus rather than a hindrance. I don't know. I hope so.

Why is New Zealand so progressive?

"Evening everyone, thought I would jump online and just check in with everyone as we all prepare to hunker down for a few weeks," said the New Zealand woman via Facebook Live as the country prepared for its month-long Covid-19 shutdown. She pointed to her grubby sweatshirt.
