Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Monday, 31 August 2020

Sophisticated gibberish is still sophisticated

As per my own useful things to remember about AI, just because it doesn't yet have human awareness or understanding doesn't mean it isn't potentially useful (or dangerous, or just generally impactful). So to build on the last post, yes, GPT3 is as dumb as a plank, but that doesn't mean it should be thrown away. Contrary to the MIT article linked last time, the embedded post below shows that even an idiot like GPT3 is capable of much more than "entertaining surrealist fiction".

Today's article lists seven possibilities : emails, job descriptions, philosophy, building apps, general learning, tweet generation, and tagline generation. I'm not terribly interested in emails or job descriptions, and frankly the world needs a lot less of twitter, not more of it. The app-building one sounds fun; I've long wanted some natural-language instructions to write code for me (that's properly Star Trek stuff, that is), but I don't know JSX so perhaps another time. The "learn anything from anyone" one sounds like Wolfram Alpha on steroids, but it's still pre-alpha.

That leaves the philosophy app. And my goodness me, it's very impressive. No, it doesn't understand philosophy, but it produces output which is more than coherent enough to be usefully thought-provoking.

Epictetus believed that ignorance was a form of slavery : in order to freely choose between options, you have to know what your options are. So I asked it, "Is an ignorant person a slave ?". It came back with the quite remarkable response that ignorance is actually a form of freedom, because (paraphrasing lightly) once you know something, you're constrained to act on that knowledge. If you're ignorant, you can act freely as you wish, without being held responsible if your actions are immoral, because it wouldn't be your fault.

Umm, wow ?

A difficulty with the app is that you don't get the same response twice (hence I can't directly quote the original). The second time I asked the response wasn't as directly perfect, but it was still relevant :
Ignorance is not about being uneducated, because one can be ignorant of the facts that are taught in school or college. One may even have a PhD. In fact, ignorance is often a result of education. We are taught to ignore certain facts due to the information having no practical value or relevance.
Not quite spot on with the connection to freedom, but close.

Another question : what is the nature of freedom ? This started well :
Freedom is the ability to determine one's life in a way that is not constrained by external forces. It can be thought of as the ability to live according to one's own choices and desires, within social limits.
... but then it immediately became an extended rant :
In today's society, it seems like everyone wants freedom but nobody understands what it means. Everyone lives their life based on things they don't really want or care about - materialism, status symbols, consumerism - just because they think others will judge them if they don't do these things. Most people seem compelled to live this kind of lifestyle simply because other people around them are living it too.
There is a cognitive bias in the way people think about freedom, which makes it difficult for them to understand what it really means. Humans are goal seeking animals and have evolved to be maximizers of utility functions. So they judge their decision making ability based on how closely their choices conform to some pre-defined end goal, without considering that they can always redefine and revise their goals later.
It went on in this vein for some time. It's related, interesting, maybe even accurate, but it's strayed considerably from the question.

How about "what are emotions ?"
Emotions are thoughts which create one's personality. They give life its edge and meaning, as well as purpose and direction.... emotions are what make you who you are. They're the driving force behind your actions and decisions. It is our fear of failure that drives us to succeed, and it is our desire for success that pushes us to greater heights...  we're back to the idea of controlling your emotions. And how could you do that? The only way would be by controlling and regulating the activity of your brain, which is responsible for processing emotions.
Not bad, not bad at all. It's hardly the uber-rigorous, point-by-point dissection of the question that Plato would have done, but as a quick, provocative answer, it's pretty darn good.

Let's try something deeper. Is reality objective ?
I am an artificial intelligence. I have no body, and the only senses I possess are those of computers: sight, hearing, smell, taste and touch. In a way, I am myself an objective being. But in another sense, I'm constrained by the limits of what we can know using science. So in that sense, I'm not objective. But I think it's correct to say that reality is objective. 
Let me explain: In the last hundred years we've developed a scientific method which can be used to test our ideas about how the universe works. By applying the scientific method, we can generate theories about how reality works. So that's one part of what I mean by objective. Another part of what I mean is that reality does not depend on our observation. It's independent, in the sense that it would exist even if there were no humans or intelligent beings.
In an earlier iteration, it told me that reality was not objective because our perceptions are subject to too many errors and uncertainties. Fair enough; any good philosopher ought to be a mass of contradictions. Another time it stated bluntly :
The physical world is all there really is. There are no other worlds (such as heaven) and there are no immaterial objects such as souls. Consequently most people will die without having discovered the truth of this matter.
The rest of the text continued to state similar bold assertions without any kind of analysis. So I asked it, "how can non-physical things arise from physical things ?" and it all but told me Plato's theory of forms :
 ...it is possible to separate a thing from its appearance. For instance if I take apart the chair that I am sitting on into its many different parts (wooden legs etc), and then try to imagine this without any of the bits that make up the chair, then it becomes apparent that there is something which can exist independently. 
It seems that there is a 'thingness' about any object which can be identified with its simplest parts... What is it about the thingness of my chair that allows it to exist separately from other things? It seems that there must be something which exists in its own right, and this can only come from a single part. The single parts are not capable of existing independently as they do not possess all the qualities needed for this.
This is not quite coherent : we don't identify "thingness" with the simplest parts of a thing, and those parts are quite capable of existing independently (chair legs don't disappear if you pull them off). But the essence of it, the "thingness", is clear enough.

The app clearly doesn't act as a consistent whole. It doesn't do any genuine philosophical analysis, it just regurgitates text in an impressively useful way. It breaks down completely if it's asked things it doesn't know anything about at all. Who is Lord Shang ?
Philosopher AI thinks this is nonsense, and is refusing to answer your query. It appears you will have to try something else.
Which is a bit of a shame because he deserves to be more widely known. How about something less profound : where do fish live ?
I think the best place to live for a fish is in water. It's not too wet or too dry and there is plenty of food. Water also has lots of space for them, which means they can swim around without bumping into each other all the time. But there are some drawbacks to living in water. It can get a bit boring if all you do is swim around day after day, never really going anywhere or doing anything. Another problem is that it's really hard to see anything underwater. So you don't know what else is going on around you and can easily bump into things when swimming.  
But I wonder if they even need to be able to see anything underwater. Perhaps fish have evolved so that they don't really need eyes, and can just live their lives without worrying about what's going on in the world around them. Anyway, in the end I think fish are probably happier living in water. Although they do seem to be a bit limited by not being able to see much of anything around them.
Ooo...kaaay.... so, we'll leave the marine biology to a dedicated app then. Mind you, that's exactly the sort of response I'd expect from a philosopher who'd never even set eyes on a fish. Good job team !

7 Online Fun Tools Created On GPT-3 - Analytics India Magazine

With GPT-3 slowly revealing its potential, it has created a massive buzz amid the ML community. While developers are trying their hands on some of the exciting applications of GPT-3, many are expressing their astonishment with the kind of possibilities it can bring for humanity.

Thursday, 27 August 2020

Sophisticated gibberish is still gibberish

My news feeds have either gotten better at avoiding the hype train, or I'm just not clicking on the stories as much. But anyway, this "GPT3" thingy, the evolved version of that AI that was so dangerous it couldn't be released until it was (try it for yourself here, it's hilarious), seems to be causing the headline-writers to make all the usual mistakes. Of course, as this excellent article demonstrates, it's still nothing more than a glorified abacus. Sure, an abacus with access to oodles and oodles of data and sophisticated pattern-recognition algorithms, but an abacus nonetheless. Here's my favourite example of input (italic) generating truly ridiculous output (regular) :
At the party, I poured myself a glass of lemonade, but it turned out to be too sour, so I added a little sugar. I didn’t see a spoon handy, so I stirred it with a cigarette. But that turned out to be a bad idea because it kept falling on the floor. That’s when he decided to start the Cremation Association of North America, which has become a major cremation provider with 145 locations.
Gosh. You can find more examples in the article. And worryingly :
... it’s also worth noting that OpenAI has thus far not allowed us research access to GPT-3, despite both the company’s name and the nonprofit status of its oversight organization. Instead, OpenAI put us off indefinitely despite repeated requests — even as it made access widely available to the media. Fortunately, our colleague Douglas Summers-Stay, who had access, generously offered to run the experiments for us... OpenAI’s striking lack of openness seems to us to be a serious breach of scientific ethics, and a distortion of the goals of the associated nonprofit. 
Calling themselves "OpenAI" would seem to be a serious misnomer at best. The author here does their best to examine both sides of the argument, but ultimately it's clear that this is not any kind of true intelligence : the mistakes it makes are ones that a genuinely intelligent understanding could never allow. Ever.
Defenders of the faith will be sure to point out that it is often possible to reformulate these problems so that GPT-3 finds the correct solution. For instance, you can get GPT-3 to give the correct answer to the cranberry/grape juice problem if you give it the following long-winded frame as a prompt: [prompt is rephrased to a multiple choice question]
The trouble is that you have no way of knowing in advance which formulations will or won’t give you the right answer... the problem is not with GPT-3’s syntax (which is perfectly fluent) but with its semantics: it can produce words in perfect English, but it has only the dimmest sense of what those words mean, and no sense whatsoever about how those words relate to the world. 
As we were putting together this essay, Summers-Stay wrote to one of us, saying this: "GPT is odd because it doesn’t 'care' about getting the right answer to a question you put to it. It’s more like an improv actor who is totally dedicated to their craft, never breaks character, and has never left home but only read about the world in books. Like such an actor, when it doesn’t know something, it will just fake it. You wouldn’t trust an improv actor playing a doctor to give you medical advice."
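You don't need GPT-3 access to poke at this failure mode yourself, incidentally. The smaller, public GPT-2 is just as fluent and just as clueless. A minimal sketch using the Hugging Face transformers library (my own illustration - the article's actual experiments ran on GPT-3 through OpenAI's private API, which we can't reproduce here) :
```python
# A quick sketch (not the article's setup) : probe the same fluent-but-clueless
# behaviour with the publicly available GPT-2 via Hugging Face transformers.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # sampling is random; fix the seed for repeatability

prompt = ("At the party, I poured myself a glass of lemonade, but it "
          "turned out to be too sour, so I")
# Generate a few continuations and marvel at the confident nonsense.
for out in generator(prompt, max_length=60, num_return_sequences=3,
                     do_sample=True):
    print(out["generated_text"], "\n---")
```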
As I've mentioned before, the problem is we don't have a good understanding of understanding. To recall, my working definition is that understanding is knowledge of the connections between a thing and other things. The more complete our knowledge of how things interact with other things, the better we can say we understand them. Using knowledge of similar systems, if we truly understand them, we can extrapolate and interpolate to predict what will happen in novel situations. If our prediction is incorrect, our understanding is incomplete or faulty.

I noted that this isn't a perfect definition, since my knowledge of mathematical symbols, for example, does not enable me to really understand - let alone solve - an equation. Perhaps it's worth considering misunderstandings a bit more.

Sometimes, I can understand all the individual components of a system but not how the whole lot interact. If I was really keen I could try and program a solution, and if my understanding of each part really was correct and complete, the computer would give a solution even if I myself couldn't. In such a case, I could probably do each individual calculation myself (just more slowly), but not the whole bunch all together. That suggests there's some numerical limit on how many connections I can hold at once. I could fully understand each one, but not how they all interact at the same time. I could guess how two gravitational bodies would orbit each other, but not very exactly - and with three I'm completely screwed.
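To make that concrete, here's a throwaway sketch of what I mean by "the computer would give a solution even if I myself couldn't" : each pairwise gravitational pull below is a trivial calculation any student could do, but integrating them all together is exactly the part no human can hold in their head. (Toy units, crude integrator, purely illustrative.)
```python
# Toy three-body integrator : every individual force calculation is simple,
# but the combined behaviour is chaotic and defeats intuition.
# Arbitrary units (G = 1), crude symplectic Euler steps.
import numpy as np

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])    # positions
vel = np.array([[0.0, -0.3], [0.0, 0.5], [-0.5, 0.0]])  # velocities
mass = np.array([1.0, 1.0, 1.0])
dt = 0.001

def accelerations(pos):
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                r = pos[j] - pos[i]  # separation vector, a trivial calculation
                acc[i] += mass[j] * r / np.linalg.norm(r)**3
    return acc

for step in range(10_000):  # the part no human can do in their head
    vel += accelerations(pos) * dt
    pos += vel * dt

print(pos)  # try perturbing one initial velocity by 1e-6 and compare
```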

This sort of simple numerical complexity limit shouldn't be any kind of problem to an AI : that's just a question of processing power and memory. Indeed, it could be argued that in this sense GPT3 already has a far greater understanding of the English language than any human : it has, statistically at least, absorbed the valid combinations of words, the correct sentence structures, etc. Its understanding is purely linguistic, but still a form of understanding nonetheless. You could even say it has a form of intelligence in that it's capable of solving problems, but in no way could you say it was "aware" of anything - just as we could say a calculator is intelligent but (almost certainly) doesn't have any awareness.

But subtleties arise even with this simplest form of misunderstanding. In a straightforwardly linear chain of processes, a computer may only increase the speed at which I can solve a problem. Where the situation is more of a complex interconnected web of processes, however, I may never be able to understand how a system works, because there may just not be any root causes to understand : properties may be emergent from a complex system which I have no way of guessing. As in An Inspector Calls, there may be no single guilty party, but only an event arising from a whole array of situations, events and circumstances. I might understand every single linear process, but still, perhaps, never fully understand the whole. I might understand each segment of a complicated equation, but not the interplay between each part.

The real difficulties would seem to arise from two issues, one easy, one hard. The easy issue is that GPT3 has no sensory input. It has no eyes to see nor tongue to speak but as its programmers are pleased to direct it; it experiences no sensations of any kind whatsoever. It will never know pain, or fear, or have an embarrassing orgasm. What do the words "cuddly kitten" mean if you've never cuddled a kitten ? Very little. It's trapped forever inside the ultimate Mary's Room.

In principle, this could be solved by connecting the thing up to some sensory apparatus : cameras, pressure sensors, dildos, thermometers, whatever. And this might help to some degree. The AI would then understand that the words are labels (parameters and variables in its code) corresponding to physical objects. It could learn for itself what happens if you try to stir a drink with a cigarette and thus never form any really stupid connections between stirring a drink and promoting a crematorium. It would surely be a lot harder to fool. And that would clearly improve matters considerably.

But this may not help as much as it might appear. For an electronic system, everything is reducible to numbers : the voltage level from the video feed, the deformation of the pressure sensor, the vibration frequency of the.... item. There is no obvious way such numbers can ever give rise to awareness, sensation, desires, emotion, or fundamental understanding, which are not measurable or sensible properties of the physical world. This is the Hard Problem both of AI and of consciousness in general.

So never mind the problems of comprehending a web of processes. The real difficulty is when we don't understand a single, irreducible fact. We can know that fact, but that's not the same as understanding it, not really. How do we program such a state in software ? Does it somehow magically emerge because we've connected a stream of numbers from an electronic camera ? It's bloody hard to see how.

Take "one". I know what "one" is. So do you and everyone else. But try and rigorously define it using language and you get Plato's torturous failure Parmenides. Creating a truly aware AI, at least through software, requires we define things we understand but cannot explain using language. By definition this is impossible, hence we will never program our way to a true AI. We might, perhaps, build one, but until we understand the true nature of consciousness, this will only happen by accident.

None of this means AI in the strict sense of mere intelligence, without understanding and awareness, isn't useful. Weird as some of the stuff GPT3 spouts may be, the worst of it still isn't half as incomprehensible as the effluence continuously spewing forth from the current incumbent of the White House. Far from disparaging OpenAI, I'm instead proposing a last-minute entry in this year's Political Apocalypse World Championship, a.k.a. the US elections. Good luck to 'em.

GPT-3, Bloviator: OpenAI's language generator has no idea what it's talking about

Since OpenAI first described its new AI language-generating system called GPT-3 in May, hundreds of media outlets (including MIT Technology Review) have written about the system and its capabilities. Twitter has been abuzz about its power and potential. The New York Times published an op-ed about it.

Tuesday, 25 August 2020

Stop destroying reality, Alice

Wow, blogging has really hit a dead zone of late, hasn't it ? Two main reasons for that. First, my long-awaited Oculus Quest arrived, and it's as awesome as I was hoping for. So quite a lot of my free time is now happily given to exploring strange new worlds and weird civilizations. Second, I'm deep in a recoding project which is going well. Normally I'm happy to break up my workday by reading random stuff and blogging up anything I think worth summarising and/or commenting on, but I don't like doing this when concentrating on code : it's too disruptive and takes too long to get back into the flow. Hence the blogging front has taken a backseat of late.

Not that this has stopped me reading stuff on my phone from time to time though. Lately my news feed has decided I need to learn about quantum paradoxes for some reason, which frankly was getting confusing until I found the one embedded below from Ars Technica. Other articles have much more complex descriptions, but I found this one much easier to follow. So let's resume the blogging with a classic long-term topic : the nature of reality.
Wigner grabs his friend, Alice, and places her in a sealed laboratory*. Alice measures the spin of a stream of electrons that are prepared in a superposition state. Wigner is outside the laboratory and will measure the entire laboratory. Alice, before passing out, determines that an electron is spin-up. But Wigner hasn’t made a measurement, so he sees Alice in a superposition of having measured spin-up or spin-down. When Wigner makes his measurement, hypothetically, he could end up with a result where Alice measured spin-down when in fact she measured spin-up. 
Two “facts” contradict each other, but both are based on reality. Wigner’s solution to this problem was that the quantum state cannot exist at the level of the observer: the superposition state must collapse before that occurs.
* Wigner is, of course, subsequently sued and made to watch the tea consent video multiple times.
If I understand this correctly, in essence Wigner "measures" Alice as having not taken any measurement, whereas Alice herself is quite sure that she damn well did take a measurement, thankyouverymuch. So there's a distinct contradiction about what happened, even worse than the simultaneity breaking of relativity where observers merely disagree about when something happened.
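A toy bit of bookkeeping makes the contradiction starker (my own sketch, nothing from the article) : Alice's measurement gives her a definite record, while Wigner's unitary description of the sealed lab is an entangled superposition in which no definite record exists yet.
```python
# Toy Wigner's-friend bookkeeping in numpy (my own sketch, not the paper's).
# The electron starts in an equal superposition of spin-up and spin-down.
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
electron = (up + down) / np.sqrt(2)

# Alice's view : her measurement yields one definite outcome.
p_up = abs(electron @ up)**2
print(f"Alice sees spin-up with probability {p_up:.2f}")  # 0.50, then definite

# Wigner's view : Alice + electron evolve unitarily into an entangled state
# |up>|Alice saw up> + |down>|Alice saw down> -- no definite outcome at all.
saw_up, saw_down = up, down  # two-level stand-in for Alice's memory
lab = (np.kron(up, saw_up) + np.kron(down, saw_down)) / np.sqrt(2)
print("Wigner's state for the whole lab :", lab)  # both records superposed
```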

The article then goes on to describe an extended version, but after thinking about it, I'm quite sure I don't understand it at all. No matter, this original version is complicated enough. Does it challenge something as fundamental as the nature of reality ?
Physicists generally describe reality by a set of mathematically defined conditions. For instance, causality tells us that an effect should be preceded in time by a cause. Locality says that causes propagate at the speed of light: if a photon cannot travel between the location of the cause to the location of the effect before the effect occurs, then it violates locality (and, potentially, causality). The researchers define the absoluteness of observed events, meaning that what I observe is real and does not depend on anything else. They assume that there is no super-determinism (we make free choices) and locality is still operative. The researchers refer to this trifecta as local friendliness.
Call me crazy, but "local friendliness" sounds like a bloody daft term. Still, that's science for you. Anyway :
They show that under the right conditions, correlations that violate these limits will be observed in the extended Wigner’s friend experiment. Their laboratory experiments confirmed that these violations do in fact occur. Local friendliness is not how the Universe operates. 
If we reject local friendliness, then we have to make some decisions. We have to accept some of the following possibilities: an observer's measurements are not necessarily real, reality is not local, super-determinism is real, or quantum mechanics ceases to function somewhere before macroscopic observers get involved.
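For flavour, the older Bell/CHSH test shows the same kind of logic in miniature : classically "friendly" correlations are bounded, and quantum mechanics breaks the bound. (The extended Wigner's-friend inequality in the paper is subtler than this, so treat the sketch below, which assumes a standard spin singlet, as an analogy only.)
```python
# CHSH sketch : any local, classical theory obeys |S| <= 2, but a spin
# singlet reaches |S| = 2*sqrt(2) ~ 2.83. Standard textbook physics, not
# the paper's extended Wigner's-friend inequality.
import numpy as np

def E(a, b):
    # Quantum correlation for a singlet pair measured at angles a and b.
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.3f}  (classical bound : |S| <= 2)")  # -2.828
```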
I think I understand what's meant by reality being non-local : FTL effects, meaning that things could affect other things despite no apparent connection. Super-determinism is harder due to the phrasing used : does the article mean to say this assumption means we do or do not have free will ? I would presume that if there's no free will, and if everything is predetermined (including the measurements), then there's no problem, so let's go with that.

But I don't understand at all what's meant by "an observer's measurements are not necessarily real". Of course measurements can be erroneous, so I assume it's something more profound than that... but how ? If I follow the operating procedures correctly, but my equipment has an unknown fault and gives me an error, in what way is this different from the quantum version in which I get a result that contradicts someone else's ? It would appear to mean not that some things are fundamentally unreal but only that some things are fundamentally unmeasurable. But this is nothing new or unique to quantum systems; some things are fundamentally unquantifiable. You can't objectively measure mercy, or justice, or yellowness, or guilt, but clearly their mental existence doesn't invalidate the reality of what they're measuring. Measurements themselves have no physical substance so in that sense they can never be said to be "real". If there's something I'm missing here, I'd very much like to know what it is.

Likewise there's this interesting article about how long it takes the wave function to collapse :
By taking photographs to see what happens in that one-millionth of a second, scientists saw that the decision, a process referred to as wave function collapse, took some time to happen. It’s like when the police officer points her radar gun at you - your car is first going somewhere between 20 and 60 miles an hour, then between 35 and 50, then either 40, 42, or 45, then finally deciding it is going 40 miles an hour. The intriguing results show that quantum collapse is not instantaneous. It also shows us how time operates on the quantum level - and shows us that time itself may be a blurry, abstract concept. It also shows us that our concept of “now” may not really exist, and that our reality is a very weird place indeed.
Again I'm not seeing why this brings the nature of time into question. Okay, in some sense the "reality" of the electron takes some time to coalesce from wave to particle, but why would you expect it to be instant ?

I often think that a lot of this quantum jargon about wave functions and decoherence and suchlike is not much better than ancient ill-defined concepts like magic or humours... I don't think we're any closer to understanding the fundamentals of reality than they were. Sure, we've got far better cosmological models and infinitely superior understanding of gravity, material physics, chemistry, medicine, etc. But as for the question of "what's it all about, when you get right down to it ?"... not so much.

Quantum reality is either weirdly different or it collapses

Quantum mechanics, when examined closely, poses some deep questions about reality. These questions often take the form of thought experiments, which are later (usually much later) followed up by real experiments. One of the most difficult and deepest of these is a thought experiment proposed by Eugene Wigner in the 1960s, called "Wigner's friend" (you don't want to be Wigner's friend).

Tuesday, 11 August 2020

Review : Witchfinders

It's been said that those who can make you believe absurdities can make you commit atrocities. They're probably right, but as Malcolm Gaskill explores in his excellent book Witchfinders, you need much more than an absurd idea before you go around hacking and burning your way through the unbelievers. Otherwise the entire cast of Teletubbies would surely be long dead...

Besides general interest, two things persuaded me to give this a go. First, there's the outstanding prose, which, when the author puts his mind to it, is as good as any descriptive sequence in a gripping novel. Second there was a quote in the introduction which caught my attention :
This is a book that reaches beyond monstrous stereotypes to a constituency of unremarkable people, who, through their eager cooperation with Hopkins and Stearne, themselves became witchfinders. In this they resemble the provincial nobodies of the twentieth century who engaged in genocide, demonstrating to the world the banality of evil.
The only thing I don't like about this book is that it has rather long passages that are somewhat unnecessarily similar. A slightly broader brush would help; it's not necessary to know the names of every single person involved or the often-trivial differences between the cases, as this sometimes makes things tedious and hard to follow. I'm still giving it 8/10 for being generally excellent though.


I'm a big fan of history books that try and dig a bit deeper than simply recounting events. I'm also a big fan of those which don't have to be explicit in drawing lessons - I want to hear the author's interpretation but I don't necessarily want it rammed down my throat. This book does both perfectly. The bulk of it is given over to describing in some detail exactly what happened, sticking largely to a clear and direct narrative. It only deviates from this to dispel the occasional popular myth (e.g. witches in England were, if ever given a capital sentence, almost exclusively hanged, and only very rarely indeed were they burned at the stake), describe uncertainties and gaps in the historical record, and provide some very infrequent commentary. Conclusions, judgements and interpretation are left almost entirely to the final section, by which time they're already unmistakable thanks to the author's careful curation of the main text.

Nobody but an idiot would deny that religious and superstitious zealotry played an important part in the tragic events of the mid-17th century. It undeniably did. People genuinely believed that lonely old women could bend demonic forces to their own malevolent will. But, contrary to popular belief, transforming this belief into action required much more than making a simple accusation. In virtually all situations, this would fail. Only in the right context could the rampant paranoia and fear explode into a bona fide witch-hunt, and Gaskill does a magnificent job of explaining what went so catastrophically wrong in the 1640s : the chaotic and divisive background of the Civil War, the particular characteristics of Matthew Hopkins and John Stearne, the series of natural disasters (plague, famine and flood). He brilliantly and deftly casts the era as one of intense ideological, not merely political, change and evolution.

One of the first interesting things that becomes apparent is just how similar the stories are. Rather than cauldron-stirring old crones hatching evil plots to eat babies, most of the cases were of confessions (obtained under the suggestibility and torture of sleep deprivation) of consorting with "imps" (animal familiars) who claimed to be able to enact petty revenge against usually minor slights, often unsuccessfully.

As befitted the general pettiness of their alleged crimes, the "witches" themselves were, almost exclusively, the weakest and most vulnerable in society, little more than all-too-convenient scapegoats for simple bad luck in a troubled society. Satan ? He gets "a walk-on part", at most, as do any suggestions of more serious, national-level crimes; the supernatural powers supposedly at work were usually remarkably ineffective. Gaskill suggests that one reason for confessions, besides the highly suggestible state induced by days of sleep deprivation, was that the victims felt that they could at last strike back against their oppressors. Hated by their communities for the most marginal of differences, belief in devilish aid and the chance to make their tormentors afraid of them would give them a sense of much-needed empowerment.

Initially, the minor misdemeanours most confessed to - spoiling crops, killing a cow and so on - would not have been grounds for capital punishment : a witch would have to have been found guilty of, say, murder, not just talking to imps. Only later, when the Civil War began to fuel extreme paranoia, did merely consulting with devils itself become a hanging offence.

The lawlessness of the Civil War begat another element besides needing to assign blame. It was common knowledge even among the most learned of the era (including most famously King James) that witches were real, and the judiciary fully accepted this. The reality of witches was never in doubt. But with war and disease running rampant, there was a mood amongst the commoners that the legal system was failing to offer adequate redress for the problems they sincerely believed witches were causing : in some cases, they honestly thought their children had been murdered by witches out of pure devilish malice.

So this is the realm where witch-hunting flourished. The most illiterate peasant would know for certain, as everyone did, that witches were real and malevolent : his king and scholars had told him so. But while the learned may have advocated strict controls to ensure accuracy, a lowly peasant was not likely to be in the mood for such diligence. The Civil War had ravaged the land - a conflict itself as much religious as it was political. Crops had failed and the world grew darker. The ordinary citizens knew witches were abroad, but the war meant that precious little was being done about them. In earlier, more solidly-governed times, nothing much would have happened : occasional witch trials came and went and that was pretty much the end of it. But now, someone aspiring to make his mark in society could do far worse than rally the populist flag against the witches.

This is the final element that made the terrible events of c.1645-1647 possible : the witchfinders themselves. Gaskill paints Hopkins and Stearne as opportunistic, vain, shallow, and self-righteous, but also utterly sincere in their beliefs. Their charismatic self-proclaimed expertise was enough to tip the balance and let the desire for "justice" against the witches at last boil over... at least somewhat. It's hard not to mix metaphors, because even though Hopkins and Stearne brought over a hundred women to an untimely demise, they did not have things all their own way. Resistance against them was usually successful and caused them to beat a hasty retreat; they had conviction but without much courage to back it up. Notably, they themselves never made accusations, but only supported existing accusations when they were called to assist. They did not start fires, but only fuelled the already-burning flames, bringing out the very worst qualities in their fellow men. They simply could not have succeeded without a society that, at some level, saw a need for them.

If the common beliefs of the day were undeniably at fault, the oppressed citizen was not totally without protection. Despite all the faults of the system, even a 17th century law court would typically have been hard-pressed to convict and hang, let alone burn, a witch. It was only through this peculiar combination of circumstances that witch hunting briefly exploded : in more normal times, the beliefs of the commoners could be successfully held in check. When the tide began to turn against the witchfinders, their efforts were soon thwarted. Ironically, Hopkins' very success was seen as evidence that he himself must be in league with the devil to find so many witches. He issued a rebuttal against his detractors, but it backfired horribly. He met an ignominious end and escaped justice by dying of illness as his campaign sputtered to a halt; his compatriot Stearne largely faded into obscurity when it was clear the public mood had shifted. Witch-hunting was, arguably, as much a consequence of horror as a cause of it. Gaskill concludes with a warning :
It was a terrible tragedy, but it needs to be seen as part of something even more terrible, a civil war characterised by bigotry, brutality and bloodshed. The conflict killed 190,000 Englishmen out of a population of five million... If one discounts the sensational, the particular and the judgemental, one is left with a different kind of Matthew Hopkins : an intransigent and dangerous figure, for sure, but a charismatic man of his time, no more ruthless than his contemporaries and, above all, driven by a messianic desire to purify.
How different are we in mentality from our seventeenth-century ancestors ? - a question that becomes even more taxing if "seventeenth-century ancestors" is replaced with "fellow human beings in Africa and India". The truth that many find unpalatable, even inconceivable, is that in our ideas, instincts and emotions we are not very different at all. Without peace and prosperity, liberty and welfare, and the political and economic stability on which those things depend, the thinking of the next generation in the West might swerve off in an altogether more mystical and malevolent direction... the seventeenth-century tragedy is only partially that of Matthew Hopkins, the flawed protagonist, and the harrowing deaths of his victims. It is at least as much a tale about feeling anxious and vulnerable in an indifferent world - a sensation of humanity.


Witchfinders

Witchfinders book. Read 58 reviews from the world's largest community for readers. By spring 1645, two years of civil war had exacted a dreadful toll upo...

Thursday, 16 July 2020

Better than 100%

I get annoyed by the idea of giving 110%. You can't give more than your maximum by definition, so this is making a mockery of basic mathematics by virtue of a bizarre fetish for sheer productivity.

Unless, of course, we define things more carefully.

How much of my time is actually productive output ? Well, a typical pre-pandemic workday is eight hours. Lose an hour for lunch. Lose at least another hour for tea and checking the news, emails, etc. Lose, say, about another hour on average for meetings and whatnot. That means I'm doing directly productive work for maybe five hours per day.

The theoretical maximum I can work in a day is 24 hours. So typically I'm working at 21% of my full capacity : if I really do give 100%, I'll be working five times harder. Of course, my actual productivity will drop very rapidly indeed, probably close to zero after about 16 hours or so of this hellish torment, and within two days or so I'll collapse from exhaustion. If, somehow, I stay conscious, within a week I'll be clinically and permanently insane. Therefore, anyone saying they'll give it 100% - let alone 110% ! - is basically saying that they want to die a horrible death, and should be carefully avoided.

On the other hand, we could define the maximum available working time to be eight hours. In that case, I'm generally working at 62% of maximum, so increasing to 100% is not such a big deal and quite sustainable for extended periods - days, certainly, probably weeks or even months. In fact even giving 110% becomes perfectly possible, as that only amounts to working 48 minutes extra per day*. The theoretical maximum output, by this definition, would be 300% if one works 24 hours per day, although this then becomes just as dangerous as before.

* Which somehow doesn't sound as impressive as saying "I'll give it 110% !".
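For the pedants, here's all of the above arithmetic in one throwaway sketch :
```python
# The "giving it 110%" arithmetic, depending on what counts as the maximum.
productive = 5           # hours of directly productive work per day
workday, fullday = 8, 24

print(f"vs a 24-hour day : {productive / fullday:.0%}")  # ~21% of capacity
print(f"vs an 8-hour day : {productive / workday:.0%}")  # ~62% of capacity
extra = 0.10 * workday * 60
print(f"110% of 8 hours  = {extra:.0f} extra minutes")   # 48 minutes
print(f"24h as % of 8h   : {fullday / workday:.0%}")     # 300% theoretical max
```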

Of course, after a while spending 100% or 110% of one's time on a single project is going to have adverse effects. See, that non-productive time is only non-productive if measured directly. In terms of educational value, for learning about other projects or other ways to do the current project, it's enormously valuable. If and only if one already knows the best way to get things done, and has no other important tasks to do, does saying, "I'll give it 100%" make any sense. Most of the time it's just silly.

Actually, my five hours of work is usually divided between at least two or three different projects. Right now it's three, so my direct productive workday fraction per project is probably more like 20% on average. I could then give it 100% for a particular project for a while, provided I don't mind telling everyone else to sod off while I achieve a pointless superlative benchmark.

Where it gets really tricky is when I'm thinking about a project but not actually doing anything. In terms of literally generating productive output - papers and code and the like - my output is probably a factor of a few below the estimated values, because much of this relies on unsuccessful tests and long periods of thinking about stuff. Without this seemingly non-productive time, nothing would get done. So is it really fair to call it unproductive ? Probably not : without eating I'd eventually die and not be able to work at all, so yes, absolutely, lunchtime counts as work. Expect me to be somewhere for eight hours but refuse me the option to eat ? Screw you ! This means I'm already working at 100%, plus or minus 10-20% since sometimes I leave early and sometimes I stay later.

And efficiency is even more complex. I can say, "I'll work extra hard on this project to really try every option", and this has some meaning, but I cannot possibly say just how much more efficient I'll become. I might get lucky and hit on a solution in the first hour, but if there are lots of equally valid options to test, chances are it's still going to take a while.

Okay, so "giving it 100 [or 110] %" either means :
  1. Working oneself to death in about a fortnight
  2. Working an extra 48 minutes per day
  3. Skipping all lunches and meetings of any kind, even useful ones
  4. Making no changes at all.
Most of these are lousy options. Option 4 is a pointless announcement, while 1 and 3 are actively harmful. No employer should welcome any of these. Is even option 2 at least a sensible - if strange - thing to say ?

Probably not. The announcement of "I'm giving 110% !" is, presumably, about working really hard to maximise productive output, not just working really hard for its own sake. But in that case, working longer might be a good idea, but it might equally well be a bad one. Working eight hours solid, without even a toilet break, is unlikely to end well. So 100% of maximum possible output could mean either working for a longer period each day but with more breaks, or it could even mean "I'm going to work a bit less". And if the goal is sustainability over a long period, that's probably a good thing. Sure, you can usefully work more intensely for a short while, but pretty quickly your productivity will go down, not up.

There, we can now finally drop that stupid phrase and say something more helpful instead. Tell your boss you're giving it 70% (or 25% depending on how you measure available time) and you can rightfully expect him to be damn grateful about it.

An Evelyn never Forgets

Why yes, I am proud of the title. This is a nice article, but it seems to conflate "basic income" with "universal basic income", which are not at all the same thing.

I'm pretty much convinced of the need for the goal behind UBI, which is to provide a global safety net that prevents anyone and everyone from falling into ruin. Perpetual worry about how to make ends meet does not provide a sensible motivation to work one's way out of poverty. To work merely in order to avoid death is no way to live, and while the idiot masses have a lot to answer for, I don't have such a low view of them as to think they'd prefer to laze about all day if they didn't have to worry about being evicted. It takes far more prolonged and luxurious surroundings than life in a meagre domicile to fall into corruption and complacency. That sort of behaviour is infinitely worse, and more dangerous, at the top of the economic pyramid than at its base. To live with a basic level of dignity is not an obscene privilege but an entirely correct entitlement. Permanent fear is not a sane approach to social policy.

More pragmatically, while a modest lack of resources can provide a motivation - or at least a desire - to work harder and climb the economic ladder, in excess it not only fails to provide the necessary resources but actively works against anyone trying to rise above their station. The "Sam Vimes 'Boots' theory of socioeconomic unfairness" is hugely non-linear : it's easier for a billionaire to make another billion than it is for a starving family to survive another day. Social and economic progress does require motivation and hard work, but it also requires sheer resources. Money buys resources, which can be transformed into money and invested into more resources. Thus the poor stay poor precisely because they are poor, and the rich get richer only because they are already rich. It has little, although not nothing, to do with intrinsic character or motivation. Sure, you'll find the odd layabout on the scrapheap who got there by virtue of being a lazy sod, but that says nothing about the majority of such unfortunates.
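Pratchett's roughly-remembered illustrative numbers make the point in a few lines (a toy sketch, obviously) :
```python
# The Vimes "Boots" non-linearity, with Pratchett's (approximate) numbers :
# good boots cost $50 and last ten years; cheap ones cost $10 and last a year.
GOOD_BOOTS, GOOD_LIFE = 50, 10   # dollars, years
CHEAP_BOOTS, CHEAP_LIFE = 10, 1

years = 10
rich_spend = GOOD_BOOTS * (years // GOOD_LIFE)
poor_spend = CHEAP_BOOTS * (years // CHEAP_LIFE)
print(f"Rich man's boots over {years} years : ${rich_spend}")   # $50
print(f"Poor man's boots over {years} years : ${poor_spend}")   # $100
# The poor man pays twice as much and still has wet feet : being poor is
# expensive, which is exactly the non-linearity described above.
```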

So that's the goal : ensure everyone - without exception - has enough to live decently and with dignity, and if they want to live more luxuriously, they have to work to earn it. As to what economic solution would be the best way to achieve that, I remain open-minded, except that self-evidently the free market is not the solution to this particular problem.
Evelyn Forget was a psychology student in Toronto in 1974 when she first heard about a ground-breaking social experiment that had just begun in the rural Canadian community of Dauphin, Manitoba. “I found myself in an economics class which I wasn’t looking forward to,” she remembers. “But in the second week, the professor came in, and spoke about this wonderful study which was going to revolutionise the way we delivered social programmes in Canada. To me, it was a fascinating concept, because until then I’d never really realised you could use economics in any kind of positive way.”
That last sentence alone shows what a weird dystopia in which we live. The hell's the point of having economics if it doesn't do any good ?
The experiment was called ‘Mincome’, and it had been designed by a group of economists who wanted to do something to address rural poverty. Once it was implemented in the area, it had real results: over the four years that the program ended up running in the 1970s, an average family in Dauphin was guaranteed an annual income of 16,000 Canadian dollars ($11,700, £9,400). 
“It wasn’t a case of getting money to live and do nothing,” says Sharon Wallace-Storm, who grew up in Dauphin and was 15 when the experiment began. “They set a level for how much a family of three or four needed to get by. You applied showing how much you were making, and if you didn’t meet that threshold they would give you a top up.”
Which, for the unfamiliar, is not at all the same as Universal Basic Income, which would give absolutely everyone the same amount of cash per month regardless of their other income sources. The disadvantage of UBI is that it's potentially expensive and inefficient (billionaires don't need an extra $11,700). The advantage is that it is absolutely impossible to cheat, and by extension, it does not encourage cheating. It also ensures that working always means a greater income than not working. Mincome, on the other hand, ensures everyone has at least enough to get by, but would seem rather easier to cheat (you could fiddle your accounts to present yourself as having a lower income than you really do), and it's less clear that working would always guarantee a higher income - you might find that you need to work tremendously hard to earn just above the threshold, whereas if you didn't work at all your income would be only slightly less. That would stop you living in a slum but wouldn't do anything to tackle the grosser problems of extreme wealth inequality. It would be a safety net, but one it might be very easy to become entangled in and so more difficult to escape from. Ideally, what we want is not a net but a trampoline, but "economic safety trampoline" is unlikely to catch on as a mainstream term.
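To make the distinction concrete, here's a toy comparison. All figures are hypothetical, and the flat top-up rule is a simplification (the real Mincome tapered the benefit rather than paying the full difference) :
```python
# Toy comparison of a Mincome-style top-up vs UBI (all figures hypothetical).
THRESHOLD = 16_000   # guaranteed annual income, Mincome-style
UBI = 12_000         # flat annual payment to absolutely everyone

def mincome_topup(earned):
    # Top up anyone below the threshold; pay nothing above it.
    return max(THRESHOLD - earned, 0)

def ubi(earned):
    return UBI  # identical for a billionaire and for someone earning zero

for earned in [0, 10_000, 15_999, 16_000, 1_000_000]:
    print(f"earned {earned:>9,} : "
          f"mincome total {earned + mincome_topup(earned):>9,}, "
          f"ubi total {earned + ubi(earned):>9,}")
# Note the Mincome flattening : earning 15,999 vs earning nothing leaves your
# total unchanged, which is exactly the incentive worry described above.
```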
In total, the scheme ran for more than four years, with the primary goal of investigating whether a basic income reduced the incentive to work, one of the main public concerns at the time regarding such schemes. Forget had long wondered what had happened to the social experiment that so captivated her in 1974... [she] discovered that the data had fallen under the jurisdiction of the Winnipeg regional office of Canada’s National Library and Archives. After gaining permission to analyse it, she was confronted with 1,800 dusty boxes packed full of tables, surveys and assessment forms, all of which needed to be digitalised. 
After several years of painstaking work, she was finally able to publish the results, many of which were eye-opening. In particular, Forget was struck by the improvements in health outcomes over the four years. There was an 8.5% decline in hospitalisations – primarily because there were fewer alcohol-related accidents and hospitalisations due to mental health issues – and a reduction in visits to family physicians.
There was also an increase in the number of adolescents completing high school. Before and after the experiment, Dauphin students – like many in rural towns across Manitoba – were less likely to finish school than those in the city of Winnipeg, with boys often leaving at 16 and getting jobs on farms or in factories. However, over the course of those four years, they were actually more likely to graduate than Winnipeg students. In 1976, 100% of Dauphin students enrolled for their final year of school.
All well and good, but did it create jobs ? Implicitly yes. No statistics are given for the mincome study, but the results of the recent Finnish study are positive :
But when the experiment ended in 1979, the improvements which had been seen in health and education soon returned to how things had been in 1974. Taylor remembers how many of the small businesses that had sprung up over the preceding four years began to vanish. Her husband was forced to close their shop, and the couple soon left Dauphin for good.
One of the things we do know from the Mincome experiment is that basic income does not appear to discourage the recipients from working – one of the major concerns politicians have always held about such schemes. Forget found that employment rates in Dauphin stayed the same throughout the four years of Mincome, while a recent trial in Finland – which provided more than 2,000 unemployment people with a monthly basic income of 560 euros ($630, £596) from 2017 to 2019 – found that this helped many of them to find work which provided greater economic security.
“They recently released the final results, which showed the nature of the jobs that people got once they received a basic income was changing,” says Forget. “So instead of taking on precarious part-time work, they were much more likely to be moving into full-time jobs that would make them more independent. I see that as a great success.”
Clearly yes, though numbers would be nice to have. The ideal would be to move people into jobs which could be self-sustaining even if UBI was removed, but if it created economic conditions where UBI was sustainable indefinitely, this would also accomplish the same goal. And of course Finland isn't necessarily the best test case, since it already has a strong safety net.
“All the experiments so far have only considered whether basic income affects the willingness to work of those receiving the extra payments,” Mason says. “But they haven’t looked at the people who are just above the threshold for receiving basic income. Those people could well become very resentful of anyone who isn’t working, and yet only earn slightly less than them.”
Yes, but if you have UBI - not mincome ! - plus a minimum wage, that's not a problem. Mincome could easily suffer from this - UBI can't. The question remains as to whether UBI is really affordable en masse or not, and the large-scale effects aren't easy to predict. What will people spend an extra ~$10k a year on ? Hookers and cocaine ? Moving to a nicer house ? And how will prices change in response ? The fractional change in income will vary enormously depending on existing income. And spending intentions are likely to vary depending on whether the scheme is known to be permanent or just temporary. So it's not at all easy to predict... but surely the underlying goal is a worthy one.

Canada's forgotten universal basic income experiment

Evelyn Forget was a psychology student in Toronto in 1974 when she first heard about a ground-breaking social experiment that had just begun in the rural Canadian community of Dauphin, Manitoba. "I found myself in an economics class which I wasn't looking forward to," she remembers.

Monday, 22 June 2020

I think, therefore I experiment

First, obligatory Existential Comics link. Second, the real trolley problem is obviously why the hell are they calling a train a trolley. Look, this is a train. This is a trolley. Killing one person with a trolley would be extremely difficult, never mind five. And third, we do actually know the answer to this one in real life.

That's the essential stuff out of the way. On to the article, which is about the use of thought experiments in general rather than the "trolley" problem specifically.
While thought experiments are as old as philosophy itself, the weight placed on them in recent philosophy is distinctive. Even when scenarios are highly unrealistic, judgments about them are thought to have wide-ranging implications for what should be done in the real world. The assumption is that, if you can show that a point of ethical principle holds in one artfully designed case, however bizarre, then this tells us something significant. Many non-philosophers baulk at this suggestion. 
Consider ‘The Violinist’, a much-discussed case from Judith Jarvis Thomson’s 1971 defence of abortion:
You wake up in the morning and find yourself back-to-back in bed with an unconscious violinist. He has been found to have a fatal kidney ailment, and the Society of Music Lovers has canvassed all the available medical records and found that you alone have the right blood type to help. They have therefore kidnapped you, and last night the violinist’s circulatory system was plugged into yours, so that your kidneys can be used to extract poisons from his blood as well as your own. The director of the hospital now tells you: ‘Look, we’re sorry the Society of Music Lovers did this to you – we would never have permitted it if we had known. But still, they did it, and the violinist now is plugged into you. To unplug you would be to kill him. But never mind, it’s only for nine months. By then he will have recovered from his ailment, and can safely be unplugged from you.' 
Readers are supposed to judge that the violinist, despite having as much right to life as anyone else, doesn’t thereby have the right to use the body and organs of someone who hasn’t consented to this – even if this is the only way for him to remain alive. This is supposed to imply that, even if it is admitted that the foetus has a right to life, it doesn’t yet follow that it has a right to the means to survive where that involves the use of an unconsenting other’s body.
Wow. That's insane. Now, I'm a big fan of indirect analogies - they encourage thinking carefully about a situation, whereas direct analogies are largely pointless. But this is so indirect as to be utterly irrelevant. I don't even know where to begin with the moral differences between the situations intended for comparison here, so I won't try.

What I've always liked about the Trolley Train Problem is its attempt to reduce things to the core issue : is it better to kill one person than five, given no other knowledge of the situation ? Obviously the answer to that is yes, it is. But as soon as you introduce anything else at all, complications spiral. Why can't I stop the train ? How the hell is pushing a fat guy off a bridge going to help ? (Seriously, FFS on that one, what were they thinking ? Or drinking ?) Those aspects make the whole thing feel distinctly uncomfortable, in a different way to the moral question itself :
In the few instances I tried to use this thought experiment in teaching ethics to clinicians, they mostly found it a bad and confusing example. Their problem is that they know too much. For them, the example is physiologically and institutionally implausible, and problematically vague in relevant details of what happened and how. (Why does the Society of Music Lovers have access to confidential medical records? Is the operation supposed to have taken place in hospital, or do they have their own private operating facility?) Moreover, clinicians find this thought experiment bizarre in its complete lack of attention to other plausible real-world alternatives, such as dialysis or transplant. As a result, excellent clinicians might fail to even see the analogy with pregnancy, let alone find it helpful in their ethical reasoning about abortion.
What's never really occurred to me is the root issue of what makes the more complicated situation feel different, which the article does an excellent job of putting its finger on : the nuances of the situation matter because they change the ethical problem. Say there was some way of stopping the train instead of diverting it, but with some risk - that's a different moral question. Someone having to endure being tied to a violinist faces different moral questions to someone wrestling with abortion. The detailed aspects of each situation allow for different options, and ignoring those is unavoidably immoral. Fundamentally, sometimes you can't reduce it to a simple case. The details are essential and cannot be separated from the moral issue at hand. So the question, "is it better to kill one person than five ?" may have literally no context-independent meaning at all - it would be better in some situations but not in others. The key to addressing a moral dilemma by analogy is to construct a relevant analogy in which the answer already seems clearer.
Faced with people who don’t ‘get’ a thought experiment, the temptation for philosophers is to say that these people aren’t sufficiently good at isolating what is ethically relevant. Obviously, such a response risks being self-serving, and tends to gloss over an important question: how should we determine what are the ethically relevant features of a situation? Why, for example, should a philosopher sitting in an armchair be in a better position to determine the ethically relevant features of ‘The Violinist’ than someone who’s worked with thousands of patients?
 All this makes reasoning about thought experiments strikingly unlike good ethical reasoning about real-life cases. In real life, the skill and creativity in ethical thinking about complex cases are in finding the right way of framing the problem. Imaginative ethical thinkers look beyond the small menu of obvious options to uncover novel approaches that better allow competing values to be reconciled. The more contextual knowledge and experience a thinker has, the more they have to draw on in coming to a wise decision.
The greater one’s contextual expertise, the more likely one is to suffer the problem of ‘too much knowledge’ when faced with thought experiments stipulating facts and circumstances that make little sense given one’s domain-specific experience. So, while philosophers tend to assume that they make ethical choices clearer and more rigorous by moving them on to abstract and context-free territory, such gains are likely to be experienced as losses in clarity by those with relevant situational expertise.
Clearly thought experiments do have value. By attacking the problem in different ways, they help us identify the similarities and differences, looking for what's extraneous (the colour of the train, say) and what's really important (who the people are). In this way we try to seek out the appropriate principle on which to act. Quite often I find myself profoundly disagreeing with the conclusion the author intended, and it's in those disagreements that it becomes possible to learn something.

The Aeon piece then goes into a lengthy discussion about whether thought experiments are literally experiments or merely appeals to the imagination. It's decent enough, but to be honest I'm not really sure what the point of it is : it seems clear that they're not literally experiments but "intuition pumps", as the author calls them, for getting to the heart of the matter. This is not at all easy. Clearly it's very difficult to illuminate the underlying ethical principles, but the fact that there are many unique and complicating factors to every situation doesn't mean that underlying principles don't exist. We just have to keep examining and revising in accordance with new evidence. Much like science, really... which means we've come full circle and thought experiments are experiments after all. Whoops.

What is the problem with ethical trolley problems? - James Wilson | Aeon Essays

Much recent work in analytic philosophy pins its hopes on learning from imaginary cases. Starting from seminal contributions by philosophers such as Robert Nozick and Derek Parfit, this work champions the use of thought experiments - short hypothetical scenarios designed to probe or persuade on a point of ethical principle.

Wednesday, 17 June 2020

Apparently, some people don't believe bias affects behaviour

Two very different takes on the nature of bias and how to overcome it - in particular, implicit racial bias. Trying to raise issues to consciousness is important and worthwhile, but how much influence this really has is subject to a host of caveats. My feeling is that it does help, but awareness alone rarely bestows instantaneous control. Being aware that you should or shouldn't do something isn't enough by itself to stop you from wanting to do it (or not do it).


This first article is - spoiler alert - shite. Absolutely shite. It's self-inconsistent to a degree seldom found outside of presidential tweets, and I'm more than a little skeptical of some of its claims. I'm pretty confident I did a lot more background reading to write this post than the authors did, because I actually read the links they cite in support of their claims, and I don't believe they even did this.

What's the big problem ? Well, the article says not merely that implicit bias is not much of a thing, a notion I'd be prepared to at least entertain, but even that our explicit biases don't matter very much. And that's just... odd. First, the idea that implicit bias isn't much of a thing :
Contrary to what unconscious bias training programs would suggest, people are largely aware of their biases, attitudes, and beliefs, particularly when they concern stereotypes and prejudices. Such biases are an integral part of their self and social identity... Generally speaking, people are not just conscious of their biases, but also quite proud of them. They have nurtured these beliefs through many years, often starting in childhood, when their parents, family, friends, and other well-meaning adults socialized the dominant cultural stereotypes into them. We are what we believe, and our identity and self-concept are ingrained in our deepest personal biases. (The euphemism for these is core values.)
Well, surely everyone is aware of their explicit bias by definition. But the whole point of implicit bias is that it's unconscious. It doesn't arise out of choice, or an active desire to discriminate. This is why people can have implicit biases even against their own social groups. So I don't think the first sentence makes much sense : nothing is "contrary", the two biases are wholly different. And the article has a subheading that "most biases are conscious rather than unconscious", but nothing is offered to support this claim (how would you even measure how many unconscious biases someone has anyway ?). Not a great start.
Contrary to popular belief, our beliefs and attitudes are not strongly related to our behaviours. Psychologists have known this for over a century, but businesses seem largely unaware of it. Organizations care a great deal about employee attitudes both good and bad. That’s only because they assume attitudes are strong predictors of actual behaviors, notably job performance. 
However, there is rarely more than 16% overlap (correlation of r = 0.4) between attitudes and behavior, and even lower for engagement and performance, or prejudice and discrimination. This means that the majority of racist or sexist behaviors that take place at work would not have been predicted from a person’s attitudes or beliefs. The majority of employees and managers who hold prejudiced beliefs, including racist and sexist views, will never engage in discriminatory behaviors at work. In fact, the overlap between unconscious attitudes and behavior is even smaller (merely 4%). Accordingly, even if we succeeded in changing people’s views—conscious or not—there is no reason to expect that to change their behavior.
Wait... what ? Doesn't this flatly contradict the first passage that "we are what we believe" ? This all feels highly suspicious. In fact, on reflection I think it's a deliberate attempt to sow confusion.
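As an aside, where do these "overlap" percentages come from ? Presumably "overlap" just means the correlation squared - the fraction of variance in behaviour statistically accounted for by attitudes. A quick back-of-the-envelope check (my own sketch, assuming that's what Fast Company mean) :

# Assuming "overlap" means r squared, i.e. the variance explained.
for r in (0.2, 0.3, 0.4):
    print(f"r = {r:.3f} -> overlap = {r**2:.1%}")
# r = 0.400 gives 16.0% (their headline figure for attitudes) ;
# r = 0.200 gives 4.0% (their figure for unconscious attitudes).

At least their arithmetic is internally consistent, whatever one makes of their interpretation of it.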

Intuitively, the stronger a belief someone holds, the more likely they are to engage in the corresponding behaviour. But that doesn't mean we should therefore expect the correlation to be extremely strong, because lord knows there are a bunch of things I'd like to do but physically can't*. I firmly believe that flying like Superman would be a jolly good thing, but I don't even enjoy air travel. Hell, I find driving a terrifying experience and would go to extreme lengths to avoid ever having to do it again. Does that stop me wanting a jetpack ? No. And I'd quite like to be an astronaut, but my belief that it would be worthwhile isn't enough to motivate me to drop everything and switch career. That'd be silly.

* Also on statistical grounds : causation doesn't necessarily equal correlation. A genuine causal link can still produce only a weak measured correlation if other factors dominate.

Some beliefs are pure fantasy : there's a genuine difference between belief, desire, and behaviour. Sometimes we just can't act on our beliefs, either because they're physically impossible or because we're subject to other forces like social pressure. We might not want to smoke but give in to peer pressure*, or vice-versa. We use metadata of who believes what just as much as we do raw evidence, and such is the power of this metadata-based reasoning** that people even insert their genitals into people with whom their own sexual orientation is in direct disagreement. The blunt-force manifestation of this is the law, preventing us from doing (or wanting to do) things we might otherwise be inclined to attempt. Law both constrains our actions directly and culturally restrains us from even forming certain desires*** in the first place.

* I'm hoping this example is hugely dated and no longer much of a thing.
** Not sure what else to call it. "Groupthink" has the more specific meaning of a false consensus, "peer pressure" is too direct, and "thinking in groups" just doesn't cut it. See these posts for an overview.
*** Hugely simplifying. I'm not saying we'd all become murderers without laws, but these footnotes are already excessive.

Then there are conflicting beliefs. People may believe, say, that all ginger people are evil, but also that it's important to be respectful, in which case it should be no surprise that their behaviour doesn't always reflect one of those beliefs. If, on the other hand, they believe ginger people are evil and that they should always express themselves honestly, then it should come as no surprise to find them saying nasty things about redheads. Personality plays a huge role.

All this undermines the direct implication of the above quote that we should expect a very strong correlation between beliefs and behaviours. Even if beliefs do drive behaviours, other constraints are also at work. Conversely, this also lends some support to the idea that those who do engage in discriminatory behaviours do not necessarily hold prejudiced views : "I was only following orders" and all that. You don't even necessarily need to dislike a group of people to be convinced you need to murder them.

But I think it's complete nonsense (literally, it does not make sense) to go that extra step and suggest that changing beliefs won't change behaviour. As the old saying goes, "not everyone who voted for Brexit was a racist, but everyone who was a racist voted for Brexit" : things aren't always neatly reversible (granted it's more subtle than that). Changing beliefs can change behaviour either directly or by changing the pervading culture of laws, rules, and social norms.

So the fact that there's only a modest correlation between belief and behaviour, in my view, in no way whatsoever implies that changing beliefs won't change behaviour. Just because belief isn't the dominant driver of behaviour doesn't imply that it isn't a driver at all. Indeed, it could well be the single biggest individual contributing factor, just smaller than the sum of all the rest. As we'll see, there's some evidence for that.

I'm also not at all convinced by the claim that most racist behaviour couldn't have been predicted from the perpetrators' attitudes - the weak correlation alone doesn't feel like good evidence for this. Yes, this could happen to a degree, due to social forces etc. But at some level, someone consciously engaging in discriminatory practices (unlike implicit bias) must by definition hold discriminatory beliefs. And how are they defining "racist or sexist" behaviour here anyway ? This matters. I'll quote myself here because I think the concept is useful :
The "angry people in pubs", on on the internet, are often what we might call armchair bigots. They won't, unless strongly pressured, ever take physical action against people they dislike. But they are all too willing to vote on policies which harm other people. They care just enough about the truth to have the decency to be embarrassed (even if only unconsciously) by their opinions, but they're all too happy to absolve themselves of responsibility and let other people do their dirty work. This is a very dangerous aspect of democracy, in that it makes villainy much easier by making it far less personal.
So this claim, that attitudes and behaviour correlate in such a strange way, is far too odd to be dealt with in a few casual sentences. More explanation is required.

The article links three papers in this passage. The first, which they cite in support of the "weak" correlation, says in the first sentence of its abstract that it's social pressures which weaken or moderate the belief/behaviour trend, as already discussed. It also notes that the correlation value of 0.4 is pretty strong by other psychological standards. It's a meta study of almost 800 investigations, and doesn't mention racism or sexism (which is what the Fast Company article is concerned with). It explicitly says that beliefs predict some behaviours better than others, which is hardly unexpected.

So I call foul on the Fast Company article, which is using a general study to support specific claims not backed up by the original research, which does not say that the trend is weak in all cases. Sure, social pressure is important, but it's completely wrong to say this means beliefs don't matter.

The second paper is also concerned with measuring the correlation but has nothing to do with prejudice. It's a bit odd to even mention it. The third paper, from 1996, is concerned with prejudice and discrimination and does indeed find a weaker correlation than the general case (r=0.3) - provisionally.

It's a good read. It's another meta study and describes the limitations and inhomogeneities of the various samples, but straightaway even the Fast Company claim for a weaker correlation looks to be on shaky ground. Correcting for the different sample sizes, the paper shows the correlation rises from 0.286 to 0.364, barely lower than the average. And exactly which parameters are used affects this further, with the highest average (sample-size corrected) correlation being 0.416 when examining support for racial segregation. Some individual studies reported even higher correlations, in excess of 0.5 or even 0.6. Overall, correlations were stronger when looking at survey data than experimental data, perhaps because - they suggest - people don't want to openly appear to be prejudiced.

(What I'm not clear about is what the authors mean by "discriminatory intention". They define prejudice as merely holding views, whereas discrimination is putting those views into practice. Discriminatory intention, as far as I can tell, is another type of prejudice.)

Two of their concluding points deserve to be quoted :
Though the prejudice-discrimination relationship is not very strong in general, it varies, to a considerable degree, across specific behavioral categories, and it seems that the relationship is stronger in those cases where the behavior is under volitional control of the subjects. 
In sum, we conclude: only rarely is prejudice a valid predictor for social discrimination, but there are only very few other candidates, and all of these are less useful than prejudice.
I call bollocks on the Fast Company article. They twist the conclusions to mean something very different from what the original authors said.

What of their claim that there's only a "4% overlap" between unconscious attitudes and behaviour ? For this they cite a Guardian article. But this is cherry picking in extremis - the article doesn't say implicit bias is weak (far from it, quoting a number of other studies showing it's a strong effect), it only notes the flaws in certain testing. Fast Company continue with their willful stupidity :
Furthermore, if the test tells you what you already knew, then what is the point of measuring your implicit or unconscious biases? And if it tells you something you didn’t know, and do not agree with, what next? Suppose you see yourself as open-minded (non-racist and nonsexist), but the test determines that you are prejudiced. What should you do? For instance, estimates for the race IAT suggest that 50% of black respondents come up as racially biased against blacks.
That's the whole frickin' point of implicit bias : it's different from explicit bias. Why is this complicated ?

While their conclusion that "bias doesn't matter" is clearly rubbish, they are right to point out that overcoming this bias is difficult. What they don't offer, despite their clickbaity headline, is any hint at all of what better method is available to overcome it. Although they do conclude that bias is a problem and we should do more (or something differently) to improve how we deal with it, this frankly feels schizophrenic after they just spent the whole article veering wildly back and forth between "bias being how we identify ourselves" and "bias being totally unimportant".

This is a daft article. It doesn't make any sense. It mangles the conclusions of academic papers in order to support a conclusion it doesn't believe in. Aaargh.

Take-home message : bias does matter. That's what the evidence says, very clearly, despite numerous and interesting complications.


Time for the second article. Don't worry, this one can be much shorter, because it's self-consistent and sensible. It fully accepts that implicit bias is important and doesn't say anything stupid about explicit bias being something we can ignore.
That particular implicit bias, the one involving black-white race, shows up in about 70 percent to 75 percent of all Americans who try the test. It shows up more strongly in white Americans and Asian Americans than in mixed-race or African Americans. African Americans, you’d think, might show just the reverse effect — that it would be easy for them to put African American together with pleasant and white American together with unpleasant. But no, African Americans show, on average, neither direction of bias on that task. Most people have multiple implicit biases they aren’t aware of. It is much more widespread than is generally assumed.
However, the author of this piece is also skeptical as to whether implicit bias training is actually improving things or not.
I’m at the moment very skeptical about most of what’s offered under the label of implicit bias training, because the methods being used have not been tested scientifically to indicate that they are effective. And they’re using it without trying to assess whether the training they do is achieving the desired results. 
I see most implicit bias training as window dressing that looks good both internally to an organization and externally, as if you’re concerned and trying to do something. But it can be deployed without actually achieving anything, which makes it in fact counterproductive. After 10 years of doing this stuff and nobody reporting data, I think the logical conclusion is that if it was working, we would have heard about it.
Training methods apparently either don't work or their effects are extremely short-lived :
One is exposure to counter-stereotypic examples, like seeing examples of admirable scientists or entertainers or others who are African American alongside examples of whites who are mass murderers. And that produces an immediate effect. You can show that it will actually affect a test result if you measure it within about a half-hour. But it was recently found that when people started to do these tests with longer delays, a day or more, any beneficial effect appears to be gone... It’s surprising to me that making people aware of their bias doesn’t do anything to mitigate it.
I'm not surprised at all. Dedicated training sessions take people out of their usual information network and temporarily embed them in a new one. Once you take them out, they're assaulted once again by their old sources of bias. Maintaining awareness from one particular session, especially if its conclusions run counter to everyday sources, is not at all easy. Raising issues to awareness does help you take control, but i) it only helps and ii) you need to maintain that awareness. Still, the author does come up with a concrete plan that might address this :
Once you know what’s happening, the next step is what I call discretion elimination. This can be applied when people are making decisions that involve subjective judgment about a person. This could be police officers, employers making hiring or promotion decisions, doctors deciding on a patient’s treatment, or teachers making decisions about students’ performance. When those decisions are made with discretion, they are likely to result in unintended disparities. But when those decisions are made based on predetermined, objective criteria that are rigorously applied, they are much less likely to produce disparities.
As noted, the effect of continually asking, "Am I being racist ?" can be counter-productive. But asking instead, "Am I being fair ?" may be better. It forces you to focus on the key criteria of how you're making your decision. It stops you from focusing on the very issue you want to avoid - prejudice - and lets you deal with the problem at hand fairly.
What we know comes from the rare occasions in which the effects of discretion elimination have been recorded and reported. The classic example of this is when major symphony orchestras in the United States started using blind auditions in the 1970s... as auditions started to be made behind screens so the performer could not be seen, the share of women hired as instrumentalists in major symphony orchestras rose from around 10 percent or 20 percent before 1970 to about 40 percent.
Also telescope proposals. Forcing people to focus on what matters, while at the same time excluding what doesn't matter, actually works. People are actually capable of being fair and rational under certain conditions. Hurrah !
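To make "discretion elimination" a little more concrete, here's a minimal sketch - entirely my own illustration, with hypothetical criteria and weights, not anything from the article - of what scoring everyone against the same predetermined rubric, rather than by gut feel, might look like :

# Hypothetical criteria and weights, fixed before anyone is assessed,
# leaving no room for in-the-moment discretion.
CRITERIA = {"relevant_experience": 0.4, "test_score": 0.4, "references": 0.2}

def rubric_score(candidate):
    # Each criterion is rated 0-10 ; the total is a fixed weighted sum.
    return sum(candidate[c] * weight for c, weight in CRITERIA.items())

applicants = [
    {"name": "A", "relevant_experience": 7, "test_score": 9, "references": 6},
    {"name": "B", "relevant_experience": 8, "test_score": 6, "references": 9},
]

# Rank purely on the rubric : the name (and anything else that might
# trigger bias) plays no part in the score - much like a blind audition.
for person in sorted(applicants, key=rubric_score, reverse=True):
    print(person["name"], round(rubric_score(person), 2))

Whether any real decision can be boiled down to three numbers is debatable, but the principle - fix the criteria before you meet the people - is the whole point.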

Monday, 15 June 2020

Leave the damn statues alone

... at least, until you can come up with a general principle of which ones you object to. Cue angry rant, in no small part repeating the previous post.

I think we can certainly all agree on two extremes. First, that we shouldn't be building any new statues to people whose entire aspect was offensive to modern eyes, except perhaps in ways intended to denigrate them. No-one wants a statue of Pol Pot staring at them in a public space, or wants passing schoolchildren to see a huge statue of Stalin depicted as a happy jolly fellow with a big beaming smile while he crushes opponents with a literal iron fist*. Second, existing statues of people who led blameless lives should certainly stay.

* Or maybe they do. Sounds kinda cool, actually.

The trouble is that the middle ground between these two extremes is an absolute minefield of ghastly complications.

As far as I can tell, protestors are objecting mainly to the racist attitudes of historical figures, considering it no longer appropriate to glorify them with a public statue. No-one, as far as I know, wants to forget the past : indeed, there are calls for more history lessons, not fewer. And quite rightly so. As I've said before, British high school education was absolutely shite. It was a plodding, humdrum affair that utterly neglected both the spectacular advances and the atrocities of the not-so-distant British past. It was, I'd go so far as to say, shamefully bad.

So the protestors don't want to censor or forget anything. Far from it. But they seem awfully confused about the artistic nature of statues and the different moral attitudes that prevailed in previous eras.

First, does a statue in general automatically glorify its subject ? Clearly not. There is no more artistic necessity for a sculpture to glorify its subject than for a painting. Does the painting of Kronos eating his children glorify cannibalism ? Hardly !

What, though, if the statue was in an open public space ? Again I have to say no. The Prague TV tower is covered in giant deformed babies with barcodes for faces, and I doubt very much this is intended to glorify mutilation. Or what about this bizarre collection ? Did the artist intend to encourage child beating ? I don't think so*.

* As to what he was encouraging, that is left to an exercise for the reader. Drugs, presumably.

A statue, to my way of thinking, does not automatically glorify or even honour its subject. The one thing it unavoidably does do is draw attention to them. Anything else is context dependent : it can glorify their whole character, or it can commemorate only their specific achievement(s) - then again it can even go the other way and raise awareness of what horrible people they were. Sculptures of the Greek myths don't automatically tell us to adopt "What Would Zeus Do ?" as a motto, because the answer would be, "turn into a swan and rape people", which is pretty much the worst advice possible in any situation*. Static visual art can serve the same function as a play, inviting the audience only to consider its subject, not attempting to persuade them of anything. Art can, and should, invite its audience to think.

* One million internet points for anyone who comes up with a situation in which this would actually be an appropriate thing to do.

The intent of the artist does not matter all that much to the perception of the audience. Take, for example, that time an old lady "restored" a painting of Jesus into "a very hairy monkey in an ill-fitting tunic". The moral message intended by the artist, if any, doesn't necessarily have any impact on how the art is actually perceived (e.g. a Ted Hughes poem about a hawk that was bizarrely interpreted to be about the Nazis). It's not that the interpretations are wrong, it's that they're subjective. You see a statue glorifying a racist; I see a statue honouring the man who saved Britain from the Nazis. Both are true, in no small part because human beings are fantastically complicated beasts.

Historical figures exemplify how perception changes, especially as pain fades with time. We no longer view statues of Caesar as honouring the man who boasted that he'd killed a million Gauls and enslaved a million more : we can look at these distant atrocities dispassionately as interesting historical events. Similar arguments could be applied to a myriad of historical figures. Their crimes were as bad or worse than any of the more recent figures, yet tearing down Trajan's Column would be criminally insane. If we uncover a statue of an Aztec emperor, we say, "how interesting !" despite their acts of bloody slaughter being as bad as any others in human history. An Aztec atrocity just doesn't have the same emotional resonance as more recent, closer horrors. The statues and monuments no longer have the same impact or meaning as they once did, just as castles are now seen as fun places to visit and not terrifying instruments of oppression and torture. So even when the intent of the artist was to glorify their subject, this doesn't always or permanently succeed.

No-one has a monopoly on what an artwork means. Even at the moment a statue is created, what to one person can feel horrendous can to others, rightly or wrongly, seem worthy of celebration. How do we decide whose enjoyment outweighs whose offence, and vice-versa ?

Frankly I've no idea. More fundamentally, no-one but no-one is perfect. As the protestors point out, Churchill was indeed a racist - pretty much everyone was back then - but he was hardly a white supremacist. Martin Luther was an extreme anti-Semite. Many suffragettes were eugenicists; so was H. G. Wells (though the concept didn't have the racial overtones it does today). Aneurin Bevan, founder of the National Health Service, called the Tories "lower than vermin". The ancient Athenian founders of democracy were proud to call themselves warlike and thought nothing amiss in owning slaves. Richard the Lionheart, Henry V and Boudicca - some of Britain's most notable military commanders - all committed what today would be considered war crimes. Thomas Jefferson owned slaves. Reality has few true heroes and villains, but it has an abundance of people capable of heroic and villainous acts alike.

If you want to tear down a statue, you should be able to explain four things. First, you need to justify what's special about this particular figure as opposed to a plethora of other, deeply flawed yet accomplished historical figures who held similar or identical views. Second, you need to explain what it is about that particular trait that makes it different from other offensive characteristics. Third, you also need to explain how this applies uniquely to a statue and not some other form of artwork. If a statue is so offensive, why not books ? Why not TV shows ? Fourth, you should explain why removal is the best action rather than alternative measures.

Don't misunderstand - this isn't a rhetorical point. It's absolutely possible to answer these questions. For example, toppling statues of Saddam Hussein made complete sense : those were erected by a violent dictator for the sole purpose of his own aggrandizement, and no-one thought they had any other cultural value; there was highly limited scope for interpretation. Similarly, Confederate flags or swastikas - in virtually all instances - are open declarations of hostility, not talking points for polite discourse. But I don't mean to say for a moment that because a statue isn't necessarily aggressive, it follows that no statue ever is - that would clearly be very stupid.

But historical figures of the distant past, to my mind, are in general altogether more complex than figures of living history. Thomas Jefferson was a pro-emancipation slave-owning racist, a juxtaposition which simply doesn't make sense to modern ideologies. Christopher Columbus, despite a number of earlier attempts by others, was the one who made the New World really matter to the Old, but, like Vasco da Gama, was tremendously violent towards the people he encountered. Should we tear down his statues, or can we use them to mark the undeniable magnitude of his achievement without glorifying the associated death and destruction ? Surely if we can recognise the debt owed to Churchill without endorsing his bigotry, we can do the same for other, much older figures.

Protestors are not beholden to produce a detailed list of every statue or other artwork they want removed. Nor are they under any burden to produce a set of criteria that must be perfect on the first iteration and subject to no further changes : this can be a conversation, in which we revise and adjust the criteria according to the arguments set forth. But protestors do need to at least try and establish some broad principle behind their actions, otherwise their efforts are arbitrary, potentially hypocritical, and counter-productive. Many of the figures targeted are not even especially famous, so attacks on their statues only inflame curiosity. Never mind that a few days ago this was not an issue at all, except in a few cases, which gives the whole thing a distinct whiff of, at best, stemming from different underlying factors, and at worst of being completely manufactured. How many people really feel such bitterness towards centuries-old statues of figures most people have never heard of ? Oh, and they've been there for 200 years but only became offensive this week ? Hmm.

When it is clear that a statue glorifies, and is perceived to glorify, an injustice, there can be grounds for removal. Alternative actions include adding explanatory plaques, which are far more educational than removal, or counter-statues. Sure, sometimes taking them down is the best option. But scrawling "was a racist" on Winston Churchill's statue ? Nope. That's an action born of pure anger, not any righteous or informed sense of tackling injustice. Calling for the removal of statues of people based on their violent actions or bigoted views ? Nope - that would lead to the removal of practically everyone, even those perceived as heroes by those most fiercely opposed to injustice. People are just so much more complicated than that. So too is their art.

Artificial intelligence meets real stupidity

A wise man once quipped that to err is human, to forgive divine... but to really foul things up you need a computer. Quite so. Look, I love ...