
Monday 16 November 2020

The mind of a monster

One of the many, many things that annoyed me about Steven Pinker was his brusque dismissal of utility monsters. Such a monster gains pleasure from the misery of others, supposedly - according to critics of utilitarianism - justifying untold amounts of suffering for the sake of the good of the monster. These creatures "don't seem to be much of a problem", according to Pinker. I found this stupid because surely you're not supposed to take the idea so bloody literally. Far more useful to treat them as thought experiments, as ways of probing interesting moral dilemmas.

But what if we do take them literally ? That's explored in this rather nice overview from the BBC.

“I want to eat you, please,” the monster said.

“Sorry, but I’d prefer not to be your lunch,” the philosopher replied, and moved to keep walking.

“Wait,” said the monster, holding up a clawed finger. “What if I could present you with a sound argument? Your idea of happiness is only a mere fraction of what I am capable of feeling,” it says. “I am as different to you, a human, as you are to an ant. If I eat you, it will give me more well-being and satisfaction than all humans who have ever lived.”

The philosopher hesitates while trying to think of a counterargument. “Well, gosh, that’s certainly a valid...” But time’s up, the professor is lunch.

This immediately struck me as bollocks, ironically enough because of a bit of subtlety that Pinker himself explained quite nicely (and which the article does briefly mention later on) : the difference between happiness and value. Why should a monster's happiness ever be worth anyone's life ? By that logic, genocide is fully justified because it makes dictators happy. The whole premise is invalid : we value happiness, but we don't value people because they're happy - sad people matter too ! True, we also deem it worthy to cause happiness, but we don't automatically or unequivocally assume that happiness acts as a counterweight to suffering. The two are not opposites, even in the case where one leads directly to the other. The problem is brilliantly described in Doctor Who :

All dead. If the Dalek gets out it'll murder every living thing. That's all it needs. 
But why would it do that !? 
Because it honestly believes they should die. 

Daleks are utilitarians in extremis : my happiness outweighs your suffering, and my happiness depends on your death. Sucks to be you !

Surely a better approach is to try and maximise happiness while also minimising suffering, with the latter the more important by far. And it would have to be more complex than mere addition and subtraction, otherwise that would again lead to the Dalek-like conclusion of wholesale extermination, albeit as quickly and painlessly as possible. No, you'd need some kind of integration or perhaps convolution. What you want to do is minimise both the number of people enduring poor conditions and the degree to which each individual experiences anything unpleasant. Only once everyone is no longer ever in any serious danger, or at least to a degree they're content with, do you start trying to give everyone a solid gold house or whatever. You try and raise everyone up to some minimum standard first, not push a few down so others go up higher.
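To make that "integration rather than simple addition" idea a bit more concrete, here's a toy sketch in Python. It's entirely made up for illustration - it assumes, as the rest of this post argues you really can't, that each person's suffering and happiness could be scored on some numerical scale, and the names and thresholds are all invented. The only point is to show how a lexical ordering with a non-linear penalty behaves differently from just summing everything up.

from dataclasses import dataclass

@dataclass
class Person:
    suffering: float   # 0 = none, higher = worse
    happiness: float   # 0 = none, higher = better

MINIMUM_STANDARD = 1.0   # suffering above this counts as "serious"

def suffering_score(world: list[Person]) -> float:
    # Non-linear penalty: squaring means one person in deep misery
    # outweighs many people in mild discomfort - no mere addition.
    return sum(max(0.0, p.suffering - MINIMUM_STANDARD) ** 2 for p in world)

def happiness_score(world: list[Person]) -> float:
    return sum(p.happiness for p in world)

def better_world(a: list[Person], b: list[Person]) -> str:
    # Lexical ordering: suffering is compared first, and only ties are
    # broken by happiness - a solid gold house for some never offsets
    # serious misery for others.
    key_a = (suffering_score(a), -happiness_score(a))
    key_b = (suffering_score(b), -happiness_score(b))
    return "a" if key_a < key_b else "b"

# One Dalek-pleasing world versus a modestly content one:
dalek_world = [Person(suffering=10, happiness=0), Person(suffering=0, happiness=100)]
modest_world = [Person(suffering=0.5, happiness=5), Person(suffering=0.5, happiness=5)]
print(better_world(dalek_world, modest_world))   # -> "b"

Under this (made-up) rule the modest world wins even though its total happiness is far lower, which is the intended behaviour : raise everyone to the minimum standard first, then worry about the gold houses.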

In essence, there are two aspects to the problem : why use happiness as the goal (rather than lack of suffering), and why the monster ? That it is undeniably good to be happy doesn't outweigh that it might be even better not to suffer - no proof is offered to equate happiness with value. And why the subjective emotional state of one monster should outweigh the equally real and valid feelings of a philosophy professor, even if the capacity of one is greater than the other, is also far from obvious. Why do the monster's feelings matter more ? What good is it going to do to make the monster happy ? Is the monster just going to sit there like a great big smug lemon, cheerfully guzzling philosophers and feeling amazingly content but not actually helping anyone ? Good God man, that's monstrous indeed !

There's an even more fundamental aspect to this which I'll return to later on. For now, suffice to say that I think "minimise suffering first, then maximise happiness" is a useful guideline, but cannot ever be anything more than that.

To return to the article, certain pretentious scientists think that utility monsters might indeed have to be taken literally :

Bostrom is one of the main academic proponents of the idea that we ought to prepare for the sudden arrival of super-intelligent machines, far smarter than the best human minds and capable of raising new ethical dilemmas for humanity. In a recent paper, posted on his website, he and Shulman propose scenarios where one of these digital minds could become a well-being “super-beneficiary” (a term they prefer to “monster” because they believe such minds should be described with non-pejorative language).

Well, I prefer "monster" because I happen to think that Daleks are Not Nice, and I immediately suspect that these researchers might need a good slap in the face. But let's proceed :

One thing they identified during this exercise was that digital minds might have the potential to use material resources much more efficiently than humans to achieve happiness. In other words, achieving wellbeing could be easier and less costly for them in terms of energy, therefore they could experience more of it. A digital mind’s perspective of time could also be different, thinking much faster than our brains can, which could mean that it is able to subjectively experience much more happiness within a given year than we ever could. They also need no sleep, so for them it could be happiness all night long too. Humans also cannot copy themselves, whereas in silicon there is the strange possibility of multiple versions of a single digital being that feels a huge amount of well-being in total.

If our demise meant their success, then by the basic utilitarian logic that we should maximise well-being in the world, they’d have an argument for metaphorically eating us. Of course, only the most uncharitable interpretations of utilitarianism say we are obliged to detrimentally sacrifice ourselves for the sake of others’ happiness.

But this is all wildly speculative. Why would such a being have any need for us to suffer at all ? Why can't it just increase its sense of well-being at will without having to eat anyone ? Or to take it to its logical extreme, I could equally well speculate about a device which does absolutely nothing else at all except experience happiness. Would there be any ethical questions about switching such a device off or on ? The poor thing would have an utterly meaningless, pointless existence. Merely having such a device in the world - and indeed, merely making real people happy - does not automatically equate to moral good. We can state that quite confidently without even needing to tackle the much harder question of what we mean by "moral good". Creating an inanimate cube which sits around all day doing nothing at all except feeling orgasmically happy is not something which makes the world a better place.

I think, probably, I'm not generally obligated to make myself unhappy in order to satisfy others. Far better to strive to avoid making other people unhappy (and to actively help them avoid misery when it's inflicted on them) than to satiate their desires. The pursuit of happiness is usually an individual choice, whereas the avoidance of suffering often requires a societal effort. That is, only once you've reached some minimum standard can you freely fend for yourself - until that point, you need the assistance of other people.

The much more fundamental point is that happiness is not something you can numerically quantify or subject to mathematical principles : you can't define morality as the vector sum of happiness. More than that, you can't even objectively define value itself, still less say why happiness is all that matters. And if you can't properly define it, quantifying it is absolutely hopeless. Utilitarianism is a useful, thought-provoking guideline, but God help anyone who takes it any further than that.

