Sister blog of Physicists of the Caribbean in which I babble about non-astronomy stuff, because everyone needs a hobby

Wednesday 20 April 2022

Slaying the utility monsters

I've previously gone on something of a rant against utilitarianism, but since reading On Liberty (I will eventually finish that planned post series, I promise) I've become far more sympathetic to it.

As I see it, neither liberalism nor utilitarianism provides a complete moral system, but both provide important parts of the whole. In particular, you can't treat moral actions as a linear sum : allowing the happiness of one to balance out the misery of the many is pretty much guaranteed to lead straight to hell. Rather, you have to actively seek first and foremost to minimise suffering, so that as few people as possible experience as little of it as possible. Minimising suffering is the priority, over and above increasing happiness.

Furthermore, whatever action you take, you have to justify it by some independent criteria : you never cause suffering for the explicit purpose of increasing happiness, otherwise you're in cliché territory about the ends justifying the means. Your goals are to minimise suffering and maximise happiness (that seems a pretty safe set of goals for any would-be social engineering*), but you can't let your goals dictate what is just and fair. The two must be kept strictly independent. Finally, in a liberal system you should always seek to employ the smallest possible action that gives the greatest possible gain.

*Yeah, this is a simplification, but without simplifications we'll get nowhere.

On then to this post. I believe the above caveats should address most of the objections, e.g. :

Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives ?

Duuhhh, no ! This would be letting the goals dictate justice. And going around murdering people for their organs is so obviously wrong that I will have Words with anyone who thinks I need to explain why that's the case. Of course it's wrong. You know it, I know it, so let's not pretend otherwise. Never mind that, as with the Trolley Problem, the details of the situation (which are here lacking) could profoundly change the moral problem being addressed : you just can't justify murder.

Most of the other examples are very similar : 

You have a tasty cookie that will produce harmless pleasure with no other effects. You can give it to either serial killer Ted Bundy, or the saintly Mother Teresa. Bundy enjoys cookies slightly more than Teresa. Should you therefore give it to Bundy?

Egads no. Why should I ever reward the evil ? First I must minimise suffering, which means punishing the evil, not giving them cookies. Good grief. Enjoyment alone cannot possibly directly equate with moral good.

There is a large number of Nazis who would enjoy seeing an innocent Jewish person tortured – so many that their total pleasure would be greater than the victim’s suffering. Should you torture an innocent Jewish person so you can give pleasure to all these Nazis?

Pfft. Again, happiness does not equate with morality : that's plainly daft. And as I've said before, because they can't be mathematically quantified, you can't linearly sum happiness and misery anyway. That's like trying to take the smell of rancid cheese and divide it by a sausage : it just doesn't make any sense at all. You've just lost Godwin's Law, methinks.

...because failing to save lives is just as wrong as actively killing...

This is bollocks, though, isn't it ? I mean yeah, if you see someone drowning and can throw them a lifebelt but actively choose not to, then sure, you've basically murdered them. Choosing your career based on what makes you happy instead of what saves the most lives isn't the same at all. In the former, you have a straightforward binary choice. In the latter, you have a whole slew of hypotheticals and a maddeningly complex optimisation problem : what exactly does save the most lives ? Being a doctor ? Being a politician out for healthcare reform ?

Again, I think these things cannot be so easily quantified. And you cannot spend your entire life constantly worrying about other people, because that's just not human and will inevitably lead to your own ruin. Sorry, but it's just not nearly as simple as choosing to become a medical doctor versus an athlete, because that's not how people work. And any moral framework must be constructed with respect to how real human beings actually function, rather than treating them as linear machines capable of any arbitrary action.

So yes, I agree with the piece as far as that particular conception of utilitarianism goes : if we treat pleasure itself as moral good, and we seek to maximise this with no other concerns... absolutely, that is a catastrophically appalling idea. But I don't see any reason why we have to take this useless definition of "utilitarianism" to be the only possible interpretation of it. It seems to me that there are far better, more worthy implementations which deserve consideration instead.

