
Saturday 27 October 2018

The public's views on the ethics of self-driving cars

I think the cultural differences in ethics are far more interesting in their own right than how to apply them to self-driving cars. Autonomous vehicles are going to have to be significantly safer than human drivers anyway, and most of the time the only choice they'll have is whether or not to slow down. Situations where there's a clear choice of whom to kill are going to be vanishingly rare; it will be almost exclusively about juggling probabilities. The machine won't have significantly better information to judge this on than a human anyway : it won't know the life stories of the people involved, and it won't be able to make any complex calculations beyond simple numbers. I doubt it will even be good enough to distinguish between old and young.
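
To make the "juggling probabilities" point concrete, here's a toy sketch (entirely my own invention, nothing from the study) of the kind of decision an autonomous vehicle might actually face : pick the braking level that minimises expected harm given rough collision probabilities, with no identity information about anyone involved. All the names and numbers below are made up for illustration.

    # Toy sketch (not from the study): "juggling probabilities" as
    # expected-harm minimisation. The car can't identify anyone; it just
    # weighs rough collision probabilities against crude severity weights.
    actions = {
        "brake_hard":   {"hit_pedestrian": 0.05, "rear_collision": 0.20},
        "brake_gently": {"hit_pedestrian": 0.30, "rear_collision": 0.05},
        "maintain":     {"hit_pedestrian": 0.70, "rear_collision": 0.01},
    }

    # Hitting a pedestrian is weighted as far worse than being rear-ended.
    # Note what's absent: age, wealth, or anything about who the people are.
    severity = {"hit_pedestrian": 10.0, "rear_collision": 1.0}

    def expected_harm(outcome_probs):
        return sum(severity[event] * p for event, p in outcome_probs.items())

    best_action = min(actions, key=lambda a: expected_harm(actions[a]))
    print(best_action)  # "brake_hard", with these invented numbers

The hard engineering problem is estimating those probabilities reliably in real time, not ranking whose life matters more.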

The results from the Moral Machine suggest there are a few shared principles when it comes to these ethical dilemmas. But the paper’s authors also found variations in preferences that followed certain divides. None of these variations reversed the core principles (like sparing the many over the few); they only differed in degree.

The researchers found, for example, that in Asian and Middle Eastern countries like China, Japan, and Saudi Arabia, the preference to spare younger rather than older characters was “much less pronounced.” Respondents from these countries also cared relatively less about sparing high net-worth individuals than respondents from Europe and North America.

The study’s authors suggest this might be because of differences between individualistic and collectivist cultures. In the former, where the distinct value of each individual is emphasized, there was a “stronger preference for sparing the greater number of characters.” Conversely, the weaker preference for sparing younger characters might be the result of collectivist cultures, “which emphasize the respect that is due to older members of the community.”

These variations suggest that “geographical and cultural proximity may allow groups of territories to converge on shared preferences for machine ethics,” say the study’s authors.

However, there were other factors that correlated with variations that weren’t necessarily geographic. For example, less prosperous countries, with lower gross domestic product (GDP) per capita and weaker civic institutions, were less likely to prefer crashing into jaywalkers over people crossing the road legally, “presumably because of their experience of lower rule compliance and weaker punishment of rule deviation.”

https://www.theverge.com/platform/amp/2018/10/24/18013392/self-driving-car-ethics-dilemma-mit-study-moral-machine-results

2 comments:

  1. ...and yet, the very fact that the ethical buzz generated around this topic has become the media go-to hot potato for popular AI-related heuristics reveals quite a lot about the culture and the audience within which it manifests. An extensive investment in a macabre theatre of hypotheticals focuses altogether on the wrong end of the issue, in my rather humble and peripherally-informed opinion.

    I would prefer to see a non-trivial interest in what makes AI in autonomous vehicles so damn (aspirationally) good that a motor-transport system liberally sprinkled with self-driving machines will be safer by far than our current relatively dodgy roadways. Perhaps there are unconscious reasons why we are collectively drawn to perceiving technological questions through the filter of tragedy and loss; or perhaps the media just knows how to spot a good (i.e. effectively "sticky" and self-propagating) story when it sees it.

  2. The potential is obvious: they see in all directions, and they don't get distracted (barring bugs, anyway); they're always paying attention. Ergo, vastly safer.

    That said, the software is eons away from the kind of ethical decision-making that so fascinates the public and press. Apple's self-driving car, when signalling a right turn (on the right side of the road, in all senses of the adjective ;-) ), can't even recognize when an otherwise "in the way" cyclist is moving behind and to the left to allow the car's turn to happen more quickly. It keeps shifting to the left in sync with the bike...because the humans who wrote the code weren't imaginative enough to foresee the (not terribly uncommon) possibility.

    After another 5 years of refinement, maybe they'll be in the ballpark of ethics...but the real-world incidence rate drops off dramatically, in inverse proportion to the sheer number of weird edge cases (which means simulations won't make up the shortfall), so I'd guess it will be more like 10 to 20 years, minimum, before they become that bulletproof. Maybe much longer.


