Sorry, ClearerThinking, but this one is a fail. Train problems (I refuse to call them "trolleys" because a trolley is something you wheel around a supermarket, not send down a railway) are just far too black and white for the real world.
The "fat man" example. Come on. How many people are fat enough to stop a frickin' train with their sheer body mass ? None, that's how many. It's a useless example. An elephant might do it, but no-one's strong enough to push an elephant off a bridge anyway. Ridonculous.
The tunnel problem is more interesting. Why not both slow down and swerve into the wall, possibly injuring the driver but not killing anyone? If there's time to swerve, there's time to brake. There's probably only an incredibly narrow window of time in which you face this driver/pedestrian kill choice; the rest of the time it will be about injuries.
Or, better yet, just program the car to automatically slow down to a safe speed inside tunnels. And for crying out loud make it difficult for pedestrians to walk right outside the entrance to a vehicular tunnel. There comes a point where it's just their own dang fault.
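That rule is trivial to state in code, too. A minimal sketch, where the 40 km/h cap, the zone flag, and the function name are all my own inventions for illustration, not anything from a real system:

```python
# Hypothetical zone-based speed cap. The cap value and names are
# invented for illustration; a real figure would need crash data.

TUNNEL_SAFE_SPEED_KPH = 40  # assumed safe speed inside tunnels

def target_speed(posted_limit_kph: float, in_tunnel: bool) -> float:
    """Speed the car should hold: never above the posted limit,
    and capped lower again when inside a tunnel zone."""
    if in_tunnel:
        return min(posted_limit_kph, TUNNEL_SAFE_SPEED_KPH)
    return posted_limit_kph
```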
The infinite train problem is patently absurd. Yes, I would stop the train even if everyone else in the entire world was on board except for one unfortunate individual somehow tied to the tracks. So it's going to inconvenience a lot of people? Well, boo hoo, sucks to be them. It doesn't take that long to stop the train and get the person off the rails. And probably >>99% of train journeys never encounter a single person on the rails anyway, so it makes perfect sense to always stop in this event because it's so rare.
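To put some made-up numbers on that: if one journey in ten thousand meets someone on the track, and a full stop-and-clear costs fifteen minutes, the expected delay per journey from an "always stop" policy works out to under a tenth of a second. Both figures are invented for illustration:

```python
# Back-of-the-envelope expected delay from an "always stop" policy.
# Both input numbers are assumptions, not real railway statistics.
p_person_on_track = 1 / 10_000   # assumed fraction of journeys affected
stop_cost_minutes = 15           # assumed time to stop and clear the track

expected_delay_s = p_person_on_track * stop_cost_minutes * 60
print(f"Expected delay per journey: {expected_delay_s:.2f} s")  # 0.09 s
```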
There isn't any way to make anything 100% safe. I agree with the conclusion that waiting for self-driving cars to be perfect is unnecessary, because they just have to be better than humans, but trolley problems are not a good way to convince people of that. Rather, I would say that because self-driving cars are likely to be more cautious than human drivers, they're less likely to face these awkward choices in the first place.
http://www.clearerthinking.org/#!This-comic-shows-how-selfdriving-cars-may-face-complicated-moral-issues/c1toj/56b9185e0cf2dc1600ea461e
Here at "Better Than Humans" we're not great but we're still better than you!
Wonder if "Better Than Life" is one of their products?
I'm just not sure the situation is ever so simple that you get to choose who lives and who dies like that. It seems to me that real-world situations are almost always vastly more complex. The "trolley" problem presents a false choice that doesn't really exist.
http://existentialcomics.com/comic/106
Rhys Taylor That's a good point!
While it would be easy to code a solution to the trolley problem once one is chosen, real-life situations are both more complex and come with less information.
That tunnel example? Are the chances of death 100% in both cases? What about crippling injuries? What are the risks to the passenger if the car hits the child? What are the chances of the car successfully avoiding the child if it tries? What are the risks of subsequent accidents due to the wreckage? Is the child already dead, maybe? Is it a child at all, and not a dummy placed as a prank?
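To make that concrete: even a toy expected-harm comparison needs a probability and a severity estimate for every outcome of every candidate action, and in a real emergency none of those numbers are actually knowable. A sketch, with every action name, probability, and harm weight invented purely for illustration:

```python
# Toy expected-harm comparison. All figures below are invented;
# the point is how many unknowable numbers the decision requires.

# action -> list of (probability, harm weight) for its possible outcomes
actions = {
    "brake_straight": [(0.30, 10.0),   # hit the child at reduced speed
                       (0.70, 0.0)],   # child gets clear in time
    "swerve_to_wall": [(0.10, 8.0),    # passenger badly injured
                       (0.05, 10.0),   # car flips, passenger killed
                       (0.85, 1.0)],   # minor injuries only
}

def expected_harm(outcomes):
    """Probability-weighted sum of harm over an action's outcomes."""
    return sum(p * harm for p, harm in outcomes)

for name, outcomes in actions.items():
    print(f"{name}: expected harm {expected_harm(outcomes):.2f}")

best = min(actions, key=lambda a: expected_harm(actions[a]))
print(f"Chosen action: {best}")
```

Shift any of those guesses a little and the chosen action flips, which is rather the point: the hard part isn't the code, it's the numbers.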
In fact, there is another problem: those algorithms will be abused.
Someone will cross the road to stop cars and create traffic jams.
Someone will jump in front of a car and get injured to sue someone.
Someone will jump in front of a car (or use a mannequin) to make it crash, killing its passenger.
With the trolley problem, am I the only one whose first thought was "I swap the tracks just as the trolley is on the points, to derail it"?
Elie Thorne Exactly, it's all about risk. Maybe swerving too much will flip the car and you'll die anyway. Maybe crashing into the side will cause a larger pile-up and potentially kill far more people. Maybe the child gets up at the last second and jumps out of the way. The number of variables is huge - it's not an either/or choice. It's a question of which risk to take, and the magnitude of each risk depends very strongly on the circumstances.
Hence self-driving car developers should focus on minimizing those risks in the first place, not on dealing with them. Writing code to deal with that level of complexity may be impossible, but keeping the risks low overall may be relatively easy: e.g. keep to the speed limit to reduce stopping distance, go well below the limit in more dangerous situations, don't do all the stupid things most human drivers do regularly, and so on.
"A child jumps out in front of you and you have no time to bra... " "What ? Of course I bloody do, I'm doing 20 mph !"
I think the hardest part is going to be programming cars to deal with human drivers who will take greater risks. That might be difficult until most cars are automated.
Interesting point about abusing the algorithms.