What moral decisions should driverless cars make?

The issue with this logic problem is that it’s a human problem, not an AI problem.

We place this logic problem in front of a human and someone dies. Maybe even five people. But AIs are not subject to the same laws that we are. And that’s the law of fuckwittery.

This logic problem assumes there is no choice. But looking at AIs shows there is a choice. Humans speed. They drive without seatbelts. They mess with the radio or their phone while driving. They talk and get distracted.

AIs don’t do this.

Imagine being in a car which will never exceed the speed limit. In fact, it will likely travel significantly slower than the limit. Always. And it has LIDAR: it can see past obstacles and around corners because it’s not alone. It can patch into the LIDAR of other cars.
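
Here’s a minimal sketch of what “patching into the LIDAR of other cars” could look like. It’s purely illustrative: the `Obstacle` type and the shape of the shared data are my assumptions, not any real V2X protocol. The idea is simply that the car’s picture of the world is the union of its own detections and whatever nearby cars report.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obstacle:
    x: float      # position in a shared map frame, metres
    y: float
    source: str   # id of the vehicle that reported it

def fused_obstacles(own: list[Obstacle],
                    peers: list[list[Obstacle]]) -> list[Obstacle]:
    """Merge our own LIDAR detections with detections broadcast by
    nearby cars, so an obstacle around a blind corner still shows up
    in our world model. (Real sensor fusion would also handle
    timestamps, coordinate alignment and trust; all omitted here.)"""
    merged = list(own)
    for peer in peers:
        merged.extend(peer)
    return merged

# The car around the corner reports a pedestrian we can't see:
ours = [Obstacle(12.0, 0.5, "self")]
shared = [[Obstacle(40.0, -1.0, "car-7")]]
print(fused_obstacles(ours, shared))  # both obstacles, one world model
```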

The car won’t kill the people in the road, or the pedestrian, or you. And no, I’m not taking the problem too literally.

It will automatically slow down when approaching a blind corner, and it will know its stopping distance intimately. It’ll slow to a crawl if it detects anything out of the ordinary.
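
“Knows its stopping distance intimately” is just physics. Here’s a hedged sketch; the braking and reaction figures are assumptions for illustration, not any manufacturer’s numbers. Solve sight = v·t_reaction + v²/(2·decel) for v, and never drive faster than that or the posted limit, whichever is lower.

```python
import math

def max_safe_speed(sight_m: float,
                   decel_mps2: float = 6.0,    # assumed braking, ~0.6 g
                   reaction_s: float = 0.1) -> float:  # assumed machine reaction time
    """Fastest speed (m/s) from which the car can still stop within
    the road it can currently see: solve
    sight = v*reaction + v^2 / (2*decel) for v."""
    a, t, s = decel_mps2, reaction_s, sight_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * s)

def target_speed(limit_mps: float, sight_m: float) -> float:
    """Never exceed the limit; slow further when visibility is short."""
    return min(limit_mps, max_safe_speed(sight_m))

# A blind corner with only 10 m of visible road overrides a 50 km/h
# (~13.9 m/s) limit: the car drops to roughly 10.4 m/s instead.
print(target_speed(13.9, 10.0))
```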

And yes, it’ll do this while you’re screaming that you’ll be late. And if you are, it’s your own fault, because the AI would have told you when it was appropriate to leave for your destination. It’s smarter than you. That doesn’t mean it’s more creative or more empathetic; it just doesn’t make the stupid mistakes that humans do.

So why is this on TED?
