
Opinion: Autonomous car makers must be gods

Zulfikar Abbany
April 26, 2016

Stop! Look! Listen! Self-driving cars will soon decide more than just the routes we take. If we're not careful, self-driving cars will make moral choices for us, without our realizing, says Zulfikar Abbany.

https://p.dw.com/p/1Ib8v
Volkswagen production line in Zwickau, Germany (Image: picture-alliance/AP Images/J. Meyer)

The more I think about it, the more I'd hate to be one of the secret scientists engaged in developing Apple's "iCar" in Berlin. It would scare me witless to know that the moral choices I make, and the algorithms I design, will mean life or death for everyone out on the roads. I'd have to think I was a god to do the job.

It was the Frankfurter Allgemeine Zeitung, adding to existing rumors, which reported that the US-based technology company had poached 15 to 20 top engineers - software and hardware developers - to work on an "iCar."

"Sources" said they were working at a "secret lab" in the middle of the German capital.

The first iteration of the iCar was unlikely to be a self-driving, autonomous model, however, wrote FAZ's Maximilian Weingartner. The iCar would run on a car-sharing business model to begin with - and automation would come later.

And it surely will.

Automatic as standard

Automation of the self-driving kind will become a standard feature in cars, whoever makes them. If we're not careful, it will become a standard in our brains as well.

And therein lies a danger.

DW's Zulfikar Abbany

We're fond of saying the big difference between humans and computers is our creativity, intuition and empathy - our being so random, often irrational, and easily swayed by emotion. We see these attributes as positives because they allow us to make sensitive decisions.

A computer, robot, algorithm, self-driving car, or any other automated system, on the other hand, makes decisions based on numerical evaluations, statistics, probabilities, and hard rules set down by... humans, albeit humans with their own moral code.

A computer won't think outside the box. It doesn't know how to - yet. As a result, it won't think ethically or morally. There are no grey zones in computing, just zeros and ones. So far, this is what has kept us from "strong AI" - or artificial general intelligence - the point at which machines may one day perform intellectual tasks just as humans can.

But they can't. Yet.

To paraphrase Nicholas Carr in his 2014 book "The Glass Cage": if a self-driving car had to choose between saving a passenger texting, unawares, in the backseat and a schoolkid who slips in front of the car on an icy road, which would it pick?

It will know it has to protect human life. That's a basic rule in robotics. But if it can only save one human life, the passenger or the kid...

Setting aside legal and commercial interests, this is the sort of decision most humans would hate to make at the best of times - me included.

But the car would choose. In less than a second.
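To see how thin that "choice" really is, consider a deliberately crude sketch. This is purely illustrative Python, invented here for the sake of the argument - not any manufacturer's actual code - in which the so-called moral decision collapses into survival probabilities and one hard-coded rule:

    # Purely hypothetical: a "moral" choice reduced to numbers.
    # All names and probabilities here are invented for illustration.

    def choose_maneuver(options):
        """Pick the maneuver with the highest expected number of lives saved."""
        return max(options, key=lambda opt: sum(opt[1].values()))

    options = [
        # (maneuver, estimated survival probability per person)
        ("brake_straight", {"passenger": 0.95, "child": 0.10}),
        ("swerve_left",    {"passenger": 0.40, "child": 0.90}),
    ]

    maneuver, outcome = choose_maneuver(options)
    print(maneuver)  # -> "swerve_left": decided in microseconds,
                     # by weights a human hard-coded months earlier

Every number in that sketch is a moral judgment someone made at a desk, long before the ice, the kid or the passenger ever existed.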

Just a version

The question is, will that decision be moral? Will it even need to be? If it's impossible to automate moral choices now, we may do away with them altogether.

Once our roads become saturated with self-driving cars, will we even know the difference or care? Chances are we won't, because the nature of automation will have become so standardized, so normal, that our perception of ethics and morals will have adapted to AI's numerical view of the world.

Would it matter? What if our notions of ethics, morals and human intuition did not constitute the best version of life? What if artificial intelligence - whether ethical or not - were just another version?

Trembling on the precipice of the future

We could quite happily get used to artificial, machine thinking. We wouldn't have to worry about difficult decisions, we wouldn't even know they were being made, and our journey would continue as programmed.

Sounds efficient and simple, doesn't it?

But we need to start thinking seriously about whether we want this kind of a future.

I can tell you I don't. I'd rather hold onto the imperfections of my human self, my self-determination, and the right to make my own mistakes. It may not make me a god, but at least I'll still be human.