The AI car problem: to kill the child or the elderly person?

Would you kill one to save many? For centuries, philosophers have debated this question, to which there seems to be no objective answer. Or is there?

Researchers from MIT, Harvard University, the University of British Columbia and Université Toulouse Capitole tested millions of people from 200 countries in an experiment called “The Moral Machine”, probing their moral decision-making. Participants were confronted with scenarios based on the famous trolley problem: a trolley is hurtling down a track, and the participant controls a lever that can divert it onto another one. On each track a person, an animal or a group blocks the route, so the participant has to decide: Do I switch? Whom do I kill?

According to The Hacker News, the research is meant to help in the development of algorithms for AI-driven cars.
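How might aggregated survey answers ever reach a car's software? A minimal sketch, assuming (purely hypothetically) that survey results were distilled into per-category preference weights and that a policy picks the track with the lower aggregate moral cost. The categories, weights and function names below are illustrative assumptions, not values or methods from the Moral Machine study:

```python
# Hypothetical sketch: turning aggregated moral preferences into a
# decision policy. Weights and categories are invented for illustration;
# they are NOT results from the Moral Machine experiment.

PREFERENCE_WEIGHTS = {"child": 0.9, "adult": 0.6, "elderly": 0.5, "animal": 0.1}

def harm_score(track):
    """Sum the preference weights of everyone on a track.
    A higher score means a greater moral cost of harming that track."""
    return sum(PREFERENCE_WEIGHTS[occupant] for occupant in track)

def choose_track(track_a, track_b):
    """Divert onto the track with the lower aggregate harm score."""
    return "a" if harm_score(track_a) <= harm_score(track_b) else "b"

# In this toy model, the car would divert onto the elderly person's track.
print(choose_track(["child"], ["elderly"]))
```

Even this toy version makes the ethical difficulty concrete: someone has to choose the weights, and the study's point is that people across cultures disagree about them.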

Read more about whether AI cars will one day decide for or against you here.

Copyright Lyonsdown Limited 2021
