Autonomous vehicles are becoming more common every year. The technology hasn't yet advanced enough to build cars without steering wheels or human drivers, but it moves a little closer with each passing year.
One of the biggest challenges facing autonomous driving is the fact that it’s impossible to program computers to make the moral decisions that a human driver is capable of. That’s why MIT is trying to make autonomous driving programs more human. How can MIT add humanity to a self-driving car, and why should these programs be more human?
The Trolley Problem
The Trolley Problem is an ethics question that has plagued philosophers since 1967 — and, more recently, programmers. It goes something like this:
A runaway trolley is barreling down the tracks, swiftly approaching a fork. On one branch, five people are standing on the tracks; on the other, only one. Do you divert the trolley so it hits the one person instead of the five?
These are the kinds of decisions that humans make in a split second behind the wheel of a car, especially when an accident is a possibility. If a pedestrian steps out in front of you, do you hit the brakes and turn the wheel, even if it means you might hit a wall?
That’s one of the biggest problems that autonomous car programs are facing — making them capable of choosing who lives and who dies in these situations.
Making Self-Driving Cars More Human
In addition to making life-and-death decisions, human drivers are better at navigating unfamiliar locations without a GPS or map. They also adapt more quickly to hazards such as intersections with flashing lights or merges from service and frontage roads. MIT is working toward making self-driving cars more human — or at least more human-like — when it comes to navigating unfamiliar areas.
Traditionally, driverless cars rely on advanced GPS systems, which are usually fairly accurate but can't adapt to changes in traffic patterns or to areas where the maps don't match the reality of the road. MIT's new system instead learns how to navigate new areas by mimicking a human driver traveling along the same path. The goal of this training approach is a self-driving system that works just as well on unfamiliar streets or in rural areas as it does in well-mapped urban areas.
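MIT hasn't published its system here, but the general idea it describes — learning to drive by imitating recorded human behavior — can be sketched in miniature. The sketch below is purely illustrative and assumes a toy setup: observations are made-up numeric features, and the "learned" policy simply replays the action the human took in the most similar recorded situation (a nearest-neighbor stand-in for a real learned model).

```python
# Minimal imitation-learning sketch (illustrative assumption, not MIT's system).
# A "demonstration" is a list of (observation, action) pairs recorded while a
# human drives. The cloned policy returns the action taken in the most similar
# recorded observation (1-nearest-neighbor lookup).

def distance(obs_a, obs_b):
    """Euclidean distance between two observation vectors."""
    return sum((a - b) ** 2 for a, b in zip(obs_a, obs_b)) ** 0.5

def clone_policy(demonstrations):
    """Build a policy function from human (observation, action) pairs."""
    def policy(observation):
        # Pick the action the human took in the closest recorded situation.
        _, nearest_action = min(
            demonstrations, key=lambda pair: distance(pair[0], observation)
        )
        return nearest_action
    return policy

# Hypothetical observations: (distance_to_curb_m, road_curvature)
demos = [
    ((2.0, 0.0), "steer_straight"),
    ((2.0, 0.4), "steer_left"),
    ((0.5, 0.0), "steer_right"),  # human corrected while drifting to the curb
]

drive = clone_policy(demos)
print(drive((1.9, 0.35)))  # closest demo is the curve -> "steer_left"
```

A real system would replace the nearest-neighbor lookup with a trained model and the toy features with camera and map inputs, but the shape of the idea is the same: the car's behavior comes from examples of human driving rather than hand-coded rules.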
The Future of Vehicular Autonomy
Current autonomous vehicles still require a human behind the wheel to take control when the car encounters something outside its programming — new traffic patterns, car accidents, or anything else the programmers couldn't foresee. MIT's new training approach could help bring self-driving cars a little closer to reality. Self-driving cars will need to communicate with one another and react to new, potentially unmapped environments just as a human driver would — especially if manufacturers start building autonomous vehicles without steering wheels.
The trolley problem is only one of the hurdles that programmers and legislators will need to overcome before truly autonomous cars can start hitting the highways of the world.