It’s no secret that before self-driving cars become mainstream, whenever that may be, a slew of ethical questions will need to be answered.
How will self-driving cars make life-or-death decisions?
Should self-driving cars protect the occupants at all costs?
Should self-driving cars always minimize the loss of life?
Researchers need to teach self-driving cars how to make safe driving decisions, and engineers at Stanford University recently shed some light on what goes into programming them to do so.
In the video above, Stanford uses the example of a self-driving car encountering an obstacle in the middle of its lane, walking through the possibilities that must be accounted for at the programming stage.
“We can treat that as a very hard, strict constraint and the vehicle will have to come to a complete stop to avoid the obstacle,” said Sarah Thornton, a PhD candidate in Stanford’s Dynamic Design Lab. “Another option would be to minimize how much it violates the double yellow line and veer very closely to the obstacle – very uncomfortable for the occupant in the passenger seat. The third scenario is to enter the oncoming traffic lane to give more space to the obstacle as you maneuver around it.”
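The trade-off Thornton describes can be read as a cost-function design choice: the obstacle is a hard constraint the planner may never violate, while the double yellow line can be anything from a hard constraint to a lightly weighted penalty. Below is a minimal toy sketch in Python, not Stanford's actual planner; the distances and weights are assumptions chosen only to reproduce the three behaviors she lists.

```python
import math

# Geometry assumptions (illustrative only, in meters):
O_MIN = 0.9       # smallest lateral offset that physically clears the obstacle
LANE_LIMIT = 0.6  # offset at which the car touches the double yellow line
DESIRED = 1.5     # clearance from the obstacle that feels comfortable

def plan_lateral_offset(line_weight, comfort_weight=1.0):
    """Grid-search the lateral offset minimizing comfort + lane-violation cost.

    Returns None when no feasible offset exists, i.e. the car must come to
    a complete stop (Thornton's first option).
    """
    best_o, best_cost = None, math.inf
    for i in range(301):
        o = i * 0.01                 # candidate offsets from 0 to 3 m
        if o < O_MIN:
            continue                 # hard constraint: never hit the obstacle
        violation = max(0.0, o - LANE_LIMIT)
        if math.isinf(line_weight) and violation > 0:
            continue                 # yellow line treated as a hard constraint
        clearance = o - O_MIN
        line_cost = 0.0 if violation == 0.0 else line_weight * violation ** 2
        cost = comfort_weight * (DESIRED - clearance) ** 2 + line_cost
        if cost < best_cost:
            best_o, best_cost = o, cost
    return best_o

print(plan_lateral_offset(float("inf")))  # option 1: None -> stop for the obstacle
print(plan_lateral_offset(50.0))          # option 2: 0.9 m -> squeeze past, tight to the obstacle
print(plan_lateral_offset(0.5))           # option 3: 1.8 m -> swing into the oncoming lane
```

Making the yellow line infinitely expensive leaves no feasible offset, so the planner stops; a heavy penalty shaves the violation down and hugs the obstacle; a light penalty buys comfort by crossing into the oncoming lane.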
Stanford also developed a self-driving car named “Shelley” that can hit speeds of up to 120 miles per hour. The custom Audi TTS has been tested at the three-mile Thunderhill Raceway in California, where Shelley averaged 50-70 mph but reached 110-120 mph on the quicker parts of the track.
Shelley was developed to study how the car adjusts its throttle and braking, and to record data from those maneuvers that can be used to improve collision-avoidance software.
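As a rough idea of what that recording step might involve, here is a hypothetical telemetry-logging sketch in Python; the field names, units, and CSV format are assumptions for illustration, not Stanford's actual data pipeline.

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ControlSample:
    t: float           # s: timestamp of the control step
    speed_mps: float   # m/s: measured vehicle speed
    throttle: float    # 0..1: throttle command
    brake: float       # 0..1: brake command
    steer_rad: float   # rad: steering angle command

def log_run(samples, path="run_telemetry.csv"):
    """Write one test run's control samples to CSV for offline analysis."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ControlSample)])
        writer.writeheader()
        for s in samples:
            writer.writerow(asdict(s))

# For example, one sample per control step during a high-speed lap:
log_run([ControlSample(t=0.00, speed_mps=31.3, throttle=0.62, brake=0.0, steer_rad=0.04),
         ControlSample(t=0.01, speed_mps=31.4, throttle=0.55, brake=0.0, steer_rad=0.05)])
```

Replaying logs like these against the controller lets engineers see exactly how throttle and brake responded in each maneuver before changing the collision-avoidance software.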