"The biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book."

The problem with Google's self-driving car is human drivers

#Driving

Thu, Sep 3rd, 2015 11:00 by capnasty NEWS

The problem with Google's self-driving car is that it's too safe: when placed in an environment of human drivers who don't quite follow the rules by the book, not only does it not know what to do, but it can lead to human-caused accidents. As Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, explains, “human drivers needed to be less idiotic.”

Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.

It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book.


You may also be interested in:

“Eliminating the time needed to stop and re-charge a conventional electric car’s battery.”
"We think that it’s a service to the community to know if you’re a crazy driver or not."
Public Service Announcement: Drive Recklessly
Every New Car Will Be a Hybrid
The Police Car of 2012