The race by automakers and technology firms to develop self-driving cars has been fueled by the belief that computers can operate a vehicle more safely than human drivers. But that view is now in question after the revelation that the driver of a Tesla Model S electric sedan was killed in an accident when the car was in self-driving mode.
Federal regulators, who are in the early stages of setting guidelines for autonomous vehicles, have opened a formal investigation into the incident, which occurred on May 7 in Williston, Fla.
In a statement, the National Highway Traffic Safety Administration said preliminary reports indicated that the crash occurred when a tractor-trailer made a left turn in front of the Tesla, and the car failed to apply the brakes.
It is the first known fatal accident involving a vehicle driving itself by means of sophisticated computer software, sensors, cameras and radar.
The crash casts doubt on whether autonomous vehicles in general can consistently make split-second, life-or-death driving decisions on the highway.
Even as companies conduct extensive tests of autonomous vehicles, both at private facilities and on public highways, there is skepticism that the technology has progressed far enough for the government to approve cars that drive themselves entirely.
The federal traffic safety agency is nearing the release of a new set of guidelines and regulations governing the testing of self-driving vehicles on public roads, expected in July.
To read the entire New York Times article, follow this link: http://www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html?_r=1