Auto Publishers

Who’s to Blame for Self-Driving Vehicle Crashes?

January 27, 2016


In a previous Auto Publishers article, I made the case that self-driving vehicle technology is far more advanced than the public imagines. Indeed, the argument can be made that these vehicles are better at driving than humans. Where self-driving vehicles like the Google Car (pictured) often fall short is in their strict adherence to the rules of the road. While it may sound counterintuitive, the biggest obstacle facing autonomous car manufacturers is making a self-driving vehicle that can drive like a human without making it too human.

Just to be clear, I’m not predicting that self-driving vehicles will somehow become self-aware (although the possibility can’t be completely ruled out). Rather, the issue at play is the extent to which self-driving vehicles should be given the capacity for independent thought. In short, should a self-driving vehicle be allowed at times to break the letter of the law in order to preserve the lives of its passengers and other drivers on the road?

Let’s consider some statistics regarding the safety of self-driving vehicles. According to a University of Michigan study, autonomous vehicles have double the crash rate of vehicles driven by humans. Some of the fault lies with self-driving vehicles that obey traffic laws much too literally, leaving no room for interpretation when confronted with driving situations outside their normal protocols. As it turns out, however, aggressive or inattentive human drivers were typically the ones at fault in crashes involving self-driving vehicles; as anyone who drives can attest, those two driver types are in abundance on U.S. roads. Even the brain trusts behind self-driving vehicle departments freely acknowledge the current limitations of this technology. For instance, self-driving vehicles tend to exercise excessive caution when encountering highway traffic conditions.
In particular, they have a difficult time changing lanes: whereas a human driver instinctively knows how to go with the flow of traffic, a self-driving vehicle isn’t able to discern intent quite as easily (at least, not yet). One possible solution to these sorts of issues, as previously mentioned, would be to program self-driving vehicles to behave more like a human driver would. At this stage, self-driving models simply don’t measure up to the instincts and reflexes of a human driver. However, the crashes self-driving vehicles do get into tend to be low-impact, because they don’t take the unnecessary risks that a human driver might. As a result, injury and fatality rates for self-driving vehicles project to be much lower than those for accidents between human drivers.

Still, there remains a great deal of skepticism about self-driving vehicle technology. About a month ago, the California DMV proposed a moratorium on self-driving vehicles on public roads within the state until auto companies (and Google) comply with its regulations, which include requirements for steering wheels and pedals for human control and use. More recently, the state of California, which requires autonomous vehicle makers to file “disengagement reports” with the DMV, revealed a total of 2,894 disengagements between September 2014 and November 2015. Most of these disengagements, defined as occasions when a human driver was forced to take control of a self-driving vehicle, involved Google autonomous vehicles. Of course, it should be pointed out that these disengagement figures represent a small sample size that doesn’t take states outside of California into account. Moreover, the findings show that in most instances, the self-driving vehicle ceded control to its human driver due to (here’s that phrase again) an excess of caution on the part of the vehicle itself.
However, it may not be long before uniform self-driving vehicle regulations are put in place across the U.S. In fact, the Department of Transportation has set itself a six-month deadline to create a set of guidelines for testing and regulating self-driving vehicles. The makers of self-driving vehicles would surely welcome the move, as it would supplant the current patchwork of rules across various states. That said, the federal government will leave most of the nitty-gritty decisions, such as how self-driving vehicles are allowed to operate on roads, up to the states.

Make no mistake: it isn’t merely an altruistic pursuit of scientific excellence that motivates Google and other automakers. Indeed, self-driving vehicles are poised to become a service as ubiquitous and as lucrative as Uber, as witnessed by Google’s plans to offer an autonomous car-sharing service in the coming years. Those plans are presumably under way with or without the cooperation of the State of California. Either way, it appears clear that self-driving vehicle technology will continue to progress by leaps and bounds in the coming years. It’ll be us humans who have to adjust.

Image Credit: The Telegraph