It is interesting to note when something – whether a fashion trend or a new technology – goes mainstream. No matter how “good” something is, it doesn’t enter the public’s stream of consciousness until the stars align. I can usually tell something is becoming “it” when all my Google Alerts return the same or similar results no matter what the alert is for. That’s been happening for “driverless cars” – also referred to as “autonomous vehicles.”
The universe’s continuous poking of me on this issue culminated in attending an interesting lecture by Bryant Walker Smith, an internationally recognized expert on the law of self-driving vehicles. As I took in his insights, along with the articles and white papers from the alerts, my instincts for categorization and organization kicked in, and I tried to get a better handle on this complex issue. So I present the following three things I think lawyers (whether we practice technology law or not) should be aware of regarding this emerging transportation experience.
They are here.
Automation of certain car functions has been around for a while in the guise of “driver assist” technologies – for example, automated braking systems, anti-lock brakes, adaptive cruise control, electronic stability control (ESC), and automated parallel parking, among others. Major car manufacturers such as Volvo, Mercedes-Benz, Audi, Ford, Land Rover, Nissan, and Toyota offer one or more of these “automated” functions in current models. But “automated” is not “autonomous.”
Enter Google, other Silicon Valley tech companies, the government, and university research teams, and you are now surveying a different paradigm in the relationship between the vehicle, the road, and people. For example, the Defense Advanced Research Projects Agency (DARPA) has held a series of “Grand Challenge” races offering cash prizes to the team that could build an autonomous vehicle able to complete an obstacle course in the shortest time. As for Google, it is on record (and in many news headlines) that its self-driving cars have traveled more than 500,000 miles without an accident, starting back before 2012. So the technology is here, it’s being tested, and it’s being continuously improved.
They are complicated.
One reason for the complication is the lack of a standard definition of what an autonomous vehicle is. So far only five jurisdictions have legislation on the books regarding these vehicles – California, Florida, Michigan, Nevada, and the District of Columbia – and the statutory language differs in each. On the federal level, the National Highway Traffic Safety Administration (NHTSA) has broken the technology into automation levels:
- Level 0: No automation
- Level 1: Function-Specific Automation
- Level 2: Combined Function Automation
- Level 3: Limited Self-Driving automation
- Level 4: Full Self-Driving Automation
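For the programmers among us, the NHTSA scheme above can be sketched as a simple enumeration. This is a hypothetical illustration only – the level numbers and names follow NHTSA’s labels, but the comments and the `requires_human_driver` rule are my own paraphrase, not anything in the regulations:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """NHTSA's five automation levels (0-4), paraphrased."""
    NO_AUTOMATION = 0          # the driver controls everything
    FUNCTION_SPECIFIC = 1      # one automated function (e.g., cruise control)
    COMBINED_FUNCTION = 2      # two or more functions operating together
    LIMITED_SELF_DRIVING = 3   # car drives itself; human must be ready to take over
    FULL_SELF_DRIVING = 4      # the vehicle performs all driving functions

def requires_human_driver(level: AutomationLevel) -> bool:
    """Toy rule: below Level 4, a human must remain ready to intervene."""
    return level < AutomationLevel.FULL_SELF_DRIVING
```

Framed this way, the legal question of “who is the driver?” maps to where a vehicle sits on this scale – which is exactly where the state statutes start to diverge.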
That’s the vehicle part of it, but these laws also differ in their definition of who the driver is – can it be human, only human, or can it be a remote controller, whether human or machine? For certain of us lawyers the “who” is important because it can answer the question of who is liable for any accident caused. Of course, the next question is: liable for what? In his lecture, Smith mentioned the debate as to whether this notion of liability “could be an impediment to innovation” and explained why he believes it should not be.
Smith and others bring up the fact that we already have liability laws in place to address certain situations that may come up because of the use of autonomous vehicles. For example: product liability, strict liability, negligence, misrepresentation, and breach of warranty. Smith goes on to offer a potential two-prong analysis for liability:
- In the situation (accident, incident, etc.) did the autonomous driving system (ADS) behave as a human would have?
- Did the ADS perform at least as well as we expect an ADS should perform?
But that’s just the liability issue. Further unresolved issues for this technology include whether it is legal at all, specific regulations and who would regulate it, insurance and risk, the data question – including privacy and security – and intellectual property concerns. Oh, and did I mention that the Geneva Convention on Road Traffic of 1949 and the Vienna Convention on Road Traffic of 1968 may also be factors?
They will change everything.
One of the biggest arguments for the importance of autonomous vehicle technology (AVT) is based on a sobering fact – there are over 30,000 deaths and two million injuries annually in the US because of car accidents, and over 90% of them are caused by human error (including distracted driving). The logic goes: take the human out of the equation and those numbers drop drastically.
Driver assist technology has already saved numerous lives – NHTSA estimated that ESC alone saved roughly 2,200 lives over a three-year period. The arguments against AVT (besides the liability issues discussed above) focus on the cultural aspect of owning and driving cars, especially in the United States. But humans adapt, especially when the technology is beneficial.
Can you imagine (let alone remember) a time before smartphones existed? How did we survive? Ride-sharing is now a thing. Uber and Lyft are a thing. Co-owning is a thing. So why can’t autonomous vehicles be a thing? I’m sure you have heard by now the prediction that if you have a child under the age of five, they will never learn to drive a car. Notice, it’s not that they won’t need to learn how to drive a car, it’s that they won’t learn. They will not need to.
Now I’m sure you will tell me there are parameters – urban vs. rural, etc. But the point Smith and others make is that our current perspective, knowledge, and laws may not be what the future requires – our future and the future of our clients and children. So if these technologies change everything, then it seems logical that they will also change society’s expectations and cultural norms, and with those, the law itself.
There is so much more to discuss regarding this issue, but these top three should be enough to start the discussion wherever you are and whatever law you practice. Do you have any thoughts, experience, or insights about this issue? I would love to hear them. I’m not done learning and exploring this issue yet. See you at the next rest stop.
For more information:
NHTSA, Estimating Lives Saved by Electronic Stability Control. The Research Note provides figures of 634 lives saved in 2008, 705 in 2009, and 863 in 2010, for a total of 2,202. Brookings Institution white paper, 2014.
Main image was created by Norbert Aepli.