Autopilot

The legal issue of consent in autonomous driving

With autonomous and semi-autonomous systems gaining traction in today's automobile landscape, the issue of legal liability has become more relevant.

Human Factors research has shown time and again that driving assistance technology, including more "archaic" systems like Adaptive Cruise Control and Lane Keeping Assistance, is far from error-proof. Recent studies have demonstrated that a limited understanding (or flawed mental model) of how these systems operate can in fact lead to system misuse.

A recent study published in Humanities and Social Sciences Communications tackles the issue of driver over-trust and system misuse from a legal viewpoint.

Every time we register for a new social media account, or install a new smartphone app, the always-present consent message pops up: BY REGISTERING FOR THIS SERVICE YOU ACCEPT ALL TERMS AND CONDITIONS.

Typically, very few people ever bother to skim this information, let alone read it in its entirety. However, the issue of consent and its implications for liability will become more relevant as we entrust autonomous systems with our safety and the safety of all vehicle passengers.

The authors of the study suggest that automakers may use the already-existing in-vehicle digital interfaces as a way to obtain consent from the driver (and possibly all passengers). However, this approach is far from ideal, or even safe.

It is argued that using the car's touchscreen may not provide nearly enough information to the driver. The authors also suggest that "individuals may misunderstand the nature of the notices which grant permissions".

"Warning fatigue" and distracted driving are also causes of concern.

All in all, given the sizeable limitations of using digital interfaces for obtaining consent, the authors suggest that this approach won't shield automakers from their legal liability should the system malfunction or an accident occur.

Similar to what I described in a recent article, training is seen as a potential aid in ensuring that drivers fully understand system capabilities and limitations.

Whatever the solution may be, this is yet another challenge that all autonomous vehicle stakeholders (including automakers and transportation agencies) need to address if they want to take a proactive (rather than a reactive) stance on the issue.

Reference

Pattinson, J. A., Chen, H., & Basu, S. (2020). Legal issues in automated vehicles: critically considering the potential role of consent and interactive digital interfaces. Humanities and Social Sciences Communications, 7(1), 1–10. https://doi.org/10.1057/s41599-020-00644-2

The Swiss Cheese Model of Human Error

I recently read a New York Times article discussing the Swiss Cheese Model of Pandemic Defense. The article used James Reason’s Swiss Cheese model of human error to describe the concerted response to the COVID-19 pandemic.

The model uses the analogy of Swiss cheese to illustrate the multiple layers of defense against possible threats, be they human error in transportation or a global pandemic.

Each slice represents a possible line of defense. But, like Swiss cheese, each layer has holes, and each hole introduces a new vulnerability into the system.

This framework can easily be applied to human interaction with complex systems in virtually any Human Factors application.

In healthcare, for example, the fatal administration of the wrong drug could be caused by a chain of failures: two different drugs have similar packaging, and the healthcare professional administering them is distracted, or poorly trained to notice the difference between the two, and ends up giving the patient the wrong one.

In autonomous vehicles, the poor operational design of the system (a hole), combined with a poor HMI (another hole), leaves the driver unsure about the system's capabilities (yet another hole), and they wind up misusing the system (the error).

This model is a useful Human Factors tool for identifying everything that can go wrong in human-machine interaction. It also offers a framework for shrinking the size of the holes or removing them altogether.
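To make that logic concrete, here is a minimal sketch, my own illustration rather than anything from Reason's work or the Times article, that treats each layer of defense as an independent filter: harm occurs only if an error passes through a hole in every slice. The layer names and probabilities are entirely hypothetical.

```python
import random

# Hypothetical layers of defense, each with a made-up probability that an
# error slips through that layer's "holes".
layers = {
    "operational design": 0.20,
    "human-machine interface": 0.30,
    "driver understanding": 0.40,
}

def error_reaches_road(layers):
    """Return True if a single error passes through every layer's hole."""
    return all(random.random() < p_hole for p_hole in layers.values())

# Analytically, the chance of harm is the product of the hole probabilities.
analytic = 1.0
for p_hole in layers.values():
    analytic *= p_hole

# A quick simulation should agree with the analytic product (about 0.024 here).
trials = 100_000
simulated = sum(error_reaches_road(layers) for _ in range(trials)) / trials

print(f"analytic: {analytic:.3f}, simulated: {simulated:.3f}")
# Shrinking any one hole (lowering its probability) shrinks the overall risk,
# which is the practical point of the model.
```

The point of the toy numbers is simply that risk drops multiplicatively as each layer improves, which is why the model encourages working on every slice rather than hunting for a single perfect defense.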

Who’s to blame when a self-driving car has an accident?

This article was originally published on December 1, 2020 on The Conversation.

With self-driving cars gaining traction in today’s automobile landscape, the issue of legal liability in the case of an accident has become more relevant.

Research in human-vehicle interaction has shown time and again that even systems designed to automate driving — like adaptive cruise control, which maintains the vehicle at a certain speed and distance from the car ahead — are far from being error-proof.

Recent evidence points to drivers’ limited understanding of what these systems can and cannot do (also known as mental models) as a contributing factor to system misuse.

A webinar on the dangers of advanced driver-assistance systems.

There are many issues troubling the world of self-driving cars, including the less-than-perfect technology and lukewarm public acceptance of autonomous systems. There is also the question of legal liability. In particular, what are the legal responsibilities of the human driver and the car maker that built the self-driving car?

Trust and accountability

In a recent study published in Humanities and Social Science Communications, the authors tackle the issue of over-trusting drivers and the resulting system misuse from a legal viewpoint. They look at what the manufacturers of self-driving cars should legally do to ensure that drivers understand how to use the vehicles appropriately.

One solution suggested in the study involves requiring buyers to sign end-user licence agreements (EULAs), similar to the terms and conditions that require agreement when using new computer or software products. To obtain consent, manufacturers might employ the omnipresent touchscreen, which comes installed in most new vehicles.

The issue is that this approach is far from ideal, or even safe. The interface may not provide enough information to the driver, leading to confusion about the nature of the requests for agreement and their implications.

The problem is, most end users don’t read EULAs: a 2017 Deloitte study shows that 91 per cent of people agree to them without reading. The percentage is even higher in young people, with 97 per cent agreeing without reviewing the terms.

Unlike using a smartphone app, operating a car has intrinsic and sizeable safety risks, whether the driver is human or software. Human drivers need to consent to take responsibility for the outcomes of the software and hardware.

“Warning fatigue” and distracted driving are also causes for concern. For example, a driver, annoyed after receiving continuous warnings, could decide to just ignore the message. Or, if the message is presented while the vehicle is in motion, it could represent a distraction.

Given these limitations and concerns, even if this mode of obtaining consent is to move forward, it likely won’t fully shield automakers from their legal liability should the system malfunction or an accident occur.

Driver training for self-driving vehicles can help ensure that drivers fully understand system capabilities and limitations. This needs to occur beyond the vehicle purchase — recent evidence shows that even relying on the information provided by the dealership is not going to answer many questions.

All of this considered, the road forward for self-driving cars is not going to be a smooth ride after all.

A user's guide to self-driving cars

This article was originally published by the author on The Conversation, an independent and nonprofit source of news, analysis and commentary from academic experts.

You may remember the cute Google self-driving car. In 2014, the tech giant announced its brand-new prototype of what the future of transportation might one day look like. If you wish you could drive one today, you are out of luck: the design was unfortunately scrapped in 2017. But don’t worry, what happened didn’t make a dent in the plan to introduce the world to self-driving cars. Or should I say autonomous cars, driverless cars, automated vehicles or robot cars?

Today’s cars offer a vast selection of driving aids. Relatively few models, however, come with advanced features like self- or assisted-parking technology and systems capable of taking over steering and acceleration in different driving situations. A recent report shows that despite an optimistic surge in the market penetration of these systems, the general public is still on the fence when it comes to fully relying on them.

Systems of classification

In 2016, Mercedes-Benz released an ad for its new 2017 E-Class car. What the ad focused on, however, was not the E-Class but the futuristic self-driving F 015 concept car, driving around with the front and back-row passengers facing each other and using futuristic Minority Report-like displays. The ad came under attack by road safety advocates because it overstated “the capability of automated-driving functions available” in the E-Class. You may even spot the fine print: “Vehicle cannot drive itself, but has automated driving features.”

A similar controversy had Tesla at the centre of the debate in 2016, when it announced it would release self-driving capabilities over-the-air to their vehicles. Similar to what happened with Mercedes-Benz, the company was criticized for misleading advertising and “overstating the autonomy of its vehicles.”

Labelling expectations

When I buy a dishwasher, what I want is a machine that automates the manual task of washing dishes. What I need to do is just push a button and the machine will do its thing with no additional command or intervention. Now, believe it or not, a similar logic applies to automated driving systems. If I am told — or shown or suggested or hinted — that the car might in fact drive itself, what do you expect I, as a human, will do?

Leaving aside related technical or ethical issues, from the perspective of someone who teaches and researches cognitive ergonomics and human factors, I can tell you that providing inaccurate, or even misleading, information on how automation works has direct safety consequences. These include using machines in unintended ways, reducing the level of monitoring or attention paid to their functions and fully ignoring possible warnings. Some of these safety consequences were touched upon in the official investigation report following the first fatality involving a car with an automated driving system.

Informing consumers

What, you may wonder, are today’s drivers left to do?

A few things: first, before you drive a car equipped with autonomous or self-driving features, you might want to find out more about their actual capabilities and limitations. You can ask your dealership or do some good old online research. A valuable resource for consumers is MyCarDoesWhat.org, a website that presents the dos and don’ts of automated driving systems through helpful videos and links to manufacturers’ websites and user guides.

Finally, before using your car’s automated driving features in real traffic, you may want to familiarize yourself with how they work, how to engage them, etc. Do all of this while stationary, when parked in your driveway perhaps.

I know it may sound like a lot of work (and sometimes it may not even be sufficient), but as research and accident reconstruction have already shown many times over, when you are at the wheel, the safest thing to do is to keep your mind and eyes on the road, instead of thinking about how a self-driving car might make your commute much simpler and much more enjoyable.

Training drivers to use autonomous systems

Inefficient or poorly designed systems can diminish the potential safety benefits of vehicle automation. Despite this being a critical issue in road safety, little has been done to develop ways to optimize drivers’ use of driving aids.

In a recent study, Dr. Biondi contributed to the design of a driver training system that leveraged the principles of precision teaching to help drivers learn the capabilities and limitations of automated driving aids.

Precision teaching is an educational technique that takes frequent measurements of human behavior and feeds this information back to the learner so that they can optimize their learning.

In the study, Dr. Biondi presented drivers with information about the state and functioning of a lane keeping assistance system - a system that helps maintain the vehicle within its lane. When the vehicle was safely within the lane, positive feedback was sent to the driver. Conversely, when the vehicle drifted out of the lane, warning signals were shown.
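As a rough sketch of the kind of feedback rule described above, the logic can be written as a simple comparison of the vehicle's lateral position against the lane boundaries. The data structure, thresholds and function names here are my own assumptions for illustration, not the study's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class LanePosition:
    offset_m: float       # lateral offset of the vehicle centre from the lane centre, metres
    lane_width_m: float   # width of the current lane, metres
    vehicle_width_m: float

def feedback(pos: LanePosition, margin_m: float = 0.2) -> str:
    """Return the feedback message for one measurement of lane position.

    Positive feedback when the vehicle is safely within the lane,
    a warning when it drifts toward or past the lane boundary.
    The 0.2 m margin is an arbitrary illustrative threshold.
    """
    # Distance from the vehicle's edge to the nearest lane boundary.
    free_space = (pos.lane_width_m - pos.vehicle_width_m) / 2 - abs(pos.offset_m)
    if free_space > margin_m:
        return "positive: vehicle centred in lane"
    elif free_space > 0:
        return "caution: drifting toward lane boundary"
    else:
        return "warning: vehicle leaving the lane"

# Example: a 1.9 m wide car in a 3.5 m lane, offset 0.7 m from the lane centre.
print(feedback(LanePosition(offset_m=0.7, lane_width_m=3.5, vehicle_width_m=1.9)))
# -> "caution: drifting toward lane boundary"
```

In precision-teaching terms, the key is that a measurement like this is taken frequently and the resulting message is fed straight back to the learner, so the driver continuously sees how their use of the system compares against safe lane keeping.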

Results showed that the drivers who received the training made better and safer use of the system. Additionally, such behavioral improvements were maintained over time even when the training was no longer provided.

This indicates that the adoption of sound Human Factors practices yields effective and safe adoption of autonomous systems.

References

Biondi et al. (2020). Precision teaching to improve drivers’ lane maintenance. Journal of Safety Research.

The Danger of Vehicle Automation

Incorrect or incomplete understanding of vehicle automation is detrimental to safety. Evidence shows that drivers with limited or flawed mental models are in fact more at risk of misusing vehicle automation and, in turn, of road collisions.

Watch Dr. Biondi’s talk to find out about the Human Factors issues of misusing vehicle automation.

Yet another case of vehicle automation misuse!

Unfortunately, this will not sound like news to many (me included). But yet another Tesla driver was caught napping behind the wheel of a semi-autonomous vehicle in Edmonton, Alberta.

How come this isn’t news you ask? Well, there are now countless examples of erratic, unsafe drivers blatantly misusing (and abusing) vehicle automation.

The National Transportation Safety Board’s investigations following a handful of fatal and nonfatal collisions involving Tesla Autopilot reported that drivers’ inattention and over-reliance on the system, coupled with the system’s operational design, contributed to these collisions.

Also, this summer, German regulators ruled that the Autopilot name misleads motorists into believing that Autopilot is in fact an autopilot, which it is not.

Efforts from the American Automobile Association and others have recently contributed to the development of a naming convention for semi-autonomous systems that hopes to help consumers make educated decisions when purchasing a vehicle, and reduce the likelihood of misusing its systems.

Much has been done thus far to promote the safe adoption of these systems. My research and others’ have contributed to a better understanding of how Human Factors affect drivers’ adoption of autonomous and semi-autonomous systems. Transportation agencies and road safety stakeholders are also pushing for safe regulations. But much more needs to be done.

References

Biondi et al. (2018). 80 MPH and out-of-the-loop: Effects of real-world semi-automated driving on driver workload and arousal. https://doi.org/10.1177/1541931218621427

CBC (2020). Speeding Tesla driver caught napping behind the wheel on Alberta highway https://www.cbc.ca/news/canada/edmonton/tesla-driver-napping-alberta-speeding-1.5727828
