Human Factors

The legal issue of consent in autonomous driving

With autonomous and semi-autonomous systems gaining traction in today's automobile landscape, the issue of legal liability is becoming more relevant.

Human Factors research has shown time and again that driving assistance technology, including more "archaic" systems like Adaptive Cruise Control and Lane Keeping Assistance, is far from error-proof. Recent studies have demonstrated that a limited understanding (or incomplete mental model) of how these systems operate can in fact lead to system misuse.

A recent study published in Humanities and Social Sciences Communications tackles the issue of driver over-trust and system misuse from a legal viewpoint.

Every time we register for a new social media account, or install a new smartphone app, the always-present consent message pops up: BY REGISTERING FOR THIS SERVICE YOU ACCEPT ALL TERMS AND CONDITIONS.

Typically, very few people ever bother to skim this information, let alone read it in its entirety. However, the issue of consent, and its implications for liability, will become more relevant as we entrust autonomous systems with our safety and the safety of all vehicle passengers.

The authors of the study suggest that automakers may use the already-existing in-vehicle digital interfaces as a way to obtain consent from the driver (and possibly all passengers). However, this approach is far from ideal, or even safe.

They argue that the car's touchscreen may not provide nearly enough information to the driver, and that "individuals may misunderstand the nature of the notices which grant permissions".

"Warning fatigue" and distracted driving are also causes of concern.

All in all, given the sizeable limitations of using digital interfaces for obtaining consent, the authors suggest that consent obtained this way won't shield automakers from legal liability should the system malfunction or an accident occur.

Similarly to what I described in a recent article, training is seen as a potential aid in ensuring that drivers fully understand system capabilities and limitations.

Whatever the solution may be, this is yet another challenge that all autonomous vehicle stakeholders (including automakers and transportation agencies) need to address if they want to take a proactive (rather than reactive) stance on the issue.

Reference

Pattinson, J. A., Chen, H., & Basu, S. (2020). Legal issues in automated vehicles: critically considering the potential role of consent and interactive digital interfaces. Humanities and Social Sciences Communications, 7(1), 1–10. https://doi.org/10.1057/s41599-020-00644-2

The Swiss Cheese Model of Human Error

I recently read a New York Times article discussing the Swiss Cheese Model of Pandemic Defense, which used James Reason's Swiss Cheese model of human error to describe the concerted response to the COVID-19 pandemic.

The model uses the analogy of Swiss cheese to represent the multiple layers of defense against a threat, be it human error in transportation or a global pandemic.

Each slice represents a possible line of defense. But, like Swiss cheese, each layer has holes, and each hole introduces a new vulnerability to the system. Harm occurs only when the holes in every slice line up, letting a threat slip through all the defenses at once.
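To make that intuition concrete, here is a toy calculation of my own (an illustration, not something from Reason's work or the Times article), assuming each layer fails independently with a given "hole probability":

    # Toy Swiss Cheese calculation: a threat causes harm only if it
    # passes through a hole in every layer. The layers and probabilities
    # below are hypothetical, chosen purely for illustration.
    hole_probabilities = {
        "operational design": 0.10,
        "human-machine interface": 0.20,
        "driver understanding": 0.30,
    }

    p_harm = 1.0
    for p_hole in hole_probabilities.values():
        p_harm *= p_hole  # every layer must fail for the threat to pass

    print(f"P(threat passes all layers) = {p_harm:.3f}")  # prints 0.006

Under these assumptions, shrinking any single hole (or adding a new slice) multiplies down the overall risk, which is what makes layered defenses so effective.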

This framework can easily be applied to human interaction with complex systems in virtually any Human Factors application.

In healthcare, for example, the fatal administration of the wrong drug could be caused by a chain of failures: two different drugs have similar packaging, and the healthcare professional administering the drug is distracted, or poorly trained to notice the differences between the two, and winds up giving the patient the wrong one.

In autonomous vehicles, the poor operational design of the system (a hole), combined with a poor HMI (another hole), leaves the driver unsure about the capabilities of the system (yet another hole), and the driver winds up misusing the system (the error).

This model is a useful Human Factors tool for identifying everything that can go wrong in human-machine interaction. It also offers a framework for shrinking the size of the holes, or removing them altogether.

Training drivers to use autonomous systems

Inefficient or poorly-designed systems can diminish the potential safety benefits of vehicle automation. Despite this being a critical issue in road safety, little has been done to develop ways to optimize drivers' use of driving aids.

In a recent study, Dr. Biondi contributed to the design of a driver training system that leveraged the principles of precision teaching to help drivers learn the capabilities and limitations of automated driving aids.

Precision teaching is an educational technique that takes frequent measurements of the learner's behavior and feeds this information back to them so that they can optimize their learning.

In the study, Dr. Biondi presented drivers with information about the state and functioning of a lane keeping assistance system, a system that helps keep the vehicle within its lane. When the vehicle was safely within the lane, positive feedback was given to the driver. Conversely, when the vehicle drifted out of the lane, warning signals were shown.
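The feedback logic can be sketched in a few lines of Python (my own simplification; the thresholds, sampling, and messages are hypothetical, not those used in the study):

    # Simplified precision-teaching feedback loop: measure lane position
    # frequently and immediately feed the result back to the driver.
    # All numbers are assumed values, for illustration only.
    LANE_HALF_WIDTH_M = 1.8   # hypothetical lane half-width
    SAFE_MARGIN_M = 0.4       # hypothetical margin before a warning

    def feedback(lateral_offset_m: float) -> str:
        """Return feedback for the vehicle's offset from lane center."""
        if abs(lateral_offset_m) <= LANE_HALF_WIDTH_M - SAFE_MARGIN_M:
            return "positive: vehicle safely within lane"
        return "warning: vehicle drifting out of the lane"

    # Sampled at a fixed rate while the lane keeping system is active
    for offset_m in (0.1, 0.9, 1.6):
        print(f"offset = {offset_m:+.1f} m -> {feedback(offset_m)}")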

Results showed that the drivers who received the training made better and safer use of the system. Additionally, such behavioral improvements were maintained over time even when the training was no longer provided.

This indicates that sound Human Factors practices can foster the effective and safe adoption of autonomous systems.

References

Biondi et al. (2020). Precision teaching to improve drivers' lane maintenance. Journal of Safety Research.

Here We Are Again: The Human Factors of Voting

Human Factors determine how we, as humans, interact with a multitude of machines in every aspect of our lives. Yet, while Human Factors investigations play a central role in fields like automotive and aviation, one field that too often fails to account for adequate Human Factors design is voting.

In the 2018 US mid-term election, Texas was at the center of a Human Factors fiasco, when its electronic voting machines flipped the vote to the opposite party's candidates every time the voter opted for a straight-ticket ballot. This happened whenever the voter pressed a key before the page had fully loaded.
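The failure mode reads like a classic input-handling race: the machine accepts key presses while the ballot page is still rendering. Here is a minimal sketch of the kind of guard that prevents it (my own illustration in Python, in no way the actual machines' code):

    # Illustrative guard against accepting input before a page is ready.
    # A generic sketch of the failure mode, not the voting machines' code.
    class BallotPage:
        def __init__(self) -> None:
            self.loaded = False  # True only once rendering has completed

        def finish_loading(self) -> None:
            self.loaded = True

        def handle_keypress(self, selection: str) -> None:
            if not self.loaded:
                # Ignore input until the page is fully drawn, so an early
                # key press cannot land on the wrong control.
                return
            print(f"registered selection: {selection}")

    page = BallotPage()
    page.handle_keypress("straight-ticket")  # ignored: page still loading
    page.finish_loading()
    page.handle_keypress("straight-ticket")  # registered safely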

A similar issue is now happening in Georgia, where, as a result of a machine glitch, the voting machine touchscreen won't display all candidates' names on a single page.

Despite these being two separate issues, the root cause is the same: poor Human Factors.

Both user-experience issues can be traced back to the lacking or inadequate Human Factors testing conducted on the Georgia and Texas voting machines. Applying common Human Factors practices would undoubtedly have helped designers uncover these user issues early and address them before the software was deployed.

References

https://apnews.com/article/election-2020-senate-elections-technology-georgia-elections-af357b7ab7145033f11ee34a1bbf4a3c

https://www.dallasnews.com/news/2018/10/26/company-blames-texas-voters-problems-on-user-error-saying-its-machines-don-t-flip-straight-ticket-ballots/

Yet another case of vehicle automation misuse!

Unfortunately, this will not sound like news to many (me included): yet another Tesla driver was caught napping behind the wheel of a semi-autonomous vehicle in Edmonton, Alberta.

How come this isn't news, you ask? Well, there are now countless examples of erratic, unsafe drivers blatantly misusing (and abusing) vehicle automation.

The National Transportation Safety Board's investigations following a handful of fatal and nonfatal collisions involving Tesla Autopilot found that driver inattention and over-reliance on the system, coupled with the system's operational design, contributed to these collisions.

Also, this summer, German regulators ruled that the Autopilot name misleads motorists into believing that Autopilot is in fact an auto-pilot... which it is not.

Recent efforts by the American Automobile Association and others have contributed to the development of a naming convention for semi-autonomous systems that aims to help consumers make educated decisions when purchasing a vehicle, and to reduce the likelihood of their misusing its systems.

Much has been done thus far to promote the safe adoption of these systems. My research and others' have contributed to a better understanding of how Human Factors affect drivers' adoption of autonomous and semi-autonomous systems. Transportation agencies and road safety stakeholders, too, are pushing for safe regulations. But much more needs to be done.

References

Biondi et al. (2018). 80 MPH and out-of-the-loop: Effects of real-world semi-automated driving on driver workload and arousal. https://doi.org/10.1177/1541931218621427

CBC (2020). Speeding Tesla driver caught napping behind the wheel on Alberta highway. https://www.cbc.ca/news/canada/edmonton/tesla-driver-napping-alberta-speeding-1.5727828


The Danger of ADAS webinar series

On September 30th, 2020, I will be the guest speaker in iNAGO's Intelligent Assistant webinar series, on the topic of The Danger of ADAS.

The National Highway Traffic Safety Administration estimates that 94% of serious crashes are due to human error (NHTSA, n.d.). While advanced driver assistance systems (ADAS) are designed to minimize the impact of human error on safety, recent evidence suggests that a lacking understanding of these systems, and the over-trust that results from it, may contribute to drivers misusing ADAS and engaging in potentially dangerous behaviors (NTSB, 2020).

The webinar will cover:

  • Understanding ADAS and its role on driver safety

  • How connected vehicles can be safer by making drivers more knowledgeable

  • Demonstration of a conversational assistant-driven car feature information system

  • User Study results on the use of in-car knowledge assistants by Human Systems Lab

  • Live Q&A with Dr. Biondi and Ron DiCarlantonio

Reserve your virtual seat HERE


Distracted driving uptick since the COVID-19 lockdown

A recent study published by ZenDrive shows an uptick in distracted driving and speeding since the beginning of the COVID-19 lockdown in March.

While this is not surprising per se, there may be two important factors behind it.

First, with possibly fewer cars on the road, some motorists may feel they can take more risks and, perhaps convinced of a lower police presence, believe they are less at risk of being caught.

The second, and frankly more disturbing, contributor is remote working. As suggested in the ZenDrive report, the 'mass migration' to remote working and virtual conferencing has made us even more dependent on communication technology. This, possibly combined with the difficulty of distinguishing between work and leisure time while working remotely, may have made motorists more inclined to attend work meetings while driving.

Altogether, this evidence suggests that distracted driving may have gotten worse since the beginning of the COVID-19 lockdown in March.