ADAS

The Swiss Cheese Model of Human Error

I recently read a New York Times article on the Swiss Cheese Model of Pandemic Defense. The article used James Reason’s Swiss Cheese model of human error to describe the concerted response to the COVID-19 pandemic.

The model uses the analogy of Swiss cheese to illustrate the multiple possible layers of defense against a threat, be it human error in transportation or a global pandemic.

Each slice represents a possible line of defense. But, like Swiss cheese, each layer has holes, and each hole introduces a vulnerability into the system. Harm occurs when the holes in every layer line up, letting a threat pass through all the defenses at once.

This framework can easily be applied to human interaction with complex systems in virtually any Human Factors application.

In healthcare, for example, the fatal administration of the wrong drug could be caused by a chain of failures: two different drugs have similar packaging (one hole), and the healthcare professional administering the drug is distracted (another hole) or too poorly trained to notice the differences between the two (a third hole), with the result that the wrong drug is given to the patient.

In autonomous vehicles, the poor operational design of the system (a hole), combined with a poor HMI (another hole), leaves the driver unsure about the capabilities of the system (yet another hole), and the driver winds up misusing the system (the error).
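To make the chain-of-holes idea concrete, here is a minimal sketch in Python. The layers and their failure probabilities are made-up numbers for illustration, and the sketch assumes the holes in each layer are independent: an error causes harm only when it slips through every layer, so the overall risk is the product of the per-layer hole probabilities.

    # Minimal sketch of the Swiss Cheese model. The layers and their
    # probabilities are hypothetical, and the holes are assumed independent.
    from math import prod

    # Probability that each layer of defense fails to stop the error.
    layers = {
        "operational design": 0.10,   # design does not prevent misuse
        "HMI": 0.20,                  # interface fails to convey system state
        "driver understanding": 0.30, # driver's mental model is wrong
    }

    def p_harm(hole_probs):
        """Probability that an error passes through all layers (holes align)."""
        return prod(hole_probs)

    print(f"P(harm) = {p_harm(layers.values()):.4f}")  # 0.0060

    # Shrinking any single hole shrinks the overall risk multiplicatively:
    layers["HMI"] = 0.05  # e.g., a redesigned, clearer interface
    print(f"P(harm) after HMI redesign = {p_harm(layers.values()):.4f}")  # 0.0015

Framed this way, narrowing even a single hole cuts the overall risk multiplicatively, which is the practical appeal of the model.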

This model is a useful Human Factors tool for identifying everything that can go wrong in human-machine interaction. It also offers a framework for shrinking the size of the holes, or removing them altogether.

Who’s to blame when a self-driving car has an accident?

This article was originally published on December 1, 2020, on The Conversation.

With self-driving cars gaining traction in today’s automobile landscape, the issue of legal liability in the case of an accident has become more relevant.

Research in human-vehicle interaction has shown time and again that even systems designed to automate driving — like adaptive cruise control, which maintains the vehicle at a certain speed and distance from the car ahead — are far from being error-proof.

Recent evidence points to drivers’ limited understanding of what these systems can and cannot do (also known as mental models) as a contributing factor to system misuse.

A webinar on the dangers of advanced driver-assistance systems.

There are many issues troubling the world of self-driving cars, including less-than-perfect technology and lukewarm public acceptance of autonomous systems. There is also the question of legal liability. In particular, what are the legal responsibilities of the human driver and of the car maker that built the self-driving car?

Trust and accountability

In a recent study published in Humanities and Social Sciences Communications, the authors tackle the issue of over-trusting drivers and the resulting system misuse from a legal viewpoint. They look at what the manufacturers of self-driving cars should legally do to ensure that drivers understand how to use the vehicles appropriately.

One solution suggested in the study involves requiring buyers to sign end-user licence agreements (EULAs), similar to the terms and conditions that users must agree to when using a new computer or software product. To obtain consent, manufacturers might employ the omnipresent touchscreen, which comes installed in most new vehicles.

The issue is that this approach is far from ideal, or even safe. The interface may not provide enough information to the driver, leading to confusion about the nature of the requests for agreement and their implications.

The problem is, most end users don’t read EULAs: a 2017 Deloitte study shows that 91 per cent of people agree to them without reading. The percentage is even higher among young people, with 97 per cent agreeing without reviewing the terms.

Unlike using a smartphone app, operating a car has intrinsic and sizeable safety risks, whether the driver is human or software. Human drivers need to consent to take responsibility for the outcomes of the software and hardware.

“Warning fatigue” and distracted driving are also causes for concern. For example, a driver, annoyed after receiving continuous warnings, could decide to just ignore the message. Or, if the message is presented while the vehicle is in motion, it could represent a distraction.

Given these limitations and concerns, even if this mode of obtaining consent is to move forward, it likely won’t fully shield automakers from their legal liability should the system malfunction or an accident occur.

Driver training for self-driving vehicles can help ensure that drivers fully understand system capabilities and limitations. This training needs to occur beyond the point of purchase: recent evidence shows that relying on the information provided by the dealership leaves many questions unanswered.

All of this considered, the road forward for self-driving cars is not going to be a smooth ride after all.

The Danger of Vehicle Automation

Incorrect or incomplete understanding of vehicle automation is detrimental to safety. Evidence shows that drivers with limited or flawed mental models are in fact more at risk of misusing vehicle automation and, in turn, of being involved in a road collision.

Watch Dr. Biondi’s talk to find out about the Human Factors issues of misusing vehicle automation.

Yet another case of vehicle automation misuse!

Unfortunately, this will not sound like news to many (me included), but yet another Tesla driver was caught napping behind the wheel of a semi-autonomous vehicle in Edmonton, Alberta.

How come this isn’t news, you ask? Well, there are now countless examples of erratic, unsafe drivers blatantly misusing (and abusing) vehicle automation.

The National Transportation Safety Board’s investigations following a handful of fatal and nonfatal collisions involving Tesla Autopilot reported that driver inattention and over-reliance on the system, coupled with the system’s operational design, contributed to these collisions.

Also, this summer, German regulators ruled that the Autopilot name misleads motorists into believing that Autopilot is in fact an autopilot, which it is not.

Recent efforts by the American Automobile Association and others have contributed to the development of a naming convention for semi-autonomous systems that aims to help consumers make educated decisions when purchasing a vehicle, and to reduce the likelihood of these systems being misused.

Much has been done thus far to promote the safe adoption of these systems. My research, and that of others, has contributed to a better understanding of how Human Factors affect drivers’ adoption of autonomous and semi-autonomous systems. Transportation agencies and road safety stakeholders are pushing for safe regulations, too. But much more needs to be done.

References

Biondi et al. (2018). 80 MPH and out-of-the-loop: Effects of real-world semi-automated driving on driver workload and arousal. https://doi.org/10.1177/1541931218621427

CBC (2020). Speeding Tesla driver caught napping behind the wheel on Alberta highway. https://www.cbc.ca/news/canada/edmonton/tesla-driver-napping-alberta-speeding-1.5727828


The Danger of ADAS webinar series

On September 30th, 2020, I will be the guest speaker in iNAGO’s Intelligent Assistant webinar series, on the topic of The Danger of ADAS.

The National Highway Traffic Safety Administration estimates that 94% of serious crashes are due to human error (NHTSA, n.d.). While advanced driver assistance systems are designed to minimize the impact of human error on safety, recent evidence suggests that a lacking understanding of these systems, and the over-trust that results from it, may contribute to drivers misusing ADAS and engaging in potentially dangerous behaviors (NTSB, 2020).

The webinar will cover:

  • Understanding ADAS and its role in driver safety

  • How connected vehicles can be safer by making drivers more knowledgeable

  • Demonstration of a conversational assistant-driven car feature information system

  • User Study results on the use of in-car knowledge assistants by Human Systems Lab

  • Live Q&A with Dr. Biondi and Ron DiCarlantonio

Reserve your virtual seat HERE
