user error

The Swiss Cheese Model of Human Error

I recently read a New York Times article discussing the Swiss Cheese Model of Pandemic Defense. The article used James Reason’s Swiss Cheese model of human error to describe the layered, concerted response to the COVID-19 pandemic.

The model uses the analogy of Swiss cheese to represent the multiple layers of defense against a threat, be it human error in transportation or a global pandemic.

Each slice represents a possible line of defense. But, like Swiss cheese, each layer has holes, and each hole introduces a vulnerability into the system. A failure occurs when the holes in every layer line up, letting the threat pass all the way through.

This framework can easily be applied to human interaction with complex systems in virtually any Human Factors application.

In healthcare, for example, the fatal administration of the wrong drug could result from a chain of failures: two different drugs have similar packaging (one hole), and the healthcare professional administering the drug is distracted or has not been trained to notice the differences between the two (another hole), ultimately giving the wrong one to the patient.

In autonomous vehicles, poor operational design of the system (a hole), combined with a poor HMI (another hole), leaves the driver unsure about the system’s capabilities (yet another hole), and the driver ends up misusing the system (error).
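The chain-of-holes idea can be sketched as a toy probability simulation. This is only an illustration, not part of Reason’s model: the layer names and hole probabilities below are hypothetical, and the layers are assumed to fail independently.

```python
import random

def threat_passes(hole_probs, rng):
    """A threat breaches the system only if it slips through a hole
    in every layer (every slice of the Swiss cheese)."""
    return all(rng.random() < p for p in hole_probs)

# Hypothetical layers: distinct packaging, training, double-checking.
# Each number is the assumed chance that layer's hole lines up.
layers = [0.1, 0.2, 0.05]

rng = random.Random(42)
trials = 100_000
breaches = sum(threat_passes(layers, rng) for _ in range(trials))

# With independent layers the breach rate is roughly the product
# of the hole probabilities: 0.1 * 0.2 * 0.05 = 0.1%.
print(f"Breach rate: {breaches / trials:.4%}")
```

Shrinking any single hole (lowering one probability) or adding a slice (another factor in the product) cuts the breach rate multiplicatively, which is the intuition behind layering imperfect defenses.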

This model is a useful Human Factors tool for identifying everything that can go wrong in human-machine interaction. It also offers a framework for shrinking the holes or removing them altogether.

How to make vehicle tech less distracting

In a recent entry, I talked about the role of training for automated vehicle aids.

In a study published in 2017 in collaboration with the AAA Foundation for Traffic Safety and the University of Utah, I investigated driver interaction with in-vehicle infotainment systems, the systems that allow drivers to, e.g., make phone calls or send text messages without handling their mobile devices.

One of the most striking findings from that study was that, although technologies like touchscreens and voice interaction systems have been around for many years, they are still challenging to use, at least for some groups of drivers.

Issues we found with these touchscreens included relatively low responsiveness, cluttered menu designs, and long interaction times. In certain cases, for example, primary functions were buried deep in menus, or the menu design made frequently-used features almost invisible to the driver.

For voice technology, certain systems were overly verbose and, as a result, imposed a large memory load and required long interaction times.

One possible solution to this problem is using off-the-shelf systems like Android Auto and Apple CarPlay, which, in later research, were shown to burden drivers’ attentional resources to a lesser degree.

Another possible solution is to encourage drivers to familiarize themselves with this technology while the vehicle is stationary, which may help them find and use frequently-used functions more quickly and efficiently.

References

https://aaafoundation.org/visual-cognitive-demands-using-vehicle-information-systems/

https://aaafoundation.org/visual-cognitive-demands-apples-carplay-googles-android-auto-oem-infotainment-systems/

Here We Are Again: The Human Factors of Voting

Human Factors determine how we, as humans, interact with a multitude of machines in every aspect of our lives. Yet despite Human Factors investigations playing a central role in fields like automotive and aviation, one field that too often fails to account for adequate Human Factors design is voting.

In the US mid-term election of 2018, Texas was at the center of a Human Factors fiasco when its electronic voting machines flipped votes to the opposing party’s candidates on straight-ticket ballots. The flip occurred whenever the voter pressed a key before the page had fully loaded.
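This is a classic race condition between the user and the interface. A minimal sketch of the kind of guard that prevents this class of error, assuming a hypothetical ballot-screen design (none of these names come from the actual Texas machines), is simply to discard input that arrives before the page is ready:

```python
class BallotPage:
    """Hypothetical ballot screen that ignores key presses arriving
    before the page has finished rendering, so an early press cannot
    land on (or flip to) the wrong candidate."""

    def __init__(self, candidates):
        self.candidates = candidates
        self.loaded = False      # input is rejected until this is True
        self.selection = None

    def finish_loading(self):
        self.loaded = True

    def press(self, index):
        if not self.loaded:
            return False         # early input: silently discarded
        self.selection = self.candidates[index]
        return True

page = BallotPage(["Candidate A", "Candidate B"])
page.press(1)                    # premature press: ignored
page.finish_loading()
page.press(0)                    # accepted once the page is ready
print(page.selection)            # prints: Candidate A
```

Gating input on a ready state is a standard UI defensive pattern; basic Human Factors testing with real users, who press buttons at unpredictable moments, tends to surface the need for it quickly.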

A similar issue is now happening in Georgia, where, as a result of a machine glitch, the voting machine touchscreen won’t display all candidates’ names on one single page.

Despite these being two separate issues, the root cause is the same: poor Human Factors.

Both user-experience issues can be traced back to absent or inadequate Human Factors testing of the Georgia and Texas voting machines. Applying standard Human Factors practices would have helped designers uncover these issues early and address them before the software was deployed.

References

https://apnews.com/article/election-2020-senate-elections-technology-georgia-elections-af357b7ab7145033f11ee34a1bbf4a3c

https://www.dallasnews.com/news/2018/10/26/company-blames-texas-voters-problems-on-user-error-saying-its-machines-don-t-flip-straight-ticket-ballots/