Has COVID-19 made us more distracted?

Following the spread of COVID-19 in March 2020, remote working has become a reality for many. With remote working often comes virtual conferencing, which has consumed many remote workers’ hours over the last year.

Although virtual conferencing has its undeniable benefits, the risk is that remote workers may be tempted to take work calls or join virtual conferences outside their home office.

A 2020 Zendrive report shows an uptick in distracted driving since March 2020. This is possibly the result of some motorists seeing less busy roads and engaging in more risk-taking behaviors, including using cellphones, navigating social media, and taking conference calls while at the wheel of their vehicles.

Distracted driving is among the top causes of road collisions and mortality. In addition to taking the driver’s eyes away from the road (to look at a smartphone) and hands away from the wheel (to use a touchscreen), distraction also takes attention away from the task of driving.

Cognitive distraction, as it is known in Human Factors research, is also responsible for other disruptive phenomena like attentional tunneling, the tendency to disregard visual information in the periphery of the visual field, and inattentional blindness, the tendency to miss certain information when our mind is occupied with non-driving activities.

References

Zendrive. (2020). Mobility Amidst Lockdown: Every Minute on the Road is Riskier.

COVID-19 has fuelled automation — but human involvement is still essential

This article was originally published on The Conversation on February 7, 2021.

The COVID-19 pandemic has disrupted the way we work and interact with machines — and people — in the workplace. The surge in remote working brought on by the pandemic has magnified the need for unmanned work operations. More automation, however, does not always make the workplace more efficient.

Industries that have heavily relied on manual operations, like warehousing or meat packing, are now introducing more automated or tele-operated systems. Unlike traditional, manually operated machines, in tele-operation the human operator sits in a remote location away from the machine they control.

These trends are in part an attempt to address the high rates of COVID-19 among factory workers.

Despite some of the unquestionable benefits of automation, however, simply adopting a technology-driven approach aimed at replacing all manual operations with robots is not a viable fix.

Human-machine interaction

For decades, what’s known as human factors, a discipline at the intersection of cognitive science, engineering and kinesiology, has investigated human-machine interaction in the workplace, with the goal of understanding the benefits and unintended consequences of automation. Among the phenomena being investigated is the paradox of automation.

The paradox of automation — also known as the paradox of technology — occurs when introducing an automated system adds to, rather than reduces, the workload and responsibilities of the human operator.

This is because automated systems often require more knowledge from the human operator, along with supervision and intervention whenever something goes wrong.

A case in point is airport security screening. This industry has been undergoing an automation revolution for decades now. Yet data shows that its failure rate is still as high as 95 per cent. Why?

Travellers wear face masks while passing through a security checkpoint at Denver International Airport in December 2020. (AP Photo/David Zalubowski)

The answer to this question is less about the technology, and more about the fact that system developers too often ignore or overlook the human factor.

In other words, a technology-centred approach is adopted over a human-centred one.

Ignoring or undervaluing human factors in automation not only makes systems impossible to use but, more importantly, hinders safety.

Recognize boundaries

A solution to this is developing systems that automate manually intensive operations while also accounting for known boundaries in human cognition, like the inability to multi-task effectively or to sustain attention on a given task for long periods of time.

Automated systems must account for human boundaries, but not cut them out entirely. (Pixabay)

Like many other innovations borne out of challenging times in human history, the push for more automation and tele-operation triggered by the COVID-19 pandemic must come with the promise of more efficient and safer workplace operations.

But instead of fully and solely relying on what’s technologically possible, system developers must put human beings at the centre of designing automation instead of relegating them to its periphery.

This article is republished from The Conversation, a nonprofit news site dedicated to sharing ideas from academic experts. It was written by Francesco Biondi, University of Windsor.

The legal issue of consent in autonomous driving

With autonomous and semi-autonomous systems gaining traction in today's automobile landscape, the issue of legal liability is becoming more relevant.

Human Factors research has shown time and again that driving assistance technology, including more "archaic" systems like Adaptive Cruise Control and Lane Keeping Assistance Systems, is far from being error-proof. Recent studies have demonstrated that drivers' limited understanding (or mental models) of how these systems operate can in fact lead to system misuse.

A recent study published in Humanities and Social Sciences Communications tackles the issue of driver overtrust and system misuse from a legal viewpoint.

Every time we register for a new social media account or install a new smartphone app, the ever-present consent message pops up: BY REGISTERING FOR THIS SERVICE YOU ACCEPT ALL TERMS AND CONDITIONS.

Typically, very few people ever bother to skim over this information, let alone read it in its entirety. However, the issue of consent and its implications for liability will become more relevant as we entrust autonomous systems with our safety and the safety of all vehicle passengers.

The authors of the study suggest that automakers may use existing in-vehicle digital interfaces as a way to obtain consent from the driver (and possibly all passengers). However, this solution is far from being ideal, or even safe.

They argue that using the car touchscreen may not provide nearly enough information to the driver. The authors also suggest that "individuals may misunderstand the nature of the notices which grant permissions".

"Warning fatigue" and distracted driving are also causes for concern.

All in all, given the sizeable limitations of using digital interfaces for obtaining consent, the authors suggest this won't shield automakers from their legal liability should the system malfunction or an accident occur.

Similar to what I described in a recent article, training is seen as a potential aid in ensuring that drivers fully understand system capabilities and limitations.

Whatever the solution may be, this is yet another challenge that all autonomous vehicle stakeholders (including automakers and transportation agencies) need to address if they want to take a proactive (rather than reactive) stance on the issue.

Reference

Pattinson, J. A., Chen, H., & Basu, S. (2020). Legal issues in automated vehicles: critically considering the potential role of consent and interactive digital interfaces. Humanities and Social Sciences Communications, 7(1), 1–10. https://doi.org/10.1057/s41599-020-00644-2

The Swiss Cheese Model of Human Error

I recently read a New York Times article discussing the Swiss Cheese Model of Pandemic Defense. The article used James Reason’s Swiss Cheese model of human error to describe the concerted response to the COVID-19 pandemic.

The model uses the analogy of Swiss cheese to illustrate the multiple possible defenses against potential threats, be they human error in transportation or a global pandemic.

Each slice represents a possible line of defense. But, like Swiss cheese, each layer has holes, and each hole introduces a new vulnerability to the system.

This framework can easily be applied to human interaction with complex systems in virtually any Human Factors application.

In healthcare, for example, the fatal administration of the wrong drug could be caused by a chain of failures: two different drugs have similar packaging, and the healthcare professional administering the drug, distracted or poorly trained to notice the differences between the two, winds up giving the wrong one to the patient.

In autonomous vehicles, poor operational design of the system (a hole), combined with a poor human-machine interface (another hole), leaves the driver unsure about the capabilities of the system (yet another hole), so they wind up misusing the system (the error).
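
To see the model's arithmetic at work, here is a minimal sketch in Python, with invented probabilities (the model itself prescribes no numbers): if each layer's holes are treated as an independent chance of letting a hazard through, harm occurs only when the holes in every slice line up, so the overall risk is the product of the individual hole sizes.

```python
# A minimal sketch of the Swiss Cheese model's logic, assuming each layer
# of defense fails independently with some probability (its "hole size").
# All numbers below are invented for illustration.

def penetration_probability(hole_probs):
    """Probability that a hazard slips through every layer of defense."""
    p = 1.0
    for hole in hole_probs:
        p *= hole
    return p

# Hypothetical hole sizes for the autonomous-vehicle example above.
layers = {
    "operational design": 0.10,
    "human-machine interface": 0.20,
    "driver's mental model": 0.30,
}

print(penetration_probability(layers.values()))  # 0.006, i.e. 0.6%

# Shrinking a single hole reduces the overall risk multiplicatively.
layers["human-machine interface"] = 0.05
print(penetration_probability(layers.values()))  # 0.0015, i.e. 0.15%
```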

This model is a useful Human Factors tool for identifying everything that can go wrong in human-machine interaction. It also offers a framework for shrinking the size of the holes or removing them altogether.

Who’s to blame when a self-driving car has an accident?

This article was originally published on December 1, 2020 on The Conversation.

With self-driving cars gaining traction in today’s automobile landscape, the issue of legal liability in the case of an accident has become more relevant.

Research in human-vehicle interaction has shown time and again that even systems designed to automate driving — like adaptive cruise control, which maintains the vehicle at a certain speed and distance from the car ahead — are far from being error-proof.

Recent evidence points to drivers’ limited understanding of what these systems can and cannot do (also known as mental models) as a contributing factor to system misuse.

A webinar on the dangers of advanced driver-assistance systems.

There are many issues troubling the world of self-driving cars, including the less-than-perfect technology and lukewarm public acceptance of autonomous systems. There is also the question of legal liability. In particular, what are the legal responsibilities of the human driver and of the car maker that built the self-driving car?

Trust and accountability

In a recent study published in Humanities and Social Science Communications, the authors tackle the issue of over-trusting drivers and the resulting system misuse from a legal viewpoint. They look at what the manufacturers of self-driving cars should legally do to ensure that drivers understand how to use the vehicles appropriately.

One solution suggested in the study involves requiring buyers to sign end-user licence agreements (EULAs), similar to the terms and conditions that require agreement when using new computer or software products. To obtain consent, manufacturers might employ the omnipresent touchscreen, which comes installed in most new vehicles.

The issue is that this approach is far from being ideal, or even safe. The interface may not provide enough information to the driver, leading to confusion about the nature of the requests for agreement and their implications.

The problem is, most end users don’t read EULAs: a 2017 Deloitte study shows that 91 per cent of people agree to them without reading. The percentage is even higher among young people, with 97 per cent agreeing without reviewing the terms.

Unlike using a smartphone app, operating a car has intrinsic and sizeable safety risks, whether the driver is human or software. Human drivers need to consent to take responsibility for the outcomes of the software and hardware.

“Warning fatigue” and distracted driving are also causes for concern. For example, a driver, annoyed after receiving continuous warnings, could decide to just ignore the message. Or, if the message is presented while the vehicle is in motion, it could represent a distraction.

Given these limitations and concerns, even if this mode of obtaining consent is to move forward, it likely won’t fully shield automakers from their legal liability should the system malfunction or an accident occur.

Driver training for self-driving vehicles can help ensure that drivers fully understand system capabilities and limitations. This needs to occur beyond the vehicle purchase — recent evidence shows that even relying on the information provided by the dealership is not going to answer many questions.

All of this considered, the road forward for self-driving cars is not going to be a smooth ride after all.

A user's guide to self-driving cars

This article was originally published by the author on The Conversation, an independent and nonprofit source of news, analysis and commentary from academic experts.

You may remember the cute Google self-driving car. In 2014, the tech giant announced their brand-new prototype of what the future of transportation might one day look like. If you wish you could drive one today, you are out of luck: the design was unfortunately scrapped in 2017. But don’t worry, what happened didn’t make a dent in the plan of introducing the world to self-driving cars. Or do I mean autonomous cars, driverless cars, automated vehicles or robot cars?

Today’s cars offer a vast selection of driving aids. Relatively few models, however, come with advanced features like self- or assisted-parking technology and systems capable of taking over steering and acceleration in different driving situations. A recent report shows that despite an optimistic surge in market penetration of these systems, the general public is still on the fence when it comes to fully relying on them.

Systems of classification

In 2016, Mercedes-Benz released an ad for their new 2017 E-Class car. The ad, however, focused on their futuristic self-driving F 015 concept car driving around with the front and back-row passengers facing each other and using futuristic Minority Report-like displays. The ad came under attack by road safety advocates because it overstated “the capability of automated-driving functions available” on the E-Class. You may even spot the fine print: “Vehicle cannot drive itself, but has automated driving features.”

A similar controversy put Tesla at the centre of the debate in 2016, when it announced it would release self-driving capabilities over the air to its vehicles. Similar to what happened with Mercedes-Benz, the company was criticized for misleading advertising and “overstating the autonomy of its vehicles.”

Labelling expectations

When I buy a dishwasher, what I want is a machine that automates the manual task of washing dishes. All I need to do is push a button, and the machine will do its thing with no additional command or intervention. Now, believe it or not, a similar logic applies to automated driving systems. If I am told, shown, or given hints that the car might in fact drive itself, what do you expect I, as a human, will do?

Leaving aside related technical or ethical issues, from the perspective of someone who teaches and researches cognitive ergonomics and human factors, I can tell you that providing inaccurate, or even misleading, information on how automation works has direct safety consequences. These include using machines in unintended ways, reducing the level of monitoring or attention paid to their functions, and fully ignoring possible warnings. Some of these safety consequences were touched upon in the official investigation report following the first fatality involving a car with an automated driving system.

Informing consumers

What, you may wonder, are today’s drivers left to do?

A few things: first, before you drive a car equipped with autonomous or self-driving features, you might want to find out more about their actual capabilities and limitations. You can ask your dealership or do some good old online research. A valuable resource for consumers is MyCarDoesWhat.org, a website that presents the dos and don’ts of automated driving systems through helpful videos and links to manufacturers’ websites and user guides.

Finally, before using your car’s automated driving features in real traffic, you may want to familiarize yourself with how they work, how to engage them, and so on. Do all of this while stationary, perhaps parked in your driveway.

I know it may sound like a lot of work (and sometimes it may not even be sufficient), but as research and accident reconstruction have shown many times over, when you are at the wheel, the safest thing to do is to keep your mind and eyes on the road, instead of thinking about how a self-driving car might make your commute much simpler and more enjoyable.

How to make vehicle tech less distracting

In a recent entry, I talked about the role of training for automated vehicle aids.

In a study published in 2017 in collaboration with the AAA Foundation for Traffic Safety and the University of Utah, I investigated driver interaction with in-vehicle infotainment systems, the systems that allow drivers to, for example, make phone calls or send text messages without using their mobile devices.

One of the most striking findings from that study was that, although technologies like touchscreens and voice interaction systems have been around for many years, they are still challenging to use, at least for some groups of drivers.

Issues that we found with these touchscreens included relatively low responsiveness, cluttered menu designs, and long interaction times. In certain cases, for example, primary functions were buried deep in menus, or the design of the menu made frequently-used features almost invisible to the driver.

For voice technology, certain systems were overly verbose and, as a result, imposed a large memory load and required long interaction times.

One possible solution to this problem is using off-the-shelf systems like Android Auto and Apple CarPlay, which, in later research, were shown to burden drivers’ attentional resources to a lesser degree.

Another possible solution is to encourage drivers to familiarize themselves with this technology when the vehicle is stationary, which may help them find and use frequently-used functions more quickly and efficiently.

References

https://aaafoundation.org/visual-cognitive-demands-using-vehicle-information-systems/

https://aaafoundation.org/visual-cognitive-demands-apples-carplay-googles-android-auto-oem-infotainment-systems/

Training drivers to use autonomous systems

Inefficient or poorly-designed systems can diminish the potential safety benefits of vehicle automation. Despite this being a critical issue in road safety, little has been done to develop ways to optimize drivers’ use of driving aids.

In a recent study, Dr. Biondi contributed to the design of a driver training system that leveraged the principles of precision teaching to help drivers learn the capabilities and limitations of automated driving aids.

Precision teaching is an educational technique that takes frequent measurements of human behavior and feeds this information back to the learner so that they can optimize their learning.

In the study, Dr. Biondi presented drivers with information about the state and functioning of a lane keeping assistance system, a system that helps maintain the vehicle within its lane. When the vehicle was safely within the lane, positive feedback was given to the driver. Conversely, when the vehicle drifted out of the lane, warning signals were shown.
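
The study's implementation is not public, but the feedback rule lends itself to a simple illustration. Below is a hypothetical Python sketch, with an invented lane-width threshold and simulated readings, of how frequent lane-position measurements could be mapped to the positive feedback and warning signals described above.

```python
# A hypothetical sketch of the feedback rule described above. The study's
# code is not public, so the names, threshold and sampled values here are
# invented for illustration.

def lane_feedback(lateral_offset_m: float, lane_half_width_m: float = 1.8) -> str:
    """Map the vehicle's lateral offset from the lane centre (in metres)
    to precision-teaching-style feedback for the driver."""
    if abs(lateral_offset_m) <= lane_half_width_m:
        return "positive feedback"  # vehicle safely within the lane
    return "warning signal"         # vehicle drifting out of the lane

# Precision teaching relies on frequent measurement: sample the lane
# position at regular intervals and feed each result back to the driver.
for offset in (0.2, 0.9, 2.1, 0.4):  # simulated lane-position readings
    print(lane_feedback(offset))
```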

Results showed that the drivers who received the training made better and safer use of the system. Additionally, such behavioral improvements were maintained over time even when the training was no longer provided.

This indicates that sound Human Factors practices yield effective and safe adoption of autonomous systems.

References

Biondi et al. (2020). Precision teaching to improve drivers’ lane maintenance. Journal of Safety Research.

How to reduce distraction

As we all know, driver distraction is among the top causes of road collisions. It is in fact estimated that one in every four crashes involves phone use. While reducing the use of personal or vehicle technology is needed and feasible in many cases, there are, however, some exceptions to this rule.

Emergency vehicle operators, like ambulance or police car drivers, are in fact exempt from many restrictions under the Highway Traffic Act. There are also professions, like commercial driving, where the use of portable dispatch devices is part of the job description.

This brings us to the question: how can we reduce distraction in these workplaces?

One way could be by providing better training. Despite there being virtually no evidence that distraction can be fully trained away, cognitive research shows that extensive practice can reduce the attentional component of completing simple experimental tasks. Hypothetically, training programs could be developed that reduce the cognitive component of certain driving tasks.

A second possibility would be to design down the distracting effect of using communication technology, that is, to design technology that adopts modalities requiring lower cognitive, manual, or visual demand. In a recent study, we found that certain off-the-shelf infotainment systems were in fact "better" than the vehicles’ native technology.

These are possible avenues that should be explored when attempting to tackle the disruptive effect of distraction on road safety.

References

https://aaafoundation.org/wp-content/uploads/2018/06/AAA-Phase-6-CarPlay-Android-Auto-FINAL.pdf

Here We Are Again: The Human Factors of Voting

Human Factors determine how we, as humans, interact with a multitude of machines in every aspect of our lives. Despite Human Factors investigations playing a central role in fields like automotive and aviation, one field that too often fails to account for adequate Human Factors design is voting.

In the US mid-term election of 2018, Texas was at the center of a Human Factors fiasco, when its electronic voting machines flipped the vote to the opposite party’s candidates every time the voter opted for a straight-ticket ballot. This happened whenever the voter pressed keys before the page had fully loaded.

A similar issue is now happening in Georgia, where, as a result of a machine glitch, the voting machine touchscreen won’t display all candidates’ names on one single page.

Despite these being two separate issues, the root cause is the same: poor Human Factors.

Both user-experience issues can be traced back to the lacking or inadequate Human Factors testing conducted on the Georgia and Texas voting machines. Applying common Human Factors practices would undeniably have helped designers uncover these user issues early and address them before the software was deployed.

References

https://apnews.com/article/election-2020-senate-elections-technology-georgia-elections-af357b7ab7145033f11ee34a1bbf4a3c

https://www.dallasnews.com/news/2018/10/26/company-blames-texas-voters-problems-on-user-error-saying-its-machines-don-t-flip-straight-ticket-ballots/