Automation

COVID-19 has fuelled automation — but human involvement is still essential

This article was originally published on The Conversation on February 7, 2021.

The COVID-19 pandemic has disrupted the way we work and interact with machines — and people — in the workplace. The surge in remote working brought on by the pandemic has magnified the need for unmanned work operations. More automation, however, does not always make the workplace more efficient.

Industries that have heavily relied on manual operations, like warehouses or meat-packing plants, are now introducing more automated or tele-operated systems. Unlike traditional, manually operated machines, in tele-operation the human operator sits in a remote location, away from the machine they control.

These trends are in part an attempt to address the high rates of COVID-19 among factory workers.

Despite the unquestionable benefits of automation, however, simply adopting a technology-driven approach aimed at replacing all manual operations with robots is not a viable fix.

Human-machine interaction

For decades, what’s known as human factors, a discipline at the intersection of cognitive science, engineering and kinesiology, has investigated human-machine interaction in the workplace, with the goal of understanding the benefits and unintended consequences of automation. Among the phenomena being investigated is what’s known as the paradox of automation.

The paradox of automation — also known as the paradox of technology — occurs when introducing an automated system adds to, rather than reduces, the workload and responsibilities of the human operator.

This is because automated systems often require more knowledge from the human operator, as well as supervision and intervention whenever something goes wrong.

A case in point is airport security screening. The industry has been undergoing an automation revolution for decades now. Yet data shows that failure rates in security screening are still as high as 95 per cent. Why?

Travellers wear face masks while passing through a security checkpoint at Denver International Airport in December 2020. (AP Photo/David Zalubowski)

The answer to this question is less about the technology, and more about the fact that system developers too often ignore or overlook the human factor.

In other words, a technology-centred approach is adopted over a human-centred one.

Ignoring or undervaluing human factors in automation not only makes systems difficult to use but, more importantly, hinders safety.

Recognize boundaries

One solution is to develop systems that help automate manually intensive operations while also accounting for known boundaries in human cognition, like the inability to multi-task effectively or to sustain attention on a given task for long periods of time.

Automated systems must account for human limitations, but not cut humans out entirely. (Pixabay)

Like many other innovations born out of challenging times in human history, the push for more automation and tele-operation triggered by the COVID-19 pandemic must come with the promise of more efficient and safer workplace operations.

But rather than relying solely on what’s technologically possible, system developers must put human beings at the centre of automation design instead of relegating them to its periphery.

This article is republished from The Conversation, a nonprofit news site dedicated to sharing ideas from academic experts. It was written by Francesco Biondi, University of Windsor.

The Swiss Cheese Model of Human Error

I recently read a New York Times article discussing the Swiss Cheese Model of Pandemic Defense. The article used James Reason’s Swiss cheese model of human error to describe the concerted response to the COVID-19 pandemic.

The model uses the analogy of Swiss cheese to represent the layers of defense available against a possible threat, be it human error in transportation or a global pandemic.

Each slice represents a possible line of defense. But, like Swiss cheese, each layer has holes, and each hole introduces a vulnerability into the system. Harm occurs only when the holes in every layer line up, letting a threat pass through all the defenses at once.

This framework can easily be applied to human interaction with complex systems in virtually any Human Factors application.

In healthcare, for example, the fatal administration of the wrong drug could be caused by a chain of failures: two different drugs have similar packaging, and the healthcare professional administering the drug is distracted or poorly trained to notice the difference between the two, and so winds up giving the wrong one to the patient.

In autonomous vehicles, poor operational design of the system (a hole), combined with a poor human-machine interface (another hole), leaves the driver unsure about the capabilities of the system (yet another hole), and the driver winds up misusing the system (error).

This model is a useful Human Factors tool for identifying everything that can go wrong in human-machine interaction. It also offers a framework for shrinking the size of the holes or removing them altogether.
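To make the layered-defense logic concrete, here is a minimal Python sketch. The layer names and stop probabilities are invented for illustration, not taken from Reason’s work or any study; the point is only that independent layers multiply down the residual risk, which is why shrinking any single hole pays off across the whole system.

```python
import random

# Swiss cheese model sketch: each defensive layer stops a threat with
# some probability; harm occurs only when a threat slips through a
# "hole" in every layer. All names and numbers below are illustrative.
layers = {
    "system design": 0.90,       # probability this layer stops the threat
    "interface warnings": 0.80,
    "operator training": 0.70,
    "operator attention": 0.60,
}

def threat_gets_through(layers):
    """True only if the threat passes the hole in every slice."""
    return all(random.random() > p_stop for p_stop in layers.values())

trials = 100_000
harms = sum(threat_gets_through(layers) for _ in range(trials))
print(f"Residual risk per threat: {harms / trials:.4%}")
# Analytically: 0.10 * 0.20 * 0.30 * 0.40 = 0.0024, i.e. 0.24 per cent.
```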

Who’s to blame when a self-driving car has an accident?

This article was originally published on December 1, 2020 on The Conversation.

With self-driving cars gaining traction in today’s automobile landscape, the issue of legal liability in the case of an accident has become more relevant.

Research in human-vehicle interaction has shown time and again that even systems designed to automate driving — like adaptive cruise control, which maintains the vehicle at a certain speed and distance from the car ahead — are far from being error-proof.

Recent evidence points to drivers’ limited understanding of what these systems can and cannot do (also known as mental models) as a contributing factor to system misuse.

A webinar on the dangers of advanced driver-assistance systems.

There are many issues troubling the world of self-driving cars, including less-than-perfect technology and lukewarm public acceptance of autonomous systems. There is also the question of legal liability. In particular, what are the legal responsibilities of the human driver and of the car maker that built the self-driving car?

Trust and accountability

In a recent study published in Humanities and Social Sciences Communications, the authors tackle the issue of over-trusting drivers and the resulting system misuse from a legal viewpoint. They look at what the manufacturers of self-driving cars should legally do to ensure that drivers understand how to use their vehicles appropriately.

One solution suggested in the study involves requiring buyers to sign end-user licence agreements (EULAs), similar to the terms and conditions that require agreement when using new computer or software products. To obtain consent, manufacturers might employ the omnipresent touchscreen, which comes installed in most new vehicles.

The issue is that this is far from ideal, or even safe: the interface may not provide enough information to the driver, leading to confusion about the nature of the requests for agreement and their implications.

The problem is, most end users don’t read EULAs: a 2017 Deloitte study shows that 91 per cent of people agree to them without reading. The percentage is even higher among young people, with 97 per cent agreeing without reviewing the terms.

Unlike using a smartphone app, operating a car has intrinsic and sizeable safety risks, whether the driver is human or software. Human drivers need to consent to take responsibility for the outcomes of the software and hardware.

“Warning fatigue” and distracted driving are also causes for concern. For example, a driver, annoyed after receiving continuous warnings, could decide to just ignore the message. Or, if the message is presented while the vehicle is in motion, it could represent a distraction.

Given these limitations and concerns, even if this mode of obtaining consent is to move forward, it likely won’t fully shield automakers from their legal liability should the system malfunction or an accident occur.

Driver training for self-driving vehicles can help ensure that drivers fully understand system capabilities and limitations. This needs to occur beyond the vehicle purchase — recent evidence shows that even relying on the information provided by the dealership is not going to answer many questions.

All of this considered, the road forward for self-driving cars is not going to be a smooth ride after all.

A user's guide to self-driving cars

This article was originally published by the author on The Conversation, an independent and nonprofit source of news, analysis and commentary from academic experts.

You may remember the cute Google self-driving car. In 2014, the tech giant announced its brand-new prototype of what the future of transportation might one day look like. If you wish you could drive one today, you are out of luck: the design was unfortunately scrapped in 2017. But don’t worry, what happened didn’t make a dent in the plan of introducing the world to self-driving cars. Or should I say autonomous cars, driverless cars, automated vehicles or robot cars?

Today’s cars offer a vast selection of driving aids. Relatively few models, however, come with advanced features like self- or assisted-parking technology and systems capable of taking over steering and acceleration in different driving situations. A recent report shows that despite an optimistic surge in the market penetration of these systems, the general public is still on the fence when it comes to fully relying on them.

Systems of classification

In 2016, Mercedes-Benz released an ad for its new 2017 E-Class car. Rather than the E-Class itself, the ad focused on the company’s futuristic self-driving F 015 concept car, driving around with front and back-row passengers facing each other and using futuristic Minority Report-like displays. The ad came under attack from road safety advocates because it overstated “the capability of automated-driving functions available” on the E-Class. You may even spot the fine print: “Vehicle cannot drive itself, but has automated driving features.”

A similar controversy put Tesla at the centre of the debate in 2016, when it announced it would release self-driving capabilities over-the-air to its vehicles. Similar to what happened with Mercedes-Benz, the company was criticized for misleading advertising and “overstating the autonomy of its vehicles.”

Labelling expectations

When I buy a dishwasher, what I want is a machine that automates the manual task of washing dishes. What I need to do is just push a button and the machine will do its thing with no additional command or intervention. Now, believe it or not, a similar logic applies to automated driving systems. If I am told — or shown or suggested or hinted — that the car might in fact drive itself, what do you expect I, as a human, will do?

Leaving aside related technical or ethical issues, from the perspective of someone who teaches and researches cognitive ergonomics and human factors, I can tell you that providing inaccurate, or even misleading, information on how automation works has direct safety consequences. These include using machines in unintended ways, reducing the level of monitoring or attention paid to their functions and fully ignoring possible warnings. Some of these safety consequences were touched on in the official investigation report following the first fatality involving a car with an automated driving system.

Informing consumers

What, you may wonder, are today’s drivers left to do?

A few things: First, before you drive a car equipped with autonomous or self-driving features, you might want to find out more about its actual capabilities and limitations. You can ask your dealership or do some good old online research. A valuable resource for consumers is MyCarDoesWhat.org. With helpful videos and links to manufacturers’ websites and user guides, it presents the dos and don’ts of automated driving systems.

Finally, before using your car’s automated driving features in real traffic, you may want to familiarize yourself with how they work, how to engage them, etc. Do all of this while stationary, when parked in your driveway perhaps.

I know it may sound like a lot of work (and sometimes it may not even be sufficient), but as research and accident reconstruction have already shown many times over, when you are at the wheel, the safest thing to do is to keep your mind and eyes on the road, instead of thinking about how a self-driving car might make your commute much simpler and much more enjoyable.

Training drivers to use autonomous systems

Inefficient or poorly designed systems can diminish the potential safety benefits of vehicle automation. Despite this being a critical issue in road safety, little has been done to develop ways to optimize drivers’ use of driving aids.

In a recent study, Dr. Biondi contributed to the design of a driver training system that leveraged the principles of precision teaching to help drivers learn the capabilities and limitations of automated driving aids.

Precision teaching is an educational technique that takes frequent measurements of the learner’s behavior and feeds this information back to them so that they can optimize their learning.

In the study, Dr. Biondi presented drivers with information about the state and functioning of a lane keeping assistance system, a system that helps maintain the vehicle within its lane. When the vehicle was safely within the lane, positive feedback was given to the driver. Conversely, when the vehicle drifted out of the lane, warning signals were shown.
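As a concrete illustration of this measure-and-feed-back loop, here is a minimal Python sketch. The lane width, thresholds and messages are invented for illustration; they are assumptions, not the actual parameters of the system used in the study.

```python
# Sketch of a precision-teaching style feedback loop for lane keeping.
# All thresholds and messages below are assumptions for illustration.

LANE_HALF_WIDTH_M = 1.8  # assumed distance from lane centre to boundary

def feedback(lateral_offset_m: float) -> str:
    """Map the vehicle's lateral offset from the lane centre to feedback."""
    if abs(lateral_offset_m) < 0.6 * LANE_HALF_WIDTH_M:
        return "positive: vehicle safely centred in the lane"
    elif abs(lateral_offset_m) < LANE_HALF_WIDTH_M:
        return "caution: drifting toward the lane boundary"
    else:
        return "warning: vehicle departing the lane"

# A simulated drive: lateral offsets sampled at regular intervals,
# each immediately translated into feedback for the driver.
for offset_m in [0.1, 0.4, 1.2, 1.9, 0.3]:
    print(f"offset {offset_m:+.1f} m -> {feedback(offset_m)}")
```

The key design choice, frequent measurement paired with immediate feedback, is what distinguishes precision teaching from one-off instruction.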

Results showed that the drivers who received the training made better and safer use of the system. Additionally, such behavioral improvements were maintained over time even when the training was no longer provided.

This indicates that sound Human Factors practices yield the effective and safe adoption of autonomous systems.

References

Biondi et al. (2020). Precision teaching to improve drivers’ lane maintenance. Journal of Safety Research.

The Danger of Vehicle Automation

Incorrect or incomplete understanding of vehicle automation is detrimental to safety. Evidence shows that drivers with limited or flawed mental models are in fact more at risk of misusing vehicle automation and, in turn, of road collisions.

Watch Dr. Biondi’s talk to find out about the Human Factors issues of misusing vehicle automation.

Yet another case of vehicle automation misuse!

Unfortunately, this will not sound like news to many (me included), but yet another Tesla driver was caught napping behind the wheel of a semi-autonomous vehicle in Edmonton, Alberta.

How come this isn’t news, you ask? Well, there are now countless examples of erratic, unsafe drivers blatantly misusing (and abusing) vehicle automation.

The National Transportation Safety Board’s investigations following a handful of fatal and nonfatal collisions involving Tesla Autopilot reported that driver inattention and over-reliance on the system, coupled with the system’s operational design, contributed to these collisions.

Also, this summer, German regulators ruled that the Autopilot name misleads motorists into believing that Autopilot is in fact an autopilot, which it is not.

Efforts from the American Automobile Association and others have recently contributed to the development of a naming convention for semi-autonomous systems that aims to help consumers make educated decisions when purchasing a vehicle and reduce the likelihood of misusing its systems.

Much has been done thus far to promote the safe adoption of these systems. My research, and that of others, has contributed to a better understanding of how Human Factors affect drivers’ adoption of autonomous and semi-autonomous systems. Transportation agencies and road safety stakeholders are pushing for safe regulations too. But much more needs to be done.

References

Biondi et al. (2018). 80 MPH and out-of-the-loop: Effects of real-world semi-automated driving on driver workload and arousal. https://doi.org/10.1177/1541931218621427

CBC (2020). Speeding Tesla driver caught napping behind the wheel on Alberta highway. https://www.cbc.ca/news/canada/edmonton/tesla-driver-napping-alberta-speeding-1.5727828
