Why do we trust failing humans more than we trust flawless AI?

Although humans fail more often than artificial intelligence, we are more forgiving of people, writes UNSW Business School’s Frederik Anseel

Will artificial intelligence make universities redundant? Universities are thinking hard about how to reinvent themselves now that AI is developing so quickly. But typical predictions underestimate how difficult the collaboration between AI and humans will become. We are blinded by the dizzying technological possibilities of AI, yet the limitations may have nothing to do with technology and everything to do with the brain. Human psychology will set limits on what we can, and want to, leave to AI.

Simply put, the motto of the human brain is: “What we do ourselves, we do better.” An example illustrates this. The Achilles heel of self-driving cars, which are on the verge of a breakthrough, is fatal accidents. Accidents will inevitably happen. But the bar is set much higher for technology than for humans. In Australia, more than 1000 people die on the roads every year. Yet we do not ban people from driving. One fatality from a self-driving car will lead to calls to ban AI in cars. Why?

The difference between judging humans and AI

In How Humans Judge Machines, data scientist César Hidalgo explains that we judge machines and people differently. When we use a machine, we focus on its performance. Every error of judgment by that program makes us lose trust in it.

In most cases, AI will make better decisions than a human, but that doesn't matter. With humans, we take a person's intentions and emotions into account. If we trust a person, we assume they will not deliberately run someone else off the road.

UNSW Business School Dean Frederik Anseel said people appear to perform better with the help of AI, until they discover that there is an AI system behind the advice. Photo: supplied

Although human behaviour fails more often than AI, we are more forgiving of humans – especially when we are given a good story for that human failure. We understand that accidents can easily happen. Once we factor intentions and feelings into our mental assessment of decisions, we become more tolerant of mistakes. In short, we expect AI machines to be rational and humans to be human.

Having an AI bot as a colleague

In our own research, we find that people respond positively to digital algorithms at work when they believe the algorithms were put in place to help them. When they believe the algorithms are meant to control them, we see an increase in burnout. In that study, we assumed that a manager decides to use algorithms and has good or bad intentions. But what if there is no human intention anymore?

In the future, we will work with AI bots as if they were human colleagues. For many food delivery companies, AI management is already a reality. Schedules and delivery routes are dictated by an AI app. Studies show that food delivery employees do not trust AI management because it does not empathise with what they themselves experience on the road.

This leads to paradoxical situations. People appear to perform better with the help of AI, until they discover that there is an AI system behind the advice. Then those same people suddenly perform better only with advice from a human colleague. This does not only happen in the food delivery industry. In a recent study, doctors were given cases to diagnose, with half of the group getting access to GPT-4 to help. The control group scored 73% in diagnostic accuracy, while the GPT-4 group scored 77% – apparently no big difference between doctors working on their own and AI users. But here is the twist: GPT-4 alone, without humans, scored 88%. The doctors, it seems, did not change their opinions when working with AI.


Empathy and expertise

I’m repeating a point I have made before: the more digital our work becomes, the more human we need to be. For education, this means that universities need to focus on combining social skills with domain expertise.

Empathy, listening and connecting people are essential for building confidence in good intentions in an AI environment. Domain expertise is essential for understanding how algorithms can lead to better decisions; without it, consumers, doctors, patients and colleagues have no basis for trusting those decisions. AI bots are faster, more accurate and more rational, but not empathetic, emotional or socially sensitive. It is precisely these skills that make people effective.

Frederik Anseel is a Professor of Management and Dean of UNSW Business School. He studies how people and organisations learn and adapt to change, and his research has been published in leading journals such as Journal of Applied Psychology, Journal of Management, American Psychologist, and Psychological Science. A version of this post was first published in De Tijd.
