Why AI systems fail without human-machine collaboration
Research shows that aligning workers and AI through co-evolving sociotechnical design prevents transformation failure and protects productivity
When IBM Watson entered the healthcare sector with grand promises, the technology giant invested billions of dollars in developing AI systems that would revolutionise medical diagnosis and treatment. The results were devastating. Watson recorded an 80% failure rate in healthcare applications, with little to no positive impact on patient care. The culprit was not the technology itself, but a fundamental misunderstanding of how humans and machines work together.
This failure illustrates a critical blind spot in how organisations approach digital transformation. Companies pour enormous resources into developing sophisticated AI systems while neglecting the human element that ultimately determines success or failure. The consequences extend far beyond wasted investment, affecting worker wellbeing, safety, and productivity in ways that many executives never anticipate.

The technocentric trap that's costing businesses billions
The business world has fallen into what researchers call a "technocentric fallacy" – the assumption that technology alone drives successful digital transformation. This mindset dominates corporate decision-making, with approximately $1 billion spent daily on advancing technology globally, yet most implementations fail to deliver promised benefits.
The pattern repeats across industries. Companies commission AI systems focused primarily on business effectiveness and efficiency, with minimal input from workers who will actually use these tools. The result is technology that operates in isolation from the social fabric of work, creating friction rather than flow.
Consider the case of predictive policing systems that use historical data to forecast crime patterns. Initially, these systems appeared successful, reducing crime in targeted areas. However, researchers discovered a troubling pendulum effect. When crime decreased in predicted hotspots, the algorithm interpreted this as evidence that the area was no longer high-risk, leading to reduced police presence. Criminals quickly adapted and crime surged again, creating a cycle in which "the police officers were sent 'from pillar to post' because developers did not understand how police officers used the algorithmic data in practice, and in turn, the police officers did not understand how their work practices unintentionally influenced the algorithmic data."
The breakdown occurred because both sides remained unaware of how their actions affected the system. This mutual ignorance created technology that undermined its own effectiveness.
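
This dynamic is easy to reproduce in a toy model. The short Python sketch below is purely illustrative (the areas, rates, and update rules are assumptions, not the real system), but it shows how a purely reactive allocation rule swings patrols from pillar to post:

```python
# Toy model of the "pendulum effect" (all numbers are illustrative assumptions).
def simulate_pendulum(steps: int = 8) -> None:
    crime = {"A": 10.0, "B": 2.0}  # assumed baseline incident rates per area

    for t in range(steps):
        # A reactive algorithm patrols wherever recent crime was highest.
        patrolled = max(crime, key=crime.get)
        for area in crime:
            # Patrol presence suppresses local crime; unpatrolled areas rebound.
            crime[area] *= 0.5 if area == patrolled else 2.0
        print(f"step {t}: patrol={patrolled}, A={crime['A']:.1f}, B={crime['B']:.1f}")

simulate_pendulum()
```

After a few steps the patrols lock into a cycle, chasing crime from one area to the other without suppressing it for long: each side's behaviour keeps invalidating the other's assumptions, which is exactly the mutual ignorance described above.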
The research behind better human-machine integration
A paper published in the Australian Journal of Management makes the case for revisiting and expanding long-established sociotechnical systems theory, a theory of work design which holds that the social aspects of work (e.g., leadership, culture, task allocation) and the technical aspects of work must be jointly optimised to achieve quality work.
The research team, led by Professor Sharon Parker, Director of Curtin University's Centre for Transformative Work Design in the Future of Work Institute, brought together experts in technology (e.g., artificial intelligence and computer science) and in the social sciences (e.g., organisational psychology, economics, sociology) through the research network QWiDA – Quality Work in a Digital Age.
Prof. Parker and her colleagues, including UNSW Sydney’s Scientia Professor Toby Walsh and Associate Professor Catherine Collins, have formulated an ambitious and critical research agenda for achieving quality work in the era of Industry 5.0.

“This paper lays the foundational framework for the QWiDA research network,” said Prof. Parker. “We introduce the concept of ‘co-evolving’ sociotechnical systems (CeSTS) as an integrating perspective for the future. Our co-evolving perspective highlights that both human and technical roles are undergoing continuous and interdependent change as they interact in a dynamic system. Our perspective extends the ‘human in the loop’ imperative from earlier work in the 1950s coal mines in two ways. First, CeSTS incorporates a time perspective in which people and technology adapt together in a process of continuous learning. Second, CeSTS incorporates a multilevel perspective of how changes in work unfold from individuals right through to government regulation.”
Why traditional approaches fall short
Current approaches to technology implementation typically follow a linear path: design the technology, install it, then help workers adapt to it. This sequence fundamentally misunderstands how successful digital transformation occurs. The researchers advocate that technology design, development, and use must co-evolve, with continuous adaptation by both human and machine elements.
How organisations think about AI transparency powerfully illustrates why this continuous adaptation is needed. While many companies advocate greater algorithmic transparency as a solution to human-machine collaboration challenges, transparency is becoming increasingly elusive because of the self-learning nature of AI technologies – even the designers of these systems often cannot articulate the rules underpinning algorithm-based decision-making. Thus, “transparency alone is not sufficient and will become increasingly impossible,” argues Prof. Parker. “Rather, organisations need to design systems that actively support human oversight and control, rather than simply providing information about how algorithms work.”
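
A minimal sketch of what designing for oversight might look like in code, assuming a simple confidence threshold (the threshold, case labels, and review queue below are illustrative assumptions, not a prescription from the research):

```python
# Hypothetical oversight gate: low-confidence decisions go to a person,
# rather than relying on explanations of how the algorithm works.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OversightGate:
    confidence_floor: float = 0.85  # assumed threshold for autonomous action
    review_queue: List[Tuple[str, str]] = field(default_factory=list)

    def decide(self, case_id: str, prediction: str, confidence: float) -> str:
        # Escalate uncertain outputs instead of auto-applying them.
        if confidence < self.confidence_floor:
            self.review_queue.append((case_id, prediction))
            return "escalated to human reviewer"
        return f"auto-applied: {prediction}"

gate = OversightGate()
print(gate.decide("case-1", "approve claim", confidence=0.97))
print(gate.decide("case-2", "deny claim", confidence=0.62))
print("pending human review:", gate.review_queue)
```

The point of the design is that escalation is structural: the human is given decisions to make, not just explanations to read.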
Another challenge is establishing the right degree of human trust in technology. Sometimes humans over-trust AI systems, even when the risks and uncertainty of relying on AI-assisted decision systems are clearly communicated. This complacency creates just as many problems as the opposite failure mode: distrusting technology so much that people do not even attempt to use it.
The research also highlighted how individual factors influence technology adoption over time. Workers with high confidence in AI systems engaged less in critical thinking, while those with high confidence in their own abilities maintained more analytical approaches. This suggests that experience levels significantly affect how workers interact with AI tools, with implications for training and system design.
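
Findings like these only become visible if reliance is actually measured. One rough, hypothetical way to do so from decision logs (the field names and toy data are assumptions) is to count complacent acceptances of wrong advice against unwarranted overrides of correct advice:

```python
# Hypothetical reliance metrics from a log of worker decisions versus AI advice.
def reliance_profile(log):
    over = sum(1 for r in log if r["followed_ai"] and not r["ai_correct"])
    under = sum(1 for r in log if not r["followed_ai"] and r["ai_correct"])
    return {"over_reliance": over / len(log), "under_reliance": under / len(log)}

decision_log = [
    {"followed_ai": True,  "ai_correct": True},
    {"followed_ai": True,  "ai_correct": False},  # complacent acceptance
    {"followed_ai": False, "ai_correct": True},   # unwarranted override
    {"followed_ai": False, "ai_correct": False},
]
print(reliance_profile(decision_log))  # {'over_reliance': 0.25, 'under_reliance': 0.25}
```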
The co-evolution solution: A new framework for success
The researchers proposed a "co-evolving sociotechnical systems" approach that expands thinking across time and levels of analysis. Rather than treating technology implementation as a one-time event, this framework recognises that both human and technical elements continuously adapt and influence each other.

The approach requires consideration of multiple levels simultaneously: individual workers and their capabilities, team dynamics and collaboration patterns, organisational leadership and systems, and broader societal factors including regulation and industry norms. Success depends on alignment across all these levels, not just getting the technology right.
A/Prof. Collins explains: "Achieving co-evolving sociotechnical systems requires interdisciplinary collaboration, methods that can track dynamic and emergent change, and a multi-stakeholder approach that both informs research and shapes change in work." This means involving workers, managers, designers, and other stakeholders throughout the entire technology lifecycle, from initial design through ongoing adaptation.
Practical steps for leaders and organisations
The research provides concrete guidance for executives and managers implementing AI and other digital technologies. First, organisations must expand their focus beyond technical specifications to include work design considerations from the earliest planning stages. This means involving workers in technology design decisions, not just training them to use finished systems. “Our framework emphasises the importance of preserving human agency and control,” said Prof. Parker. “It is an imperative that technologies be designed to augment human capabilities rather than to simply replace human judgment, maintaining what researchers call ‘human in the loop’ principles while accounting for the complexity of modern AI systems.”

Second, companies need to develop capabilities for continuous adaptation rather than treating implementation as a project with a defined endpoint. The co-evolutionary nature of human-machine systems requires ongoing monitoring and adjustment as both elements learn and change over time.
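
As a toy illustration of this second step, ongoing monitoring could be as simple as a rolling performance check that triggers a joint human-technical review when the system and its users drift apart (the class, window size, and threshold here are all hypothetical):

```python
# Hypothetical rolling check that flags when a deployed system needs review.
from collections import deque

class AdaptationMonitor:
    def __init__(self, window: int = 50, min_accuracy: float = 0.8):
        self.outcomes = deque(maxlen=window)  # recent correct/incorrect outcomes
        self.min_accuracy = min_accuracy

    def record(self, correct: bool) -> bool:
        # Log an outcome; return True when a joint review should be triggered.
        self.outcomes.append(correct)
        window_full = len(self.outcomes) == self.outcomes.maxlen
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return window_full and accuracy < self.min_accuracy

monitor = AdaptationMonitor(window=5, min_accuracy=0.8)
for outcome in [True, True, True, False, False, False]:
    if monitor.record(outcome):
        print("performance drift detected: schedule a sociotechnical review")
```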
Third, organisational teams responsible for technology need to include employees with both technical and social expertise. Successful technology design and implementation require an understanding of human factors, organisational dynamics, and broader contextual influences, not just technical capabilities.
So, what happens if we do not take this approach? “Poor technology implementation can reduce worker autonomy, increase stress, and impair safety,” said A/Prof. Collins. “Conversely, well-designed systems can enhance worker capabilities and improve organisational outcomes.”
Prof. Toby Walsh expands on this point, explaining that “poor technology implementation extends to wider societal issues, such as ethical violations and systemic bias”.
“For example, many of us are aware of Australia’s robodebt disaster, but it is less well known that similar technical and ethical failures occurred in the Netherlands with childcare benefits, in the US with unemployment benefits, in the UK with universal credit, in Canada with COVID payments, and in New Zealand with welfare payments.”
The research concluded with a stark warning about the current trajectory. “Industry and government are rushing to implement new technology without the evidence, care or safeguards needed to protect workers, or even to achieve the promised productivity and efficiency gains from the various technologies,” Prof. Parker observed. “The key message is that companies must recognise that work design quality is of paramount importance in Industry 5.0, with far-reaching consequences.”
The message for business leaders is clear: successful digital transformation requires equal attention to social and technical factors. Companies that continue to pursue technology-first approaches risk not only wasting their investment but also harming their workforce and broader organisational performance. The future belongs to organisations that can master the art of human-machine collaboration, creating systems where technology truly augments human capability rather than replacing human judgement.