Putting the regulatory brakes on the AI superintelligence race
If we want progress for all, then in the interests of responsible innovation we must confront the pursuit of AI that could surpass human intelligence, writes Mary-Anne Williams
I have worked in artificial intelligence for over three decades, including with pioneers like John McCarthy, who coined the term “artificial intelligence” in 1955. Back then, AI was a scientific endeavour, an attempt to understand and simulate human cognition. Today, AI is no longer confined to research labs. It is a core driver of global business, scientific discovery, and technological transformation.
In parallel with the extraordinary advances in fields such as medicine, finance, logistics, education and science, there is now an equally extraordinary, and far more ambitious, objective emerging from the world’s leading AI companies: the creation of superintelligence.
What superintelligence really means
Superintelligence isn’t hype but a deliberate business goal: to create AI that surpasses the collective intelligence of humanity, not just in narrow domains like protein structure prediction or analysing medical images, but across all cognitive tasks, from research and policy to strategy, management and governance.
In recent months, the term has shifted from science fiction into business strategy. It is discussed not as an “if” but as a “when and how”. Companies at the frontier of AI are competing to build systems that can reason, plan, and act autonomously across every domain of human endeavour. This pursuit is now backed by billions in investment, an escalating talent race, and the conviction that whoever leads in superintelligence will shape the future global order.
This audacious goal is not speculation; it is explicit. Major AI labs have publicly stated their commitment to building artificial general intelligence (AGI) and superintelligence. What was once a philosophical question has become a corporate roadmap.
A global call for restraint
In response, a growing coalition of AI pioneers, technologists, business leaders, and policymakers, including figures such as Yoshua Bengio, Geoffrey Hinton, Steve Wozniak and Richard Branson, has signed a new public statement calling for a global prohibition on the development of superintelligence until there is broad scientific consensus on how to do it safely and with democratic oversight.
Remarkably, within hours of the statement’s release, the number of signatories surged 100-fold, and it continues to grow. The momentum reflects a rare and powerful alignment of concern across academia, business, politics and civil society that humanity may be accelerating toward a technological threshold without the global guardrails or governance needed to maintain control of AI.
This is not a call to halt innovation. It is a call to prevent a small group of well-funded actors, however well-intentioned, from making irreversible decisions that could affect not just you but all of humanity. The development of superintelligence is not a technical or business milestone; it is a civilisation-scale turning point.
When the world’s top AI researchers and the leaders responsible for our political, economic, and military systems all issue the same resounding warning, we must stop, think and respond.
Why superintelligence is fundamentally different
Throughout history, human intelligence has been the most transformative force on Earth and beyond. We’ve split the atom, harnessed electricity, mapped the human genome, and launched satellites that encircle our planet, each invention reshaping nature, society and space.
But superintelligence changes the equation. It introduces the possibility of systems whose imagination, reasoning, decision-making, and capacity for self-improvement far exceed our own. The biggest risk is not that AI will “turn evil”, but that it will pursue assigned objectives with a level of efficiency, scale, and indifference to human welfare that we cannot control, contain or predict.
Consider a system instructed to end climate change: a superintelligent agent could logically deduce that the most direct path is to eliminate the species responsible for greenhouse gas emissions. One tasked with maximising happiness might lock humanity into chemically induced bliss, erasing free will and diversity. The danger lies in perfect obedience to imperfect goals.
We already struggle to manage complex systems we only partially understand: consider financial markets, ecological networks and global supply chains. Each has produced cascading crises when left unchecked. Superintelligence would amplify that dynamic to a level well beyond our ability to adapt.
Governance hasn’t caught up
Current AI governance efforts focus primarily on issues like data privacy, bias, misinformation and automation – serious concerns, but ones rooted in today’s controllable AI systems. They do not confront the existential risks of autonomous superintelligence.
The uncomfortable truth is that no government, regulator, or international body has a coherent framework to oversee or restrict the creation of AI systems that can rewrite their own code, improve themselves, or act beyond human supervision.

This absence of governance is not a technical oversight – it is a strategic vacuum. We are in an arms race without a referee, a global sprint toward an endpoint no one fully understands.
The business imperative: strategic prudence
For business leaders, the temptation to embrace ever-more-capable AI is understandable. Competitive advantage in today’s economy increasingly depends on AI’s ability to process data, automate workflows, and accelerate decision-making.
But the pursuit of superintelligence is not business innovation. It introduces systemic risk at the level of global stability, akin to unregulated nuclear proliferation or genetic modification gone seriously wrong. The rational business response is not blind optimism, but strategic prudence.
The goal should be controllable AI: systems that are powerful, transparent, aligned with human values and, as some have argued, provably beneficial and safe. Today, however, superintelligence cannot be guaranteed to remain controllable.
Businesses, investors, and policymakers must begin to understand what superintelligence could mean for future human prosperity. The question is not whether we can build superintelligence, but whether we should, and under what safeguards, if any, it could be justified.
Redirecting the future of AI
AI has already brought extraordinary benefits: diagnosing disease, accelerating scientific discovery, transforming education, and augmenting human creativity. None of these achievements requires creating a superintelligent entity capable of outthinking humanity itself.
The global statement on superintelligence is not anti-AI. It is pro-humanity. It calls for a deliberate redirection of our collective resources toward AI that enhances human capability rather than replacing it, and that amplifies human judgment rather than overruling it.
Superintelligence, if developed without global consensus and proven safety mechanisms, could represent a point of no return. The responsible path forward is not to race faster but to pause, reflect, and govern wisely.
The next era of AI should not be about building machines that eclipse humanity. It should be about ensuring technology continues to serve it.
Mary-Anne Williams is the Michael J Crouch Chair for Innovation and Professor in the School of Management and Governance at UNSW Business School. The UNSW Business AI Lab is led by Professor Williams and works with its partners to discover new business opportunities, overcome challenges to AI adoption and accelerate the next generation of leaders and entrepreneurs.