The strategic impact of AI on business transformation

AI is rapidly disrupting business. Leaders must adapt, harness AI’s potential and navigate the ethical and regulatory challenges of its exponential growth

The pace of artificial intelligence (AI) development has accelerated dramatically in recent years, with potentially far-reaching implications for businesses across industries. But how ready are current and future leaders to harness AI’s potential and strategically navigate their organisations towards a more prosperous future?

The rapid advancement of AI is set to significantly disrupt businesses and skills across industries, according to George Shinkle, a Professor in the School of Management & Governance at UNSW Business School and AGSM @ UNSW Business School Fellow at UNSW Sydney. In a recent presentation, Prof. Shinkle discussed the potential impacts of AI on employment, business strategies, and the need for psychological resilience in the face of accelerating change.

“What are you going to do in your organisation to wrestle with some of these kinds of challenges and even paradoxes? I think it’s an exciting time, but it’s also a really uncertain time,” said Prof. Shinkle, who recently spoke on the subject of ‘a strategic journey into the AI-disrupted future’ in conjunction with Arch_Manu (ARC Centre for Next-Gen Architectural Manufacturing), whose mission is to address the productivity, performance and sustainability issues in Australia’s Architecture, Engineering and Construction (AEC) sector through a sector-wide digital transformation.

UNSW Business School's Professor George Shinkle said many leading tech companies are “racing like crazy to try to get to artificial general intelligence”. Photo: supplied

Exponential growth in AI capabilities

Prof. Shinkle emphasised the exponential nature of AI progress, noting that many struggle to grasp its implications. He cited investor Cathie Wood’s projections, which suggest artificial general intelligence (AGI) – AI with human-level capabilities – could arrive much sooner than previously thought: “Every launch of a new AI system, especially from OpenAI, changed that projection. The last announcement knocked the projection down to eight years away. So, it was eight years away. It’s basically 2029/2030,” he said.

This rapid advancement is driven by massive investments in computing power and ongoing improvements in AI algorithms. Prof. Shinkle noted that many leading tech companies are “racing like crazy to try to get to artificial general intelligence” due to its potential to rapidly lead to even more advanced AI systems.

He also highlighted the challenge of comprehending exponential growth: “We don’t think that way. We don’t know how to naturally think in exponentials. And even the people working in the AI space didn’t predict some of the things that happened at the speed in which they did.” To illustrate this point, Prof. Shinkle referenced Moore’s Law, which has accurately predicted the doubling of computing power roughly every two years since the 1970s.

He noted that while Moore’s Law applies to electronic chips, AI capabilities appear to be advancing even more rapidly. “Moore’s law, the doubling every two years, robots have been doubling in their capability every year since about 1980. Every year, not every two years. And generative AI has recently been doubling every six months,” he said.

Elon Musk has suggested that AI could make human jobs “optional hobbies” in the future as AI transforms the world of work. Photo: CC BY-NC 3.0

Potential impacts on jobs and management

The rapid progress in AI capabilities raises questions about its impact on human jobs and even management roles. Prof. Shinkle referenced comments by Elon Musk suggesting that AI could make human jobs “optional hobbies” in the longer term.

He also shared a provocative scenario from Boston Consulting Group imagining AI systems filling C-suite roles by 2030, noting the consultancy argued: “that’s actually better, because the AIs could indeed be less emotional, more intelligent, more informed, and able to process more data than human counterparts.”

While such scenarios remain speculative, Prof. Shinkle emphasised that AI is already demonstrating human-level or superhuman performance in many domains. He cited research showing AI outperforming human clinicians in medical diagnoses, and referenced Ethan Mollick who stated: “Things we thought were uniquely human a year ago are going to be done by machines in the coming years. Often at a superhuman level, we need to be ready for that to occur.”

Psychological resilience and AI-related change

In light of this, Prof. Shinkle emphasised the need for individuals to develop greater psychological resilience to cope with accelerating change. He argued that the pace of technological advancement will require continuous learning and adaptation throughout one’s career.

He quoted Professor Yuval Noah Harari who stated: “The pace of change will be so rapid, that even when you’re 40, 50 or 60, if you want to stay relevant, you will have to reinvent yourself in radical ways to relearn things, again and again. And this will create tremendous psychological stress.”

Learn more: The UNSW Business AI Lab

This need for ongoing reinvention represents a significant shift from traditional career models, where skills learned in youth often remained relevant for decades. Prof. Shinkle suggested that individuals and organisations – as well as societal institutions – need to prepare for this new reality of continual learning and re-learning.

Timelines on potential developments in AI

Prof. Shinkle emphasised the exponential nature of AI progress, cautioning that its impacts may be underestimated in the long term. He noted that while the exact timeline is uncertain, the effects could be substantial within just a few years:

“My personal belief is I don’t think AI is reliable enough to replace humans very quickly. But agents like chatbots will be, in specific tasks. So, where we can define the task precisely – what needs to be done and how to go about doing it – then we can certainly program an AI, particularly a RAG (retrieval-augmented generation) AI using company-specific data, to be capable of replacing or strongly augmenting humans,” he said.
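The RAG pattern Prof. Shinkle refers to can be sketched in a few lines. The example below is a deliberately minimal illustration, not his or any vendor’s implementation: real systems retrieve with vector embeddings and send the assembled prompt to a language model, whereas here the corpus, the keyword-overlap scoring and the prompt template are all hypothetical simplifications.

```python
# Minimal sketch of retrieval-augmented generation (RAG): fetch the most
# relevant company-specific documents for a query, then prepend them as
# context to the prompt. Scoring here is naive keyword overlap (illustrative
# only; production systems use embeddings and an LLM API).

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query and keep the top_k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt: retrieved context first, then the task."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Context:\n{context}\n\nTask: {query}"

# Hypothetical company documents standing in for internal knowledge
docs = [
    "Refund requests over $500 require manager approval.",
    "Support tickets must be answered within 24 hours.",
    "Office closes at 5pm on Fridays.",
]
print(build_prompt("How do I handle a refund request?", docs))
```

Because the task (answering from company policy) is precisely defined, even this crude retrieval step grounds the AI’s output in organisation-specific data – the property Prof. Shinkle argues makes narrow agents deployable sooner than general-purpose replacements.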

This rapid advancement is expected to cause significant disruption across industries. Prof. Shinkle cited research predicting widespread impacts: “The World Economic Forum predicts that by 2028, 44 per cent of workers’ skills will be disrupted. That does not mean jobs go away – skills will be disrupted, because some of those tasks can be done by AI,” he said.

Companies like OpenAI are trying to navigate tensions between rapid AI development and ensuring adequate safety measures. Photo: Adobe Stock

For Australia specifically, he referenced a Deloitte report indicating that a quarter of Australia’s economy faces significant and imminent disruption, with professional services facing a “short fuse with a big bang” in terms of AI’s impact.

The scale of disruption varies by country and economic context. The International Monetary Fund observed that the impact of AI varies “depending on what kind of country you’re in, and whether you’re in an emerging or a low-income country or an advanced economy,” he said. “So, in advanced economies like Australia, they’re saying that probably 60 per cent of jobs will be impacted by what AI can do in the coming few years.”

The importance of AI safety and ethics

While discussing the rapid advancement of AI, Prof. Shinkle also addressed the critical importance of AI safety and ethical considerations. He noted that as AI capabilities grow, so too do concerns about potential risks and unintended consequences.

There is also tension between rapid AI development and ensuring adequate safety measures, and Prof. Shinkle explained that companies like OpenAI are trying to deal with security and ethical concerns. “We have a managerial tension, because the AI companies are in competition, and now the competition is much more intense,” said Prof. Shinkle, who explained that this competitive pressure can sometimes lead to safety considerations being deprioritised in favour of faster development.

Read more: Resilience in AI leadership: In conversation with Stela Solar

However, he emphasised that AI companies still have strong incentives to address safety issues: “While they may want to do much more in safety, they’re all trying to deal with safety issues because they have a legal responsibility, and a fear of legal claims against them if they don’t,” he said.

Should governments regulate AI?

The challenge of regulating AI development is compounded by the rapid pace of technological advancement. Prof. Shinkle noted that government policies often lag behind technological developments: “We frequently observe that government policies and procedures lag what’s going on in the practice world, because experience helps to inform policy-making. Yet academics and technologists are working with the government to help them understand more, so they can try to get ahead of this curve. Governments across the world are establishing guidelines but it’s a difficult task to be precise and timely given the pace of change.”

This is made more complex by AI firms (often backed by influential investors and corporations with deep pockets) – some of whom are resisting government efforts to regulate developments in AI. In California, for example, the government recently made a number of amendments to a bill designed to prevent AI disasters (SB 1047) following pressure from Silicon Valley.

Adding to the complexity of AI regulation is the democratisation of AI development tools. “This cat is out of the box,” he said. “Because you don’t need to be a rocket scientist to do AI today. If you’ve got a powerful laptop and an internet connection, you can do it in your tiny apartment. You can build your own AI, based on the open-source code that is out there and easily available.”

This widespread availability of AI development tools makes it challenging to implement comprehensive safety measures across the field. It also underscores the importance of fostering a culture of responsible AI development within organisations and the broader tech community.

Government policies often lag behind technological developments, which adds to regulatory challenges associated with AI development. Photo: Adobe Stock

Strategic implications for businesses

The presentation underscored the need for business leaders to closely monitor AI developments and consider their strategic implications. While the exact timeline remains uncertain, the pace of progress suggests AI could reshape many aspects of business sooner than many anticipate.

Prof. Shinkle emphasised the challenge this uncertainty poses for strategists: “One of the challenges from a strategy standpoint for me is that we don’t know how to tell how much time we have until some substantial progress is going to occur. Next, what we can say is, it looks like it’s happening pretty fast. And it’s speeding up. And it’s exponential,” he said. “So maybe we should pay attention to it.”

He urged business leaders to consider how AI might impact their industries and operations, and to begin planning for a future where AI plays an increasingly prominent role. He suggested that organisations should explore how they can leverage AI technologies like AI agents and RAG systems to enhance productivity and gain competitive advantages.

Given the pace and scale of potential disruption, Prof. Shinkle argued that businesses need to proactively adapt their strategies to leverage AI capabilities. He emphasised the importance of developing “absorptive capacity” – the organisational ability to recognise, assimilate and commercialise new external information and technologies.

Read more: James Cameron on how AI will impact creativity and innovation

He also cautioned against over-reliance on external consultants. “One of my biggest fears in this space is that many organisations have outsourced so much of their brains, that they no longer have the capability to do good things inside. And well, I love consultants, I was a consultant for 10 years. But if you outsource all of your brain to consultants, then you’re left with no brains in your own place,” stated Prof. Shinkle, who advised businesses to build internal capabilities to understand and implement AI. This internal capacity is crucial for identifying valuable AI applications specific to the organisation.

Time is ticking for adopting AI in business

Many organisations are looking to integrate AI into their operations with a view to securing competitive advantage, and Prof. Shinkle highlighted the importance of making a start in this area. “If you wait too late to start, it will be really hard to catch up. It’s really expensive, and maybe even impossible, to catch up if you wait too long,” he said.

To guide AI implementation, Prof. Shinkle outlined three levels of potential strategies from a recent BCG report:

  1. Deploy AI for specific tasks within the organisation
  2. Reshape entire processes end-to-end using AI capabilities
  3. Invent new products, experiences or business models enabled by AI

While most Australian firms are working on the first level, fewer have tackled process reshaping or new business model creation – areas that may offer significant competitive advantages, according to Prof. Shinkle, who called on businesses to increase their engagement with AI technologies and strategic planning. “Less than half of firms reported doing much more than dabbling,” which he found concerning given the potential for business disruption as well as opportunity.

Professor Shinkle uses gen AI in the classroom to help students develop future scenarios, so they learn prompt generation and more strategic thinking. Photo: Adobe Stock

“If you’re a strategic manager in an organisation, imagine if you could boost every one of your employees’ IQ. What’s that worth to you? What more could you do? What improved services could you provide to your clients, for example, not just cut costs, but what more could you do if you have that kind of capability?” he asked.

This framing encourages leaders to think beyond cost-cutting and efficiency gains, and to consider how AI could enhance their organisation’s overall capabilities and value proposition. And while some specific AI technologies may be overhyped, he said the overall potential in the future of AI is likely underappreciated by many businesses.

AI in business and the classroom – now and in the future

To help organisations begin their AI journey, Prof. Shinkle discussed several practical tools and frameworks. One of the frameworks he highlighted was “futuring”, which involves zooming out in time and thinking about what could happen down the track, and then zooming back in “to decide what we should do in the short-term, to prepare for the plausible scenarios of the future”, he explained.

He also mentioned the use of business model analysis and strategic resilience planning as key components of AI strategy development, and highlighted the potential of AI to enhance strategic planning itself: “I do want to highlight one thing though: in my classroom, we’re now using gen AI to develop scenarios of the future,” he said. “Whereas two years ago, we trained students on how to do the traditional manual process of evaluating uncertainties and trends and building them into scenarios. Students can now create a really nice set of scenarios to help their strategic thinking using gen AI – with only a few prompts and some guidance on the process of iterations in the prompts.”

Subscribe to BusinessThink for the latest research, analysis and insights from UNSW Business School

Prof. Shinkle wrapped up the presentation by stating that businesses need to remain competitive in this rapidly evolving landscape, and he urged them to critically examine whether they are paying enough attention to AI and its strategic implications. “The exponential pace of AI technology will require new ways of thinking, new ways of working, new ways of educating, and new ways of re-educating on an ongoing basis,” he concluded.

Arch_Manu is the ARC Centre for Next-Gen Architectural Manufacturing, whose mission is to address the productivity, performance and sustainability issues in Australia’s Architecture, Engineering and Construction (AEC) sector through a sector-wide digital transformation. The centre is a transdisciplinary government-funded research and training centre with strong industry partnerships through 25 industry-embedded PhD candidates across three Australian universities. Arch_Manu plans additional sessions in the areas of data security, data privacy and ethical AI considerations.
