Tomorrow's decisions need yesterday's wisdom and today's AI
While AI can reduce scenario planning from a days-long process to minutes, organisations still need human judgment to make sense of machine-generated futures
Organisations once spent days developing scenarios to prepare for plausible futures. Royal Dutch Shell, for example, used scenario planning in the 1970s to navigate the oil crisis, demonstrating how imagining futures helped companies make decisions when facing uncertain environments. Today, however, Shell uses machine-learning models to test alternative energy-demand trajectories, enabling faster iteration and deeper linkage between global data and storyline development.
This reflects an increasingly common trend in the business world, as organisations utilise AI for scenario planning. A recent World Economic Forum and OECD report, for example, found that business professionals use AI predominantly for trend analysis or clustering (69%), scenario development (63%) and horizon scanning (60%). The research, which surveyed 167 foresight experts from 55 countries in mid-2025, also exposed a gap in confidence across sectors. Some 93% of private sector respondents reported having the skills to use AI in their foresight work, compared to only 53% of public sector respondents.
World events, such as geopolitical conflicts, economic shocks, pandemics, and technology breakthroughs, have intensified pressures for organisations to adopt foresight approaches that engage with uncertainty. Yet the availability of AI-generated scenarios creates a paradox: while AI excels at data-driven analysis, it can overlook qualitative aspects and fail to capture the nuances of why organisations undertake scenario analysis in specific contexts, according to recent research.
“In that sense, AI is quietly democratising foresight,” said UNSW Business School Professor George Shinkle, who recently co-authored a research paper that explores how AI is assisting with the process of scenario planning. “Tools that were once the preserve of specialist consultants and a handful of strategy teams are now available to any manager with a ‘smart device’ and a question about the future. What used to be a two-day offsite can now start with a 20-minute AI conversation that surfaces a dozen plausible futures. The real differentiation moves from who can generate scenarios to who can interpret them and define action.”

Addressing gaps in strategy literature
Researchers at UNSW Business School recognised this gap and investigated two questions: how to include humans in the process when generating scenarios using AI, and how organisations should respond once they possess a set of scenarios.
In their resulting paper, Scenario analysis in the AI era: Redefining human involvement, the research team discovered that existing literature fell short in providing guidance. As the researchers noted, "the most critical question of what organisations should do once they have a set of plausible future scenarios, is ambiguous in the literature." This ambiguity proved particularly acute in strategy work compared to contingency and continuity planning.
The research team reviewed literature, examined practice discussions, and experimented through seven course deliveries in an Executive-MBA program at UNSW Business School’s Australian Graduate School of Management (AGSM), along with more than a dozen consultant-based scenario development sessions. This approach enabled them to improve outcomes and understanding through cycles of planning, acting, observing, and reflecting.
The investigation resulted in two contributions published in Organizational Dynamics. Lead author Prof. Shinkle collaborated with Adjunct Associate Professor Patrick Sharry and PhD candidate Chirag Gujarati, who is associated with Arch_Manu (the ARC Centre for Next-Gen Architectural Manufacturing), whose mission is to address productivity, performance and sustainability issues in Australia’s Architecture, Engineering and Construction (AEC) sector through a sector-wide digital transformation.
Three steps to AI-generated scenarios
The research generated a number of important outputs – the first of which is a scenario analysis guidance tool that provides useful approaches for enhancing AI-generated scenarios. The tool focuses on three aspects: prompting, testing, and tuning.
For prompting, organisations can choose from three approaches. The single-shot prompt asks AI to generate scenarios directly. The chain-of-thought method breaks the process into steps, first identifying trends, then uncertainties, and finally scenarios. The third approach adds a self-critique iteration where AI evaluates and refines its outputs.
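The three prompting approaches above can be sketched in code. The sketch below is illustrative only: `ask` is a hypothetical stand-in for any chat-model call, and the prompt wording is an assumption for demonstration, not drawn from the paper.

```python
def single_shot(ask, topic):
    """Single-shot: ask the model to generate scenarios directly."""
    return ask(f"Generate four plausible future scenarios for {topic}.")

def chain_of_thought(ask, topic):
    """Chain-of-thought: break the task into steps -
    trends first, then uncertainties, then scenarios."""
    trends = ask(f"List the key trends shaping {topic}.")
    uncertainties = ask(
        f"Given these trends:\n{trends}\nIdentify the critical uncertainties."
    )
    return ask(
        f"Using these uncertainties:\n{uncertainties}\n"
        "Draft four contrasting future scenarios."
    )

def with_self_critique(ask, topic):
    """Third approach: add an iteration in which the model
    critiques and then refines its own draft scenarios."""
    draft = chain_of_thought(ask, topic)
    critique = ask(
        f"Critique these scenarios for plausibility and coverage:\n{draft}"
    )
    return ask(
        f"Refine the scenarios below using this critique.\n"
        f"Scenarios:\n{draft}\nCritique:\n{critique}"
    )
```

In practice, `ask` would wrap whichever AI service an organisation uses; the structure of the calls, not the wording, is the point.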
Testing ensures scenarios remain relevant to organisations. The research team found that "the widely-accepted recommendation – for today – is to have humans-in-the-loop so that we can combine AI information with human intuition, creativity, and ethical oversight to create a more robust, trusted, and organisationally relevant analysis." For example, managers need to verify whether scenarios sufficiently include uncertainties, encompass the range of futures, and make sense within their context.

Tuning also allows organisations to adjust scenarios for specific purposes. Scenarios can be refined to focus on shorter or longer horizons, simplified or made more complex to match organisational norms, or adjusted in tone to challenge thinking among leadership teams.
In practice, Prof. Shinkle said this three-step approach is already reshaping how organisations of all sizes work with the future. In one not-for-profit, for example, he said AI-generated scenarios helped a leadership team move beyond a narrow, risk-register mindset to see how multiple uncertainties could collide and threaten the organisation’s viability – and then to refine the scenarios into a smaller set that drove a more focused, constructive strategy conversation.
In another case, he explained how a mid-sized manufacturer used AI to recast bland “business as usual” supply-chain stories into a vivid worst-case scenario of a year-long geopolitical blockade, which finally jolted executives into investing in local sourcing and digital resilience. “AI gave us the rough clay very quickly,” said research co-author A/Prof. Sharry. “The value came from shaping that clay with the people who have to live in those futures.”
Moving from scenarios to action
The second research finding addresses a gap in literature: what organisations should do once they possess scenarios. The researchers’ five-category response framework provides what previous approaches in management literature have lacked – a process for determining responses based on both impact assessment and risk tolerance.
The framework outlines a three-phase approach. Phase one assesses each scenario's impact using four criteria: probability, repercussion on current business, urgency, and disruption to strategy. Each criterion has a rating from one to five, with totals providing an impact score.
Phase two evaluates risk tolerance by examining both risk appetite and risk capacity. Risk capacity refers to the amount of risk that an organisation can afford to take without suffering severe consequences. Risk appetite measures the level of risk an entity is willing to take. The distinction matters because organisations vary in both their ability to absorb risk and their willingness to take risk. Leadership style, culture, industry norms, stakeholder expectations, market conditions, regulatory constraints, and resources all influence risk tolerance.
Phase three categorises responses into five types: priority action (immediate action required), timely action (plan action within timeframe), safeguard (prepare contingency plans), monitor (scrutinise for signals), and ignore (no action needed currently).
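The three phases can be illustrated with a short sketch. The numeric thresholds below, and the simple way risk tolerance shifts them, are assumptions for demonstration only, not figures from the paper.

```python
CRITERIA = ("probability", "repercussion", "urgency", "disruption")

def impact_score(ratings):
    """Phase one: sum four 1-5 criterion ratings into an impact score (4-20)."""
    assert set(ratings) == set(CRITERIA)
    assert all(1 <= ratings[c] <= 5 for c in CRITERIA)
    return sum(ratings.values())

def response_category(score, risk_tolerance):
    """Phase three: map impact and risk tolerance to one of five responses.

    risk_tolerance stands in for phase two's combination of risk appetite
    and risk capacity; here it simply shifts the cut-offs, so a
    higher-tolerance organisation treats the same score as less pressing.
    The thresholds are illustrative assumptions.
    """
    adjusted = score - risk_tolerance
    if adjusted >= 16:
        return "priority action"
    if adjusted >= 12:
        return "timely action"
    if adjusted >= 9:
        return "safeguard"
    if adjusted >= 6:
        return "monitor"
    return "ignore"
```

Under these assumed thresholds, two organisations scoring the same scenario at 13 would land in different categories if one has a risk tolerance of 0 ("timely action") and the other of 4 ("safeguard") – the point the framework makes about identical scenarios warranting different responses.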
This framework recognises that responses depend not just on scenario impact but also on risk tolerance. Two organisations facing the same scenarios, for example, might respond differently based on their resources, attitudes toward risk, and priorities. The research demonstrates this through examples, showing how organisations with contrasting risk tolerance levels might assign different response categories to the same scenarios.
Prof. Shinkle explained that the response framework arose from a simple but persistent question from executives in classrooms and boardrooms: ‘Once we have the scenarios, what exactly should we do?’ He observed that existing strategy research said scenarios would “improve decisions”, but offered little guidance on how to prioritise actions when time, money, and attention are limited.

“Leaders were telling us they could now generate scenarios in minutes with AI,” said A/Prof. Sharry, “but they still lacked a disciplined way to move from an impressive slide deck to a clear set of strategic choices.” The response spectrum framework fills that gap by linking foresight directly to impact, risk appetite, and risk capacity, giving organisations a common language to decide which futures demand immediate investment, which call for safeguards, and which can legitimately wait.
Implications for business leaders
The research offers takeaways for organisations and their leaders. Firstly, AI has made scenario planning accessible and efficient, democratising access to tools that were previously confined to larger corporations or consulting projects. The efficiency gains from AI-generated scenarios can also free up executive time to consider responses, enabling innovation and coherent action across organisations.
However, organisations need effective approaches to prioritise responses across potential scenarios. Not every plausible future warrants action, and the response spectrum framework can help leaders allocate resources appropriately. The "ignore" category in particular recognises that organisational constraints sometimes necessitate prioritisation, with some responses legitimately deferred to a later point in time.
The authors are clear that AI will not replace foresight experts or seasoned strategists. Instead, Prof. Shinkle said it changes their role. AI can shoulder the heavy lifting of scanning signals, combining trends and generating large scenario sets, while human experts concentrate on framing the right questions, testing assumptions and navigating the politics of strategic choice. “AI can widen the lens and speed up the work,” Prof. Shinkle said, “but deciding which futures to bet on, and how much risk to carry, remains an organisation-specific choice rooted in its values, stakeholders and constraints.” For boards and executive teams, he said the opportunity is to treat AI as a standing member of the strategy room – one that augments human judgment rather than automating it.
The researchers proposed that scenario analysis will evolve from periodic planning exercises to integrated business capabilities. As AI systems continue developing, particularly with agentic AI that can autonomously monitor, update, and refine scenarios, the researchers predicted organisations could gain the ability to maintain awareness of emerging futures while reserving human judgement for interpretation, oversight, and choice.