Regulating AI in Australia: Challenges and opportunities

At a recent UNSW-led roundtable, experts tackled key gaps in AI regulation and how bridging them will affect Australia’s startup ecosystem

Australia is at a regulatory crossroads when it comes to artificial intelligence (AI), and it must urgently bridge the engineering gap between technology and the law to protect end users, according to industry leaders and experts at a recent regulatory roundtable.

As AI and, increasingly, generative AI (GenAI) tighten their hold on public discourse, their impact on financial services is driving earnest behind-the-scenes efforts to create safe – and innovative – technological ecosystems. A leading effort is the UNSW Fintech AI Innovation Consortium (FAIC), a joint initiative between the UNSW School of Computer Science and Engineering and the UNOVA Knowledge Hub, under the umbrella of the UNSW AI Institute, which brings together researchers and industry to facilitate critical discourse in the emerging financial services AI industry.

In response to the Australian government’s June 2023 release of a discussion paper seeking feedback on the potential regulation of AI, the FAIC convened about 50 financial services industry leaders and researchers in July 2024 to discuss the opportunities and challenges of AI regulation and governance in Australia.

During the FAIC AI Regulatory Roundtable and in post-event interviews, participants – including academic experts and industry stakeholders from CSIRO’s Data61, Westpac and the Gradient Institute – agreed on the need to identify and address gaps in the industry’s ability to engineer safe and responsible AI systems, both in the early stages of development and in adopting and deploying imported technologies. The event was hosted by UNSW Sydney's Professor Fethi Rabhi and Alan Hsiao, Senior Visiting Fellow at the School of Computer Science and Engineering, along with Associate Professor Felix Tan from the School of Information Systems and Technology Management at UNSW Business School.

Roundtable participants noted that Australia lags many of its peers on AI regulation and that, while existing standards provide guidance, better integration and clarity are vital. Meanwhile, building trust in the design of AI systems remains central.

As Scientia Professor Toby Walsh, Chief Scientist at UNSW’s AI Institute, pointed out, the opportunities and challenges of AI are likely to be as significant as its ballooning market. “This year, a billion dollars is being spent on AI every day," he said. "If you add up the venture capital investments and government investments being poured into AI around the world, it’s about a billion dollars a day – that is the greatest gold rush in the history of capitalism."

Growth and the importance of trust

The AI industry is estimated to be worth $20 billion, and four tech giants – Alphabet, Amazon, Meta and Microsoft – have pledged to spend close to $200 billion this year, mostly on capabilities for building, training and deploying generative AI models.

Learn more: The UNSW Fintech AI Innovation Consortium (FAIC)

Prof. Walsh pointed to the transformation of OpenAI, which turned a $100 million investment in training AI into a company worth $100 billion; its ChatGPT has democratised GenAI by making it accessible to many more people. But while the technology is moving fast, institutions are adapting slowly, a dynamic that has created critical challenges for the financial services industry around AI adoption and management. Nonetheless, especially with the recent passage of the EU AI Act, it is now apparent that it’s possible to regulate AI, he said.

An essential starting point for taming AI’s impact will involve building trust in systems, argued Stuart Banyard, Senior Product Lead in AI at CSIRO’s Data61, who cited Kate Crawford’s statement in her book Atlas of AI that artificial intelligence is “neither artificial nor intelligent”.

“It’s kind of a reflection of all the human endeavours, the histories and the classifications that have been captured on the internet over the course of its life,” Mr Banyard said. “When we look at it from that perspective, the types of things we can see potentially manifesting through those systems or trends is really just a tapestry of all the good and the bad that’s ever been written or captured.”

He said designers are confronted with a wide range of techniques and patterns and must prioritise building trust within the design of AI systems. A particular challenge is the engineering gap between the current principles, standards and frameworks that govern AI on one side, and the AI models themselves on the other.

UNSW Scientia Professor Toby Walsh said the opportunities and challenges of AI are likely to be as significant as its ballooning market, with $1 billion being spent on AI every day. Photo: AGSM @ UNSW Business School

Novel software engineering techniques for responsible AI are, therefore, imperative. As Mr Banyard explained, rapid innovation promises a new era of prosperity but risks exacerbating trust issues, leading to further societal instability and political polarisation. He pointed to three opportunities for building trust with GenAI – hyper-personalisation, consistent experiences and citations – and argued for positioning GenAI as a core strategic asset in trust building.

The challenges of regulating AI

According to Raymond Sun, a technology lawyer at Herbert Smith Freehills who spoke about the global landscape of AI regulation, there are three key challenges for AI regulation, beginning with a taxonomy riddle. “How do you define AI for regulation?” he asked. “You don’t want to define it so broadly that it captures all software or so narrowly that it under-captures high-risk systems.”

The second challenge is economic, Mr Sun said: “How do you balance regulation and innovation and make sure you get a healthy balance?”

Finally, he said there is a structural challenge around country-specific institutions, with some countries, for example, maintaining free-speech protections in their constitutions. “Other countries have civil law systems versus common-law systems, which influences how they approach drafting AI regulations.”

Read more: How AI is changing work and boosting economic productivity

In its ongoing consultations on AI regulation, Australia is at a crossroads in deciding which regulatory model to adopt. According to Mr Sun, the options range from the approach of countries like the US and China, which have adopted narrow AI laws governing specific applications of AI, to that of the EU, which has adopted a broad, risk-based approach.

Mr Sun identified several critical questions for policymakers: what type of AI do we regulate, who should be regulated, what rights and controls do we establish, and how do we harmonise with existing laws?

Ecosystem impacts

Regulating AI would inevitably have significant impacts on Australia’s startup ecosystem. It’s a tricky balance for entrepreneurs, who tend to believe less regulation and governance means they can move faster.

But Alan Hsiao, a Senior Visiting Fellow at UNSW and founder of FAIC sponsor Cognitivo, had a solid rejoinder to that position: remember what has happened with social media over the last 10 years. “I think it would be good to have a level playing field with regard to legislation,” he said. “You don’t want somebody to have all the data, build all the algorithms, own all the world, and then have legislation come in and raise the bar at that point. You want that to happen right at the start so that everyone’s on an even playing field.”

Read more: How AI can help solve the pressing problems of banking and finance

Other feedback the government received on its discussion paper reflected this challenge, with diverging opinions on the scope of regulation. However, there was consensus among responses around the need for a combined approach: updating existing regulations while introducing new, AI-specific regulation.

What happens with regulations and how that affects business is also impacted by ongoing changes and trends in the technology itself, according to Ian Lu, Account Executive at Databricks, another FAIC sponsor. “One of the big areas we see emerging is around what people probably see today as these monolithic models, like ChatGPT, and what people are accustomed to – that’s probably going to be an ecosystem that changes a lot in enterprise, with smaller models coming together, forming what we call a compound system, to generate a series of decisions that leads to business outcomes,” he said. “We’re going to see that increasingly more in the large enterprise ecosystems in particular. That’s a trend that I think is really going to revolutionise the way enterprises get value out of generative AI.”

And Nick Munro, Head of Innovation & FinTech at founding FAIC member Westpac, highlighted the value of academic and industry partnership in working out these challenges. “This is an emerging space. We’re talking about things like generative AI, blockchain and quantum computing, which are all coming towards us. Thinking about all these technologies is really challenging, as is drawing the right people into the business and drawing the right thought leadership,” he said.

UNSW Director of Studies in Software Engineering, Professor Fethi Rabhi, said explainability, clarity and ethics should be at the heart of AI software development. Photo: UNSW Sydney

“If we use AI as the example today, we’re able to do that,” he added. “We’ve got great talent coming out of UNSW; we’re able to bring together thought leaders in forums like this, where we’re getting insights that we can use in our business every day.”

AI policy development work

The roundtable event also featured an overview by Dr Ali Akbari, Director of AI Practice at the not-for-profit Gradient Institute, of the resources available for AI management best practice, including work by the International Organization for Standardization (ISO) and the National AI Centre’s Voluntary AI Safety Standard. He noted that a national framework for the assurance of AI, published in June, sets foundations and guidelines for a nationally consistent approach to the use of AI in government, but leaves the details of mandated policies to the relevant jurisdictions.

Dr Akbari also pointed out the challenges lawyers and technologists alike face in bridging new technologies and the law, but he noted that the 31 ISO standards published since 2018 can provide support.

As T.J. Chandler, Managing Director APAC at consortium sponsor Fivetran, pointed out, the concept of responsibility is critically important in AI regulation “because data itself is really powerful, and it can be used for all sorts of purposes, both good and bad”.

Read more: Protecting your privacy: should AI write hospital discharge papers?

“You might compare data to uranium, which has the capacity to generate many megawatts of electricity to power entire cities; it is also unstable, and potentially toxic with radioactive after-effects if not handled properly,” he added. “It is critical that we have responsibility, whether imposed from external regulations or from internal, voluntary controls, so that we can use data for the power that it contains to do good.”

Key questions, further actions

The roundtable participants also engaged in an open discussion of AI explainability, transparency, accountability and liability, in which they questioned the suitability of existing approaches to explainability. They also noted that while some standards provide guidance, those standards are not widely known or routinely used by most organisations.

There are also crucial questions around end-user access to remedies, such as whether the user has access to evidence or whether a particular breach relating to AI caused a specific loss. And there are complexities relating to the exposure of intellectual property and trade secrets embodied within AI systems, the participants noted.

Having identified these gaps in engineering safe and responsible AI systems, particularly in the early stages of design, FAIC is planning further action with industry partners to define what must happen to bridge them. It plans to create resources for researching software engineering patterns for responsible AI, translating research into industry-relevant solutions and training.


According to Prof. Rabhi, UNSW’s Director of Studies in Software Engineering, two topics from the roundtable stood out as leading themes. “One is that there’s such a big patchwork of regulations, and these regulations make it very hard to look at AI regulation because they link with technical standards. So there is, on the one hand, the law, and on the other hand, the technology, and what should sit between the two is a very important research topic, and it definitely gives us the motivation to investigate it further,” he said.

“The second one was the software engineering side: systems, as they are built at the moment, do not consider these issues that are important for AI regulation, like explainability. We should integrate explainability, clarity and ethics at the heart of the software development cycle very early, and not leave it as an afterthought,” Prof. Rabhi concluded.

The FAIC AI Regulatory Roundtable was supported by industry partners AWS, Cognitivo, Databricks, Fivetran, Herbert Smith Freehills and Westpac. The UNSW Fintech AI Innovation Consortium (FAIC) welcomes new industry partners and academics to contribute to existing projects or propose new projects that fit its vision and mission statement. The consortium also welcomes the exchange of ideas with other related research projects and initiatives. Please visit the UNSW Fintech AI Innovation Consortium website for more information.
