Steer it – don’t fear it: navigating AI with confidence
When it comes to designing and implementing AI, business leaders have to navigate an incredibly complex landscape – but there are steps they can take to make the journey less risky
While it can be daunting, there are tools and guidelines that encourage trust and can help leaders steer through the uncertainty, according to Stela Solar, Director of the National Artificial Intelligence Centre at CSIRO’s Data61.
“An agreed set of guidelines, frameworks, principles or regulations facilitates greater innovation and business collaboration – and it's easier for business to do business. These guidelines need to encourage trust in innovation and the business community, while at the same time dealing with the ambiguity,” she told leaders attending the masterclass session at the AGSM @ UNSW Business School 2022 Professional Forum.
But getting this balance right is an issue Ms Solar fears is currently causing paralysis in the business community. “We don’t want a hugely complex regulatory framework that outlines every single element. But at the same time, we need something to start that trusted innovation and accelerate it forward.”
A panel of industry experts and UNSW academics led participants through the ‘Embedding AI in your organisation’ masterclass, using fact-based scenarios to examine how an AI-powered product is developed and deployed.
Lyria Bennett Moses, Director of the Allens Hub for Technology, Law and Innovation, Faculty of Law and Justice, UNSW Sydney, facilitated the session, designed to help alumni and industry partners navigate the ethical dilemmas of AI. The masterclass encouraged participants to consider who should be consulted in product development – and what steps companies and investors should take to manage this complex landscape.
Weighing up the risks
The benefits of advancing technologies like AI and machine learning are far-reaching. They help increase revenue and efficiency, allowing organisations to create better products, optimise services and improve customer experiences. They detect patterns we can’t see and can identify bias we can’t recognise. And it’s not just businesses that are seeing positive outcomes.
“AI is being integrated into products and services in truly extraordinary ways that are not only feeding economic development, but they are literally making our world more inclusive,” said Ed Santow, Former Australian Human Rights Commissioner and Industry Professor for Responsible Technology at University of Technology Sydney.
AI is already expediting medical research and diagnosis and is allowing doctors to perform complex procedures with more precision through the use of robotics. This new technology is helping people with disabilities live independently, while also accelerating the world’s transition towards more responsible energy generation.
At the same time, AI has the power to cause significant harm to individuals and communities alike. There are many examples of organisations that hastily rolled out AI solutions without adequate consideration of ethical issues, at great cost to the organisations and their customers. For example, the algorithm behind Australia’s Robodebt program raised debts against many legitimate welfare recipients, a debacle that ultimately cost the government $1.8 billion in a class action settlement.
During the masterclass, Professor Mary-Anne Williams, UNSW Michael J Crouch Chair in Innovation and Deputy Director of the UNSW AI Institute, suggested leaders think about the four biggest risks when developing and deploying AI:
1. Economic risks: for instance, the potential for automation to negatively impact the labour market
2. Social risks: potential discrimination against particular groups
3. Security risks: systems can be hacked
4. Privacy risks: issues around mass surveillance
Joining the expert masterclass panel, Lee Hickin, CTO at Microsoft Australia, added that while trade-offs are inevitable, it’s important to categorise them and provide context so they don’t halt progress. “You can have brand damage, and then there's significant damage,” he said.
“You can demean or segregate parts of society or prevent access to things in some way. It’s easy to get worried about it because it does offer these terrible potential scenarios. But we have to also consider that some risks are manageable or can be accepted in a business’s decision.”
Implementing responsible practices and ethical principles
So how can leaders steer through the minefield that is the ethical landscape of AI? Mr Santow suggested there needs to be a mindset shift when procuring the technology. “The biggest point of failure is the idea that we, in business, government or academia, are not buying an AI tool, but a piece of magic. If we start with that false premise, we're likely to fail at every level – from procurement and design to implementation, integration and oversight of the system.”
Businesses also need to embed responsible technology practices. With many ethical frameworks available, Ms Solar recommended that AI leaders think about how they can adopt a common set of principles across the board. “So many of the frameworks look very similar. But why aren't they the same? It’s actually stifling business collaboration and creating great ambiguity,” she said. “But the real challenge lies in how we implement these frameworks on the tools.”
While there is no blueprint for overcoming this, Ms Solar offered three core elements that can help leaders implement ethical principles when developing or using AI.
1. Co-creation and co-design: working closely with all stakeholders during this process, because more eyes will minimise risks.
2. Partnerships: your business can have ethical and responsible practices, but none of that counts if your partners don’t.
3. Diversity: it is the “compass through navigation”. Many different perspectives open up different pathways forward, helping leaders navigate risk and mitigate potential blockers.
Empowering leaders to harness the potential of AI
Agreement on standards and rules facilitates clarity, greater innovation and collaboration. But before more regulation is introduced, Mr Santow suggests reviewing existing legal requirements. “AI doesn’t allow us to do new things – we’re just doing things in a new way. This means that most of the existing law is applicable. So first we need to get some clarity about what existing law requires of companies and governments in how they use AI.”
Standards can then help apply existing rules, enabling companies and government agencies to understand their obligations and remain legally compliant. “The law is not intended to imagine every particular scenario and provide a rule for it. It creates gaps. And in those gaps is where ethical principles and frameworks have a role to play,” he said.
Prof. Williams agreed that while we don’t want regulation to fall too far behind the technology, it’s not time to create new rules just yet, especially when there are existing guidelines in place that can help leaders get started. “I think the Australian AI framework is a great place to start – its eight principles can help you get going. It also allows you to take your own interpretation and build upon it. And if you do that in an inclusive way, you can navigate many of the complexities.”
Ms Solar invited leaders to embrace the challenges of the technology – and feel empowered to make decisions and drive innovation forward. “I encourage every leader to steer AI rather than fear it – and help shape the future of AI so it can reach its full potential,” she said.