Responsible AI: How your business can steer, not fear, new tech

Download The AGSM Business of Leadership podcast today on your favourite podcast platform.


Learn how leading artificial intelligence experts and business leaders balance the risks and rewards of disruptive technologies

About the episode

Artificial intelligence becomes more powerful by the day – which means the dangers that come with it are growing too. If you’re using AI tools in your business, you need to know how to use them safely and responsibly. 

So where do you start? Dr Catriona Wallace, a world-leading AI expert and Adjunct Professor at AGSM, shares a framework for using AI ethically, and explains why that responsibility falls on the shoulders of leaders and not just tech teams. 

Stela Solar, Director of the CSIRO’s National Artificial Intelligence Centre, thinks leaders using AI ‘the right way’ are already experiencing a competitive advantage. That includes Dimitry Tran, who owns three healthcare businesses powered by AI. 

In this episode of The Business Of, you’ll learn how to use AI to get ahead without compromising on safety. If you want to dive deeper into using AI for business, listen to previous episodes of The Business Of, featuring Dr Catriona Wallace, Stela Solar and Dimitry Tran.

Want to know more? 

For the latest news and research from UNSW Business School and AGSM @ UNSW Business School, subscribe to our industry stories at BusinessThink and follow us on LinkedIn: UNSW Business School and AGSM @ UNSW Business School.

Transcript

Stela Solar: The more organisations use AI, the more benefits they experience, and the kind of benefits include higher customer satisfaction, faster decision-making, more innovative products and services and so on. My hope is that AI can help us become better for the planet.

Dr Juliet Bourke: Stela Solar from CSIRO’s National Artificial Intelligence Centre says AI has incredible potential for business leaders. But, are we ready for it?

Catriona Wallace: My experience is kind of the higher we go up in an organisation the less they know about this, but it's definitely something that should be at the board and at the executive team level.

Dr Juliet Bourke: That’s Dr Catriona Wallace... she’s an AI and metaverse specialist, and an Adjunct Professor at AGSM. She’s worried that even though businesses are keen to implement AI as quickly as possible, the people in charge don’t quite understand the technology.

Catriona Wallace: In speaking to some of the engineers, they tell us that often, the responsibility for doing ethical AI is pushed way down to them. And that they are required to know how to code ethically, or to make sure that the data sets have no bias. And they don't believe that the senior management really has any idea about this.  

Dr Juliet Bourke: Catriona says it's a dangerous situation...

Catriona Wallace: ...to be delegating your ethics and responsibility to your engineers, who are very well intended, I'm sure, but who are also under huge pressure to be finishing code, to be shipping product, to be doing things efficiently, to be working under their agile planning frameworks, et cetera, and it may not be the place where they have time to think about how they will do it ethically.

Dr Juliet Bourke: This is The Business Of, a podcast from the Business School at the University of New South Wales. I’m Dr Juliet Bourke, a Professor of Practice in the School of Management and Governance. AI has massive potential in the business world, but that potential comes with risk; and I mean serious, existential risk. So before we learn how to operationalise AI, we need to learn how to use it ethically.

Catriona Wallace: So, I believe, and I think most of the big AI thinkers in the world share the same view, that there will be a very light side to AI, but it will have an equal dark side. And the dark side is largely because this type of technology is very difficult to understand, to explain, and also to control. But there's also a much bigger risk that is playing out at the moment. There's been a very, very good book by Toby Ord, an Australian at Oxford University, called The Precipice, and it talks about existential risk. In this book, Toby Ord identifies around six core existential risks, an existential risk being something that will destroy humanity, kill everyone by the end of the century, or severely reduce humanity's potential. And if we look at the existential risks, they are nuclear war, climate change, an asteroid colliding with the earth, pandemic, bio-engineered disease, and artificial intelligence.

Now, these first five have a risk factor, according to the academics in this field, of about a one in a thousand to a one in a hundred thousand chance that any of them, including climate change, will destroy humanity by the end of the century. Artificial intelligence, however, is not a one in a thousand chance. It is a one in six chance that AI will cause, or go near to causing, the destruction of humanity by the end of the century. So, for me, there's a bigger call here. We absolutely need to start regulating and monitoring this technology, because it's not just that our businesses are at risk, or that not getting a credit card is at risk; the stakes are far greater. AI is now regarded as one of the most serious threats to humanity unless it is controlled. And then, where's the leadership? It's not coming from the tech giants.

It comes from the business schools, organisations such as mine. It comes from your students. It comes from business leaders who need to step into this ethical leadership, start to learn about this, understand both the benefits and the risks that this technology is bringing.

Dr Juliet Bourke: So the risks are real. And the responsibility to mitigate them falls on the shoulders of leaders like you. But where do you even start? How do you actually use AI responsibly?

Catriona Wallace: There are good guidelines available now for enterprises or tech developers to start to look at the core principles of doing AI ethically and responsibly. The purpose of this framework is to help organisations avoid unintended harms. So the first principle is that AI must be built with humans, society and the environment in mind; it must not come at a cost to those three groups. The second principle is that AI must be built with human-centred values in mind. Third, the AI must be fair; it must not discriminate. Fourth, the AI must be reliable and safe. Fifth, it must adhere to privacy and security requirements.

Dr Juliet Bourke: Now the next principle is about contestability. Catriona says someone must be able to challenge a decision AI has made against a person or a group. And she uses the example of the Apple credit card launched in 2019, which in part used algorithms to determine an applicant's credit limit. The problem? The algorithm was much more generous with credit for its male applicants. And it caught the attention of Apple co-founder Steve Wozniak and his wife, who, despite the couple having joint bank accounts, was issued far less credit.

Catriona Wallace: She got 10 times less credit than her husband, and she's pretty annoyed. So, Mrs. Wozniak goes, "Hey, I'm really unhappy about that. I'm going to contest this because I think I've been unfairly or unjustly treated." So, contestability. Enterprises must have a contestability path for, say, consumers who, as in this case, have been unfairly treated.

Now, if you think about this, that's one Mrs. Wozniak, and that application went global. Now scale that up to all the other women who felt unfairly treated, turning up to Goldman Sachs and Apple saying, "Right, we need to contest this. We're unfairly treated. What is the process?" The organisation is going to have to handle that. So, that's contestability, number six.

Then it gets more tricky. Number seven is that the AI must be transparent and explainable. It's hard for the programmers to do that. Traditional AI is what we call black box AI, which is sort of unexplainable AI, and what we're looking for in the future is organisations building white box AI. So, you can take the lid off, look in, and actually see how the algorithms are working. And so, transparency.

And then explainability. So, not only do they have to show it, the company would also have to be able to explain to Mrs. Wozniak what happened: this is how it made its decisions. Now, again, anyone who knows anything about machine learning knows that that's enormously difficult, because as these machines learn and adapt with each task, they take a bit of a path of their own, and sometimes it's enormously difficult for an organisation's data scientists to explain what their algorithm has done.

And then the last one is accountability. So, if that organisation has caused some damage or unfairness to Mrs. Wozniak, then they need to be accountable for that, and also the vendor who provided the technology that did the harm needs to be accountable, and likely there needs to be some reparation.

Dr Juliet Bourke: If AI poses an existential threat, as Catriona Wallace suggests, and it's difficult to use responsibly, why are so many businesses jumping on board with such enthusiasm?

Stela Solar: For companies, it has already been found that the more organisations use AI, the more benefits they experience, and those benefits include higher customer satisfaction, faster decision-making, more innovative products and services and so on. So suddenly what we realised is that it was never about the AI itself; it was about the business outcomes it creates for organisations. And the benefit that AI brings to organisations is competitive advantage in the market.

Dr Juliet Bourke: Stela Solar from CSIRO says she's seen businesses streak ahead of the competition when using AI the right way. She hopes we'll see the same outcomes on a global scale.

Stela Solar: And then there's also one final lens, which is the holistic lens of how AI can really benefit our society, our planet. My hope is that AI can help us become better for the planet. In particular, I see AI helping us do things a little bit differently, more efficiently.

For example, I just saw a use case in the agricultural sector where computer vision on agricultural machinery is able to do more precise pesticide treatments, rather than having to spray the entire field with much greater consequence. It has optimised the design in a way that creates efficiencies we just could not have thought of before. And so I really hope that AI helps us do better for the planet and for our society.

You know, I truly believe AI is only as good as we lead it, and that's what we're seeing across industry: when organisations adopt a mindful leadership approach to the way they design systems and develop AI technology, better outcomes are had, outcomes that create futures with less of the bias represented in the data. So right now it's imperative for leaders to actually step into the role of leading and shaping how AI is used across their organisation.

Dr Juliet Bourke: Operationalising AI requires leaders to continually stop and ask themselves, is this ethical? And what are the implications for our business, our customers and clients?

Stela Solar: What I believe is needed is more tangible, practical examples of how to actually implement this thing and that's where there is a gap right now, it's in the how. How to implement things responsibly? What is that checklist that people go through? What are the questions to ask? That is where the gap is. I do believe most folks have a positive intention in using AI, and unfortunately, most of the challenges with AI are inadvertent.

And there's one case study, or example, in particular that just came to mind, which is the checklist approach: the use of the checklist in surgical procedures, and how much improvement that created in terms of reducing error rates and infections and so on. And so when an organisation is going down this AI path, it is important to create structured approaches to AI across the organisation. Some kind of vetting and standardisation of the platform you decide on for your organisation, and a centralised governance approach to AI, go part of the way to creating this checklist approach that's going to lead to better AI outcomes.

Dr Juliet Bourke: Stela says the key for businesses is to...

Stela Solar: ...steer it, don't fear it. And so I would encourage everyone to really engage with AI technologies today, and to engage in a conversation about how AI systems might be designed across your organisations. Because we are empowered to shape how this technology evolves and how we decide to use it. AI is not something that's just landed on this planet. It's a tool that we are deciding to use and build and shape in a meaningful way for us. And so please really lean into the AI technology, learn what it is, and contribute to how it is designed and used across your organisation, so that it can create that value for business and for community.

Dr Juliet Bourke: AI will be one of the most powerful influences on our planet. And we need to make sure that that influence is a good one. As Catriona Wallace demonstrated, there are already advisory bodies compiling clear, practical, and actionable principles for ethical AI use. But what does ethical AI use look like… right now, in the real world?

Dimitry Tran: What we do is we provide a co-pilot that can detect findings alongside the doctors. For example, signs of pneumonia on a chest X-ray, or signs of stroke on a brain CT, and that will help the clinician make a more accurate diagnosis in a timelier manner.

Dr Juliet Bourke: This is Dimitry Tran. He runs three healthcare technology companies that use AI. In a previous episode of The Business Of, Dimitry shared how AI tools are improving workflows for healthcare professionals today.

Dimitry Tran: I was recently talking with a clinician in Australia, a radiologist, and she told me that every day she starts her day staring down a list of 500 cases that has been backing up since yesterday and overnight. She said she loses the will to work, because she knows she can try very hard and bring it down to 200, but tomorrow she'll start her day again with 500. I think each of those cases is someone's mother, someone's father, someone's loved one who requires the best of care, yet our resources are so stretched that someone has to deal with hundreds of cases every day, and I think that is where AI plays such an important role.

Our AI can process those 500 cases in a few seconds, and then we allow the clinician to sort their worklist like an Excel worksheet, to sort and say, "Which cases?" Maybe case 399 contains a critical finding, the stroke patient who needs immediate care the next minute.

Dr Juliet Bourke: In an industry like healthcare, there are regulatory boards that assess the technology to make sure it's fit for the sensitive work it's doing. But even after a tool gets past those approvals, like all technology, it keeps evolving.

Dimitry Tran: I think when it comes to clinical care, the first hurdle to pass is regulatory. And the regulators are very sophisticated around AI. They have amazing data scientists, they have, you know, a panel of PhDs, so they ask very deep questions. Recently, we've actually seen a lot of AI that passes regulatory approval, gets into users' hands, and then stalls, because it has what we call performance drift over time: the users use it in a different way, or on a different population, than the data it was trained on. So I think one of the things we're still coming to grips with globally, as the healthcare industry, is how to monitor AI performance in real time. This is not a one-off check after which you're safe and, from that point on, can sell to whoever and wherever you want. The AI technology needs to be monitored in real time. That is the feedback we need, to go back and collect more data to keep training the AI for the better. So this continuous learning, I think, is a key element of any AI business model.
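The real-time monitoring Dimitry describes can be sketched in a few lines. This is an illustrative toy, not his companies' method: it assumes the deployed model's confidence scores are logged in production, and it flags drift when the live average departs from the training-time baseline (the function name and tolerance are hypothetical).

```python
# Minimal sketch of "performance drift" monitoring, assuming we log
# model confidence scores from production and keep a training baseline.
from statistics import mean

def drift_alert(train_scores, live_scores, tolerance=0.1):
    """Flag drift when live mean confidence departs from the training baseline."""
    baseline = mean(train_scores)
    current = mean(live_scores)
    return abs(current - baseline) > tolerance

# Live confidence has dropped well below the training baseline: alert fires.
print(drift_alert([0.90, 0.85, 0.92], [0.60, 0.55, 0.58]))  # True
# Live confidence matches the baseline: no alert.
print(drift_alert([0.90, 0.85, 0.92], [0.88, 0.90, 0.91]))  # False
```

A production system would use a richer statistic (for example, comparing whole score distributions rather than means) and feed alerts back into the continuous-learning loop Dimitry mentions, but the shape is the same: baseline, live window, threshold.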

Dr Juliet Bourke: And then it's about moving that continuous learning into the C-suite.

Dimitry Tran: So the whole executive team does not need to become AI experts. We can all learn a little bit about AI, but I think having someone with deep enough knowledge of how to build AI at the decision table is important. AI has been around long enough that there are people who have been through one or two deployments, or developments, of AI who can bring great insight to a discussion, because it's so easy to generalise AI into, "Oh, it's going to change everything. Oh, it's going to be so risky." Having someone with that experience around the table, I think, would be very helpful in shaping the conversation, the strategy, or any decision the organisation is going to make in this field.

Dr Juliet Bourke: AI will be the defining technology of the next era in business – but if we’re not responsible, the consequences could be catastrophic. AI tools are already helping leaders achieve incredible outcomes, but that success will be short-lived – and ultimately for nothing – if we’re not careful.

The Business Of podcast is brought to you by the University of New South Wales Business School, produced with Deadset Studios. To stay up-to-date with our latest podcasts, as well as the latest insights and thought leadership from the Business School, subscribe to BusinessThink.
