A closer look inside AI at Google and Microsoft
With companies increasingly handing decisions over to AI, how are tech giants like Google and Microsoft navigating the ethical challenges posed by machines?
Artificial Intelligence (AI) permeates our everyday lives, deciding things like who gets welfare, who is shortlisted for a job, and even detecting diseases in plants on farms. But AI also presents a myriad of challenges, especially for the businesses designing AI systems.
In the last 30 years or so, AI has left the laboratory and entered our everyday lives. “That’s the funny thing about AI – it is touching people’s lives but they may not even realise it,” says Toby Walsh, Scientia Professor of AI in the School of Computer Science and Engineering at UNSW Sydney, who also leads the Algorithmic Decision Theory group at Data61.
Prof. Walsh was recently named one of 14 prestigious Australian Laureate Fellows, and awarded $3.1 million by the Australian Research Council to continue his research into how to build AI systems that humans can trust.
The challenges AI presents today
Every time you read a story in your Facebook or Twitter feed, it is AI that is recommending that story to you, and around a third of the movies you watch on Netflix are suggested by algorithms too, explains Prof. Walsh. But handing even such harmless-sounding activities over to machines can have corrosive effects on society, he warns.
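To make that concrete, the sketch below is a toy, hypothetical recommender, not how Facebook or Netflix actually rank content: it simply scores unseen titles by how much their tags overlap (Jaccard similarity) with what a user has already watched. All titles and tags are invented for illustration.

```python
# Minimal illustrative recommender: rank unseen items by tag overlap
# (Jaccard similarity) with the items a user has already watched.
# The catalogue and user history here are hypothetical examples.

def jaccard(a: set, b: set) -> float:
    """Similarity between two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

CATALOGUE = {
    "space_doc":   {"documentary", "science", "space"},
    "crime_drama": {"drama", "crime", "series"},
    "rom_com":     {"comedy", "romance", "film"},
    "moon_film":   {"science", "space", "film"},
}

def recommend(watched: list[str], top_n: int = 2) -> list[str]:
    """Score every unwatched title against the user's history and return the best."""
    history_tags = [CATALOGUE[title] for title in watched]
    scores = {
        title: max(jaccard(tags, seen) for seen in history_tags)
        for title, tags in CATALOGUE.items()
        if title not in watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

if __name__ == "__main__":
    print(recommend(["space_doc"]))  # e.g. ['moon_film', ...]
```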
Indeed, his research is uncovering just how wide-ranging the adverse effects can be. “They can create filter bubbles, where we end up with either fake news or the possibility of elections being tampered with,” he says.
While AI has the potential to make society a fairer and more just place by taking away menial jobs, it can also be used to damage social cohesion. In fact, Prof. Walsh says he is more worried about mundane misuses of AI today than he is about superintelligent AI in the future. "I'm much more worried about the increasing inequality that automation is driving in society,” he explains.
Algorithms can be just as biased as humans, and this is one of the biggest challenges facing AI today. "Worse, they are not accountable in any legal way and they are not transparent,” continues Prof. Walsh. In his research, he is therefore looking at how to build and verify AI systems that make fair decisions which can be traced, explained and audited, and that respect people’s privacy.
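To illustrate one small piece of that verification problem, the sketch below is a minimal, hypothetical fairness audit: it computes the rate of positive decisions per group and flags a gap under the common "four-fifths" rule of thumb. The decision and group data are invented, real audits use richer metrics and statistical testing, and this is not Prof. Walsh's methodology.

```python
# Minimal fairness audit sketch: compare selection rates across groups
# and flag a gap under the common "four-fifths" rule of thumb.
# Decisions and group labels below are hypothetical illustration data.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive (e.g. shortlisted) decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(rates, threshold=0.8):
    """True if the lowest group rate is at least `threshold` of the highest."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= threshold

if __name__ == "__main__":
    decisions = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]   # 1 = shortlisted
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    rates = selection_rates(decisions, groups)
    print(rates)                      # {'A': 0.8, 'B': 0.2}
    print(passes_four_fifths(rates))  # False -> worth investigating
```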
But what about companies that are already using AI, namely the Big Five (Facebook, Amazon, Apple, Microsoft, and Google) – how do such powerful tech behemoths design AI processes that people can trust?
Google designs AI based on seven principles
In March 2018, Google announced it had partnered with the Pentagon on 'Project Maven' where it helped analyse and interpret drone videos via AI. Following this, the tech giant released a set of seven AI principles that form an ethical charter, guiding all development and use of AI at Google, and which prohibits its use in weapons and human rights abuses.
But some say many of the concerns raised in 2018 have not gone away. Human Rights Watch recently published a report calling for a ban on “fully autonomous weapons". Indeed, the potential to use AI for good is immense, but such powerful technology raises equally powerful questions about its use and impact on society, admits Google Australia’s Engineering Director Daniel Nadasi.
"Unless people trust that they will be treated fairly and that this technology will benefit them and the people they care about, they won't feel comfortable having it in their lives"
Daniel Nadasi, Engineering Director at Google Australia
He warns that anyone developing AI should hold themselves to the highest ethical standards and carefully consider how the technology will be used for the benefit of society as a whole. Indeed, for the past few years, Google has been using AI for myriad tasks such as identifying items in an image, automated translation and making smart suggestions in your emails.
“AI can help us solve problems for billions of people – from breaking down language barriers with apps like Google Translate to improving food safety and detecting air quality, to helping doctors detect diabetic eye disease in India and Thailand… but unless people trust that they will be treated fairly and that this technology will benefit them and the people they care about, they won't feel comfortable having it in their lives,” explains Mr Nadasi.
So AI systems should be designed following general best practices for software systems, such as privacy and security, together with considerations unique to AI. “The AI principles help make sure that we continue to develop this technology responsibly for the benefit of everyone,” explains Mr Nadasi.
“They include both principles for what AI should be, e.g. socially beneficial and secure, as well as applications we will not pursue."
In the first 12 months after the principles were published, Google reviewed over 100 projects for compliance with them, released 12 new tools, published 75 research papers on responsible AI, trained thousands of Googlers in machine learning fairness, and hosted 100 workshops and research conferences engaging over 4,000 stakeholders.
“Businesses creating AI systems should work to make these systems understandable to the people who use them and to put as much control as possible in the user’s hands. This can be built into the software development process right from the design phase," says Mr Nadasi.
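One hedged illustration of that advice: for a simple linear scoring model, each decision can be explained back to the user as per-feature contributions. The weights, features and applicant data below are hypothetical, and this is not Google's approach; it is just a sketch of the kind of per-decision explanation a user-facing system could surface.

```python
# Hypothetical example of a user-facing explanation for a simple linear model:
# report how much each input feature contributed to the final score.

WEIGHTS = {"years_experience": 0.6, "relevant_skills": 1.2, "typos_in_cv": -0.9}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the overall score and each feature's signed contribution."""
    contributions = {
        feature: WEIGHTS[feature] * value
        for feature, value in applicant.items()
        if feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

if __name__ == "__main__":
    applicant = {"years_experience": 5, "relevant_skills": 3, "typos_in_cv": 2}
    total, why = score_with_explanation(applicant)
    print(f"score = {total:.1f}")
    for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {contribution:+.1f}")
```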
Microsoft's approach to AI has three stages
Microsoft uses AI across a broad range of its business, from finance and capacity planning in its core operations to predictive text in Outlook and design ideas in PowerPoint. Its AI systems are built from the input of three diverse teams that come together to create the tools it delivers through its cloud services.
But Microsoft was recently called out over issues concerning facial recognition and bias, and decided not to sell its facial recognition software to police until there is a federal law regulating it – following similar moves by Amazon and IBM.
“We have an Ethics & Society team that brings a diverse, non-IT lens to our development and they consider the societal impact, inclusivity, and human experience of AI systems. This ensures that we start from a people point of view and human experience at the heart of the process,” explains Lee Hickin, National Technology Officer at Microsoft Australia.
“Next we have a Technology & innovation approach that is across our engineering, research and business teams that looks at the potential for what’s possible in technology, hardware and software and explores what we can do to deliver something unique and valuable to the market,” he continues.
“Finally – we have a team that looks at the responsible application of AI, this is where we can also work with customers and partners to share tools, learnings and guidance on the responsible use of AI tools in solutions,” he says.
The four pillars on which trust is built, namely accountability, transparency, fairness and safety, are also key considerations in the process, and Mr Hickin adds that businesses interested in AI should look at where AI can improve their customers' experience of their product and where it can help optimise and improve the supply chain.
Trust is the core of successful AI
Google says it expects AI to have a pervasive and positive impact on business similar to other transformative technologies, such as cloud computing. “With the foundational discoveries in computer science over the last few years, AI, in particular, is uniquely positioned to help businesses solve a broad range of meaningful problems for their customers and for society more broadly,” says Google’s Mr Nadasi.
The benefit to consumers and individuals will be in the form of more accessible, more specific and more valuable interactions with businesses and government, according to Microsoft’s Mr Hickin.
“We see a future in which AI, driven by data at cloud scale, will be the creative force behind human ingenuity. It will allow consumers to access the best and right tools or services at the time they need them. Conversely, it will allow governments and businesses to deliver their services to more people and reduce the cost and effort required to do so," he says.
But of course, maintaining trust with your customers is what good business is built on – regardless of whether AI is there or not.
"For AI to be deployed in a responsible, trustworthy way, it is clear that we need action at many levels. And whilst I welcome the actions of Big Tech companies to develop AI principles and frameworks, we also need better regulation," says Prof. Walsh.
"In high stake areas like facial recognition, we already see many of the company will not sell its facial recognition software for police use until a national law is in place – players like Microsoft and Amazon are calling for such regulation."
For more information on AI ethics, please contact Toby Walsh, Scientia Professor of AI in the School of Computer Science and Engineering.