Three useful things to know about data, AI and the privacy debate
Organisations must develop ethical, socially responsible, trustworthy, and sustainable data business models to ensure consumers’ privacy is protected in an increasingly AI-driven world
The increasing use of Artificial Intelligence (AI) and new surveillance technologies is creating global alarm. In Australia, data breaches are on the rise, particularly in the financial and healthcare sectors. Information mishandling, opaque, excessive and pervasive surveillance, and the increasing use of facial recognition and other biometrics are just some of the latest developments causing concern. As a result, Australian consumers are becoming increasingly sceptical of businesses that collect, handle and share revealing information about their activities, interests and preferences. These concerns were initially focused on global digital platforms, but today they extend to the online and offline activities of a much broader range of businesses of all sizes and across all sectors of the Australian economy.
Data privacy is top of mind for many Australians. According to the 2020 Australian Community Attitudes to Privacy Survey, 70 per cent of respondents considered protecting their personal information to be a significant concern in their lives. Yet while 24 per cent believed the privacy of their personal information was well protected, 40 per cent said it was poorly protected, and a large majority (83 per cent) wanted the government to do more to protect their data privacy. With AI adoption skyrocketing over the last few years and predicted to continue, the focus of consumers and regulators on opaque, excessive and unreasonable collection, use and sharing of revealing information about individuals is forcing businesses into expensive rearchitecting of their data systems, data-handling processes, and data project governance and assurance.
According to Peter Leonard, Professor of Practice for the School of Information Systems & Technology Management and the School of Management & Governance at UNSW Business School, immediate past chair of the Australian Computer Society’s AI Ethics Committee and a member of the NSW Government’s AI Review Committee, many companies share data with other service providers in multiparty data ecosystems in order to provide a service. However, these business models and associated data architectures were often not built with privacy and security by design and by default, said Prof. Leonard, and retrofitting privacy and security into existing business models and data architectures is expensive and surprisingly complex.
But part of the reason businesses are struggling with privacy issues, explained Prof. Leonard, is that it has taken the Australian government more than two decades into the 21st century to start a serious discussion about making the Australian Privacy Act fit for purpose. To this day, he said, many politicians, businesses and government agencies are still not taking data privacy seriously.
How does AI affect privacy?
The most difficult area to address in AI and data privacy is data profiling. For example, insurers can use AI-driven profiling to avoid taking on high-risk clients, undermining the pooling of risk that keeps premiums affordable across a broad base of insured persons. Data privacy, consumer protection and insurance sector-specific laws do not address profiling-enabled targeting, and therefore do not ensure consumers are treated equitably.
“Many of the concerns around AI, for example, are about the use of profiling to differentiate in the treatment of individuals, either singly (fully ‘individuated’) or as members of a segment inferred to have common attributes. There may be illegal discrimination (intentional or accidental) against individuals who have protected attributes (such as race, religion, gender orientation). More often, differentiation between individuals is not illegal but may be regarded as unfair, unreasonable or simply unexpected. AI enables targeted differentiation to be automated, cost-effective and increasingly granular and valuable for businesses,” explained Prof. Leonard.
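The mechanics Prof. Leonard describes can be made concrete with a minimal, hypothetical Python sketch. Every attribute, weight and threshold below is invented for illustration, not drawn from any real insurer: an automated score differentiates applicants individually, while a seemingly neutral input such as postcode can act as a proxy for protected attributes.

```python
# Hypothetical illustration of profiling-enabled differentiation.
# Every attribute, weight and threshold here is invented for this sketch.

# A seemingly neutral input that can proxy for protected attributes
# such as race or income.
HIGH_RISK_POSTCODES = {"2000", "3000"}

def risk_score(applicant: dict) -> float:
    """Score an applicant from inferred profile attributes (higher = riskier)."""
    score = 0.0
    score += 0.4 * applicant.get("claims_last_5_years", 0)
    score += 0.3 * (1 if applicant.get("postcode") in HIGH_RISK_POSTCODES else 0)
    score += 0.2 * applicant.get("inferred_driving_risk", 0.0)  # e.g. from telematics profiling
    return score

def decide(applicant: dict) -> str:
    """Automated, individuated differentiation based on the profile score."""
    score = risk_score(applicant)
    if score > 1.0:
        return "decline"          # high-risk applicants never enter the risk pool
    if score > 0.5:
        return "premium_loading"  # segment-level differentiation
    return "standard_premium"

print(decide({"claims_last_5_years": 2, "postcode": "2000", "inferred_driving_risk": 0.6}))
# -> decline: lawful on its face, yet potentially unfair or discriminatory in effect
```

Nothing in this sketch is illegal on its face, which is Prof. Leonard’s point: the differentiation is automated, cheap and granular, and its fairness depends on inputs and thresholds that no current profiling-specific law scrutinises.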
Read more: Can the law truly protect consumers from data profiling?
“Then there’s a raft of other issues around nurturing, or at least not eroding, the digital trust of persons with whom a business deals, and about how you use data about individuals without them feeling that you’re being ‘spooky’, or otherwise unreasonable, in the data that you’re collecting,” he continued.
Another issue is that it is no longer good enough to simply comply with data privacy law, explained Prof. Leonard. Trust and privacy concerns go hand in hand, but without adequate laws and guidance, businesses are having to fill in the gaps. “You have to start to guess where the law may go and fill gaps in the law, thinking about what is a responsible way to act or an ethical way to act. This is difficult for businesses to do and will require them to consult a broader range of stakeholders, including experts who can think about corporate social responsibility and ethics in the digital age,” he said.
How can businesses protect users’ data?
While it is clear that Australia’s privacy laws need reform to address modern problems in an increasingly digital world, reform is complex and contested, especially given that data privacy is inherently multifaceted, explained Prof. Leonard. Part of the problem is that data privacy is only one piece of the puzzle: while privacy laws have yet to catch up with the myriad issues consumers face today, not all of those problems relate directly to privacy. The balancing of interests and incentives underlying good data privacy practice, and the complex interaction of the elements of privacy regulation, make it deceptively difficult to get privacy law reform right, said Prof. Leonard.
Prof. Leonard recently published a design manifesto for an Australian Privacy Act “fit for purpose in the 21st century” to address this issue. The paper, Data privacy, fairness and privacy harms in an algorithm and AI enabled world, was one of 205 submissions lodged with the Australian Attorney-General’s Department in response to the AGD Discussion Paper on reform of the Privacy Act. In it, Prof. Leonard puts forward several recommendations for reform of data privacy law, focusing on the Australian federal Privacy Act 1988 (Cth) (Australian Privacy Act) and comparable state and territory data privacy and health information statutes.
“Many of the issues are actually issues around data governance: how management make decisions about how data is used, and how you architect your data holdings so that you can make the right decisions and have the right controls and safeguards in place to do all of this, without creating business models that are going to blow up in your face,” continued Prof. Leonard.
Read more: AI: what are the present and future opportunities for business leaders?
“The laws are important, but the law, in my experience, is usually less than a third of the issue that I’m addressing when I’m advising businesses around advanced data analytics and AI,” he said. “Proper use of AI requires consideration of explainability, and an understanding by humans of the limitations of the AI or the algorithm.”
Going forward, he urged businesses to consider whether crucial decisions might undermine users’ trust, and whether their business models are sustainable for the long term, given that laws are continually responding to the concerns of consumer advocates and citizens about these new uses of data.
“More often than not, the issue isn’t the AI, or the algorithm itself, but rather the over-reliance, the overuse of it, in a range of circumstances… where it was inappropriate to use it,” he said, citing the government’s Robodebt debacle as a perfect illustration. Businesses will therefore also need to invest enough time in assurance and governance to ensure reliable decision-making by humans who depend upon data and algorithms.
“The issues to be addressed are not black-and-white issues of ‘can I, can’t I?’ Instead, they’re much more complex issues around ‘what should a responsible organisation do, or not do?’” he concluded.
Human decisions are crucial in an AI-driven world
Agreeing with the recommendations put forward by Prof. Leonard, Rob Nicholls, Associate Professor in Regulation and Governance at UNSW Business School, said one of the fundamental ways to avoid potential issues with automation is to ensure that AI is used as a tool to support decision-making, not as a tool to make decisions.
“One of the critical things, particularly in the government’s or regulator’s use of AI, is that it needs to be a decision support tool, but the decision is a human decision,” he said.
While Australia’s current privacy laws are inadequate for the problems faced by businesses and consumers in the modern world, more laws aren’t necessarily the answer. Instead, it’s fundamentally more important to consider how data uses can adversely affect humans or be socially beneficial, said A/Prof. Nicholls.
“Businesses must think about: what are you using this for? Is it in support of a decision? Why is that important? Because it’s still a CEO’s head on the line… the decision is made by a person. It’s a decision support tool, not pure automation. And I think it’s imperative to distinguish between the two. The biggest risk in business comes when you haven’t drawn that distinction,” he said.
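A minimal Python sketch can make the distinction A/Prof. Nicholls draws concrete. All names, fields and thresholds below are invented for illustration (the income-comparison logic loosely echoes the kind of automated assessment at issue in Robodebt, not its actual implementation): the model only recommends, and the decision of record belongs to a person.

```python
# Hypothetical sketch of AI as decision *support* rather than decision *maker*.
# The model, field names and values are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the model suggests
    confidence: float  # the model's confidence in its suggestion
    rationale: str     # explanation surfaced to the human reviewer

def model_recommend(case: dict) -> Recommendation:
    """Stand-in for an AI model: returns a suggestion, never a final decision."""
    overpaid = case["assessed_income"] > case["declared_income"]
    return Recommendation(
        action="raise_debt_notice" if overpaid else "no_action",
        confidence=0.72,
        rationale="Assessed income exceeds declared income (averaged estimate).",
    )

def decide(case: dict, human_decision: str, officer_id: str) -> dict:
    """The decision of record is the human's; the model output is advisory input."""
    rec = model_recommend(case)
    return {
        "recommendation": rec,       # logged for assurance and later review
        "decision": human_decision,  # accountability stays with a person
        "decided_by": officer_id,
    }

# Usage: the officer sees the recommendation and its rationale, then decides,
# and may reject the model's suggestion.
outcome = decide(
    {"declared_income": 41000, "assessed_income": 52000},
    human_decision="request_more_information",
    officer_id="officer-1234",
)
print(outcome["decision"])  # request_more_information
```

The design choice is the separation of `model_recommend` from `decide`: the automated output is logged as advisory input, while the recorded decision and the accountability for it remain with an identified person, which is the distinction A/Prof. Nicholls says Robodebt-style pure automation erases.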
“AI regulation needs joined-up policy,” said A/Prof. Nicholls. “We need to be able to address data protection and privacy protection concurrently. That is, a coherent policy approach across all these issues. Being able to walk and chew gum at the same time is critical and, sadly, very absent.”
Peter Leonard is a Professor of Practice for the School of Information Systems & Technology Management and the School of Management & Governance at UNSW Business School, immediate past chair of the Australian Computer Society’s AI Ethics Committee and a member of the NSW Government’s AI Review Committee. Rob Nicholls is an Associate Professor in the School of Management & Governance at UNSW Business School and the faculty lead for the UNSW Institute for Cyber Security (IFCYBER).