How organisational goals can undermine responsible AI implementation
New research reveals how organisational goals influence AI ethics, with growth-focused objectives driving compromises on fairness and accountability
AI bias isn’t just a theoretical concern – it’s a real-world problem with serious consequences. In 2015, for example, Google’s image recognition system mislabelled Black people as gorillas, exposing flaws in AI fairness testing. The company quickly disabled the categorisation but struggled with a true fix, ultimately blocking its AI from identifying gorillas at all. Similarly, Amazon’s AI recruitment tool favoured male candidates because it was trained on resumes from a male-dominated industry, penalising terms like “women’s” and reinforcing gender bias. Amazon scrapped the tool in 2018, highlighting how AI can inherit and amplify societal inequalities without rigorous oversight.
These failures raised significant concerns about the societal impact of AI systems deployed without thorough testing for fairness and bias. And while these high-profile cases involved some of the largest tech companies in the world, most organisations deploying AI face similar trade-offs between organisational performance and responsible AI, according to Sam Kirshner, Associate Professor in the School of Information Systems and Technology Management at UNSW Business School.

“For instance, prioritising model accuracy to maximise ad revenue may come at the expense of explainability, making it harder to ensure fair decision-making. Similarly, enhancing consumer targeting might increase effectiveness but could infringe on privacy, leaving organisations to grapple with these competing demands in their pursuit of growth,” he explained. Despite mounting evidence of these risks, organisations face intense market pressure to accelerate AI deployment.
New research from UNSW Business School explores why organisations often compromise on responsible AI practices despite knowing the risks. The study, Preventing promotion-focused goals: The impact of regulatory focus on responsible AI, published in Computers in Human Behavior: Artificial Humans, found that companies focused primarily on growth and success were more likely to cut corners on AI ethics than those focused on preventing failures and maintaining standards.
"Unlike previous technological advancements, AI stands apart through its ability to autonomously make decisions and learn beyond its initial programming and datasets," said co-authors A/Prof. Kirshner and UNSW Business Information Systems alumna Jessica Lawson, a UNSW Co-op Scholar who was awarded a University Medal and received First Class Honours for her thesis on this topic. This autonomous learning capability creates unique risks when AI systems are deployed without proper safeguards.
"Firms often face intense pressure to rapidly transition AI systems into market-ready products to gain competitive advantages; however, the stakes are far higher with AI than with traditional software," the research paper said. The challenge lies in balancing the drive for innovation and market leadership with the ethical imperative to ensure AI systems are fair, transparent, and accountable.
How growth-focused goals compromise AI ethics
The research demonstrated that organisations with "promotion-focused" goals emphasising success and achievement were more likely to compromise on responsible AI practices than organisations with "prevention-focused" goals centred on avoiding losses and failures.
Through a series of four experimental studies involving over 400 people, the researchers examined how organisational goals influenced decisions about AI implementation. Participants were presented with scenarios involving trade-offs between AI system performance and ethical considerations like fairness and explainability.

“There is tremendous variation in the ethical values of managers and executives regarding AI across industries,” A/Prof. Kirshner said. “For example, it is easier for manufacturing firms to champion human-centred AI principles (i.e., ensuring AI systems respect human rights, diversity, and individual autonomy) compared to marketing firms that rely on personal data to target and sell products, often encouraging purchases that may not align with consumers' actual needs. Jess and I realised that pre-determined organisational goals may shape managerial moral stances and decisions.”
The rise of unethical pro-organisational behaviour
"Organisations that exhibit AI-based unethical pro-organisational behaviour with a prevention-focused goal would experience cognitive dissonance due to the inconsistency between their goal of minimising risks and unethical behaviour," the researchers explain. This internal conflict often led to a stronger emphasis on responsible AI values.
In contrast, organisations with promotion-focused goals seeking rapid growth experienced less internal conflict when compromising on AI ethics. The study found this created a concerning feedback loop: "Promotion-focused goals can introduce a feedback loop, suggesting that initial unethical choices due to promotion-focused goals can foster an environment that triggers further unethical pro-organisational behaviour."
The researchers observed this pattern across multiple scenarios. When organisations prioritised rapid growth and market leadership, they were more likely to justify compromises in AI fairness and transparency as necessary trade-offs for achieving business objectives. These initial compromises then made it easier to justify further ethical shortcuts in future AI deployments.
The study found that prevention-focused organisations, by contrast, were more likely to delay AI deployment to address potential ethical issues, even when facing competitive pressures. This approach, while potentially slower, helped establish more robust ethical frameworks for AI implementation.
Not only did subtle changes in organisational identity and goals (e.g., "striving to be a leader and one of the largest firms in digital marketing and digital retail" versus "maintaining its position as a well-regarded firm in digital marketing and digital retail") lead to noticeable differences in ethical decisions, but A/Prof. Kirshner observed they also altered individuals’ own perceptions of acceptable ethical conduct.
“For example, participants acting within a promotion-focused organisational context deemed it less important ‘for the firm’s AI system to be inclusive and accessible, and not involve or result in unfair discrimination against individuals or groups’ compared to those operating under prevention-focused goals,” he said. “This suggests that organisational goals can shape not just strategy but also the moral compass of those making critical decisions.”

Practical implications for business leaders
The research provides clear guidance for organisations implementing AI systems. "Firms looking to improve their responsible AI use should create prevention-focused goals and mindsets for business units designing and developing AI," the researchers recommended. This includes establishing clear ethical guidelines and creating accountability mechanisms.
For business leaders, the research highlights the importance of carefully examining organisational goals when implementing AI systems. Rather than rushing to deploy AI for competitive advantage, organisations should establish clear ethical frameworks and testing protocols.
"Our research illuminates focus areas for organisations looking to administer AI responsibly," the researchers said. "Firms' objectives play a critical role in establishing responsible AI values and decisions taken by employees, with goals focusing on maximising success significantly worsening responsible AI attitudes."
The study recommends organisations take a holistic, human-centred approach to AI implementation. This includes creating prevention-focused goals that emphasise thorough testing and ethical considerations, establishing clear accountability mechanisms, and fostering a culture that values responsible AI practices.
“First determine your firm's ethical values using Australia's AI Ethics Principles and considering all stakeholders. Ethical values must drive a firm's goals when launching AI products and services – not be retrofitted to justify them,” A/Prof. Kirshner concluded.