How Accountability Changes for Leaders When AI Leads
Topic: Leadership, Strategy
Format: Article
Published Date: March 2026
Organisations worldwide have been swift to adopt AI, but far less proactive in governing it. As algorithmic systems assume greater influence over hiring, resource allocation, and strategic direction, that governance gap is becoming the defining test of leadership in the age of artificial intelligence.
Artificial intelligence has evolved from a peripheral business tool into the nerve centre of organisational decision-making. From pharma to retail, AI tools are reshaping recruitment, workflows and strategic direction: core enterprise pillars that were once governed entirely by human judgement. In effect, AI has become a key participant in the decision-making process.
Take the recent example of AstraZeneca, which signed a $555 million partnership with Algen Biotechnologies to pair CRISPR gene editing with AI systems. The partnership goes beyond simple data analysis: it gives AI a role in deciding which targets to pursue, which therapeutic pathways to prioritise, and how to allocate resources across a drug pipeline.
The scale of the deal signals the role artificial intelligence will play in future corporate strategy. Its growing influence over strategic decision-making creates a challenge that top leadership must now grapple with: how to design oversight, accountability, and ethical guardrails for AI systems that shape core strategic choices. There is a further nuance: AI does not interpret context, subtlety, or fairness as humans do. Its outputs can carry bias, raise privacy concerns, and emerge from opaque models, all of which demand supervision.
Companies have been quick to embrace AI's speed and predictive power, but far slower to tackle its ethical implications. McKinsey's 2025 report, The State of AI, finds that while 78% of companies have adopted some form of AI, only 28% report that AI governance is overseen by the CEO, and just 17% place it with the board.
This is why ethical leadership is non-negotiable for companies that rely on AI. Today's leaders are not only managing technology; they are stewarding systems that learn, generalise, and act in ways that profoundly shape human opportunity. Leaders must look beyond leveraging AI simply to maximise efficiency or scale: they must also uphold values that keep AI-driven decisions aligned with core organisational principles.
The ethical fault lines of AI
One of the most perilous risks of AI is algorithmic bias, a direct consequence of models trained on historical datasets that unintentionally introduce inequities into decision-making. In early-stage drug discovery, for example, models trained on incomplete or non-representative clinical data can skew results in ways that marginalise entire patient groups. A 2025 study published in the International Journal of Medical Informatics found that algorithmic bias in healthcare models often exacerbates existing disparities: AI-based triage and risk-assessment systems may misclassify the severity of illness for certain demographic groups, such as ethnic minorities or female patients.
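To make this concrete, here is a minimal, illustrative sketch of the kind of check an audit team might run: comparing a triage model's error rate across demographic groups. The predictions, labels, and groups below are entirely made up for the example; no real system or dataset is implied.

```python
# Hedged illustration: comparing misclassification rates across groups.
# All data here is hypothetical, invented purely for demonstration.

def error_rate(preds, labels):
    """Fraction of predictions that disagree with ground truth."""
    return sum(p != y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical triage predictions (1 = high severity) for two groups
# with identical true severity profiles.
group_a = {"preds": [1, 0, 1, 1, 0, 1], "labels": [1, 0, 1, 1, 0, 1]}
group_b = {"preds": [0, 0, 1, 0, 0, 0], "labels": [1, 0, 1, 1, 0, 1]}

rate_a = error_rate(group_a["preds"], group_a["labels"])
rate_b = error_rate(group_b["preds"], group_b["labels"])
print(f"group A error rate: {rate_a:.2f}, group B error rate: {rate_b:.2f}")
# A large gap suggests the model under-detects severity for group B,
# the pattern the study above describes.
```

In practice such comparisons would be run continuously, across many metrics and subgroups, rather than once on a toy sample.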
Beyond healthcare, similar risks arise in hiring, credit scoring, law enforcement, and retail. Amazon scrapped its experimental recruitment algorithm in 2018 after discovering that it penalised CVs containing the word "women's" and downgraded graduates of all-women's colleges, because the model had been trained on historical hiring patterns dominated by men. While the harm was not intentional, such incidents show how AI can magnify existing inequities without early ethical intervention.
Another critical challenge lies in model opacity. As complex architectures such as large language models, deep neural networks, and agentic systems become commonplace, their decision logic becomes harder to trace. Leaders may rely on outputs without fully understanding the assumptions driving them. This creates two problems: it weakens an organisation's ability to diagnose errors, and it deeply complicates accountability. When a model behaves unexpectedly, who must be held responsible: the data scientist, the leader who authorised deployment, or the algorithm itself?
The impact of systemic AI errors is also more far-reaching than a lapse in human judgement. A flawed spreadsheet may affect a department; an inaccurate model powering triage, safety or pricing decisions can affect thousands or millions of people in real time. Take the example of real estate company Zillow. In 2021, the company shut down its home-flipping programme, Zillow Offers, after its pricing algorithm overestimated home values. The model's median error rate for off-market homes reached 6.9%, leading Zillow to buy tens of thousands of properties at inflated prices and absorb a $304 million write-down in a single quarter. Among the anomalies the algorithm failed to price in was the impact of COVID-19 on property prices, an unprecedented event that human analysts might have accounted for. The case illustrates how algorithmic misjudgements can rapidly trigger organisation-wide failures, erode trust, and carry significant financial and reputational consequences.
Beyond these operational risks, AI introduces broader societal and ethical concerns that leaders must tackle. The rise of deepfakes threatens credibility and trust, while privacy risks can escalate beyond what users can reasonably consent to. Even the environmental footprint of AI is growing: training a single large model can consume as much electricity as hundreds of households use in a year. And unresolved moral dilemmas, from autonomous vehicles deciding between harmful outcomes to AI agents navigating ambiguous trade-offs, demand leadership that can set clear ethical boundaries before deployment.
Building the next generation of ethically responsible leaders
Confronting these ethical fault lines demands a new generation of ethically responsible leaders who view AI as an organisational tool that must be governed with intention and foresight. Leaders must insist on a governance architecture that embeds fairness, accountability, transparency, and safety into every stage of the AI lifecycle.
This could include instituting rigorous bias-testing and model-audit protocols as standard practice. Techniques such as counterfactual fairness testing, representative data sampling, and continuous monitoring can uncover discriminatory patterns early. In addition, ethical governance must move beyond episodic signalling to become continuous, iterative, and embedded in everyday workflows.
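Counterfactual fairness testing can be sketched very simply: flip a protected attribute in an individual's record and check whether the model's output changes. The scoring function below is a hypothetical stand-in for a trained model, with a deliberately biased weight to show what the test catches; none of it refers to a real system.

```python
# Hedged sketch of a counterfactual fairness check.
# `score_candidate` is a made-up stand-in for a trained model.

def score_candidate(features):
    """Weighted sum standing in for a model's score. The non-zero
    weight on 'gender' simulates bias leaked from historical data."""
    return (0.6 * features["experience_years"]
            + 0.4 * features["skill_score"]
            + 0.5 * features["gender"])  # a fair model would weight this 0

def counterfactual_gap(features, protected_key="gender"):
    """Flip the protected attribute and measure the score change.
    Under counterfactual fairness, the gap should be (near) zero."""
    flipped = dict(features)
    flipped[protected_key] = 1 - flipped[protected_key]
    return abs(score_candidate(features) - score_candidate(flipped))

candidate = {"experience_years": 5, "skill_score": 8, "gender": 0}
gap = counterfactual_gap(candidate)
print(f"counterfactual score gap: {gap:.2f}")  # a large gap flags bias
```

Real audits apply this idea across whole datasets and multiple protected attributes, and combine it with the sampling and monitoring practices described above.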
Next, leaders must insist on transparency mechanisms that make a model's reasoning legible. Tools such as model cards, lineage documentation, and interpretability dashboards can help executives and regulators understand why a system behaves the way it does, reducing the black-box risk of decisions that no one can explain.
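A model card, at its simplest, is just a structured record that travels with the model. The sketch below shows one minimal shape such a record might take; the field names and the example values are illustrative, not drawn from any particular standard or deployment.

```python
# Hedged sketch of a minimal "model card" record.
# Field names and values are illustrative assumptions only.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    fairness_checks: dict = field(default_factory=dict)

card = ModelCard(
    name="triage-risk-model",          # hypothetical model name
    version="2.1.0",
    intended_use="Rank cases for human review; not for automated denial.",
    training_data="2019-2024 claims data; under-represents patients over 80.",
    known_limitations=["Lower recall for rare conditions"],
    fairness_checks={"error_rate_gap_by_group": 0.03},
)
print(asdict(card)["intended_use"])  # reviewers read this before approving use
```

Published alongside each release, a record like this gives a governance committee something concrete to interrogate, rather than asking questions of an opaque artefact.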
To ensure clear accountability, companies must create governance committees with cross-functional representation that includes legal, technical, ethics, and operations personnel who have real authority to question deployments. Microsoft’s Responsible AI Standard and DBS Bank’s AI governance framework offer examples of organisations that have initiated such safeguards.
But governance alone is not enough. Organisations need leaders who can also effectively communicate how AI decisions are made, how data is used, and where human judgment prevails. Ethical leadership also demands that concerns are heard without fear of retribution, especially when they relate to potential harm. Providing early access to AI tools through sandboxes, training programmes, and participatory review processes can empower employees to raise potential red flags.
Finally, ethical leadership requires adaptability. AI is evolving quickly, and leaders must stay abreast with emerging regulations, global standards, and evolving technological capabilities. Policies and governance mechanisms must be revisited regularly, instead of being treated as static documents.
“The next era of leadership will be defined not by how well executives deploy AI, but by how well they discipline it,” said Professor Saharsh Agarwal, Assistant Professor of Information Systems, ISB. “As algorithmic judgement permeates boardrooms, leaders must learn to differentiate between what AI can optimise and what it should never decide. The responsibility for the next generation of leaders is to cultivate organisations and cultures that rigorously challenge pattern-driven inferences and assumptions, ensuring AI models aid human judgement rather than replacing it.”
Ethical leadership as the new strategic advantage
The rise of AI is a decisive turning point in organisational leadership. As algorithms increasingly shape strategic decisions, leaders can no longer afford to ignore the ethical implications. A new generation of leaders must cultivate the ability to govern AI with clarity, courage, and, perhaps most importantly, moral discipline.
Ethical leadership needs to support the growth of AI tools that are trustworthy, equitable, and aligned with long-term value creation. Companies that embed fairness, transparency, accountability, and continuous supervision into their AI strategies will be better equipped to navigate uncertainty, earn public trust, and differentiate themselves in an increasingly automated world.
References:
- https://knowledge.insead.edu/leadership-organisations/age-intelligence-questions-shape-new-world
- https://professional.dce.harvard.edu/blog/ethics-in-ai-why-it-matters/#Ethical-Challenges-in-AI
- https://arxiv.org/html/2410.18095v2
- https://www.edstellar.com/blog/ethical-leadership-in-the-age-of-ai
- https://emeritus.org/in/learn/ethical-leadership-is-key-to-success/
- https://itrevolution.com/articles/the-hidden-key-to-ethical-ai-leadership-its-not-what-you-think/
- https://www.ivey.uwo.ca/executive-education/insights/2024/12/ethical-and-strategic-leadership-in-the-age-of-ai-the-frontier-of-research-and-practice/
- https://insideainews.com/2021/12/13/the-500mm-debacle-at-zillow-offers-what-went-wrong-with-the-ai-models/
- https://edition.cnn.com/2021/11/09/tech/zillow-ibuying-home-zestimate
- https://www.ft.com/content/c4b5153f-be07-454d-911f-31bb011f09ae
- https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
- https://www.sciencedirect.com/science/article/pii/S1386505625000553
- https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/the%20state%20of%20ai/2025/the-state-of-ai-how-organizations-are-rewiring-to-capture-value_final.pdf
