Algorithmic accountability: who is responsible when AI makes the wrong call?
This article is part of my ongoing series on the role of AI in corporate governance, where I explore how emerging technologies are reshaping responsibilities, risks, and regulatory expectations in the corporate world.
As a Digital EU Ambassador and serial entrepreneur with over 26 startups under my belt, I am deeply invested in the ethical dimensions of technology, particularly AI. The rapid integration of artificial intelligence into critical sectors—healthcare, finance, recruitment, and public services—has brought unprecedented efficiency and innovation. However, it has also introduced complex challenges regarding accountability when AI systems err, leading to reputational damage, financial loss, or even harm to individuals.
Who is accountable?
When an AI system makes a detrimental decision—such as denying a loan, misdiagnosing a patient, or unfairly screening out job applicants—determining who is responsible becomes a legal and ethical puzzle. Is it the developer who designed the algorithm, the organization that deployed it, or the data provider whose information trained the model? This ambiguity in accountability poses significant risks for businesses and individuals alike.
Regulatory frameworks addressing AI accountability
The European Union's AI Act
The European Union has taken a pioneering step with the introduction of the AI Act, which came into force on August 1, 2024. This regulation establishes a comprehensive legal framework for AI within the EU, categorizing AI systems based on their risk levels: unacceptable, high, limited, and minimal. High-risk AI systems, such as those used in critical infrastructure or employment decisions, are subject to stringent requirements, including conformity assessments and post-market monitoring to ensure ongoing compliance and safety.
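To make the risk-tier idea a little more concrete, here is a minimal Python sketch of how an organization might tag entries in an internal AI inventory against the Act's four categories and flag missing obligations for high-risk systems. The class names, fields, and checks are illustrative assumptions of mine, not terminology or requirements taken from the regulation itself, and certainly not legal advice.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The four risk tiers named in the EU AI Act
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # e.g. critical infrastructure, employment decisions
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI inventory (illustrative only)."""
    name: str
    purpose: str
    risk_tier: RiskTier
    conformity_assessed: bool = False    # assumed flag for pre-deployment assessment
    monitored_post_market: bool = False  # assumed flag for ongoing monitoring

def compliance_gaps(system: AISystemRecord) -> list[str]:
    """Flag obligations still open for a high-risk system (illustrative, not legal advice)."""
    gaps = []
    if system.risk_tier is RiskTier.HIGH:
        if not system.conformity_assessed:
            gaps.append("conformity assessment pending")
        if not system.monitored_post_market:
            gaps.append("post-market monitoring not in place")
    return gaps

# Example usage: an employment-screening tool would typically sit in the high-risk tier
screening = AISystemRecord("cv-screener", "ranks job applicants", RiskTier.HIGH)
print(compliance_gaps(screening))
```

Keeping even a simple inventory like this forces the question "which tier is this system in, and what follows from that?" to be answered before deployment rather than after an incident.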
OECD AI principles
On an international scale, the OECD AI Principles, adopted by 47 countries, advocate for AI systems that are innovative and trustworthy and that respect human rights and democratic values. A key principle holds that AI actors should be accountable for the proper functioning of AI systems and for respecting these principles, commensurate with their roles and the context.
ISO/IEC 42001 standard
To provide organizations with a structured approach to AI governance, the ISO/IEC 42001 standard offers guidance on establishing, implementing, maintaining, and continually improving an AI management system. This includes addressing ethical concerns, transparency, and accountability throughout the AI lifecycle, ensuring that AI technologies are developed and deployed responsibly.
Preparing for AI auditability and liability
For organizations deploying AI systems, proactive measures are essential to navigate the complexities of accountability:
- Implement robust governance structures: Establish clear policies and procedures that define roles and responsibilities for AI system development and deployment.
- Conduct regular audits: Perform thorough assessments of AI systems to ensure they function as intended and comply with relevant regulations and ethical standards (see the sketch after this list for one way to keep individual decisions auditable).
- Ensure transparency and explainability: Develop AI systems whose decision-making processes can be understood and scrutinized by stakeholders, including end-users and regulators.
- Engage in continuous monitoring: Monitor AI systems post-deployment to detect and address any issues promptly, adapting to new data and changing environments.
- Foster a culture of responsibility: Encourage all stakeholders involved in AI systems to take ownership of their roles in ensuring ethical and accountable AI deployment.
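As a concrete illustration of the audit and monitoring points above, the following Python sketch shows one way a deployed system could record each automated decision with enough context (model version, input fingerprint, outcome, human reviewer) to reconstruct who and what was involved when a decision is later challenged. The record structure and field names are my own assumptions for illustration; none of the frameworks discussed here prescribe this format.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry per automated decision (illustrative schema)."""
    system_name: str          # which AI system made the call
    model_version: str        # exact model/version that produced the outcome
    input_fingerprint: str    # hash of the input, so the case can be traced without storing raw data
    outcome: str              # e.g. "loan_denied", "application_shortlisted"
    human_reviewer: str | None = None  # who signed off, if anyone
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_decision(system_name: str, model_version: str,
                    input_payload: dict, outcome: str,
                    human_reviewer: str | None = None) -> DecisionRecord:
    """Create an audit entry; in practice this would go to append-only storage."""
    fingerprint = hashlib.sha256(
        json.dumps(input_payload, sort_keys=True).encode()
    ).hexdigest()
    return DecisionRecord(system_name, model_version, fingerprint, outcome, human_reviewer)

# Example usage: a loan decision traceable to a model version and a named reviewer
entry = record_decision(
    system_name="credit-scoring",
    model_version="2024.11.3",
    input_payload={"applicant_id": "A-1042", "income": 38000, "loan_amount": 12000},
    outcome="loan_denied",
    human_reviewer="credit.officer@example.com",
)
print(entry)
```

Hashing the input rather than storing it keeps the log useful for audits while limiting the personal data retained; whether that trade-off is acceptable depends on the audit and documentation obligations that apply to the system in question.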
The ethical imperative
As I note throughout this series, beyond regulatory compliance there is an ethical imperative for organizations to ensure that AI systems do not perpetuate biases or cause harm. This requires critical evaluation of training data, algorithms, and outcomes to identify and mitigate potential risks. Ethical AI deployment is more than avoiding negative consequences; it is about actively contributing to societal well-being.
Conclusion
As AI continues to permeate various aspects of society, establishing clear lines of accountability is paramount. Regulatory frameworks like the EU's AI Act, international principles from the OECD, and standards such as ISO/IEC 42001 provide valuable guidance. However, it ultimately falls to organizations to implement these guidelines effectively and to ensure that AI systems are not only innovative but also ethical and accountable.
By embedding accountability into the core of AI development and deployment, we can harness the full potential of AI technologies while safeguarding against their risks, fostering trust, and promoting responsible innovation.