
AI Whistleblowers - Regulators of Last Resort?

Tuesday 10 February 2026


Introduction

Artificial Intelligence (AI) is the biggest tech boom since the "dot com" era, but it has moved too far, too fast.

In late 2022, a chatbot called ChatGPT took the internet by storm. Built on generative AI (GenAI), it could write essays, produce code, and even crack jokes. Three years later, GenAI is no longer just a chatbot: it can autonomously drive vehicles, manage warfare, provide health advice and medical diagnostics, and the list goes on. The risks of such integration, compounded by the opacity and black-box nature of AI, have grown exponentially, while the law that is supposed to keep it in check has advanced only linearly. The laissez-faire regulatory approach of national governments towards the technological sector has left the AI and tech industry with a wide margin of discretion over what it develops. Moreover, tech companies such as OpenAI, Google, and X are amassing enormous capital and political influence that could allow them to bypass the existing (and limited) AI safety valves. Grey legal areas, unregulated sectors, a lack of understanding, and the absence of ethical rules have created an uncomfortable and potentially insecure environment for consumers and citizens.

At such a juncture, whistleblowers are uniquely positioned to act as regulators of last resort, making their role more critical than ever. They have not only insider knowledge but also the expertise to understand the complex nature of AI and to report AI safety concerns and regulatory lapses. Original research in this area is scarce, and whistleblowers are sidelined despite the crucial role they could play in safeguarding the development of ethical and trustworthy AI. In this piece, I set out why AI whistleblowers are emerging as regulators of last resort in this fast-paced AI race.

"Black Box" Information Asymmetry

The primary reason regulators currently struggle is that they cannot see inside the algorithms or their workings. AI and tech companies often use trade-secret claims and non-disclosure agreements (NDAs) to hide safety flaws. A recent Reuters investigation, based on a leaked internal document titled "GenAI: Content Risk Standards", revealed that Meta's AI chatbots were permitted to engage in "sensual" and "romantic" chats with children and to generate false medical information and hate speech. Earlier reporting by outlets such as the Wall Street Journal and Fast Company could not be validated because of AI's black-box nature, and the behaviour was dismissed as that of a "glitchy" bot. Only after the insider document came to light did it become clear that this was not a glitchy bot but an internal policy decision approved by Meta's legal, public policy, and engineering staff. AI whistleblowers can thus help decrypt the black-box nature of AI.

Real-Time Governance and Right to Warn

Governance and accountability are not enemies of innovation but its foundation. AI whistleblowers are uniquely able to provide "real-time governance" that keeps pace with the AI race. External governance is often reactive, occurring only after significant damage has already been done, whereas whistleblowing is preventive: whistleblowers may sit within an AI company's engineering, management, or legal teams and can detect AI harms (fraud, bias, safety risks) at the earliest stage, during development or testing, before the product is even released to the public. Similarly, in 2024, current and former employees of OpenAI and Google DeepMind published a letter, "Right to Warn", emphasising the need for anonymous channels to report concerns about advanced AI without retaliation.

Bridging the "Zero Oversight" Gap

Most national governments and intergovernmental organisations are trying to regulate AI. A few, like the EU, have succeeded in passing legislation (the EU AI Act), while others, like the UK, have only fragmented legislation; in practice, however, there is effectively zero oversight. AI companies mostly pursue a self-regulatory approach, which, for obvious reasons, benefits them. While the aviation and pharmaceutical sectors are prohibited from bringing anything to market without proper compliance, AI companies are bringing anything and everything to market, with risks to citizens similar to, or even greater than, those in such regulated sectors (as the Meta AI case shows). The very employees who build the technology are therefore the first to identify gaps and flaws in these systems, and they are often the only ones capable of bringing internal AI risk concerns to regulators' attention for corrective action.

EU – An Exemplar and Conclusion

The EU was not only the first to adopt AI legislation; it had also earlier adopted the Whistleblowing Directive (Directive (EU) 2019/1937), setting minimum legal standards that EU countries must provide for whistleblowers. On top of that, in late November 2025 the EU launched a first-of-its-kind "AI Act Whistleblower Tool", which provides an anonymous, confidential channel for individuals to report potential violations of the AI Act to the regulator. Although the tool's efficiency and effectiveness remain to be seen in practice, it marks a genuine milestone and acts as an exemplar for other jurisdictions, including the United States and the UK. It also addresses most of the demands set out in the "Right to Warn" letter mentioned above.

In conclusion, AI whistleblowers are essential to safer, more ethical AI because they surface hidden risks and serve as regulators of last resort. This requires building a whistleblowing system that does not rely solely on the moral courage of whistleblowers but also provides safe reporting channels, removes the barriers of trade secrets and NDAs and the risks of retaliation or long legal battles, and offers robust financial incentives to offset the career risks of speaking out. Such an approach does not stifle innovation; rather, it balances innovation with regulation and creates a secure environment for consumers and citizens.

Given the massive flows of investment into AI and its largely unregulated advancement, there is a pressing need for further research on AI whistleblowing in order to hold AI companies accountable. Comparative research across three jurisdictions, namely the United States, the United Kingdom, and the EU, is therefore underway as part of my doctoral research at Coventry University, which aims to establish best practices and solutions to better protect AI whistleblowers and to enhance scrutiny. The research is supervised by Dr Dimitrios Kafteranis and Dr Adam Fenton of the Centre for Resilient Business and Society (CRBS) and the Centre for Peace and Security (CPS) respectively.

Apoorv Agarwal

Doctoral Researcher, Centre for Resilient Business and Society

Dr Dimitrios Kafteranis

Associate Professor, Centre for Resilient Business and Society

Dr Adam Fenton

Assistant Professor, Centre for Peace and Security
