The Illusion of AI Governance
There is a theatre unfolding before us. The actors: regulators, policymakers, AI companies. The script: ethical AI, responsible innovation, compliance frameworks. The performance: convincing enough, if one does not look too closely.
For beneath the surface, power remains exactly where it always was. AI regulation, as it stands, is a polite request for companies to behave, with the occasional reprimand when they do not. It is a game in which regulators pretend they can keep up, companies pretend they are committed to ethics, and the public pretends that someone, somewhere, is controlling it all.
The dominant AI governance model is, in reality, a sophisticated form of self-regulation, dressed up with compliance checklists and voluntary standards to create the illusion of oversight. Which is a rather clever way of saying:
Companies assess their own AI systems. They certify their own ethics. Regulators intervene only after harm has occurred, and only when they are permitted to. Fox, meet hen house.
This is not governance. This is corporations policing themselves while policymakers attempt to grasp technologies that are already reshaping the world. And yet, we continue to play along.
If AI companies can build, deploy, and profit from their systems before proving they are safe, ethical, or even remotely aligned with human interests, what reason would they have to slow down? What would compel them to stop? They will not. They never have.
The only force that shifts corporate behaviour is power. Not philosophy, not voluntary codes of ethics, not well-meaning advisory boards. Regulation is only effective when there are consequences that companies genuinely fear, when the cost of doing the wrong thing outweighs the cost of doing the right one. At present, the incentives remain entirely misaligned.
AI companies should not be permitted to deploy first and deal with the consequences later. They ought to be required to prove, in advance, that their systems are safe, ethical, and transparent. AI audits should be external, independent, and tied to real legal accountability. If an AI system discriminates, manipulates, or causes harm, the executives responsible should face personal liability. Why should the executives of financial firms answer for reckless algorithms, while those leading AI companies do not?
Even with the most robust regulatory framework imaginable, a central problem remains: enforcement. Governments are often too slow, underfunded, or compromised by industry influence. International frameworks, such as those championed by the European Union, may establish strong standards, but enforcement falters when AI models are trained or deployed in jurisdictions with little or no oversight. As for consumer pressure, it is largely impotent in the absence of real alternatives. Individuals cannot meaningfully shape the systems that govern their lives if opting out is impossible.
Ethics without leverage is little more than public relations.
What exists today is not governance. It is a performance. Until real power shifts, until AI ethics is tied to something companies cannot ignore, this will remain a game. A sophisticated one, an interesting one, but a game nonetheless.
So the question is not merely who has the leverage to intervene, but whether they will ever choose to use it.
A first, realistic step? The self-regulation model must be dismantled at its foundation. We do not let airlines certify their own planes. We do not let clinics approve their own trials. AI companies should not be trusted to mark their own homework. At the very least, independent third-party oversight should be required for AI systems in critical areas such as hiring, finance, and healthcare, before deployment rather than after harm has already been done.
Far from revolutionary or utopian, this is a modest redistribution of power—one that shifts AI governance from trust to enforceable obligation. Which, given the current state of affairs, would be radical enough.