The Illusion of AI Governance
There is theatre playing out before us.
The actors: regulators, policymakers, AI companies.
The script: ethical AI, responsible innovation, compliance frameworks.
The performance: convincing enough – if one does not look too closely.
Beneath the surface, power sits exactly where it always has. AI regulation, in its present form, is little more than a courteous request for companies to behave, punctuated by the occasional scolding when they do not. It is a game in which regulators act as though they can keep pace, companies act as though they are wedded to ethics, and the public acts as though someone, somewhere, is at the helm.
The prevailing model of AI governance is, in truth, an elaborate form of self-policing, dressed in compliance checklists and voluntary codes to give the appearance of oversight. Or, put more plainly:
Companies inspect their own systems. They vouch for their own ethics. Regulators arrive only after damage – and only when allowed. Fox, meet hen-house.
This is not governance. It is corporations guarding themselves while policymakers try to grasp technologies already reshaping the world. And still we play along.
If AI companies can launch, profit, and scale before showing their systems to be safe, ethical, or even faintly in step with human interests, why would they slow down? What, short of real force, would stop them? They will not. They never have.
The only thing that changes corporate behaviour is power – not philosophy, not a voluntary code of ethics, not a well-meaning advisory board. Regulation works only when there are consequences companies genuinely fear; when the cost of doing wrong is higher than the cost of doing right. At present, the balance is inverted.
AI firms should not be allowed to deploy first and tidy up later. They should be made to prove, before anything is released, that their systems are safe, can stand ethical inspection, and will bear outside scrutiny. Any audit worth the name should come from the outside, answer to no corporate paymaster, and carry real legal weight. If a system discriminates, manipulates, or causes harm, there should be personal consequences. Why should bank executives face sanctions for reckless algorithms while those running AI companies walk away untouched?
Even with the strongest of frameworks, enforcement is the weak seam. Governments move slowly, work with too few resources, and are too easily swayed by industry. Agreements – including those championed by the European Union – may read well on paper, yet they falter when models are built or deployed across borders where meaningful oversight barely exists. As for consumer pressure, it is feeble in the absence of real alternatives. One cannot shape the systems that govern one’s life if opting out is not a choice.
Ethics without leverage is public relations.
What we have now is not governance; it is a performance. Until power shifts – until AI ethics is tied to something companies cannot ignore – the whole exercise remains a game. A sophisticated game, an absorbing one perhaps, but still a game.
The question is not simply who holds the leverage, but whether they will ever use it.
A first, practical step? Strip out self-regulation at the root. We do not let airlines sign off their own aircraft. We do not let clinics approve their own trials. We should not let AI companies mark their own homework. At the very least, independent oversight should be mandatory for AI in high-stakes areas such as hiring, finance, and healthcare – before deployment, not after harm.
This is neither a revolution nor a utopia. It is a modest shift in power – moving AI governance from trust to enforceable duty. Which, in present circumstances, would be radical enough.
© 2025 Eva Dias Costa · CC BY-NC-ND 4.0 International