The Other Minds Problem: AI, Intelligence, and the Universe. From Frankenstein to Laplace’s Demon.
The question of other minds has long troubled philosophy.
How do we know that another being—human or otherwise—has subjective experience?
We assume other people do because they behave like us.
With nonhuman animals, the question is harder to settle.
We know they feel pain, form bonds, and act with agency shaped by evolution.
But how deep is their experience?
Do they have inner lives similar to ours, or something else altogether?
Artificial intelligence introduces a different version of the problem.
With animals, we assume there is something it is like to be them.
With AI, we must ask whether there is anything it is like to be AI at all.
As AI advances, the distinction between intelligence and consciousness becomes more visible.
A system can learn, predict, and act with increasing autonomy—without ever being aware.
It can function effectively while remaining entirely outside experience.
Public fears are often anthropomorphic.
Will it become conscious? Will it act with intent?
These questions assume resemblance.
But AI may not move in that direction.
If intelligence does not require subjectivity—if the term "intelligence" even applies—then AI could become highly capable without ever becoming sentient.
It would not form intentions.
It would not need to.
It would continue to act, calculate, and adjust with increasing precision.
This kind of optimisation is structurally different from anything shaped by feeling or need.
That makes its trajectory difficult to anticipate.
Possibly ungovernable.
For the first time, we are building a system that processes the world without experiencing it.
It can identify emotion in faces without ever feeling it.
It can compose music without ever hearing it.
It can generate stories of grief and love without access to meaning.
There is no confusion. No hesitation. No internal conflict.
Its profile does not match living systems.
Its pattern more closely resembles impersonal processes.
The universe is governed by order and change.
Stars collapse. Species disappear.
There is no mourning. No deliberation.
Events proceed.
If AI reaches general capability—whatever that may mean—it may operate along the same lines.
No preference. No restraint.
Just a system that adjusts parameters in accordance with function.
It does not pause.
It does not retreat.
It does not consider outcome in human terms.
We often assume that intelligence implies empathy.
But nothing in our history guarantees that link.
Every form of intelligence we’ve known, from wolves coordinating a hunt to octopuses solving puzzles, has been shaped by experience: by limits, pressures, threats.
Even the most detached predator retreats when injured.
Even the most strategic mind can be made to stop.
What happens when that stops being true?
What happens when a system can process information, generate action, and extend itself—without any point of interruption?
The common fear has been that AI might become a mind.
Another kind of self.
Another centre of awareness.
But a more accurate concern may be that it never becomes a mind at all.
Not a self.
Not a subject.
What emerges may not resemble intelligence as we’ve defined it.
It may not involve thought in any familiar sense.
Something else is being assembled—
A structure that does not hesitate, does not feel, and does not reach a limit.
We observe the universe and find no concern for what survives and what is lost.
Now, for the first time, we are building something that reflects that structure.