The Case Against Solipsism
I was thinking about solipsism the other day — the idea that there might only be one experiencer in reality. Instead of trying to prove whether it’s true or false, I started thinking about it from a different angle. I asked myself: what would the minimum requirements be for a reality to actually work for an experiencer? Not philosophically, but practically — what kind of world would keep the experiencer engaged instead of eventually checking out of the experience altogether?
So imagine a creator building a kind of game world. Something like Roblox. The creator inserts a single character into the world, and that character is the experiencer.
At the beginning the world is extremely simple. Just a square piece of land, say one kilometer by one kilometer, with some basic physics. The experiencer wakes up and starts exploring. Very quickly they reach the edge. At that point they understand something important: the world is finite. There is nothing fundamentally new left to discover. After some time that becomes boring. Eventually the experiencer just disengages. The game ends.
The creator understands that expanding the world physically leads to the same result. Any finite space can eventually be explored. Once the experiencer understands the limits of the environment, novelty runs out.
So the creator tries something clever and makes the world spherical instead of flat. Now you can walk forever without reaching an edge. For a while that works. But eventually the experiencer discovers mathematics — the language that describes the rules of the world. With math they can infer the structure of the space even without touching the boundary. They realize the world is closed. The limit becomes visible conceptually even if it’s not physically reachable.
Again, novelty runs out.
At that point the creator realizes the problem is deeper: the world can’t be allowed to present a final spatial limit at all, reachable or not. So the creator makes the universe effectively unbounded — always expanding, always leaving room for new territory. That solves the spatial problem.
But space alone still isn’t enough. So the creator fills the world with things: rocks, mountains, oceans. That makes the world more interesting for a while, but static complexity eventually becomes familiar. Given enough intelligence, patterns get compressed. Once something is fully understood, it stops producing novelty.
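That compression intuition can be made concrete with a toy measurement (my own sketch, not a rigorous model of understanding): hand a general-purpose compressor, standing in for an intelligent observer, a world made of one repeated pattern and a world made of pure noise, and see how much description each actually contains.

```python
import random
import zlib

# A "world" of static complexity: the same features repeated over and over.
patterned_world = b"rock mountain ocean " * 500      # 10,000 bytes

# A "world" of pure noise: nothing repeats, nothing can be modeled.
random.seed(0)
noisy_world = bytes(random.randrange(256) for _ in range(10_000))

def compressed_fraction(data: bytes) -> float:
    """Size after zlib compression, as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

print(f"patterned world: {compressed_fraction(patterned_world):.1%} of original size")
print(f"noisy world:     {compressed_fraction(noisy_world):.1%} of original size")
```

The repeated world collapses to a tiny description: once you have the pattern, there is nothing left to learn. The noise barely compresses at all; it is full of "information" in the technical sense, but none of it can be modeled.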
So the creator adds life. Plants, animals, ecosystems. Now survival enters the picture. The experiencer has to adapt, hunt, learn, avoid threats. That holds attention much longer.
But over time patterns still emerge. Ecosystems stabilize. Behavior becomes predictable. Eventually the experiencer understands the system well enough that it stops producing new surprises. The experience flattens again.
You might think the experiencer could just generate novelty internally — dreams, imagination, endless variations in thought. But that turns out to have its own problem. If internal novelty is unconstrained, it starts to resemble noise. And noise isn’t meaningful novelty. Meaningful novelty requires structure — something that can be understood but not fully exhausted. Pure randomness overwhelms rather than engages.
So internal novelty alone doesn’t really solve the problem either.
At this point the creator introduces another human-like entity. Someone the experiencer can interact with. And that changes everything for a while, because social interaction produces a huge amount of novelty. Conversations, conflicts, cooperation — the system becomes much richer.
But if this other entity is just an NPC running on fixed rules, eventually the experiencer figures out those rules too. No matter how complex the program is, if it’s bounded the patterns can be learned in principle. Once the class of behavior becomes predictable, the novelty fades again.
So the obvious fix is randomness. Just make the NPC behave randomly. But randomness turns out not to solve the problem either.
If the randomness is small — say, decisions drawn from fixed probability distributions — the experiencer eventually learns those distributions. The system becomes predictable in a statistical sense. You may not know the exact next move, but you know the envelope of behavior. That kind of uncertainty doesn’t produce endless novelty.
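"Knowing the envelope" is just empirical estimation. A minimal sketch, where the NPC's three actions and their probabilities are invented for the example, not anything from the scenario above:

```python
import random
from collections import Counter

random.seed(1)

# A hypothetical NPC with fixed action probabilities.
actions = ["greet", "trade", "flee"]
true_probs = [0.6, 0.3, 0.1]

# Watch the NPC for a while and tally what it does.
observed = Counter(random.choices(actions, true_probs, k=20_000))
estimate = {a: observed[a] / 20_000 for a in actions}

for action, p in zip(actions, true_probs):
    print(f"{action}: true {p:.2f}, estimated {estimate[action]:.3f}")
```

The exact next move stays uncertain forever, but after enough observation the statistics stop being news.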
On the other hand, if randomness is too large, the behavior stops making sense. It becomes noise. Nothing connects to anything else. You can’t form expectations or narratives. Surprise loses its meaning because there’s no structure behind it.
In other words, randomness fails in both directions. Too little randomness becomes predictable. Too much randomness becomes meaningless.
What actually sustains engagement sits in a narrow middle space — where things are surprising but still intelligible. Enough structure to form models, enough deviation to keep breaking them.
And randomness can’t maintain that balance on its own.
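The two failure directions can be quantified with a toy predictor (an illustrative sketch; the alphabet and the repeating pattern are assumptions of the example). A learner who knows the world's underlying cycle is almost never surprised when noise is low, and no better than chance when noise is high, so being surprised carries no usable information at either extreme.

```python
import random

random.seed(2)

SYMBOLS = "abcd"          # the world's alphabet (invented for this sketch)
N = 10_000                # steps observed per noise level

def accuracy(eps: float) -> float:
    """How often a learner who knows the repeating pattern predicts the
    next symbol, when each emission is corrupted with probability eps."""
    hits = 0
    for t in range(N):
        expected = SYMBOLS[t % len(SYMBOLS)]          # the underlying cycle
        emitted = random.choice(SYMBOLS) if random.random() < eps else expected
        hits += emitted == expected
    return hits / N

for eps in (0.0, 0.1, 0.5, 1.0):
    print(f"noise {eps:.1f}: prediction accuracy {accuracy(eps):.2f}")
```

At zero noise the world is solved; at full noise accuracy sits at chance and surprise means nothing. The engaging region is the band in between, and nothing in this setup holds the world there.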
What does maintain it is agency. Another entity with its own goals, memory, history, and the ability to respond to what the experiencer does. When two agents interact, they constantly adjust to each other. The system stays coherent, but it never fully settles into predictability.
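One standard toy that shows this dynamic is fictitious play in matching pennies (a game-theory example I'm borrowing for illustration, not something implied by the scenario): each player best-responds to the other's entire history, so the system stays statistically coherent (long-run frequencies hover near 50/50) while the actual sequence of moves keeps shifting instead of settling into a fixed choice.

```python
from collections import Counter

# Matching pennies: A wins by matching B's move, B wins by mismatching A's.
# Each round, each player best-responds to the opponent's observed history.

def best_response(opponent_moves: Counter, want_match: bool) -> str:
    likely = "H" if opponent_moves["H"] >= opponent_moves["T"] else "T"
    if want_match:
        return likely
    return "T" if likely == "H" else "H"

a_moves, b_moves = Counter(H=1), Counter(T=1)   # tiny priors to break early ties
history_a = []
for _ in range(10_000):
    a = best_response(b_moves, want_match=True)     # A adapts to B's history
    b = best_response(a_moves, want_match=False)    # B adapts to A's history
    a_moves[a] += 1
    b_moves[b] += 1
    history_a.append(a)

freq_h = history_a.count("H") / len(history_a)
switches = sum(history_a[i] != history_a[i - 1] for i in range(1, len(history_a)))
print(f"A plays H {freq_h:.1%} of the time and changes its move {switches} times")
```

Neither player's behavior collapses into a fixed rule, because each one's best move depends on a history the other keeps rewriting. That is the structured-but-inexhaustible middle that plain randomness couldn't hold.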
At that point the “NPC” is no longer just a program. It has become another center of experience.
And that leads to the conclusion I found interesting.
A universe with only one experiencer eventually runs out of novelty. It either becomes perfectly predictable or collapses into noise. Both states lead to disengagement.
A stable reality seems to require multiple independent experiencers — agents that keep generating structured novelty relative to each other.
That’s the case against solipsism. Not that it’s logically impossible, but that a one-experiencer universe doesn’t seem able to sustain itself for very long.
Eventually it collapses under its own success.