Universal Prediction in Biological Systems, AI, and Determinism
A personal theory, refined, examined, and connected to EarthTalk.
Original Date: 2025-11-22
Last Edit: 2025-12-27
1. My Starting Point
As I continue running experiments on fungi, many questions arise in my mind. Naturally, I spend a lot of time thinking about them and trying to model what I observe. This particular line of thought emerged around the concept of self.
One evening, while I was out with my wife at a show, I found myself observing the performers on stage. My attention drifted beyond the performance itself to imagining their lives offstage—who they were as people, how they became who they are, and what chain of events led them to that specific stage at that specific moment.
Fascinated by this, it occurred to me that if the universe is deterministic, then there must have been a fixed—likely simple—set of rules applied to them from the moment they were born until that day. A stimulus–response process (governed by rules) would then be responsible for them being exactly who they are, standing on that stage at that time, as their specific “self.”
This is an interesting thought because, if true, it implies that the universe is computational—and therefore deterministic. In that case, they could not have been anyone else or anywhere else. Working backward, I realized that anyone could be anyone in principle: if I were born with their DNA (initial conditions) and exposed to the same stimuli for the same duration—birthplace, parents, environment, and an unquantifiable number of other variables—then I would literally be them, standing on that stage on that day.
On one hand, this is a very comforting thought. Taken seriously, it removes judgment of others. Judgment is, in essence, an opinion about someone (good or bad), but if neither they nor we could have been any other way, such judgments lose their meaning. We could have literally been them.
More interestingly, I arrived at a working conclusion: the “self” is ultimately a dynamic system that, given initial conditions and a set of inherent rules, evolves as a stimulus-based response machine in a recursive loop. In overly simplified terms:
Response = f(stimulus, state)
where each response encodes a new state. That is the entire loop.
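The loop above can be sketched in a few lines of Python. This is a toy illustration of my own, not a model of anything measured; the rule `f` and the numeric values are invented to show how each response becomes the next state.

```python
# A minimal sketch of the recursive stimulus-response loop:
# Response = f(stimulus, state), where each response encodes a new state.

def f(stimulus, state):
    """Toy rule: the response accumulates the stimulus into the state."""
    response = state + stimulus  # respond based on the current state
    new_state = response         # each response encodes the new state
    return response, new_state

state = 0
for stimulus in [1, 2, 3]:
    response, state = f(stimulus, state)

print(state)  # 6: the state now carries the history of all past stimuli
```

The point of the sketch is only the shape of the loop: the same stimulus applied to a different state yields a different response, which is exactly the variability discussed below.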
This, however, is not where I started.
One of my very first experiments with fungi involved applying a simple NaCl stimulus at preselected times during the day and recording the electrical activity of the fungus over several days. The results were far from conclusive.
In the beginning, during my first three runs, there was a clean, visible response (a spike train) shortly after the stimulus was applied. Excited, I set out to repeat the experiment with additional blocks, hoping it would be that straightforward.
It wasn’t.
The new blocks failed to reproduce the effect, and they failed in a way that complicated things further. Sometimes I saw a response, sometimes I didn’t. Sometimes the response was mild, sometimes strong. For obvious reasons, no firm conclusions could be drawn.
What I had implicitly expected was a simple stimulus–response machine. But life does not behave that way. That assumption was too naive. There had to be at least one missing variable.
I believe that missing variable was state.
Hence:
Response = f(stimulus, state)
This is now my working hypothesis.
If the combination of stimulus and state is finite, then the system is, in principle, predictable. Returning to the human analogy, we know with a high degree of confidence that the following are true:
- Two people can perceive the same stimulus differently.
- The same person can perceive the same stimulus differently at different times.
State, as a parameter, can explain this variability.
A system is predictable when it includes:
- Internal state: $S$
- Stimuli: $X$
- Actions: $A$
- State update rule:
$$
S_{t+1} = f(S_t, X_t)
$$
Action is what the system does as a consequence of being in a state.
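The three components can be put together as a tiny deterministic machine. Everything concrete here is hypothetical: the modulo update rule, the four states, and the action names are invented purely to make $S_{t+1} = f(S_t, X_t)$ and the state-to-action mapping tangible.

```python
# A toy deterministic system with internal state S, stimuli X, and actions A.
# The transition rule and the action table are invented for illustration.

def update(s, x):
    """State update rule: S_{t+1} = f(S_t, X_t)."""
    return (s + x) % 4  # keep the state within a small finite set

def act(s):
    """Action: what the system does as a consequence of being in a state."""
    return {0: "rest", 1: "grow", 2: "spike", 3: "branch"}[s]

s = 0
trace = []
for x in [1, 1, 3, 2]:
    s = update(s, x)
    trace.append(act(s))

print(trace)  # ['grow', 'spike', 'grow', 'branch']
```

Note how the same stimulus (`x = 1`) produces different actions depending on the state it lands in, which is the variability the state parameter is meant to explain.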
The issue with this overly simplistic model is that we cannot measure either the state or the stimuli with a high degree of accuracy.
Using a human analogy, the variables that affect one’s state could range from the exact number of photons hitting the retina at a given moment—their frequencies, timing, and sequence—to something as subtle as the tone or wording of a sentence spoken by a friend. Because we can never measure all variables that constitute either the stimuli or the internal state, perfect determinism is inaccessible in practice.
As a result, the formulation above cannot hold strictly in empirical experiments. The best we can do is approximate the system. This forces us to move beyond a deterministic mapping and introduce an explicit element of probability.
Instead of predicting exact next states, we must model distributions over possible outcomes, conditioned on what we can observe.
- Probabilistic state-transition model:
$$
M_\theta : P(S_{t+1} \mid \hat{S}_t, \hat{X}_t, A_t)
$$
where $\hat{S}_t$ and $\hat{X}_t$ represent imperfect, partial observations of the true internal state and stimuli.
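A minimal sketch of such a probabilistic transition model, assuming the crudest possible representation: a lookup table of conditional distributions keyed by the (observed state, observed stimulus) pair. The states, the "salt" stimulus label, and every probability below are invented for illustration.

```python
import random

# Toy version of M_theta: P(S_{t+1} | S_hat_t, X_hat_t).
# The conditional distributions below are invented, not fitted to data.
P = {
    (0, "salt"): {0: 0.2, 1: 0.8},  # a salt stimulus usually excites a resting system
    (1, "salt"): {0: 0.5, 1: 0.5},  # an already-excited system responds less reliably
}

def sample_next(s_hat, x_hat):
    """Sample S_{t+1} from the conditional distribution, given noisy observations."""
    dist = P[(s_hat, x_hat)]
    states, probs = zip(*dist.items())
    return random.choices(states, weights=probs)[0]

random.seed(0)
next_state = sample_next(0, "salt")
print(next_state)  # 0 or 1, with P(1) = 0.8
```

The shift from the deterministic version is exactly the one argued for above: instead of returning *the* next state, the model returns a draw from a distribution over possible next states, conditioned only on what we can observe.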
I assume that in the short term this model can work, but if I push it further, the notion of increasing returns from complexity theory becomes relevant. In such systems, the distribution space itself does not remain stable indefinitely. The decisions a fungus makes are manifested in physical reality: hyphae that occupy a particular state at a particular time and receive a particular stimulus, whether by chance or by determinism, may enter a positive feedback loop. Over time, this reinforcement can shift the underlying distribution.
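The feedback effect can be illustrated with a Pólya-urn-style toy model (my own construction, not something taken from the experiments): every time an outcome occurs, its weight, and therefore its future probability, increases, so the distribution the system samples from drifts over time.

```python
import random

# Toy positive-feedback loop: whichever outcome fires gets reinforced,
# so the distribution itself shifts. Weights and rule are invented.
random.seed(42)
weights = {"respond": 1.0, "ignore": 1.0}  # start from an even distribution

for _ in range(200):
    outcomes = list(weights)
    chosen = random.choices(outcomes, weights=[weights[o] for o in outcomes])[0]
    weights[chosen] += 1.0  # reinforcement: the chosen path becomes more likely

total = sum(weights.values())
print({o: round(w / total, 2) for o, w in weights.items()})
```

After a few hundred steps the split is generally no longer 50/50, and which outcome dominates depends on the early, partly random history: the same "rules" plus reinforcement produce a moving target rather than a fixed distribution.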
This is still incomplete.
For theory-level reasoning, I’m going to drop the distribution and assume a non-probabilistic outcome (for now). I see one very important issue in the model: I’ve been treating stimuli as something external, but it’s unlikely that the system can truly distinguish between external and internal stimulation. That distinction probably exists at some layer of awareness, but for the system itself, both kinds of perturbations are just “inputs.” Because of that, I feel (though I can’t justify it cleanly yet) that stimuli can be absorbed into state. So the equation really becomes:
$$
S_{t+1} = f(S_t)
$$
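Absorbing stimuli into state can be sketched by making the "external" input part of the state itself. In this illustrative construction (the two-component state and its rule are my own invention), the state carries both an internal value and a pending perturbation, so `f` takes only $S_t$:

```python
# Illustration of absorbing stimuli into state: the augmented state carries
# both the internal value and the pending perturbation, so f takes only S_t.

def f(state):
    """S_{t+1} = f(S_t): the perturbation is part of the state, not a second argument."""
    internal, pending = state
    internal = internal + pending  # the perturbation acts on the internal part
    pending = 0                    # once absorbed, the perturbation is spent
    return (internal, pending)

s = (0, 5)  # internal value 0, with a perturbation of 5 already "inside" the system
s = f(s)
print(s)  # (5, 0)
```

From the system's point of view there is no longer an "outside": what I previously called a stimulus is just a region of the state that has not yet propagated.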
Let’s go a step further: what is action? Action is a DNA-encoded physical manifestation of the matter in the system, expressed as a function of $S_t$. Different $S_t$ values will produce different (finite) sets of actions, but in turn, those actions will change $S_{t+1}$. So it seems that action can be merged into $S_t$ in the same way. I know this is a bit dirty. My intuition says it’s right, but I still don’t have the neural pathways built to explain it properly.
So the unified and most fundamental formula must remain:
$$
S_{t+1} = f(S_t)
$$
Everything else is perspective:
- “Stimulus” = incoming state perturbation
- “Action” = outgoing state perturbation
- “DNA” = constraints on allowed state transitions
There is more, but I’ll leave it for next time. More learning is necessary—more thinking, more state changes to my own “self”—to reach a state of understanding 🙂