Turing machines were initially conceptualised as an abstraction of the human way of solving problems: loading things into memory, carrying out instructions, updating memory, etc. But as it turned out, this machine also appears to be the limit of the kinds of computation that can be done by any entity, human or machine, as far as we know (this is essentially the physical Church–Turing thesis). We haven't been able to build anything more powerful than a Turing machine.
Suppose, for the sake of this discussion, that the universe is exactly modeled by a 4D manifold structure, and that it is classical. The entities on this 4D spacetime are fields and particles. By "exactly modeled" I mean: assume, for the sake of discussion, that this is the final theory of physics, one that no experiment will ever falsify.
Now, if this is true, then any finite region of the universe packs an uncountably infinite amount of information at any instant of time: namely, the values of the fields at every point of that region. Furthermore, the time evolution of this region corresponds to carrying out a computation on this region's data.
If we were to think of this universe itself as a machine, it would carry out far more powerful computations than a Turing machine, because the values of the fields in a region could correspond to non-computable, or even non-describable, functions. (Note that I'm not saying a human experimenter can't approximate the process using computable/describable functions. We can do that because we are only interested in finite measurement accuracy. I'm talking instead about the universe viewed as a computer.)
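To make "non-computable" concrete, here is a textbook construction (my illustration, not part of the physics): a perfectly well-defined real number that no Turing machine can compute, and which a classical field could in principle take as its value at a point:

$$h = \sum_{n \in H} 2^{-n}, \qquad H = \{\, n \in \mathbb{N} : \text{the $n$-th Turing machine halts on empty input} \,\}.$$

The $n$-th binary digit of $h$ answers whether machine $n$ halts, so computing $h$ to arbitrary precision would solve the halting problem.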
My main question: if the universe viewed as a computer is more powerful than a Turing machine, then what kinds of emergent computers could this universe host? By "emergent computer" I mean something like your smartphone: a computer whose memory and computations are ultimately an abstraction of the memory and computations of the universe at the fundamental level. For instance, the data stored in the smartphone's memory chip is ultimately stored by the universe in the form of field/particle states, and any computation the smartphone does emerges out of the time evolution of the underlying fields/particles.
So, if the underlying substrate machine does computations that a Turing machine can't do, one would expect that there can be emergent computers more powerful than a Turing machine. For instance, consider the abstract description of such a computer's memory: it could be a countably infinite number of bits, because the substrate of the universe (fields and particles) can easily store that much information in a finite region of space, as the encoding below shows.
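As a concrete encoding (my own illustration, under the assumed continuum model): a single real-valued field amplitude at one point $x_0$ can carry an arbitrary infinite bit sequence $b_1 b_2 b_3 \dots$ via its binary expansion,

$$\phi(x_0) = \sum_{n=1}^{\infty} b_n \, 2^{-n} \in [0,1],$$

so even one point of the region, let alone the uncountably many points it contains, has room for a countably infinite register.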
So, can we say for sure that such machines can't exist in a universe that is exactly modeled by continuous structures at the fundamental level?
P.S. I am restricting the definition of "emergent computer" to computers that are confined to a finite region of space and whose computations are confined to a finite interval of time. This is because if I allow unbounded space and time, then even our everyday computers are more powerful than a Turing machine: they can solve the halting problem by computing for an infinite amount of time (a sketch of this point follows below).
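To illustrate that parenthetical, here is a minimal Python sketch of my own, modeling a program as a generator that yields once per computation step (all names are illustrative):

```python
from typing import Iterator

def run_until_halt(program: Iterator[None]) -> bool:
    """Exhaust the program's steps and report that it halted.

    This only *semi*-decides halting: for a halting program it returns
    True after finitely many steps, but for a non-halting program it
    never returns -- the 'no' answer exists only at the infinite-time
    limit. This is exactly why the definition above restricts emergent
    computers to a finite interval of time.
    """
    for _ in program:  # one iteration per computation step
        pass
    return True

def halting_program() -> Iterator[None]:
    for _ in range(1_000_000):  # halts after a million steps
        yield

def looping_program() -> Iterator[None]:
    while True:  # never halts
        yield

print(run_until_halt(halting_program()))  # True, after finite time
# run_until_halt(looping_program())       # would run forever
```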
P.P.S. The tentative conclusion I arrived at is that the reason we haven't invented (and cannot invent) emergent computers this powerful is that our own brain is a specific kind of emergent computer, one whose memory and computations correspond to a finite set of bits and a finite sequence of computation steps. So, while the substrate might support more powerful computers, our brains can't conceive of and implement those modes of computation in an external device.