Undecidable problems limit physical theories
Problem
Does the existence of undecidable problems immediately imply the non-predictability of physical systems? Consider the halting problem: first we construct a physical UTM, say using the usual circuit-based construction. Then no decidable physical theory can determine, given any input setting of the circuits, whether the circuit will halt. This seems a triviality, but doesn't it give us a weak sort of unpredictability without reference to quantum or chaotic considerations? Moreover, we can strengthen the argument by noting that there is nothing special about the circuit-based UTM, so the behavior of a physical system is in general undecidable at any level where a UTM can be constructed.
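The undecidability claim leaned on here is the classical diagonal argument, which can be sketched in a few lines of executable Python (the names `make_diag` and `claimed_halts` are mine, standing in for any purported halting decider):

```python
def make_diag(claimed_halts):
    """Build the diagonal program for a purported halting decider.

    claimed_halts(p) is supposed to return True iff calling p() halts.
    """
    def diag():
        if claimed_halts(diag):
            while True:       # decider said "halts" -> loop forever
                pass
        # decider said "does not halt" -> halt immediately
    return diag

# Any total candidate decider is refuted on its own diagonal program.
# Example: a candidate that always answers "does not halt":
candidate = lambda p: False
d = make_diag(candidate)
d()                           # d() returns, i.e. d halts...
assert candidate(d) is False  # ...yet the candidate claimed it never would.
```

The symmetric case (a candidate answering "halts") cannot be demonstrated by running `d`, since `d` then loops forever; that asymmetry is exactly why halting is semi-decidable but not decidable.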
Edit: as pointed out by both Babou and Ben Crowell, my suggested circuit construction is merely an LBA. As I argued in the comments, I find it easy and intuitive to imagine a machine that is physical but not linearly bounded: simply construct a machine (robot) that can mechanically move left or right over its input arbitrarily many times, and assume it has a finite but non-expiring power source. We then run into the problem that the universe may be finite, but that lets us conclude that either the universe is finite or the originally hoped-for consequence must be true (which would still be a surprising conclusion to reach from the above argument).
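The edit's point, a physical device whose storage is not bounded by the input, is easy to model. A minimal Turing-machine simulator sketch (Python, my own naming) whose tape is a dictionary that grows on demand, so cells are allocated only as the head wanders, with no linear bound:

```python
def run_tm(rules, tape_input, start="q0", halt="H", max_steps=10_000):
    """Simulate a single-tape TM. rules: (state, symbol) -> (state, symbol, move).

    The tape is a dict indexed by integers, blank ('_') by default, so the
    head may move arbitrarily far left or right of the input: the machine
    is not linearly bounded. Returns (halted, tape) after at most max_steps
    steps -- simulation alone only semi-decides halting.
    """
    tape = {i: s for i, s in enumerate(tape_input)}
    state, head = start, 0
    for _ in range(max_steps):
        if state == halt:
            return True, tape
        sym = tape.get(head, "_")
        state, tape[head], move = rules[(state, sym)]
        head += 1 if move == "R" else -1
    return False, tape

# A toy machine: walk right over 1s, then halt on the first blank.
rules = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("H", "_", "R"),
}
halted, _ = run_tm(rules, "111")   # halts after scanning the input
```

The `max_steps` cutoff is where the physics re-enters: any bound (finite power, finite universe) turns the semi-decision procedure into a decision procedure for a weaker machine.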
Solution
This was initially intended as a comment, as it somewhat side-steps the question. But I think it does answer it in its own way.
What is known, or has been attempted so far, shows that connecting computation theory with physics can be a pretty subtle endeavour, and I am afraid the approach suggested in the question is probably a bit too crude. I am not sure it is much better than the classical argument that, everything being finite, all we need is finite-state automata theory, and that studying Turing machines is a waste of time. (Not my view of things.)
Why such issues should be addressed with caution
I should probably motivate the above comparison with the finite-automata argument. My perception is that computability is, maybe even more than complexity, an asymptotic theory: what matters is what occurs at infinity. But we do not know whether the universe is finite or infinite. If it is finite, then what would be the point of considering infinite computations? The following concerns physics, and I am not a physicist; I do my best to be accurate, but you have been warned.
We often picture the Big Bang as a "time" when the whole universe was a very tiny something, with a very small size. But if it had a size at some point, how did it transform into something infinite at a later time? I am not trying to say it is impossible ... I do not have the slightest idea. But it could be that it always was infinite.
Now suppose the universe is infinite. Does that help us? Well, we run into problems with the speed of light. If we consider what may be relevant here (where we are), only the part of the universe included in a finite sphere can ever concern us. The radius $r$ of that sphere is such that the relative speed, due to expansion, of two points at distance $r$ equals the speed of light. According to what we currently know, barring a future variation in the expansion rate, nothing outside that sphere will ever be of concern to us. So the universe is finite for us for all practical purposes. Actually, things are even worse if you consider the contents of this relevant universe: it is shrinking (unless there is some creation process). The reason is that the sphere is expanding beyond its own diameter, carrying with it some of its contents, which thereby become irrelevant too. Remark: that sphere is not what is called the observable universe (which depends on the age of the universe); the two are distinct notions.
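For concreteness, the sphere just described is essentially what cosmologists call the Hubble sphere, and its radius follows from the Hubble constant. A rough worked number (assuming the standard present-day estimate $H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$):

```latex
r = \frac{c}{H_0}
  \approx \frac{3\times10^{5}\ \mathrm{km\,s^{-1}}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
  \approx 4.3\times10^{3}\ \mathrm{Mpc}
  \approx 1.4\times10^{10}\ \text{light-years}.
```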
Thus, not only is "our" universe finite, but its resources might be shrinking. It is possible that in some billions of years only our galaxy will still be relevant to us (assuming we still exist), the Andromeda galaxy having hit the Milky Way before then.
Well, I do not know what is considered established at this time, but this at least shows that infinity is a big assumption. However, is it the case that physical limitations prevent us from using computability theory? All that can be concluded from the above is that it may be unreasonable to draw physical conclusions from the theoretical work on Turing machines and the halting problem.
However, the techniques concerned may also give useful results when applied to devices or formalisms that are not Turing-complete. I will not try to go into details, if only because algorithmic complexity is not my area, but I would guess that, if the structure of the universe is discrete, complexity could in some form be relevant to the behavior of some phenomena. Of course, this is only wild speculation on my part. Some of the research I reference below is related to such discreteness issues.
Some examples of work relating physics and computation theory
There is a significant body of work trying to tie computation and
physics, most of which I barely know of. So, please, do not rely on anything I
might say, but simply take it as pointers to search for potentially
relevant work.
A good part of that work is concerned with thermodynamic aspects, such as the possibility of reversible computing with no energy cost. I think this ties in with functional programming, as it is side effects (information erasure, per Landauer's principle) that cost energy (but do not trust me). You may take Wikipedia as an introduction, but Google will yield many references.
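A tiny illustration of what "reversible" means here (Python; the choice of gate is mine): the Toffoli gate flips its third bit only when the first two are 1. It is its own inverse, hence a bijection on 3-bit states, so it erases no information, and by Landauer's principle it is erasure, not computation as such, that carries a minimum energy cost.

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (controlled-controlled-NOT) gate: reversible and universal
    for classical computation."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice is the identity on all 8 inputs,
# so no information is ever lost.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits
```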
There is also work trying to tie the Church-Turing thesis to physics, involving information density among other things. See for example:
- The physical Church-Turing thesis and the principles of quantum theory
- Around the Physical Church-Turing Thesis: Cellular Automata, Formal Languages, and the Principles of Quantum Theory
- Physics and Church-Turing Thesis.
I vaguely recall seeing other interesting takes on this, but they escape me right now.
Then you have Lamport's work on clock synchronization and relativity in distributed systems.
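For reference, the core of Lamport's logical-clock scheme fits in a few lines (a minimal Python sketch; the naming is mine): each process ticks its counter on local events and, on receiving a message, jumps past the sender's timestamp, so causally ordered events get increasing timestamps even without synchronized physical clocks.

```python
class LamportClock:
    """Minimal Lamport logical clock for one process."""

    def __init__(self):
        self.time = 0

    def tick(self):                 # local event
        self.time += 1
        return self.time

    def send(self):                 # timestamp attached to an outgoing message
        return self.tick()

    def receive(self, msg_time):    # merge: jump past the sender's clock
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes: p1 does a local event, then sends to p2.
p1, p2 = LamportClock(), LamportClock()
p1.tick()        # p1 local event
t = p1.send()    # p1 sends a message stamped t
p2.receive(t)    # p2's clock now exceeds t, as causality demands
```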
And, of course, you have quantum computing, which apparently changes some (achievable) time complexities, though it does not affect computability.
Another take is Wolfram's work on modelling physical phenomena as cellular automata.
Context
StackExchange Computer Science Q#42930, answer score: 7