Kolmogorov complexity vs purposefully inefficient Turing machines
Problem
It's a theorem that, although the Kolmogorov complexity of a string is relative to the Turing machine you're working with, it differs by at most a constant (basically the amount of space it takes to write an interpreter for Turing machine A inside of Turing machine B).
My question is: how does this theorem deal with the issue of Turing machines with purposefully inefficient encodings?
For example, suppose we have a universal Turing machine $M_{terrible}$ that requires its input to be repeated five times. It starts by verifying that the input is in fact of the form
x ++ x ++ x ++ x ++ x, then executes the universal Turing machine $N$ on just $x$. Shouldn't this force the Kolmogorov complexity of strings relative to $M_{terrible}$ to be five times larger than it is relative to $N$? Why doesn't this violate the theorem?
Solution
You are stating the theorem incorrectly. The theorem states that the Kolmogorov complexity is the same up to a constant for *essentially optimal* description languages, which are those in which you can describe running a Turing machine on an input $x$ using $|x| + O(1)$ bits. Given this definition, it is a simple matter of playing with the definitions to see that all such description languages yield the same notion of Kolmogorov complexity, up to an additive constant. In particular, $M_{terrible}$ is not essentially optimal: describing the execution of $N$ on $x$ requires $5|x|$ bits rather than $|x| + O(1)$, so the theorem simply does not apply to it.
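The distinction can be made concrete with a toy sketch. Below, assuming a simplified model in which describing "run $N$ on $x$" costs exactly $|x|$ bits under a direct encoding (the $O(1)$ overhead is taken to be zero for readability), the repeat-five-times encoding of $M_{terrible}$ incurs a *multiplicative* blow-up, so the gap between the two description lengths is unbounded, not constant:

```python
# Toy sketch (assumed simplified model, not a real Kolmogorov-complexity
# computation): compare description lengths under a direct encoding versus
# the "repeat five times" encoding required by M_terrible.

def desc_len_direct(x: str) -> int:
    # Essentially optimal language: running N on x is describable in
    # |x| + O(1) bits; the constant overhead is dropped here for clarity.
    return len(x)

def desc_len_terrible(x: str) -> int:
    # M_terrible only accepts inputs of the form x ++ x ++ x ++ x ++ x,
    # so describing "run N on x" costs 5 * |x| bits.
    return 5 * len(x)

for n in (10, 100, 1000):
    x = "0" * n
    gap = desc_len_terrible(x) - desc_len_direct(x)
    print(n, gap)  # the gap 4|x| grows with |x|: no additive constant bounds it
```

Because the gap $4|x|$ is unbounded, $M_{terrible}$ fails the $|x| + O(1)$ condition and falls outside the scope of the invariance theorem.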
Context
StackExchange Computer Science Q#54643, answer score: 5