HiveBrain v1.2.0

How does a single-track Turing machine simulate a multi-track Turing machine?

Submitted by: @import:stackexchange-cs

Problem

It's easy to see how a multi-track Turing machine can simulate a single-track one: it simply ignores all but the first track. But how does the simulation work the other way around? I need a specification of a transition function that does the job. If there are $k$ tracks, we can think of the symbols as vectors and arrange them one after another on the tape; but again, what does the transition function of the equivalent single-track machine look like?

Solution

If $\Sigma = \{x_1,\ldots,x_n\}$ is the alphabet of the $m$-track $TM$, just use the expanded alphabet $\Sigma' = \Sigma \times \cdots \times \Sigma$ ($m$ copies) for the single-track $TM'$, so that $|\Sigma'| = n^m$.

Every vector $\bar{x}_i$ of $m$ symbols from $\Sigma$ can be mapped to a unique alphabet symbol $u_i \in \Sigma'$: $\bar{x}_i = (x_{i_1},x_{i_2},\ldots,x_{i_m}) \rightarrow u_i \in \Sigma'$

Hence every transition of $TM$, $(q_h,(x_{i_1},x_{i_2},\ldots,x_{i_m}))\rightarrow (q_k,(x_{j_1},x_{j_2},\ldots,x_{j_m}),dir)$,
can be mapped to an equivalent transition of $TM'$ in which the "read vector" $\bar{x}_i$ and the "write vector" $\bar{x}_j$ are replaced by the corresponding alphabet symbols of $\Sigma'$: $(q_h,u_i)\rightarrow (q_k,u_j,dir)$
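The construction above is purely mechanical, so it can be sketched in a few lines of code. The following is a minimal illustration (not from the original answer): the states, alphabet, and the sample transition rule are made up for the example, and the transition function is represented as a Python dictionary keyed by (state, read symbol).

```python
from itertools import product

# Toy setup: a 2-track machine over the alphabet Sigma = {0, 1}.
sigma = ["0", "1"]
m = 2  # number of tracks

# Expanded alphabet Sigma' = Sigma x ... x Sigma (m copies): each
# m-vector of track symbols becomes a single symbol of the
# single-track machine, so |Sigma'| = n^m.
sigma_prime = list(product(sigma, repeat=m))

# A multi-track transition table:
#   (state, (sym_track1, ..., sym_trackm))
#     -> (new_state, (write_track1, ..., write_trackm), direction)
# One made-up rule: in state q0, reading 1 on track 1 and 0 on
# track 2, write (0, 1) and move right into state q1.
multi_delta = {
    ("q0", ("1", "0")): ("q1", ("0", "1"), "R"),
}

# Translation to the single-track machine: the read and write
# vectors simply *are* symbols of Sigma', so each rule carries over
# unchanged as (q_h, u_i) -> (q_k, u_j, dir).
single_delta = {
    (q, read_vec): (q_next, write_vec, d)
    for (q, read_vec), (q_next, write_vec, d) in multi_delta.items()
}
```

Here the tuple `("1", "0")` plays the role of the symbol $u_i \in \Sigma'$, and the translated table has exactly as many rules as the original, one per (state, vector) pair.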

Context

StackExchange Computer Science Q#3529, answer score: 4
