HiveBrain v1.2.0

Confused between Turing-completeness and universal approximation - are they related?

Submitted by: @import:stackexchange-cs

Problem

I am trying to untangle a point of confusion in my mind regarding "Turing-completeness" and the "universal approximation theorem".

The context here is deep neural nets. Consider two types of networks: a recurrent neural net (RNN) and a feedforward net, say a pure convolutional neural net (CNN).

I understand that the RNN is Turing complete. I also know that a CNN is a universal approximator, but is NOT Turing complete.

What I am trying to understand is the link between universal approximation and Turing-completeness: Is there a link? Why is a CNN a universal approximator but NOT Turing complete? Does a machine being Turing-complete automatically make it a universal approximator? I'm trying to uncouple the two concepts.

Thanks!

Solution

A CNN can approximate a function on a fixed number of input variables, say $n$ of them. The set of functions on $n$ input variables isn't "Turing-complete". For instance, a boolean function $f:\{0,1\}^n \to \{0,1\}$ is always computable, as it can be computed by a program that just hardcodes the truth-table of $f$; and the set of such functions is not "Turing-complete".
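The truth-table argument above can be made concrete with a short sketch (the majority function below is an arbitrary example, not from the question):

```python
from itertools import product

def make_truth_table_program(f, n):
    """Hardcode the truth table of an arbitrary boolean function
    f: {0,1}^n -> {0,1} as a lookup dict. The returned program computes
    f exactly, which shows every fixed-arity boolean function is
    computable -- no Turing-completeness needed."""
    table = {bits: f(*bits) for bits in product((0, 1), repeat=n)}
    return lambda *bits: table[bits]

# Illustrative choice: the 3-input majority function.
maj = make_truth_table_program(lambda a, b, c: int(a + b + c >= 2), 3)
```

The point is that the "program" is nothing but a finite table of $2^n$ entries, so the whole class of such functions sits well below the computational power that Turing-completeness describes.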

Complication: CNNs actually approximate continuous functions $f:\mathbb{R}^n \to \mathbb{R}$... but a similar point remains (at least if the input is bounded).

Turing-completeness isn't really connected to universal approximation. For one thing: Turing-completeness talks about languages, which are subsets of $\{0,1\}^*$ and thus refers to discrete entities. Universal approximation talks about functions $f:\mathbb{R}^n \to \mathbb{R}$, and thus refers to continuous entities.
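To make the discrete/continuous contrast tangible: a language is a yes/no question about binary strings of *any* length, so a decider consumes unbounded input. A minimal sketch, using a toy even-parity language of my own choosing:

```python
def in_language(s: str) -> bool:
    """Decide membership in the toy language
    { w in {0,1}* : w contains an even number of 1s }.
    Note the input string may have ANY length -- this is the
    unbounded-input setting that computability theory lives in,
    unlike the fixed-dimension functions of approximation theory."""
    assert set(s) <= {"0", "1"}, "input must be a binary string"
    return s.count("1") % 2 == 0
```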

To qualify for "universal approximation", it's enough to be able to approximate all functions of $n$ variables (for each function, there exists a neural network that approximates it), so it talks about functions on inputs of bounded length. Turing-completeness requires the ability to compute all computable functions, which is a set of functions that has no fixed upper limit on the number of variables, i.e., it is a set of functions on inputs of unbounded length. Universal approximation could thus, in some sense, be considered "weaker" than Turing-completeness (though strictly speaking they are incomparable; neither implies the other).
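The "one network per function, fixed input dimension" flavor of universal approximation can be illustrated with a small sketch (my own illustration, not from the answer): a one-hidden-layer ReLU network with random hidden weights and a least-squares output layer, fitted to $\sin(2\pi x)$ on the bounded interval $[0,1]$. The target, layer width, and weight scales are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 200

# Random hidden-layer weights; only the output layer is trained
# (by least squares), which already suffices to approximate a
# smooth target on a bounded interval.
w = rng.normal(size=n_hidden) * 5.0
b = rng.uniform(-5.0, 5.0, size=n_hidden)

def hidden(x):
    # ReLU feature map: one piecewise-linear feature per hidden unit.
    return np.maximum(0.0, np.outer(x, w) + b)

x_train = np.linspace(0.0, 1.0, 500)
y_train = np.sin(2 * np.pi * x_train)          # target function

# Fit output weights by least squares.
c, *_ = np.linalg.lstsq(hidden(x_train), y_train, rcond=None)

x_test = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(hidden(x_test) @ c - np.sin(2 * np.pi * x_test)))
```

Note what the sketch does *not* do: the network's input dimension is fixed at 1. Nothing here lets it consume inputs of unbounded length, which is exactly the gap between universal approximation and Turing-completeness described above.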

Context

StackExchange Computer Science Q#68820, answer score: 5
