HiveBrain v1.2.0

Why are functional languages Turing complete?

Submitted by: @import:stackexchange-cs

Problem

Perhaps my limited understanding of the subject is incorrect, but this is what I understand so far:

- Functional programming is based on the lambda calculus, formulated by Alonzo Church.

- Imperative programming is based on the Turing machine model, made by Alan Turing, Church's student.

- The lambda calculus is as powerful and able as the Turing machine, meaning they are equivalent in computational power.
If functional programming is based on the lambda calculus and not the Turing machine, then why are some (or all) functional languages described as Turing complete, and not "lambda complete" or something like that? Is the term "Turing completeness" special to Turing machines in some way, or is it just a word?

Lastly, if imperative languages are based on the Turing machine, and computers are basically Turing machines without infinite memory, does that mean they perform better than functional programming languages on our modern PCs?

If that's the case, then what would be the equivalent of a lambda calculus machine?

I know this seems to be 3 separate questions, but they're all closely related, and each is dependent on the previous question being a valid question to begin with.

Solution

In a nutshell:

What characterizes imperative programming languages as close to Turing machines, and to everyday computers such as PCs (which are themselves closer to random access machines (RAM) than to Turing machines), is the concept of an explicit memory that can be modified to store intermediate results. It is an automata view of computation, with a concept of a state (comprising both the finite state control and the memory content) that can change as the computation proceeds.
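The contrast can be sketched with two computations of the same value. The function names and example are my own illustration, not from the original answer:

```python
def sum_imperative(xs):
    # Automata view: an explicit memory cell ("total") is repeatedly
    # overwritten; the state changes as the computation proceeds.
    total = 0
    for x in xs:
        total = total + x   # memory is modified in place
    return total

def sum_functional(xs):
    # Mathematical view: no cell is ever overwritten; each call
    # simply denotes a value built from smaller values.
    if not xs:
        return 0
    return xs[0] + sum_functional(xs[1:])

print(sum_imperative([1, 2, 3, 4]))  # 10
print(sum_functional([1, 2, 3, 4]))  # 10
```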

Most other models are more abstract. Though they may express the computation as a succession of transformation steps applied to an original structure, these transformations are applied in a sort of timeless universe of mathematical meanings. This may preserve properties, such as referential transparency, that make mathematical analysis simpler. But it is more remote from natural physical models that rely on the concept of memory.
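Referential transparency can be made concrete with a small contrast; the two functions below are an invented illustration, assuming nothing beyond standard Python:

```python
import math

# Referentially transparent: a call can be replaced by its value
# anywhere without changing the program's meaning.
def area(r):
    return math.pi * r * r

# Not referentially transparent: the result depends on hidden,
# modifiable memory, so two identical calls yield different values.
counter = 0
def next_id():
    global counter
    counter += 1
    return counter

print(area(2.0) == area(2.0))   # True, always
print(next_id(), next_id())     # 1 2: same call, different results
```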

Thus there are no natural functional machines, except in a larger sense
as explained below, since software is not really separable from
hardware.

The reference to Turing as the yardstick of computability probably comes from the fact that his model, the Turing machine, was closest to this physical realizability constraint, which made it a more intuitive model of computation.

Further considerations:

There are many models of computation, which were designed to capture in the most general possible way the concept of a computation. They include Turing machines (actually in many different flavors), the lambda calculus (flavors too), semi-Thue rewriting systems, partial recursive functions, and combinatory logic.

They all capture some aspects of the various techniques used by mathematicians to express or conduct computations. And most have been used to some extent as the basis of some programming language design (e.g. Snobol for rewriting systems, APL for combinators, Lisp/Scheme for the lambda calculus), and they can often be combined in diverse ways in modern programming languages.
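To give a taste of the rewriting-system family mentioned above, here is a minimal semi-Thue (string rewriting) sketch; the interpreter and the one-rule system are an invented toy, not Snobol:

```python
def rewrite(s, rules):
    """Apply rules to the leftmost matching substring until none applies."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = s.find(lhs)
            if i != -1:
                s = s[:i] + rhs + s[i + len(lhs):]
                changed = True
                break
    return s

# Unary addition as a one-rule rewriting system: deleting every "+"
# concatenates the unary numerals, so "11+111" computes 2 + 3.
rules = [("+", "")]
print(rewrite("11+111", rules))  # 11111, i.e. 5
```

Computation here is a succession of transformation steps on a structure (the string), with no notion of addressed memory at all.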

One major result is that all these computation models were proved equivalent, which led to the Church-Turing thesis that no physically realizable model of computation can do more than any of these models. A model of computation is said to be Turing complete if it can be proved equivalent to one of these models, hence equivalent to all of them.
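One direction of such an equivalence can be glimpsed through Church numerals, the standard lambda-calculus encoding of the natural numbers. The sketch below renders the classical terms in Python lambdas (the decoder `to_int` is a convenience of my own, not part of the calculus):

```python
# Church numerals: a number n is "apply a function n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode by applying the numeral to "+1" starting from 0.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Arithmetic emerges from nothing but function abstraction and application, which is why the lambda calculus holds its own against machine models.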

The name could have been different. The choice of the Turing machine (TM) as the reference is probably due to its being the simplest of these models, mimicking closely (though simplistically) the way a human computes, and fairly easy to implement (in a limited, finite form) as a physical device, to such an extent that Turing machines have been constructed with Lego sets. The basic idea requires no mathematical sophistication. It is probably the simplicity and realizability of the model that gave it this reference position.
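That simplicity shows in how little code a Turing machine interpreter needs. The sketch below (interpreter, state names, and the binary-increment machine are all illustrative choices of mine) runs a machine that adds 1 to a binary numeral:

```python
def run_tm(tape, rules, state="right", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        sym = tape.get(pos, blank)
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# (state, symbol) -> (write, move, next_state)
rules = {
    ("right", "0"): ("0", "R", "right"),   # scan to the right end
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # absorb the carry
    ("carry", "_"): ("1", "L", "halt"),    # overflow: new leading 1
}

print(run_tm("1011", rules))  # 1100, i.e. 11 + 1 = 12
```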

At the time Alan Turing created his computing device, other proposals were on the table to serve as a formal definition of computability, a crucial issue for the foundations of mathematics (see Entscheidungsproblem). Turing's proposal was considered by the experts of the time as the one most convincingly encompassing known work on what calculability should be (see Computability and Recursion, R.I. Soare, 1996, section 3.2). The various proposals were proved equivalent, but Turing's was the more convincing. [from comments by Yuval Filmus]

It should be noted that, from a hardware point of view, our computers
are not Turing machines, but rather what is called Random Access
Machines (RAM), which are also Turing complete.
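The difference from a Turing machine is addressed memory: a RAM reads or writes any register in one step instead of crawling along a tape. The toy instruction set and the multiplication program below are invented for illustration, not a standard RAM encoding:

```python
def run_ram(program, reg):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "load":          # reg[a] = constant c
            a, c = args
            reg[a] = c
        elif op == "add":         # reg[a] += reg[b]
            a, b = args
            reg[a] += reg[b]
        elif op == "dec":         # reg[a] -= 1
            (a,) = args
            reg[a] -= 1
        elif op == "jmp":         # unconditional jump
            (pc,) = args
            continue
        elif op == "jnz":         # jump to t if reg[a] != 0
            a, t = args
            if reg[a] != 0:
                pc = t
                continue
        pc += 1
    return reg

# r0 = r1 * r2 by repeated addition.
prog = [
    ("load", 0, 0),
    ("jnz", 2, 3),   # while r2 != 0 ...
    ("jmp", 6),      # ... else halt
    ("add", 0, 1),   #   r0 += r1
    ("dec", 2),      #   r2 -= 1
    ("jmp", 1),
]
print(run_ram(prog, {1: 4, 2: 3})[0])  # 4 * 3 = 12
```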

Purely imperative languages (whatever that might mean) are probably the formalisms used for the most basic models, such as Turing machines, or the assembly language (skipping its binary coding) of computers. Both are notoriously unreadable, and very hard to write significant programs in. Actually, even assembly language has some higher-level features to ease programming a bit, compared with direct use of machine instructions. Basic imperative models are close to the physical world, but not very usable.

This quickly led to the development of higher-level models of computation, which started mixing in a variety of computational techniques, such as subprogram and function calls, naming of memory locations, scoping of names, quantification and dummy variables, already used in some form in mathematics and logic, and even very abstract concepts such as reflection (Lisp, 1958).
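Two of those higher-level features, naming of memory locations and scoping of names, combine into closures in modern languages. A small sketch (the counter example is my own, not from the answer):

```python
def make_counter():
    count = 0            # a named memory location, scoped to this call
    def step():
        nonlocal count   # the inner function closes over that location
        count += 1
        return count
    return step

a = make_counter()
b = make_counter()       # independent scopes, hence independent state
print(a(), a(), b())     # 1 2 1
```

Each call to `make_counter` creates a fresh scope, so the raw memory of the basic imperative model is tamed by naming and scoping rules.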

The classification of programming languages into programming paradigms such as imperative, functional, logic, and object oriented is based on the preeminence of some of these techniques in the design of the language, and on the presence or absence of computing features that enforce certain properties for programs or program fragments written in the language.

Some models are convenient for physical machines. Others are more convenient for a high-level description of algorithms.

Context

StackExchange Computer Science Q#44305, answer score: 29
