If an argument of a lambda only passes itself if it is further evaluated, is runtime always finite?
Problem
In order for a lambda expression to run forever, the expression must contain at least one lambda
whose argument is passed to itself. For example, the following runs forever.
$$
(\lambda x.xx)(\lambda x.xx)
$$
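To make the terms concrete, here is a minimal normal-order β-reducer (an illustrative sketch, not part of the original question; the term encoding and helper names are made up for this example). It shows that $(\lambda x.xx)(\lambda x.xx)$ steps to itself forever, while applying $\lambda x.x(xy)$ to the identity normalizes in a few steps.

```python
# Terms are tuples: ('var', name) | ('lam', name, body) | ('app', fun, arg).
# Substitution is naive (no capture avoidance), which is safe for the
# particular examples below since no bound name clashes with a free one.

def subst(t, x, s):
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step(t):
    """Perform one leftmost-outermost beta step; return None at normal form."""
    if t[0] == 'app':
        fun, arg = t[1], t[2]
        if fun[0] == 'lam':
            return subst(fun[2], fun[1], arg)
        r = step(fun)
        if r is not None:
            return ('app', r, arg)
        r = step(arg)
        if r is not None:
            return ('app', fun, r)
    elif t[0] == 'lam':
        r = step(t[2])
        if r is not None:
            return ('lam', t[1], r)
    return None

def normalize(t, limit=100):
    """Reduce for at most `limit` steps; return (term, reached_normal_form)."""
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t, True
        t = r
    return t, False

delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
omega = ('app', delta, delta)                 # (\x.xx)(\x.xx): steps to itself
print(normalize(omega)[1])                    # False: never normalizes

M = ('lam', 'x', ('app', ('var', 'x'),
                  ('app', ('var', 'x'), ('var', 'y'))))  # \x.x(xy), y free
I = ('lam', 'z', ('var', 'z'))
print(normalize(('app', M, I)))               # (('var', 'y'), True)
```

Note that one β-step of `omega` yields `omega` again, so the step limit is what stops the loop.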
However, I am curious whether an expression can run forever when arguments are only passed to themselves
in forms like $\lambda x.x(xy)$, where each time the argument is passed to itself it becomes further evaluated.
I suspect that such an expression cannot run forever (i.e., that it can always be evaluated in finite time),
but I am unsure how to prove this formally.
Informally, the idea is that each recursive call is guaranteed to leave the lambda in a further-reduced state,
since the original term cannot be preserved.
So my question is:
What is a formal proof or counterexample to this claim?
A few notes:
The opposite is definitely not true: expressions of the form $\lambda xy.(xy)x$ can run forever. For example,
the following runs indefinitely.
$$
(\lambda xy.(xy)x)(\lambda xG.(Gx)G)(\lambda x.x)
$$
Also, if, when the argument is passed to itself, both copies receive an additional argument, as in
$\lambda xy.(yx)(yx)$, the expression can run forever. For example,
$$
(\lambda xy.(yx)(yx))(\lambda x.x)(\lambda xy.(yx)(yx))
$$
Additionally, I am assuming that all lambdas within the expression meet the criteria, not just some subset of them.
So something like the following would not be a counterexample:
$$
(\lambda x.x(xy))(\lambda x.xx)(\lambda x.xx)
$$
Edit:
As chi pointed out below, I had not given a very good definition of the criteria for this question, so I've added one.
A lambda is reduced in X if it can be written in the following form (as `reduced` in the grammar below):

```
var     := [[all valid variable names except X]]
exp     := plain | xapp
body    := exp | xapp
plain   := (lambda var . exp) | var | (exp)
xapp    := (X arg) | (X xapp)
arg     := (lambda var . body) | (exp exp) | var | X
reduced := (lambda X . body)
```

An expression E is reduced if, for every lambda L in E, for every argument A in L, L is reduced in A.
And the more precise question becomes: does every reduced expression terminate?
Edit 2:
I did change the above formulation slightly.
Solution
(This is not a proper answer, but I wanted to share some thoughts)
To have a definitive answer one would need to precisely define what's allowed and what's not.
I noticed this, however, which perhaps could help. The term $M = \lambda xy. x(xy)$ is typeable in the simply-typed lambda calculus, which is strongly normalizing.
By contrast, your other two examples, $\lambda xy.(xy)x$ and $\lambda xy.(yx)(yx)$, are not typeable in the STLC.
Perhaps by exploiting this, and formalizing everything precisely, one can indeed prove that using $M$ "alone" does not lead to non-termination. It's hard to say, since the STLC constrains the calculus significantly.
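The typeability observation can be checked mechanically. Below is a small unification-based type-inference sketch for the simply-typed lambda calculus (illustrative code with made-up helper names, not part of the answer): it finds a simple type for $\lambda xy.x(xy)$, while the self-application patterns in the other terms fail the occurs check.

```python
# Types: ('tv', id) for type variables, ('arr', dom, cod) for arrows.
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg).
import itertools

def infer(term):
    """Return a simple type for `term`, or None if it is not typeable."""
    fresh = itertools.count()
    subst = {}                        # type-variable id -> type

    def find(t):                      # follow the substitution chain
        while t[0] == 'tv' and t[1] in subst:
            t = subst[t[1]]
        return t

    def occurs(v, t):                 # does variable v occur in type t?
        t = find(t)
        if t[0] == 'tv':
            return t[1] == v
        return occurs(v, t[1]) or occurs(v, t[2])

    def unify(a, b):
        a, b = find(a), find(b)
        if a[0] == 'tv':
            if a == b:
                return True
            if occurs(a[1], b):       # self-application fails right here
                return False
            subst[a[1]] = b
            return True
        if b[0] == 'tv':
            return unify(b, a)
        return unify(a[1], b[1]) and unify(a[2], b[2])   # both arrows

    def go(t, env):
        if t[0] == 'var':
            if t[1] not in env:       # free variables get a fresh type
                env[t[1]] = ('tv', next(fresh))
            return env[t[1]]
        if t[0] == 'lam':
            dom = ('tv', next(fresh))
            cod = go(t[2], {**env, t[1]: dom})
            return ('arr', dom, cod) if cod is not None else None
        f, a = go(t[1], env), go(t[2], env)
        if f is None or a is None:
            return None
        res = ('tv', next(fresh))
        return res if unify(f, ('arr', a, res)) else None

    return go(term, {})

def lam(*names_body):                 # lam('x', 'y', body) = \x.\y.body
    *names, body = names_body
    for n in reversed(names):
        body = ('lam', n, body)
    return body

def app(*ts):                         # left-associated application
    t = ts[0]
    for u in ts[1:]:
        t = ('app', t, u)
    return t

x, y = ('var', 'x'), ('var', 'y')
M     = lam('x', 'y', app(x, app(x, y)))          # \xy. x(xy)
delta = lam('x', app(x, x))                       # \x. xx
bad1  = lam('x', 'y', app(app(x, y), x))          # \xy. (xy)x
bad2  = lam('x', 'y', app(app(y, x), app(y, x)))  # \xy. (yx)(yx)

print(infer(M) is not None)   # True: M is simply typeable
print(infer(delta))           # None
print(infer(bad1))            # None
print(infer(bad2))            # None
```

This only witnesses chi's point, of course: failing STLC inference does not by itself prove non-termination, and passing it only shows that terms built from $M$ alone normalize when the whole expression remains typeable.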
Context
StackExchange Computer Science Q#95346, answer score: 2