How to prove equivalence relation in this case?
Problem
I am working with $\lambda$-terms and trying to prove that the relation $=$ defined by the rules below is an equivalence relation on $\lambda$-terms. My problem is proving reflexivity.
$$\frac{}{\theta \vdash x = x}$$
$$\frac{\theta,\, x \# N,\, y \# M \vdash M = [y := x]N}{\theta \vdash \lambda x.M = \lambda y.N}$$
$$\frac{\theta \vdash M_1 = M_2 \qquad \theta \vdash N_1 = N_2}{\theta \vdash M_1\, N_1 = M_2\, N_2}$$
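As a sanity check (my own instantiation, not part of the question), rule 2 with $M \equiv x$ and $N \equiv y$ already derives the $\alpha$-equivalence example below: since $[y := x]y \equiv x$, the premise is an instance of rule 1.

$$
\dfrac{\dfrac{}{\theta,\, x \# y,\, y \# x \vdash x = x}}{\theta \vdash \lambda x.x = \lambda y.y}
$$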
I put two restrictions on such $\lambda$-terms. First, bound variables are distinct:
for example, there are no two terms such as $\lambda x.M$ and $\lambda x.N$.
Second, multiple bindings of a variable are not allowed.
For example, $\lambda x. \lambda x.M$ should be written as $\lambda y. \lambda x.M$.
To sum up, every bound variable should be distinct.
$[y := x]N$ means variable $x$ replaces variable $y$ in term $N$.
$x \# N$ means $x$ does not occur in $N$.
$\theta$ is a set of $\#$.
Also, $\alpha$-equivalence is assumed for the terms.
For example, $\lambda x.x = \lambda y.y$
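To make the definitions concrete, here is a minimal sketch (my own encoding, not from the question) of terms as nested tuples, together with the freshness test $x \# N$ and the renaming substitution $[y := x]N$:

```python
def occurs(x, t):
    """True if variable x occurs anywhere in term t.
    Terms are tuples: ('var', x), ('lam', x, body), ('app', f, a)."""
    tag = t[0]
    if tag == 'var':
        return t[1] == x
    if tag == 'lam':
        return t[1] == x or occurs(x, t[2])
    return occurs(x, t[1]) or occurs(x, t[2])

def fresh(x, t):
    """x # t: x does not occur in t."""
    return not occurs(x, t)

def rename(t, y, x):
    """[y := x]t: replace every occurrence of variable y by x."""
    tag = t[0]
    if tag == 'var':
        return ('var', x) if t[1] == y else t
    if tag == 'lam':
        return ('lam', x if t[1] == y else t[1], rename(t[2], y, x))
    return ('app', rename(t[1], y, x), rename(t[2], y, x))

# [y := x]y is x, and x # (lambda y. y) holds:
print(rename(('var', 'y'), 'y', 'x'))          # ('var', 'x')
print(fresh('x', ('lam', 'y', ('var', 'y'))))  # True
```

Note that `rename` replaces all occurrences, bound or free; this matches the stated convention that $\#$ means "does not occur at all", not merely "not free".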
I tried to prove that the $=$ shown in above rules is an equivalence relation on such terms.
For equivalence relation, I have to prove the following three relations.
reflexive: $\theta \vdash M=M$.
symmetric : $\theta \vdash M=N$ implies $\theta \vdash N=M$.
transitive: $\theta \vdash M=N$ and $\theta \vdash N=P$ implies $\theta \vdash M=P$.
The proof of reflexive relation is the following.
when $M$ is a variable such as $x$, then $x = x$.
when $M$ is an application such as $M_1\,N_1$, then I have $M_1\,N_1 = M_1\,N_1$, so it is true.
when $M$ is an abstraction such as $\lambda x.M$, from $\lambda x.M = \lambda x.M$ I get $x \# M \vdash M = [x := x]M$, which is not true because $x$ occurs in $M$.
Also, as I said, there are no two terms such as $\lambda x.M$ and $\lambda x.M$, so I cannot show $M = M$ when $M$ is an abstraction.
Since $\alpha$-equivalence is assumed for terms, I assume that $M = \lambda x.M_1 = \lambda y.M_2$. Therefore, I will have to show $\theta,\, x \# M_2,\, y \# M_1 \vdash M_1 = [y := x]M_2$.
Solution
I will only prove reflexivity $A=A$. We proceed by induction on the structure of $A$.
If $A\equiv x$, then $x=x$ follows from rule 1.
If $A\equiv \lambda x. M$, then by the induction hypothesis we can assume $M=M$ is provable. Now, take any $y$ not occurring in $M$. We have $M \equiv [y:=x][x:=y]M$ by construction. Let $N \equiv [x:=y]M$. Rule 2 states
$$
\dfrac{
x\# N, y\# M \vdash M=[y:=x]N
}{
\vdash \lambda x.M = \lambda y.N
}
$$
We indeed have $x\# N$, since the $x$ variable was renamed to $y$ in $N \equiv [x:=y]M$. We also have $y\# M$ by our choice of $y$. The induction hypothesis states $M=M$, which is $M=[y:=x]N$. Hence, the conclusion of the rule holds, which is $\lambda x. M = \lambda y. [x:=y]M$. If terms are identified up to $\alpha$, then this equality is the same as $\lambda x.M = \lambda x.M$.
If $A=MN$, the induction hypotheses and rule 3 suffice.
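The key identity in the abstraction case, $M \equiv [y:=x][x:=y]M$ when $y$ does not occur in $M$, can be checked mechanically. Below is a self-contained sketch (my own tuple encoding of terms, with a hypothetical sample body $M \equiv x\,z$):

```python
def occurs(x, t):
    """True if variable x occurs anywhere in term t.
    Terms are tuples: ('var', x), ('lam', x, body), ('app', f, a)."""
    tag = t[0]
    if tag == 'var':
        return t[1] == x
    if tag == 'lam':
        return t[1] == x or occurs(x, t[2])
    return occurs(x, t[1]) or occurs(x, t[2])

def rename(t, y, x):
    """[y := x]t: replace every occurrence of variable y by x."""
    tag = t[0]
    if tag == 'var':
        return ('var', x) if t[1] == y else t
    if tag == 'lam':
        return ('lam', x if t[1] == y else t[1], rename(t[2], y, x))
    return ('app', rename(t[1], y, x), rename(t[2], y, x))

# Sample body M = x z, with y chosen so that y # M.
M = ('app', ('var', 'x'), ('var', 'z'))
N = rename(M, 'x', 'y')            # N = [x := y]M = y z

assert not occurs('y', M)          # y # M: our choice of fresh y
assert not occurs('x', N)          # x # N: premise of rule 2
assert rename(N, 'y', 'x') == M    # M == [y := x][x := y]M
print("abstraction-case premises hold")
```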
Context
StackExchange Computer Science Q#71545, answer score: 3