Implementing mathematical theory of arithmetic in Haskell via Curry-Howard correspondence
Problem
I have to ask for forgiveness in advance if the whole question doesn't
make a lot of sense, but unfortunately, I have no better intuition as
of right now and this seems like the best starting point I could come
up with.
I'm reading Tarski's book "Introduction to Logic: and to the
Methodology of Deductive Sciences", I'm on Part VII Chapter 43
"Primitive terms of the theory under construction; axioms concerning
fundamental relations among numbers". It begins by laying out an
elementary mathematical theory of arithmetic of real numbers.
The primitive terms are:
- real number (denoted by variables like x, y)
- is less than (<)
- is greater than (>)
- sum (+)
- N, meaning "the set of all numbers"; thus the expression "x is a number" would be written as x ∈ N
- is not less than (≮)
- is not greater than (≯)
Among the axioms of the theory under consideration, two groups may be
distinguished. The axioms of the first group express fundamental
properties of the relations less than and greater than, whereas those
of the second are primarily concerned with addition. For the time
being, we shall consider the first group only; it consists, altogether,
of five statements:
- AXIOM 1. For any numbers x and y (i.e., for arbitrary elements of the set N) we have: x = y or x < y or x > y
- AXIOM 2. If x < y, then y ≮ x
- AXIOM 3. If x > y, then y ≯ x
- AXIOM 4. If x < y and y < z, then x < z
- AXIOM 5. If x > y and y > z, then x > z
Our next task consists in the derivation of a number of theorems from
the axioms adopted by us:
- THEOREM 1. No number is smaller than itself: x ≮ x
- THEOREM 2. No number is greater than itself: x ≯ x
Now I have to stop and finally explain my intentions. I've heard many
times about a so-called "Curry–Howard–Lambek correspondence", and it
made sense whenever I've read examples of it. I am also not
comfortable with manually writing out the reasoning to prove the
theorems; I'd like the machinery of going through the reasoning steps to
be checked by a computer.
Solution
Proofs in Haskell?
Okay, first let's talk about the Curry-Howard correspondence. This says that one can view theorems as types and proofs as programs. However, it says nothing about which specific logic a particular programming language represents.
In particular, Haskell lacks dependent types. That means that it can't express statements with "forall x" or "exists x". So immediately there's no way to encode your axioms.
The problem is that Haskell's types only let you refer to other types, not to specific values of those types. Your axiom says "forall x and y, Property holds for x and y": it's reasoning not about $Nat$ as a class, but about specific values of that class.
There are ways to fake it in Haskell, by encoding values with a type-level representation, but I'd strongly recommend against those for a beginner.
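Just to give the flavour of that style (a minimal sketch of my own, not something to imitate as a beginner): one can promote a datatype of naturals to the type level with `DataKinds` and define a GADT whose inhabitants are proofs of a less-than relation between those type-level numbers.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

-- Natural numbers; DataKinds also promotes Z and S to the type level.
data Nat = Z | S Nat

-- A value of type LT n m is a proof that n < m.
data LT :: Nat -> Nat -> Type where
  LTZero :: LT 'Z ('S m)                -- zero is less than any successor
  LTSucc :: LT n m -> LT ('S n) ('S m)  -- n < m implies S n < S m

-- A concrete proof term witnessing 1 < 3.
oneLtThree :: LT ('S 'Z) ('S ('S ('S 'Z)))
oneLtThree = LTSucc LTZero

-- Count the LTSucc steps in a proof (just to make it observable).
size :: LT n m -> Int
size LTZero     = 0
size (LTSucc p) = 1 + size p
```

Note that a term of type `LT ('S 'Z) 'Z` simply cannot be built, which is how "1 < 0 is false" is represented. But all of the numbers here live at the type level, which is exactly the contortion the paragraph above warns about.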
What Does the Curry-Howard Correspondence Actually Mean?
What Curry-Howard is actually saying is not that we can write a program that implements a checker for proofs. What it says is that we can view a proof as a form of program in a purely functional language. Specifically, a proof is like a program that takes proofs of its assumptions as input, and produces proofs of its conclusions as output.
The key is that propositions are like types, and proofs are like programs. So in your example, it doesn't make sense to have an implementation of your axioms, since axioms are propositions that you take as true, without proof. An implementation would correspond to a proof of them.
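This already works in plain Haskell for propositional logic (no quantifiers): "A and B" is the pair type, "A or B" is `Either`, and implication is the function arrow. A proof is then any total program of the corresponding type. For example:

```haskell
-- Proposition: "A and B implies B and A". The proof is a program
-- of type (a, b) -> (b, a) that works for every a and b.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Proposition: "A implies (B implies A)".
weaken :: a -> (b -> a)
weaken x _ = x
```

Because `andComm` is defined uniformly for all types `a` and `b`, it counts as a proof of the general proposition, not just of one instance.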
The real magic of Curry-Howard comes when we introduce dependent types. The key is the dependent function type: instead of just having $A \to B$, we have $(x:A) \to B$, where $x$ can occur in $B$.
There's a lot going on here. First, we've blurred the line between types and values. In the same way that the type constructor List takes an argument to become a type, like List Int, we can now have type constructors that take values as arguments. The classic example of this is $Eq$ for representing proofs of equality. For any values $x, y$, $Eq\ x\ y$ is the type of proofs that $x = y$. So we have a type indexed on values.
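Haskell can express a shadow of this idea, but only indexed by *types*, not by values (consistent with the limitation discussed above). A sketch of that type-level analogue:

```haskell
{-# LANGUAGE GADTs #-}

-- Equality of types, not values: a term of type Equal a b
-- is evidence that a and b are the same type. The only way
-- to build one is Refl, which requires a and b to coincide.
data Equal a b where
  Refl :: Equal a a

-- Pattern matching on Refl teaches the type checker that a ~ b,
-- so the symmetric proof is immediate:
symm :: Equal a b -> Equal b a
symm Refl = Refl

-- Evidence of equality lets us safely convert between the types:
castWith :: Equal a b -> a -> b
castWith Refl x = x
```

The dependently typed $Eq\ x\ y$ is this same construction, except indexed by the values $x$ and $y$ themselves.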
Secondly, we've now given names to our arguments in the type, not just in the implementation. So the order of arguments very much matters, because the types on the right can refer to the names on the left. They depend on the names, thus the name dependent types.
What this is really saying is that the return type of the function can change, depending on the value of the argument given.
Curry-Howard then says that, if we produce a well-typed function of type $(x:A) \to B$, this is the same as saying "for all $x$ that satisfy $A$, $B$ holds".
The dual of this is dependent pairs, which encode "exists" proofs. Instead of having the tuple type $(A,B)$, we now have $(x:A, B)$, where $B$ can refer to $x$. This means that the type of the thing on the right can depend on the value of the thing on the left. This is a constructive form of an existential proof: the way you prove "there exists an x satisfying B" is by saying "here's the x, and here's the proof it satisfies B".
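Again, Haskell offers only the type-level shadow of this: a GADT constructor can hide a type together with evidence about it. A small sketch of that pattern:

```haskell
{-# LANGUAGE GADTs #-}

-- A weak analogue of a dependent pair: "there exists a type a
-- with a Show instance, and here is a value of that type".
-- The constructor packages the witness with its evidence.
data Showable where
  MkShowable :: Show a => a -> Showable

-- A consumer can use the evidence without knowing the hidden type:
display :: Showable -> String
display (MkShowable x) = show x
```

In a真 dependently typed language the pair would carry a value-level witness and a proof about that specific value, rather than a type and a type-class dictionary.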
Where to learn more?
If you want to prove things in a formal setting, I'd recommend
Coq, Agda, Idris, or Isabelle, since they have the machinery
for dealing with dependent types and proving things.
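To connect this back to the Tarski material: in such a system, the axioms become hypotheses and Theorem 1 becomes a one-line program. Specializing Axiom 2 with y := x gives "if x < x, then x ≮ x", from which x ≮ x follows. A sketch in Lean 4 syntax (assuming an abstract carrier N and relation lt, since we are axiomatizing rather than using Lean's built-in orders):

```lean
-- An abstract set of "numbers" and a less-than relation on it.
variable {N : Type} (lt : N → N → Prop)

-- Axiom 2: if x < y, then y ≮ x.
-- Theorem 1: no number is less than itself.
example (ax2 : ∀ x y : N, lt x y → ¬ lt y x) :
    ∀ x : N, ¬ lt x x :=
  fun x h => ax2 x x h h
```

The proof term is exactly the informal argument: given a hypothetical `h : lt x x`, Axiom 2 (at y := x) turns it into a refutation of itself.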
Some good free resources:
- Software Foundations
- Certified Programming with Dependent Types
- Learn You an Agda and Achieve Enlightenment
Some nice articles/papers/tutorials on the topic include:
- Propositions as Types
- The Power of Pi
- Tutorial Implementation of Dependently Typed Lambda Calculus
The Idris book is popular, but not free:
Type Driven Development in Idris
Context
StackExchange Computer Science Q#87977, answer score: 6