Functor laws and natural transformations in Haskell
Problem
As I've been struggling to get a deeper understanding of monads in Haskell, I started reading about functors and their counterparts in category theory. Keep in mind that I have no background in the latter.
According to Milewski(1), any polymorphic function of the type F a -> G a, where F and G are functors, automatically satisfies the naturality condition, although I have yet to figure out why. Anyway, this should hold in particular when F is the identity functor, which, if I understand correctly, would correspond to the aforesaid function having the type a -> G a. Now, for simplicity let:

    data G a = C a

If G is a functor, then since C :: a -> G a, C is a natural transformation. Vice versa, suppose C is a natural transformation. Then, for all functions f:

    (fmap f) . C == C . f

which I imagine is what leads to the wrapping-unwrapping analogy. In fact, if we render C x as {x}, as in "wrapped x", then the previous equation can be rewritten as:

    (fmap f) {x} == {f x}

which quite literally means that applying the transformed f to the wrapped value is the same as unwrapping, applying f to the unwrapped value, and re-wrapping the result. By the previous equality:

    law 1) (fmap id) {x} == {id x}
                         == {x}

    law 2) (fmap (f . g)) {x} == {(f . g) x}
                              == {f (g x)}
                              == (fmap f) {g x}
                              == (fmap f) ((fmap g) {x})
                              == ((fmap f) . (fmap g)) {x}

It follows that G is a functor.

So is it correct to say that requiring the functor laws on a type constructor is equivalent to requiring that its data constructors satisfy the naturality conditions, or something along these lines? And if not, then what is the purpose of the functor laws from a "practical" point of view?
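To make the derivation concrete, here is a small self-contained check (my own illustration, not part of the original question) that the constructor C is natural and that the two laws hold for this G on sample values:

```haskell
-- Illustration of the question's example: a one-constructor type
-- whose constructor C :: a -> G a is a natural transformation.
data G a = C a deriving (Show, Eq)

instance Functor G where
  fmap f (C x) = C (f x)

-- Naturality of the constructor: fmap f . C == C . f
naturality :: Bool
naturality = (fmap (+ 1) . C) (41 :: Int) == (C . (+ 1)) 41

-- Law 1: (fmap id) {x} == {x}
law1 :: Bool
law1 = fmap id (C 'a') == C 'a'

-- Law 2: (fmap (f . g)) {x} == ((fmap f) . (fmap g)) {x}
law2 :: Bool
law2 = fmap (show . (* 2)) (C (21 :: Int))
    == (fmap show . fmap (* 2)) (C 21)

main :: IO ()
main = print (naturality && law1 && law2)  -- prints True
```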
Edit: I just wanted to clarify that I don't really need an answer to the previous question. What really troubles me is that most tutorials on Haskell's functors …
Solution
I'm having a little trouble working out exactly what question you're asking, but I think it might help if we went through the connection between homomorphisms and free theorems.
But here's the executive summary of what I think you want to know:

- The fmap laws are not redundant when you have free theorems. On the contrary, you need fmap to state the free theorem for any function that involves a Haskell Functor.
- Specifically, if a function has the type of a natural transformation, its free theorem is the naturality condition expressed in terms of fmap.
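To illustrate that second point (my own example, not from the answer): safeHead below has the type of a natural transformation from the list functor to Maybe, and its free theorem is exactly the naturality square fmap f . safeHead = safeHead . fmap f, which we can spot-check:

```haskell
-- A polymorphic function with the type of a natural transformation
-- between the list functor and Maybe (illustrative example).
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Its free theorem is the naturality condition, stated with fmap:
--   fmap f . safeHead = safeHead . fmap f
naturalityEmpty, naturalityCons :: Bool
naturalityEmpty =
  (fmap (+ 1) . safeHead) ([] :: [Int]) == (safeHead . fmap (+ 1)) []
naturalityCons =
  (fmap (+ 1) . safeHead) [41, 99 :: Int] == (safeHead . fmap (+ 1)) [41, 99]

main :: IO ()
main = print (naturalityEmpty && naturalityCons)  -- prints True
```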
First off, let's get a bit of intuition about what the parametricity theorem states. An informal way of stating it is this: For any function which has a "forall" in its type, that "forall" really does mean "for all types". The function cannot make any assumptions whatsoever about the type.
To explain what this means when we involve types with an algebra, we need the language of homomorphisms. So let's review what a homomorphism is. (Sorry if this is revision for you, but I think it's worth going through this.)
Let's consider monoids. A monoid is a set $M$ along with two operations, $0_M$ and $+_M$, which obey the laws:
$$\forall x \in M, x +_M 0_M = 0_M +_M x = x$$
$$\forall x,y,z \in M, x +_M (y +_M z) = (x +_M y) +_M z$$
A monoid homomorphism is a function between two monoids $f : M \rightarrow N$ which "respects" the two operations:
$$f(0_M) = 0_N$$
$$f(x +_M y) = f(x) +_N f(y)$$
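A concrete instance (my own example, not from the answer): list length is a monoid homomorphism from lists under concatenation to integers under addition, and the two equations above specialize to familiar facts about length:

```haskell
-- 'length' as a monoid homomorphism from ([a], ++, []) to (Int, +, 0)
-- (illustrative example).

-- f(0_M) = 0_N  becomes  length [] = 0
preservesZero :: Bool
preservesZero = length ([] :: String) == 0

-- f(x +_M y) = f(x) +_N f(y)  becomes  length (xs ++ ys) = length xs + length ys
preservesPlus :: String -> String -> Bool
preservesPlus xs ys = length (xs ++ ys) == length xs + length ys

main :: IO ()
main = print (preservesZero && preservesPlus "foo" "quux")  -- prints True
```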
In Haskell, we might represent monoids as a type class:

    class Monoid m where
        mzero :: m
        mplus :: m -> m -> m

If M1 and M2 are instances of Monoid, a Monoid homomorphism f :: M1 -> M2 must obey the laws:

    f mzero = mzero
    f (mplus x y) = mplus (f x) (f y)

Remember that the monoid operations on the left-hand side of these "laws" are different from the monoid operations on the right-hand side. The left-hand side uses the operations for M1 and the right-hand side uses the operations for M2.

Now consider the following typeclass (which isn't the same as Haskell's Ord typeclass, but is close enough):

    class Ordered a where
        lessThan :: a -> a -> Bool

Again, there are laws that any reasonable "less than" operator must obey. But what I want to concentrate on here is the Bool part. What does it mean for a function to "respect" this algebra? It essentially means that the Bool has to be the same. That is, if A and B are both instances of Ordered, an Ordered homomorphism f :: A -> B must obey the law:

    lessThan (f x) (f y) = lessThan x y

Think about what this means semantically: this law states that for any two elements, f doesn't change their relative ordering. That is what it means for f to "respect" the ordering algebra.

OK, now back to the parametricity theorem. The free theorem for the function:
    sortBy :: forall a. (a -> a -> Bool) -> [a] -> [a]

is this, written out in a slightly fuller form:

    for all f :: A -> B,
    if lt :: A -> A -> Bool and lt' :: B -> B -> Bool are such that
        for all x, y :: A, lt x y = lt' (f x) (f y)
    then
        sortBy lt' . map f = map f . sortBy lt

We can say the same thing with type classes:
    sort :: forall a. Ordered a => [a] -> [a]

The free theorem here is, essentially:

    for all f :: A -> B,
    if f is an Ordered homomorphism
    then sort . map f = map f . sort

Which is, of course, just the statement that if f doesn't change the ordering of the elements, then applying the substitution map f and then sorting is the same as sorting and then applying the substitution.

The complication with Functor is that it's a constructor class, so the intuition about what a Functor homomorphism should be is a little less clear. But let's step back a moment and talk about category homomorphisms.

Categories are algebras just like any other. A category has two operations, $1$ and $\circ$, which satisfy the laws that you know and love. So a category homomorphism $F$ is a function which satisfies:
$$F(\hbox{id}) = \hbox{id}$$
$$F(g \circ f) = F(g) \circ F(f)$$
If you think of a functor between the "Haskell category" (objects are Haskell types, morphisms are Haskell functions) and a Haskell Functor category (objects are Haskell functors and morphisms are Haskell functions between these objects), then these are exactly the Functor laws:

    fmap id = id
    fmap (g . f) = fmap g . fmap f

Now here's the important part: the fmap operation is the homomorphism precondition that you need for free theorems involving Functor.

Consider, for example, something with the same type as fmap but which isn't necessarily fmap:

    instance Functor F where ...

    somethingLikeFmap :: forall a b. (a -> b) -> (F a -> F b)

The free theorem for this function is:

    if q . f = g . p,
    then somethingLikeFmap q . fmap f = fmap g . somethingLikeFmap p

From this you can conclude a bunch of interesting things, such as that if somethingLikeFmap id = id, then fmap = somethingLikeFmap.
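To see that free theorem in action, here is a toy check (my own, taking F to be the list functor and a somethingLikeFmap that reverses before mapping, so it is parametric but is not fmap):

```haskell
-- A parametric function with fmap's type that is NOT fmap for lists:
-- it reverses the list first (illustrative example).
somethingLikeFmap :: (a -> b) -> [a] -> [b]
somethingLikeFmap h = map h . reverse

-- Pick q, f, g, p with q . f = g . p (here all on Int):
--   q (f x) = 2*x + 1 = g (p x)
q, f, g, p :: Int -> Int
q = (+ 1)
f = (* 2)
g = \x -> 2 * x - 1
p = (+ 1)

-- The free theorem:
--   somethingLikeFmap q . fmap f = fmap g . somethingLikeFmap p
freeTheoremHolds :: Bool
freeTheoremHolds =
  (somethingLikeFmap q . fmap f) [1 .. 5]
    == (fmap g . somethingLikeFmap p) [1 .. 5]

-- And somethingLikeFmap id /= id (it reverses), so the premise of
-- "somethingLikeFmap id = id implies fmap = somethingLikeFmap" fails here,
-- as it should: this function is not fmap.
notFmap :: Bool
notFmap = somethingLikeFmap id [1, 2, 3 :: Int] /= [1, 2, 3]

main :: IO ()
main = print (freeTheoremHolds && notFmap)  -- prints True
```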
Context
StackExchange Computer Science Q#119777, answer score: 2