### Monads for continuation passers

I once quipped that continuations, syntax-case and monads are the kind of thing where, once you understand them, you will harvest boundless strength from the cosmos itself. Since we already covered syntax-case, let’s focus on monads now, because what I’m going to contend is that they are pretty much the same as continuations anyway.

Monads are something that is horribly mystified while actually being pretty simple; I’d say continuations are far more difficult. What people typically call a monad is actually a *monadic value*. A monad in Haskell is simply a type class over unary type constructors, that is, type constructors of kind `* -> *`. That may sound super difficult, but what it effectively comes down to is that it’s a datatype `m` on which two functions are defined:

```haskell
return :: Monad m => a -> m a;
(>>=)  :: Monad m => m a -> (a -> m b) -> m b;
```

Call these *inject* and *bind* for the sake of argument, because those names make more sense. A monad isn’t a type. It’s a unary type constructor paired with these two functions. We can turn basically any type constructor into a monad, provided we pair it with these two functions defined for it.
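For instance, lists already form a monad in the Prelude: the unary type constructor `[]` is paired with singleton injection and a flipped-`concatMap`-style bind. A small sketch (the helper `pairWithNeg` is made up for illustration):

```haskell
-- the list monad from the Prelude: return makes a singleton list,
-- bind maps the function over the list and concatenates the results
pairWithNeg :: Int -> [Int]
pairWithNeg x = [x, negate x]

demo :: [Int]
demo = return 3 >>= pairWithNeg >>= pairWithNeg
-- [3, -3] after the first bind, then each element is expanded again
```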

Assume we have our random monad `M`. What the types of these functions say is that `return` takes a value of any type `a` and returns a value of type `M a`. The other one is a bit more complex: it takes a value of type `M a` and a function which goes from `a` to `M b` for any type `b`, and it eventually results in… well, you can guess that.
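To see those two types in action with a stock monad from the Prelude: in the `Maybe` monad, `return` is `Just` and bind feeds the wrapped value onward, short-circuiting on `Nothing`. The helper `halve` is made up for illustration:

```haskell
-- halve succeeds only on even numbers
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

demo1, demo2, demo3 :: Maybe Int
demo1 = return 12 >>= halve            -- succeeds once
demo2 = return 12 >>= halve >>= halve  -- succeeds twice
demo3 = return 7  >>= halve            -- fails: the chain short-circuits
```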

Now, in order for it to actually be a monad, inject and bind have to satisfy the three holy monadic laws, on pain of death and horrible runtime errors. These are basically left identity, right identity and associativity, masked in a very convoluted way:

```haskell
return x >>= f === f x                  -- left identity
m >>= return   === m                    -- right identity
m >>= f >>= g  === m >>= ((>>= g) . f)  -- associativity
```

Well, that doesn’t look much like left and right identity and associativity, does it? That’s because those properties don’t hold over the bind function itself; they hold over the monadic composition function `(>=>)`, which we can define like so:

```haskell
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c);
f >=> g = \x -> (return x >>= f >>= g);
```

Seems a bit convoluted, but remember that if we had monadic composition as a primitive, then bind would easily be defined as

```haskell
m >>= f = (id >=> f) m
```

As in, bind is essentially monadic composition applied to the identity function as its first argument; it’s a special case thereof where the first function doesn’t do anything interesting. The monadic laws in terms of monadic composition make a lot more sense:

```haskell
return >=> f    === f               -- left identity
f >=> return    === f               -- right identity
(f >=> g) >=> h === f >=> (g >=> h) -- associativity
```

So yeah, this makes a bit more sense: injection is supposed to be the identity of monadic composition, and monadic composition is supposed to be associative. So that is the definition of a monad: a data container which has some weird composition defined on it, which composes functions that inject into that container in some way. Injection/return is supposed to be the most basic identity of such functions.
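As a spot check (not a proof) that familiar monads obey these laws: `(>=>)` really exists in `Control.Monad` with exactly the signature above, and we can test the composition-form laws at sample points in the `Maybe` monad. `f` and `g` here are arbitrary made-up Kleisli functions:

```haskell
import Control.Monad ((>=>))

f :: Int -> Maybe Int
f x = if x >= 0 then Just (x * 2) else Nothing

g :: Int -> Maybe Int
g x = if x < 100 then Just (x + 1) else Nothing

leftId, rightId, assoc :: Bool
leftId  = (return >=> f) 3 == f 3                     -- return is a left identity
rightId = (f >=> return) 3 == f 3                     -- ...and a right identity
assoc   = ((f >=> g) >=> f) 3 == (f >=> (g >=> f)) 3  -- composition associates
```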

And this is used to do I/O? Well, yes and no; it *can* be used for I/O, amongst other things. You see, the problem with Haskell is that it doesn’t define an evaluation order, which makes it hard to get any I/O done; don’t we need some evaluation order for that? Well, let’s make our very own monad to illustrate this:

```haskell
data Box a = Box a;

-- modern GHC (7.10+) also demands Functor and Applicative instances:
instance Functor Box where { fmap f (Box a) = Box (f a); };
instance Applicative Box where { pure = Box; (Box f) <*> (Box a) = Box (f a); };

instance Monad Box where {
  return = Box;
  (Box m) >>= f = f m;
};

-- let's also make a function that gets things out of the box:
safeUnBox :: Box a -> a;
safeUnBox (Box a) = a;
```

The most basic monad we can ever imagine, and it trivially satisfies the monadic identity laws; they are written right into the definition, and injection is simply the data constructor `Box`. Associativity is a bit harder to prove and I can’t be ars… ehh, exercise to the reader, something something. Monads are effectively data containers with injection and binding defined on them, so we can use this to write our own little factorial function, without do-notation because that’s for pussies:

```haskell
factorial :: Int -> Box Int;
factorial n =
  if (n <= 0) then
    return 1
  else
    (return (n - 1)) >>=
    factorial >>=
    (return . (* n)) >>=
    return;
```
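For completeness, here is everything bundled into one self-contained, compilable unit, with the instances modern GHC requires, so you can actually evaluate it (the superfluous parentheses are kept on purpose):

```haskell
-- the Box monad and factorial from above, as one compilable module
data Box a = Box a

instance Functor Box where
  fmap f (Box a) = Box (f a)

instance Applicative Box where
  pure = Box
  Box f <*> Box a = Box (f a)

instance Monad Box where
  Box m >>= f = f m

safeUnBox :: Box a -> a
safeUnBox (Box a) = a

factorial :: Int -> Box Int
factorial n =
  if (n <= 0) then
    return 1
  else
    (return (n - 1)) >>=
    factorial >>=
    (return . (* n)) >>=
    return
```

Evaluating `safeUnBox (factorial 5)` unboxes the result, 120.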

I’ve added a couple of superfluous parentheses to make it clear that `return` is a normal function, not a statement that binds more tightly than the binding operation. Those familiar with continuation passing style should (else you fail and should uninstall your life, n00b) notice the similarity with the continuation-passing version:

```scheme
[define (factorial n cont)
  (<=-cps n 0 [lambda (smaller-than-zero?)
    [if smaller-than-zero?
      (cont 1)
      (--cps n 1 [lambda (pred-of-n)
        (factorial pred-of-n [lambda (factorial-of-pred-of-n)
          (*-cps n factorial-of-pred-of-n cont)])])]])]
```

Now, the structural correspondence is not complete, because I didn’t bother to put booleans in a `Box`, only integers, and I used currying a lot rather than explicit lambdas. Also, I’ve used normal versions of other operations which are not monadic, but we can fix this:

```haskell
(<=!) :: Int -> Int -> Box Bool;
n <=! m = return (n <= m);

(*!) :: Int -> Int -> Box Int;
n *! m = return (n * m);

(-!) :: Int -> Int -> Box Int;
n -! m = return (n - m);

factorial :: Int -> Box Int;
factorial n =
  (n <=! 0) >>=
  (\isSmallerThanZero ->
    if isSmallerThanZero then
      return 1
    else
      (n -! 1) >>=
      (\predOfN ->
        (factorial predOfN) >>=
        (\factorialOfPredOfN ->
          (n *! factorialOfPredOfN) >>=
          return)));
```

These have got to be the most awful ways ever to write a factorial. But look at what we have: are the two not structurally identical? What bind essentially does is take a value in a box and pass that value on to its continuation, which is the second argument of bind. In order for our normal functions to still work, however, we have to ensure they don’t return values normally, but return them in a box. This is why monads and continuation passing style both model computation in a very similar way. In both, you eliminate normal mathematical recursive expression building and turn the flow of computation inside out. You reduce it to single functions which call a future function with the result they produce, and never actually return their result in the normal way to the function that called them.
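The resemblance can be made literal: continuations themselves form a monad. A minimal sketch of the standard continuation monad (this is essentially what `Control.Monad.Cont` provides; the version below is self-contained, and `factorialC` is our factorial rewritten against it):

```haskell
-- a "boxed" value of type a is a function that, handed a continuation
-- (a -> r), delivers the final result r
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

instance Applicative (Cont r) where
  pure a = Cont (\k -> k a)
  Cont cf <*> Cont ca = Cont (\k -> cf (\f -> ca (k . f)))

instance Monad (Cont r) where
  -- bind literally hands the produced value to the rest of the
  -- computation, exactly like the CPS factorial above
  Cont c >>= f = Cont (\k -> c (\a -> runCont (f a) k))

factorialC :: Int -> Cont r Int
factorialC n =
  if n <= 0 then
    pure 1
  else
    factorialC (n - 1) >>= \fp -> pure (n * fp)
```

Running it means supplying the top-level continuation, e.g. `runCont (factorialC 5) id`.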

But what is it good for? What do we all remember about continuation passing style? Yeah, it allows you to make evaluation order explicit. If we have `(+ (* 1 2) (- 1 2))`, we have no control over whether the multiplication is evaluated before the subtraction; an implementation can figure that out, but in CPS we do. Because monads are so similar to CPS, you can define bind in such a way that it forces an order. Which can be any order, by the way, as long as the laws hold.

You can make all sorts of philosophical explanations about how a value in the `IO` type is some kind of description of a computation which the runtime then executes. But what it fundamentally comes down to, in the deep bowels of GHC, is that it’s a hack to have impure, side-effect-causing functions and still use them. Every function which has side effects returns its value in a box. This box is called `IO`, not `Box`, but it does more or less the same thing. The box is designed in such a way that the right side of bind always needs the left side to be evaluated, by passing a dummy value around, creating an artificial dependency. By simply giving every primitive impure function a type such that its return values are boxed in this little useless fictive box, you can ensure that you can control their evaluation order. But there’s one catch: remember that little function `safeUnBox :: Box a -> a`? You don’t have it. At least, you shouldn’t have it, but you do: it’s called `unsafePerformIO :: IO a -> a`, from the module `System.IO.Unsafe`. The supposed nonexistence of that unboxing function is integral to keeping the evaluation order of those impure functions defined. To keep the order defined, they have to make sure that you can never get the contents of the box except via the binding function.