Are monads practical?

Bruno Ribeiro asked a great question about the practical uses of monads. Are they useful? Why are they used so much in Haskell? In this episode, we briefly go over the history of monads in Haskell and how they allow you to do imperative programming in a pure functional language.

Transcript

Eric Normand: What is the practical use of monads? In this episode, we're going to talk about what Haskell uses monads for, and we're going to discuss whether they are useful, and especially in practical terms.

My name is Eric Normand. I help people thrive with functional programming.

In a recent episode, I gave an example of a real world monad. I gave the example of a book, I believe, and someone asked the question, "But what does it have to do with programming? Why do we use this at all?" It's a fair question.

This person was Bruno Ribeiro. He wanted to understand why it's useful in code. Let's get into it.

As a preface, because I'm not programming in Haskell very much these days, I rarely use monads myself.

I'm not really even recommending that you use them, unless you are using a language like Haskell or Scala that has decided that monads are the way to do things.

I'm not a monad advocate. I just wanted to say that. It's a useful concept to understand, but mostly because people talk about it so much; understanding it lets you participate in the conversation in an intelligent way.

People talk about it and you should understand what they're talking about — unless you're doing Haskell. Then, you should go deep into monads because that's the way they do it. I would say that about a lot of features of languages.

If you are into a lisp, I'd say go deep into macros. Understand how macros work. I wouldn't say that to someone doing Haskell. You don't need to know macros.

A little bit of history. It's going to be explanatory. This is why Haskell uses monads, but how did it get there?

Haskell has this goal of functional purity. There's even the slogan of, "Avoid success at all costs." That's like a cheeky way of saying, "We're not going to make compromises for..."

People say, "Oh, to be practical, you need to have side effects and you need to allow them." They're saying, "No, we don't want to do that," especially at the beginning before we understand what we have here.

I appreciate this because there were a lot of languages that already had side effects, had made that compromise, like ML.

Haskell had a different goal, which was to be a lingua franca for pure functional programming for researchers. At the same time, they wanted to do effects. They wanted to, they needed to.

At first, Haskell programs would just calculate something, and the last thing they would do is print out the answer.

They couldn't interact with you or read a file, things like that. It was just: input at the beginning, run the program, print the output at the end.

Eventually they started adding, "You can do effects in this way, this way." There was this IO type that contained the effects, but when you're doing IO, the sequence of steps matters. You need to be able to say, "First read the input from the user, and then calculate the answer, and then print it out."

You can't do them in a different order. You need some way of guaranteeing the order. In most languages, that is a basic language feature. The steps are executed in order.

Haskell is a lazy language, but you could do it. You could make the steps execute in order because there is a dependency: you can't work with the input before you've read it.

There is some dependency there, but you'd have to write the code so that it was passing the results to the next function. You'd have these deeply nested functions. It was awkward: the longer the sequence, the more nested you'd have to get. It was not pretty.
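
To make that concrete, here's a minimal sketch of sequencing IO by hand with the bind operator, the way you'd have to without do notation. The function and the strings are just illustrative, not from the episode.

```haskell
-- Sequencing IO by hand: each step's result is passed to the next
-- function, so the continuations nest deeper with every step.
greet :: IO ()
greet =
  putStrLn "What is your name?" >>= \_ ->
    getLine >>= \name ->
      putStrLn ("Hello, " ++ name ++ "!")
```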

They were looking for a solution to this, and Philip Wadler wrote a paper that showed how you could represent a whole bunch of things from imperative languages using monads: input and output, mutable state, and exceptions, both throwing and catching them.

This was a concept from category theory and he was showing how you could take this and apply it in Haskell.

People liked that, but it was still a little awkward, so they added a notation called do notation. In Haskell, you type do, then you go to the next line and you can just start writing a sequence of statements, and it will convert that into a chain of monadic bind calls.

It's a syntactic sugar but what it allows you to do is write this nice, imperative-looking sequential code.

There's even a version of it that has semicolons so it looks like C. You can do all your IO, and it runs in order, and it's really nice. It feels very much like you're writing imperative code, but under the hood, it's all functional.
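
Here's the same hypothetical sequence from above rewritten with do notation; the compiler desugars it back into that chain of bind calls.

```haskell
-- do notation: reads like imperative code, desugars to >>= calls.
greet :: IO ()
greet = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")

-- The semicolon style mentioned above looks like this:
-- greet = do { putStrLn "What is your name?"; name <- getLine; putStrLn ("Hello, " ++ name ++ "!") }
```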

We'll get back to that. What does it mean that it's functional under the hood? We'll get back to that.

Suffice it to say that they did not violate the purity goals. The language now could do IO. It was convenient to write sequences of steps when you needed it. It was pure.

Haskellers, when they're learning Haskell, have to learn monads. I had to learn monads when I was doing Haskell. It comes up. There's a type called IO, and this type is the box that all effects get pushed into. All the reading of files, the writing to standard output and standard error, it all gets thrown into IO.

That's how it's handled: it all gets thrown into IO. If you are not in IO, it's very pure. Everything is pure, it's lazy, it's all nice. Then you get into IO and now you're into what I will call actions. Outside of IO, it's all calculations. Inside of IO, you've got actions.

As you know, actions can call calculations. I would say, if you are going to draw one line, that's the line to draw: the stuff that's pure and then the stuff that's IO. That's great.

If you turn that type into a monad, it means you can use do notation and you have this nice sequential thing.

You don't even have to know about monads at first. Like you just say, "Oh, this do notation, this is where I put all my imperative code."
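
As a rough sketch of that line between calculations and actions (my own example, not from the episode), here's a pure function next to an IO action that calls it:

```haskell
-- A calculation: pure, no effects, just input to output.
double :: Int -> Int
double x = x * 2

-- An action: it lives in IO, it can read and write, and it can
-- call pure calculations like double along the way.
main :: IO ()
main = do
  line <- getLine
  print (double (read line))
```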

The thing is, in this paper, monads were shown to be useful for other things besides just IO.

I said before, it can do mutable state in a pure way, because it's not really mutable. It's still using immutable data structures. It's just that the name of the value through that sequence of steps stays the same, but the value can change, just like a mutable variable. You can create mutable variables using a monad.
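
For example, here's a minimal sketch using the standard State monad (this counter is my own illustration and assumes the mtl package is available):

```haskell
import Control.Monad.State

-- Looks like a mutable counter, but each step just produces a new
-- immutable state value; the monad threads it through for you.
bumpTwice :: State Int Int
bumpTwice = do
  modify (+ 1)
  modify (+ 1)
  get

-- runState bumpTwice 0  evaluates to  (2, 2)
```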

You can also create error handling, your custom throw-and-catch situations, using a monad. There are other monads for other things you can do, like continuations, that kind of thing. They were shown to be really useful for this.
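
As an illustration of the error-handling case (again my own example), the Either monad gives you throw-and-catch-like behavior in pure code:

```haskell
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

-- A Left value short-circuits the rest of the sequence, much like
-- throwing an exception; the caller can pattern match on it, much
-- like catching one.
calc :: Either String Int
calc = do
  a <- safeDiv 10 2
  b <- safeDiv a 0   -- short-circuits here with a Left
  return (a + b)
```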

But then, it brings up this question. Is that useful? Because these are things that other languages already have. They have mutable state. They have sequencing of input and output. Why is that useful?

Alan Kay has said on the record that this is a kludge, like you are trying to stay functional in a situation where you don't need to be functional.

At the same time, it is a pure way of doing these things, and because it's pure you get all these other benefits, like laziness. Since it's mathematically, rigorously defined, you can do algebra on it. Because monads are a concept from category theory, you can translate them into another category.

I don't know how to do all that stuff, but it's high-level stuff that is possible because it's pure, because you can understand it at a syntactic level, because it has mathematical laws. There's all this stuff that you can do with it that I honestly don't quite understand.

It's not stuff I do in my day to day, but I could also imagine using a library that relied on that and be thankful that I was inside of that monad.

You've got these two sides of the same situation. One is like, "Well, we already had those things in our language. We already had mutable state. We already had side effects. We already had sequencing of steps. We had all that stuff. So, we don't need them. We don't need monads."

Then at the same time, you have this other realm that you can't get to if you don't use the monads. This is the trade-off right here. In Clojure, there's a library where you can do monads as well as you can do them in Clojure. I always wonder why. Why do you want this?

What is the appeal of bringing monads into a place where you already have all the things that people are using monads for? It feels a little bit like just chasing after Haskell. Like, "Oh, they use it, so it must be good. Let's do it ourselves too."

Haskell uses monads so much primarily because of a self-imposed limitation of purity, and so they need something that lets them have the mutable state, the sequencing, things like that, OK?

There are other things that monads are useful for. I'm not saying there aren't, but this is their primary use on a practical level.

Just a short recap, if I'm in Haskell, yeah, monads, let's use them because that's the way you do things in Haskell. I know in Scala, in the effects systems, they use a lot of monads as well. They're trying to get to this purity that they don't quite have as much as in Haskell, because the language does allow for mutable stuff and things, but they're trying to get there with effect systems.

If I'm in Clojure, if I'm in some other functional language that doesn't use them, even ML, I wouldn't want them. I wouldn't say, "Oh, we need them or we can't continue."

All right. What is the practical use of monads? How is it that monads allow for you to sequence these things? This is going to be the last thing we talk about.

I haven't really explained why monads let you do this. Now I'm going to try.

Let's say you're doing imperative programming. In imperative programming, we're just going to simplify the model a little bit.

We're going to say that, in imperative programming, you have subroutines, you define these subroutines, and then the subroutines are calling other subroutines.

It's subroutines of subroutines of subroutines until you get to some built-in functionality.

A subroutine is just a sequence of steps and it does a basic thing or it calls another subroutine. That's a pretty basic model of imperative programming.

You have these nested subroutine things, but what the CPU expects is a linear sequence of steps. What do I run next? What do I run next? What do I run next?

An imperative compiler would probably compile it like: do this, do this, do this. Now jump over to this subroutine and do that: do step one, do step two, do step three.

Look, step four is another subroutine call, so jump to that subroutine and continue like that, and then you have a stack so you can come back out of the subroutines.

If you notice, this is a nested structure, and you could linearize it by doing a monadic join. I have a subroutine A that includes a call to subroutine B.

It's a subroutine of subroutines, and when I do a monadic join, it becomes a single subroutine: not nested, just steps. This allows the sequencing of all the steps for execution. It doesn't go straight to the CPU, obviously, but it becomes a linear sequence of steps because of that monadic join.

You can go listen to the other episode where I talk about monads in more depth about how they're defined.

It's a list of steps inside of a list of steps. It's a list of lists of steps. You can do the join and, boom, it becomes just a single list of steps. That is what gives you the sequence the CPU needs.
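
To make the flattening concrete with the simplest stand-in I can think of, here's join on a list of lists of steps. The "steps" are just strings here, purely for illustration.

```haskell
import Control.Monad (join)

-- join removes one level of nesting: a list of lists of steps
-- becomes one flat list of steps.
program :: [[String]]
program = [["read input"], ["compute answer", "print answer"]]

linear :: [String]
linear = join program
-- evaluates to ["read input", "compute answer", "print answer"]
```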

Now, here's the thing. You need some branching. You need some kind of, "Well, I'm going to execute this subroutine and if the answer is zero, I'm going to do this subroutine. If the answer is one, I'm going to do this subroutine." You don't even know what branch you're going to go down until after you've executed that step.

This is where the bind comes in, because you can have a function that makes that choice. I'm going to execute some database query, let's say, and I get back the answer zero. My function is going to look at that answer and return subroutine A or subroutine B depending on the answer.

If the query returns zero, it'll return subroutine A, but if it's one, it'll return subroutine B, and then the join will execute and make that a linear sequence of steps.
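
Here's a minimal sketch of that choice, with a readLn standing in for the database query (the names are hypothetical):

```haskell
-- The function on the right of >>= looks at the result and decides
-- which action to run next.
chooseNext :: Int -> IO ()
chooseNext 0 = putStrLn "running subroutine A"
chooseNext _ = putStrLn "running subroutine B"

main :: IO ()
main = readLn >>= chooseNext
```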

You don't know the steps in advance, because you're using bind, not just a strict join. You don't know the sequence of steps in advance because there's going to be choices that you make along the way, or that your program makes along the way.

You're not going to be able to just linearize it. You have to execute it one step at a time. Some of those steps are going to call some kind of side effect. That's going to give you an answer that you can make a decision based on.

That is a property of monads: because of the bind, you can't really expand it all the way. The bind includes a little decision in it. Like, what do I do next? Now that I've got this answer, what do I do with it?

That's a limitation of monads, a very well-understood limitation: it's not a pure data structure. It has functions in it that will be required later, and they can't run until the previous step executes.

It's all built in there. You're doing these little pure calculations like, "What subroutine do I run next?" That subroutine gets returned and gets linearized into the next sequence of steps. It doesn't have the whole future. It just handles one step at a time like that.

That's enough about monads for the moment at least. We're talking about whether they're practical.

Like I said, just to recap, they're very practical in Haskell because of the choices it's made, its philosophy of not making compromises.

Something is doing the effects. Something has to be doing the effects, but they are pushing it off as far as possible. It's the run time that's actually running the effects.

I don't use monads much myself unless I'm doing Haskell or something. I'm not recommending them.

Philip Wadler was the one who brought monads from category theory into Haskell. It lets them do something that looks like imperative code with just a little bit of syntactic sugar, the do notation.

It lets you reproduce all these things that you have in imperative languages, like sequencing steps and mutable state and stuff like that.

If you like this episode, you should go to lispcast.com/podcast. There you will find all the old episodes, so many. You will see the audio, the video, and the text versions of all of them.

You'll also find links to subscribe and to get in touch with me on social media. Please do subscribe because then you'll get all the future episodes. Awesome.

My name is Eric Normand. This has been my thought on functional programming. Thank you for listening and rock on.