Haskell is very competitive with C, and on some benchmarks, it is faster. How is that possible? With all that Haskell does on top of the raw C code, how can it possibly be faster? In this episode, I talk about two advantages of Haskell that can make it faster than C.
Eric Normand: How is Haskell faster than C? I have two explanations, which I'll get to in a moment. My name is Eric Normand, and I help people thrive with functional programming.
Usually, languages are compared to C. It's the standard benchmark language, so you say, "Oh, this language is only a factor of two slower than C." Most languages are slower than C. Especially when you get into the higher-level languages, you have this idea: "Well, we lost a lot of speed."
You don’t have to deal with all these problems that C has. You don’t have memory leaks because we have a garbage collector. You don’t have to deal with when to free the memory, just all these niceties add up, but it’s at the cost of the speed.
Then, you have something like Haskell that is often faster than C in the benchmarks. It's very close to C when it's not faster. It's not like, "Oh, it's twice as slow." No, it's right there, it's within a few percent of C, and very often it's on the other side, it's faster than C. What's going on?
There are actually two things, at least. These are the two things I know of that are going on. The first is optimization. The Haskell compiler is given a lot of knowledge about the program, usually in the form of types. Those types are richer than the types in C.
Not only is it the types, but there’s a lot of the semantics of Haskell that give the compiler a lot of leeway, let’s call it, a lot of wiggle room for optimization.
One of those is lazy evaluation. C has strict evaluation: you call a function, and it executes one line at a time. That's how you should think of C executing. It's going to run this line, and everything on this line is going to finish before it gets to the next line.
In Haskell, that's not the case. You can call something, give it a name, and it doesn't actually do anything yet. The compiler can analyze it and a lot of times figure out, "Hey, you only use this in this one branch. I'm not going to compute it until I know you're in that branch, and then I'll do it."
Or, it might never do it at all. It might just remember how to compute it so that, in case you need it, it will be there, but it won't do the work up front. There's a lot of this that you could, in theory, hand-tune yourself if you're very good at optimizing.
As a programmer, you could hand-tune and say, “Oh, there’s a certain case where I won’t need this value, so I’m just not gonna calculate it until I really, really need it.” In practice, it gets so hard. It gets so complicated that you’re just not going to do it. Haskell can do that. Haskell just does it and the programmer doesn’t have to think about it.
It can do some analysis and figure out when something is going to be needed anyway, in which case it might as well just calculate it now. It can do some analysis and say, "Oh, it's only needed sometimes, so I probably won't do it yet." Sometimes it just punts and says, "Well, I don't know how to analyze this." The net effect is that it's faster.
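Here's a minimal sketch of what that looks like in practice. The names (`expensive`, `maybeUse`) are illustrative, not from any real library:

```haskell
-- A deliberately costly computation.
expensive :: Int
expensive = sum [1 .. 1000000]

-- In a strict language, binding 'x' would force the sum on every call.
-- In Haskell, the binding just remembers how to compute it; the work
-- only happens if the True branch is taken.
maybeUse :: Bool -> Int
maybeUse needed =
  let x = expensive        -- bound, but not computed yet
  in if needed then x else 0

main :: IO ()
main = print (maybeUse False)  -- prints 0 without ever computing the sum
```

If you call `maybeUse False`, the sum is never evaluated, which is exactly the "don't do it until you're in that branch" behavior described above.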
Another thing: because everything is pure, the compiler can do a lot more optimizations, like moving code around, inlining, and doing more work at compile time, that can't be done in C. Those are typically thought of as inlining, peephole optimization, that kind of stuff.
Haskell has a broader range of maneuvers it can make that let the code get optimized really well. That's nice. You're getting the benefits of the high level. You can write the code the way it should be read. I'm coding for another programmer, I'm making it very readable, but then the compiler can transform it into something that's better suited to be executed on the machine.
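As a small sketch of the kind of code-moving that purity allows: two list traversals can be fused into one, and that rewrite is only safe because the functions involved have no side effects. GHC does this kind of fusion automatically; here it is spelled out by hand with illustrative names:

```haskell
-- Two passes over the list.
doubled :: [Int] -> [Int]
doubled xs = map (* 2) (map (+ 1) xs)

-- One fused pass. This transformation is valid only because (*2) and
-- (+1) are pure; with side effects, reordering could change behavior.
doubledFused :: [Int] -> [Int]
doubledFused xs = map ((* 2) . (+ 1)) xs

main :: IO ()
main = print (doubled [1, 2, 3] == doubledFused [1, 2, 3])  -- True
```

In C, the compiler generally can't prove two arbitrary function calls are free of side effects, so it has to be much more conservative about rewrites like this.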
That's number one, that's optimization. Number two is potentially even bigger. That is that Haskell lets you use better data structures and algorithms. I need to explain, because yes, both languages are Turing complete, so in theory you could write the same thing in either. I agree with that argument, but let's take it off the table.
Bryan Cantrill actually made this argument, and that’s where I got the idea for this episode from. He was talking about Rust but it applies equally to Haskell. He has been doing a lot of system programming in his career and a lot of it is done in C because it needs to be low level.
His argument goes like this: "Well, if you need a data structure, in C it's just hard to do anything more complex than a linked list." Or maybe you could get a little more complicated, but you write linked lists all day long because it's something you know you're not going to mess up, and it does the job, and it's fine.
Haskell, because it's higher level, can actually manage much more complex data structures and do so in a correct way. It gives you tools to write data structures that are known to be more efficient for certain access patterns. Linked lists are very linear.
Not every access is linear. If you access the first element, it's constant time. But in general, say your algorithm accesses elements inside the list randomly. Each of those accesses is linear. If you do that in a loop, wow, you're quadratic already.
In Haskell, you can replace that linked list with something else, like let’s say a tree. Now, your access is logarithmic, and you’ve already saved a bunch of time in terms of time complexity.
That is another way that Haskell benefits over C: if the problem is complicated enough or big enough, the difference in access complexity, in Big O terms, between a linked list and a tree is a big enough difference that, boom, Haskell's going to win.
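To make the linked list versus tree comparison concrete, here's a sketch using `Data.Map` from the standard containers library, which is implemented as a size-balanced binary tree:

```haskell
import qualified Data.Map.Strict as Map

-- A linked list of key/value pairs: lookup is O(n), a walk down the list.
pairs :: [(Int, String)]
pairs = [(k, show k) | k <- [1 .. 1000]]

-- The same data in a balanced tree: lookup is O(log n).
tree :: Map.Map Int String
tree = Map.fromList pairs  -- built once, O(n log n)

main :: IO ()
main = do
  print (lookup 999 pairs)     -- scans ~999 cells to find the key
  print (Map.lookup 999 tree)  -- descends ~10 tree levels instead
```

Both print `Just "999"`, but if you do that lookup inside a loop over all the keys, the list version is quadratic while the tree version is n log n, which is the gap being described above.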
It's simply a matter of how much complexity you can handle. Of course, if someone wrote a tree in C and you imported that library and included it in your C code, you would start competing with Haskell again. You could do that, sure. But do you do that? Is that a possibility?
In these benchmarks, it might not be a possibility. Whereas, in Haskell, you could do that, you could write it yourself. I find that this is the case, a lot of times, in higher-level languages. It’s a spectrum. C is a very low level, Java’s higher level than C, but then Scala’s higher level than Java, Clojure is higher level than Java.
It’s a spectrum. What happens is as you tackle these benchmarks, sure, C is going to win because it’s very small problems. It’s like calculating something with a known algorithm. People have been optimizing the C algorithm for that for years, so they know exactly how to make it the fastest.
When you’re dealing with more real-world problems, bigger problems, very often, you do need garbage collection and a better concept of data type and data structure than C will give you. That’s what happens. It is not just about data structures, there are other facilities of the language.
As a little anecdote, I heard a story once where there was a competition to see who could write the fastest program, and it was C versus Java. It surprised everyone, especially the C programmers, but the Java implementation won.
It won by a lot. The C people were like, “No, that’s not possible. It’s…” “How can this big monstrous VM beat our highly-tuned tiny little C program?” They started reading the Java code. You could imagine them huddled there with their printouts like, “Huh!” They all were like, “No, look. They cheated.”
What they were pointing at was that, in Java, they had used threads. They used multiple threads to solve the problem, whereas in C they didn't. They considered that cheating. You can see the problem here; you can see why they considered it cheating.
They're used to benchmarks where it's just one thread, purely sequential code. It is really hard to write threads in C compared to Java. In Java, it's very easy to write threads and to start new threads. So what do you do? Who won?
I say the Java people won. That's the whole point of Java: it makes those kinds of things easy. It makes cross-platform threads really easy. It makes cross-platform garbage collection really easy. I think the same thing is true of Haskell.
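Haskell makes the same move the Java team made in that story even cheaper: spawning a lightweight thread is a single call to `forkIO`. This is a sketch with illustrative names, not code from the anecdote:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- Some work to hand off to another thread.
bigSum :: Int
bigSum = sum [1 .. 1000000]

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO (putMVar done bigSum)  -- spawn a lightweight thread: one call
  result <- takeMVar done            -- block until that thread finishes
  print result
```

Compare that with pthreads in C, where you need function pointers, manual argument packing through `void *`, and explicit joins. When parallelism is that cheap to express, people actually use it, and that shows up in benchmarks.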
If it makes it easy to write a data structure that gives you an advantage over a linked list, let's say, then that's what you get. That's why it's faster.
All right. If you have some ideas about why Haskell is faster, about how these high-level languages could be faster than a low-level language where you get to control everything and hand-tune the code down to individual instructions, if that's the thing you like doing, how is that even possible? Do you agree with me, disagree? I'd love to know.
You can go to lispcast.com/podcast. You can find ways to contact me, email, Twitter, LinkedIn. You can also find all the past episodes. They have audio, video, and text transcripts of all the past episodes. You can find subscribe links if you want to subscribe. Thank you so much. Check you later.