Because I promote functional programming, people often take what I say to mean that I don’t like OO. However, I think there’s a lot of cool stuff in OO. In this episode, I go over three or four things I think OO does really well.
Eric Normand: “Why is Object Oriented Programming So Great?” Hello. My name is Eric Normand and I help people thrive with functional programming.
As a functional programming advocate, a lot of what I say can sound like I don’t like Object Oriented Programming. In this episode, I would like to talk about nothing but things that I like about OO. This is purely an opinion piece, like most of my episodes.
I studied Object Oriented Programming in college, from a Java-centered approach. I’ve also studied the early papers about Smalltalk and a lot of Alan Kay’s work. I’ve never actually written anything in Smalltalk, but I’ve played with Squeak. I’ve gotten some insights into it.
I’m not an expert or anything, but I feel like I understand enough about it to be aware of why object-oriented programming could be powerful and where things are going wrong in the world of Java and things like that.
I would like to go over some of the things that I really like about object-oriented programming. Before I do that, I would like to say that Clojure wouldn’t exist without the object model that exists on the JVM. It uses it to a really great effect.
Most of Clojure is written in Java. It definitely uses the OO features of Java to implement its features. Functions are just objects: there’s an interface called IFn that every function implements. All of the core abstractions are developed as interfaces.
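As a simplified sketch of that idea, using plain Java’s standard library rather than Clojure’s actual classes: a function value on the JVM is just an object whose class implements an interface.

```java
import java.util.function.Function;

public class Main {
    public static void main(String[] args) {
        // A function value is just an object implementing an interface,
        // analogous in spirit to Clojure's IFn abstraction.
        Function<Integer, Integer> inc = x -> x + 1;
        System.out.println(inc.apply(41));         // prints 42
        System.out.println(inc instanceof Object); // prints true: it's an object
    }
}
```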
It’s not clear that there needs to be such a strong division between object-oriented and functional programming; they can complement each other very well.
It’s neat that a functional language, Clojure, is built on top of the object-oriented principles of the JVM. That’s awesome. I’m going to go over three or four points that are really cool about OO.
Here we go. The first thing is that messages provide a layer of indirection. This layer of indirection is used for several things in most languages. When you send a message, the message has a name and then has some arguments.
This name provides us a bit of semantics, some meaning to the message you’re passing. It’s usually a human-layer meaning. It’s in English or some human language. It’s not something the computer can understand directly.
It’s an important layer of indirection because it lets you do polymorphism. By that, specifically, I mean that the receiving object can be a different class. As long as the message name is the same, the object receiving it can be different. You could have different results.
That’s polymorphism. It lets you separate out the caller from the callee, meaning you can hide the implementation details. The caller should not have to care how it’s implemented.
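To make that concrete, here’s a minimal Java sketch (my own toy example, not from the episode): the caller is written against the message name alone, and never knows which class receives the message.

```java
// Two different classes respond to the same "area" message.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class Main {
    // The caller depends only on the message name, not the receiver's class.
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area(); // late-bound: resolved by the receiver's class
        }
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        System.out.println(totalArea(shapes)); // prints 7.141592653589793
    }
}
```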
All this comes naturally out of indirection. Any layer of indirection should do that, should provide some kind of hiding. I’m indirecting through this so I don’t have to know what’s on the other side, like indirection in real life, like some kind of escrow service.
I don’t need to know when the person is going to get their money. I put the money in this place and then they get it later.
It’s a layer of indirection that serves to separate out the caller from the callee. That’s super important in programming. This message layer lets you do late binding. Some OO languages will use the static types to decide what method to call based on the message, but Smalltalk didn’t really have that.
Everything was late bound. That’s what allows for polymorphism in a lot of ways that when I’m sending this message, I don’t know what the receiver is going to do with it. There’s always the hope that it will understand the message and do what I think it means, but you don’t know it’s going to do it.
This is important when you’re building a system of any size because you don’t know how the system is going to work in general. Every change you have to make means that you will have to — if you don’t have polymorphism — change both sides, the caller and the callee.
If the caller now has to send the message to a different type of thing, it might require change on both sides: changing the thing, the callee, and changing the caller to match the callee. With late binding, the caller is really insulated from changes to the callee, meaning I can define a new type of number.
Let’s say I want to make a complex number class, it implements plus, times, divide and minus, and all the things that I consider important for numbers. Now, I can plug that right in and it works in all of my existing code. No recompiling, nothing. It just works. That’s late binding.
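Here’s a minimal Java sketch of that idea, assuming a hypothetical Num interface of my own (Smalltalk would let you do this without declaring the interface up front): a Complex class plugs into a formula that was written before Complex existed, and the formula never changes.

```java
// Hypothetical numeric protocol: any class implementing it plugs into
// existing formulas without those formulas being touched.
interface Num {
    Num plus(Num other);
    Num times(Num other);
}

class Complex implements Num {
    final double re, im;
    Complex(double re, double im) { this.re = re; this.im = im; }
    public Num plus(Num other) {
        Complex o = (Complex) other; // simplification: a real library would coerce mixed types
        return new Complex(re + o.re, im + o.im);
    }
    public Num times(Num other) {
        Complex o = (Complex) other;
        return new Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }
    public String toString() { return re + "+" + im + "i"; }
}

public class Main {
    // An existing "formula" written before Complex existed: a*a + b.
    static Num formula(Num a, Num b) {
        return a.times(a).plus(b);
    }

    public static void main(String[] args) {
        // Complex slots right in; formula() was never changed or recompiled.
        Num result = formula(new Complex(0, 1), new Complex(1, 0));
        System.out.println(result); // i*i + 1 = 0, so prints 0.0+0.0i
    }
}
```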
As your system evolves, there’s always going to be new stuff and new requirements that come in. You learn and you want to be able to change the system, but you want your changes to be isolated so you don’t have to change everything. That’s what late binding gives us.
It also gives us a dispatch point. You can look at it in another way. In FP, even in typed FP, you often have different values that are part of the same type, and you want to do a different thing based on the value. The classic case is something like a Maybe type, an optional value.
You either have the value or you don’t, and you want to do a different thing if you have the value than if you don’t. This assumes you know all the possible values. That means that if you need to add a new value to that type, you’re going to have to go change all the call sites.
With early binding, you’ve tightly coupled the two sides, so the indirection is partial. It’s not real, because the caller is still not totally separated from the callee. In FP, what you do is a case statement or an if statement, some kind of branch on the value: if you have the value, do this; if you don’t have the value, do that.
In OO, it reverses that. Instead of choosing the function first and then in the function you branch, OO says, “First look at the type of the thing or first look at the class — what value is it? — and then ask it to figure out what the operation should do.” That’s method lookup.
Let me say this again. In FP, the name of the function you’re calling determines the function. Then inside that function, you do some dispatch, like a branch based on the value. It’s a little dynamic, because you have to check at runtime: do I have the value, or do I not have the value?
In OO, it’s reversed. First, you look at the value. That means looking up the value’s class. Then in the class, you’re doing this dispatch of look up the name of the method. Then that will give you the method, and then you call it. It’s the exact opposite order of things.
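To put the two orders side by side, here’s a hedged Java sketch (all names are mine, not from the episode). The FP-style function is chosen first and branches on the value inside; the OO-style version consults the value’s class first, and method lookup picks the behavior.

```java
import java.util.Optional;

public class Main {
    // FP style: the function is chosen first, then it branches on the
    // value at runtime. Adding a new case means editing this function
    // (and every function like it).
    static String describeFp(Optional<Integer> maybe) {
        if (maybe.isPresent()) {
            return "got " + maybe.get();
        } else {
            return "nothing";
        }
    }

    // OO style: the value's class is consulted first; method lookup
    // finds the right describe(). Adding a new case is just a new class.
    interface MaybeInt {
        String describe();
    }
    static class Just implements MaybeInt {
        final int value;
        Just(int value) { this.value = value; }
        public String describe() { return "got " + value; }
    }
    static class Nothing implements MaybeInt {
        public String describe() { return "nothing"; }
    }

    public static void main(String[] args) {
        System.out.println(describeFp(Optional.of(42))); // prints got 42
        MaybeInt m = new Just(42);
        System.out.println(m.describe());                // prints got 42
    }
}
```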
The class is primary. The value is primary in OO, whereas in FP the function is primary. This reversal is, I think, one of the magic tricks of OO. It’s like a jewel. It’s just one idea, but it gives you all these little benefits.
This one reversal of letting the type — the value itself — determine how the operation should work, it means you don’t have this closed world assumption anymore. The caller can be separated from the callee. The caller no longer has to worry about what value it’s going to be sending this to.
Once it sends the message, the callee can determine what to do with it, what is appropriate for it, for its semantics, for its way of representing things. This lets you add new cases, totally ad hoc without changing the caller.
It’s another way of scaling your system. Every message pass is like you’re able to divide your system in half, the side that’s calling and the side that’s being called. It’s a very clean break.
You can modify one side without modifying the other. That’s what layers of indirection are supposed to do. Systems that don’t give us those layers of indirection become very tightly coupled.
OO is all about this ability to separate out the caller from the callee and give us this important level of indirection. Just to summarize, it gives us polymorphism. It lets us add in new cases (new types of values, new classes) that fit right into an existing use.
I gave the example that a complex number can now, if you implement all the right methods, go into a formula that you developed using integers, for instance. It’ll fit right in. You’re calling the same operations on it, so why not? The caller had no idea that was even a concern.
It allows for late binding, which means you can redefine things even after they’ve started being used. It also allows for dynamic recompilation. I can modify the code of this one class without having to recompile all the other classes, because it’s just an indirection. It’s a message pass.
Finally, this late binding idea lets you be open. I can add new classes, new cases, new types of values that I didn’t have to anticipate. I don’t have to change the caller. It’s all about this indirection.
This indirection, it’s basically a humility. It is saying, “I don’t know how this system is going to evolve. I don’t know the requirements six months from now, two weeks from now. I don’t know how things are going to change.
I’m going to learn things along the way. I don’t know everything right now. Things are going to change, and I want to have to change as little as possible. Some code should still work even though I change a lot of it.”
It’s a way to scale. In any project, when you first start, you know almost nothing about it. Programmers are like scientists. We’re like empirical scientists. We go into a domain, and we don’t know how it should work.
We try things, we make hypotheses, and we test out whether this will make a good model of the system. Sometimes, we get pretty far. Then we get stuck, and we need to go back to the drawing board and make a new model. That’s what we do. We’re like little scientists.
We don’t know how things should work, but often we get a lot right in the process. We can see that there are little things we do understand, so we can break those off into small pieces that are totally encapsulated.
This is a great thing for OO, total encapsulation of things. Those things might stick around for a really long time, because they were really well understood. That understanding was encoded in their interface.
Some things we got wrong, especially the bigger things. More chances for mistakes. We had to rework those. We don’t want to rework the other parts that use those things. We want to rework the thing itself.
I see OO as this admission that we don’t know. The only way to know is to go in, find out and try it out. Try to encode this knowledge as a program and see how it looks, see how it works.
We can’t do that if, every time we try to change something, we have to change the thing calling it as well. It’s an open-world assumption. It just means, “Here’s the stuff I know. I know there’s other stuff that I don’t know yet, but this is what I know.” [laughs] That’s basically what it means. That’s built into OO.
I’ve said, I think, a lot of good things about OO. People will still continue to think that I don’t like OO at all because I talk about some of its limitations, but I do like OO. I think there’s a lot of good stuff there.
Thank you very much. This has been my thought on functional programming. My name is Eric Normand. If you like this episode, you can find other episodes at lispcast.com/podcast. There, you’ll find all the past episodes with text transcripts, video, and audio. There will be links to subscribe on podcast platforms, on YouTube, or via RSS if you like the text.
You can also find links to find me on social media: email, Twitter, or LinkedIn. I love to get into discussions. If this episode was meaningful to you, whether you’re pro (you agree with me) or con (you disagree with me), let me know and we’ll talk about it, because that’s what this is for.
It’s me broadcasting, so I can talk to more people who like these ideas. You don’t have to agree with me. You just have to like talking about them. Awesome, thanks for listening. Rock on.