Is there a silver bullet for software development? (part 1)

In The Mythical Man-Month, Fred Brooks argues that no single improvement can give us an order-of-magnitude increase in productivity. His main point is that most of what's left to improve is essential complexity. But is that true? Can we throw in the towel and declare there is nothing left to improve?

Transcript

Eric Normand: Is there a silver bullet in software development? Hello. My name is Eric Normand and I help people thrive with functional programming.

I just finished reading this book, "The Mythical Man-Month." It's a bunch of essays about software development.

This is the 20th Anniversary Edition, which I think was written in 2001. It has some responses...Oh, no. Look at this, 1995. Never mind. It's pretty old now. It's still [laughs] 25 years old, but the book came out 20 years before that...the original essays.

What's interesting is, you get to see the author's responses to his own essays after all that time, and his responses to the people who have responded to his essays over the years.

One of the essays is called, "No Silver Bullet." Fred Brooks, the author, makes the argument that, from the point when he wrote it, there would not be a ten-fold increase in productivity in software development. He lays out a pretty clear argument that there are two types of complexity in software development.

In anything, there are going to be two types of complexity, but he uses this particular framework. It was developed by Aristotle, so it has a long history to it. The framework is: there is the essential complexity, which is the conception of the thing, figuring out how the software is supposed to work and what it's supposed to do, and all that stuff.

Then there's the accidental complexity, which is all the stuff about typing it in. Back when he wrote it, there had already been a lot of gains in the typing-it-in stuff. They had moved from a batch-oriented mainframe system, where you would work on your software, submit it, and it would go into a queue.

Sometime later, maybe days later you would get the results of your run back. Computing time was very scarce, and it took a long time. There was a very high latency between you writing code and getting the result back.

In the 10 years before he wrote the essay, they had developed time-sharing systems that were reliable. People could use the computer, basically, all the time. They would have a terminal that was connected to the mainframe. They could type code all the time, run it, and have a much faster cycle.

He talks about a lot of improvements like that, that aren't about thinking about the thing but more about making it real, implementing it, typing it, stuff like that.

His argument is that, at the point when he wrote the essay, the amount of complexity that comes from typing it in was shrinking, because they were getting more and more efficiency with that implementation stuff, that typing and making it real.

To his mind, it was already less than one-tenth of the complexity. The other 90 percent of the complexity was the essential stuff: thinking about it, getting your algorithm right, and things like that.

His argument was: if it's true that you only have 10 percent, or less than 10 percent, of accidental complexity left (these are his terms, the ones he took from Aristotle), then you'll never be able to get another ten-fold increase in productivity.
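
Just to put numbers on that, here's a quick back-of-the-envelope sketch in TypeScript. The figures are mine, following his reasoning, not his exact numbers:

```typescript
// If accidental complexity is only 10% of the total effort, then eliminating
// every last bit of it still leaves the other 90% of the work untouched.
const accidentalShare = 0.10;                 // the "less than 1/10" estimate
const bestCaseSpeedup = 1 / (1 - accidentalShare);
console.log(bestCaseSpeedup.toFixed(2));      // "1.11", nowhere near 10x
```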

The only thing you can do is shave off a little bit more of that 10 percent. You're never going to get an order-of-magnitude difference. I think that argument makes sense, but I want to rebut it. The main problem I have with the argument is the division of stuff into essential and accidental.

From my perspective, the improvements we've made to the act of programming were innovations, inventions that took something people thought was essential and made it look more accidental.

People thought at the time that computers were these things you only got access to for a little bit, that that was an essential part of computing. It took people inventing and pushing against the status quo to turn that into something you could address. It's a mental shift as well as the physical work of building the terminals and the time-sharing system.

A lot of the stuff he talks about as essential is stuff like all the states your program can get into, which implies that there's mutable state, and that there's a lot of complexity in memory management and things like that. In the time since he wrote that, those things have become much more of a choice.

They are not essential anymore; they're not an essential part of the problem. I'm not sure whether we have shaved it down to less than 10 percent, because I'm suspicious of the idea that there's a fixed essential complexity in the first place. I think everything does have an essential complexity, but the technology is always moving stuff from essential to accidental.
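
For example, here's a toy sketch of my own (not anything from the book) of how mutable state has become a choice. With immutable data, "all the states your program can get into" shrinks to the values you explicitly construct, and garbage collection does the same for memory management:

```typescript
// An immutable record: updates produce a new value instead of changing the
// old one, so nothing mutates out from under you. The runtime's garbage
// collector handles the memory, so that complexity is opted out of too.
type Telemetry = Readonly<{ altitude: number; fuel: number }>;

function applyBurn(t: Telemetry, fuelUsed: number, climb: number): Telemetry {
  // Return a fresh value; the previous one stays valid and unchanged.
  return { altitude: t.altitude + climb, fuel: t.fuel - fuelUsed };
}

const before: Telemetry = { altitude: 0, fuel: 100 };
const after = applyBurn(before, 10, 500);
// `before` is untouched, so there's no hidden state change to track.
```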

It's better to categorize the sources of complexity not by this subjective notion of essential, but by a much more objective notion. Before I get into that, there's a very popular paper that is a response to this essay. It's called "Out of the Tar Pit." If you're interested in functional programming, you should read this paper.

It takes this essential and accidental complexity framework and talks about how to address it, and how functional programming does address it. They modify the essential and the accidental a little bit, in a good way. They make essential complexity the complexity that is irreducible because it is about your domain.

If you're making accounting software, accounting by its nature has some complexity in it that you can't get rid of, or you won't be doing accounting. If you simplify that away, you're not doing accounting, you're doing something else.

Same with launching a rocket. If you have software that helps you manage rocketry, it's a very complex thing. If you eliminate something from rocketry that makes your software simpler, are you really doing rocketry?

Then there's the accidental complexity, which is all the implementation complexity we add to it. This is all the bugs. It's also where we use constructs that don't exactly fit the domain, so we end up with corner cases.

We use threads, and that introduces complexity. It works on the Web, so we have the complexity of AJAX requests. All of these things are accidental. They're not part of the domain that we're trying to implement.

This is a much better way of categorizing the two things. Even mutable state, like using a global mutable variable, is accidental complexity, because you don't have to do that. Rocketry isn't about variables. It's a much clearer line, a much more objective line, between the domain and the accidental complexity.
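
Here's a toy example of that line, a sketch of my own and not from the paper. The global variable version drags in accidental complexity about who updates what and when; the pure version keeps only the domain calculation:

```typescript
// Accidental complexity: a global mutable variable. Every caller now has to
// think about ordering, hidden coupling, and who last touched the total.
let totalFuelUsed = 0;
function recordBurnMutable(kg: number): void {
  totalFuelUsed += kg;
}

// The domain itself says nothing about variables. A pure function keeps just
// the rocketry: given the total so far and a burn, return the new total.
function recordBurn(totalSoFar: number, kg: number): number {
  return totalSoFar + kg;
}
```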

The paper is about how we deal with this accidental complexity now that we've identified it. They use functional programming and related techniques to address it.

I was working with this framework in my book. I like the framework; I think it adds a lot to the discussion. But whenever I would show someone, "Oh, here's the accidental and the essential," people would be like, "Did you make these words up? They just sound wrong. They're just not right." Of course, then I have to explain, "No, there's a long line of people using these words."

I know it requires some explanation, but it goes back to Aristotle. [laughs] It's not like I made them up. I'm sorry that they're bad, but that's how it got translated from Greek, whatever.

The more I use them, though, the more I realize it's actually better not to have a binary split like essential and accidental. It's better to talk about multiple slices of complexity that map more closely onto the actual software development flow.

You could talk about domain complexity. This is the rocketry stuff, rocket science as a thing. When you implement it, it's going to have a certain complexity that's irreducible, or you're not doing rocket science, or you're deliberately leaving something out.

There's stuff like your architectural decisions. What platform does it run on? Are you using a certain type of database? What language are you using? All those things are going to add complexity, because they've got their own quirks, their own things that you have to manage.

If you're on the Web, you have to deal with the browser, you have to deal with JavaScript, you have to deal with AJAX requests. All these things, by themselves, have already added complexity, but they might be necessary because that's your business case.

"Oh, this is Rocket Software but delivered on the Web." That's how we deliver to our customers, and that's an advantage that we have. It's necessary, but it's not part of the domain. Then you have stuff like how you implemented it.

This is built on top of your architecture: the software you write, the choices you've made in turning the domain into code and in dealing with the architectural complexity. In all of that, you could have bugs, you could have misfits, you could have a change of requirements where there's the legacy way it works and now there's the new way it works. That's complex.

By splitting it into three, you actually get a clearer view of the sources of complexity. There's the complexity of the domain, there's the complexity of your architectural choices, and then there's the complexity of your implementation choices.

The implementation choices have to deal with the other two sources of complexity, and they're another opportunity to add complexity in the software that you write.
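
To make those three buckets concrete, here's a small sketch. The rocketry function, the request shape, and the handler are all hypothetical, just for illustration, not any particular framework's API:

```typescript
// Domain complexity: the physics itself. The Tsiolkovsky rocket equation is
// irreducible; simplify it away and you're no longer doing rocketry.
function deltaV(exhaustVelocity: number, wetMass: number, dryMass: number): number {
  return exhaustVelocity * Math.log(wetMass / dryMass);
}

// Architectural complexity: we chose to deliver this over the Web, so we
// inherit HTTP, JSON, and request handling, none of which is rocketry.
type HttpRequest = { body: string };
type HttpResponse = { status: number; body: string };

function handleDeltaVRequest(req: HttpRequest): HttpResponse {
  // Implementation complexity: our own choices and their corner cases,
  // like parsing, validation, error handling, and any bugs we introduce here.
  try {
    const { ve, wet, dry } = JSON.parse(req.body);
    if (dry <= 0 || wet < dry) {
      return { status: 400, body: "invalid masses" };
    }
    return { status: 200, body: JSON.stringify({ deltaV: deltaV(ve, wet, dry) }) };
  } catch {
    return { status: 400, body: "malformed request" };
  }
}
```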

Like I said, the reason I think splitting them up like that is better is that it's much more objective about where the complexity comes from and whether you can actually do something to remove it. I find it much more useful. You're not having an argument about whether this thing can really be gotten rid of.

It's like, "No, we made that choice. We're using JavaScript. We have to stick with that." There's no more discussion. Yes, it's complex, but we just have to live with that. It's a choice that we're making. We're using object oriented programming, or we're using functional programming.

That comes with a certain amount of complexity, and we're choosing to live with that. Then the question becomes: what are the practices we can use for minimizing that complexity, managing what remains, and then managing the other kinds of complexity?

This is a better way of looking at it. It also allows for a little bit of mobility. If you look at microservices, one thing is, obviously, they're adding architectural complexity because, now, everything is a distributed system. You're also allowing for different language choices per service or different database choices.

You're carving the complexity up into smaller pieces. You don't have the whole, "We chose Mongo because we needed it for X. We're also using it for Y even though it's not a good fit, because we don't want two databases." Now you just say, "Well, I'm gonna use a database per service. I can choose the database that makes sense for that service."

That lets you play with where the complexity lives. Likewise, you can choose different languages with microservices, different paradigms per service. It lets you make more nimble choices about where your complexity goes.
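
Here's a small sketch of what that looks like in code. The interface and the two store classes are made-up, in-memory stand-ins, not real client libraries: each service programs against a narrow interface and picks the backend that fits it.

```typescript
// Each service depends on a small storage interface...
interface OrderStore {
  save(orderId: string, order: unknown): Promise<void>;
  load(orderId: string): Promise<unknown | null>;
}

// ...and chooses the backend that suits its own access patterns. These two
// classes stand in for a document store and a relational store.
class DocumentOrderStore implements OrderStore {
  private docs = new Map<string, unknown>();
  async save(orderId: string, order: unknown): Promise<void> {
    this.docs.set(orderId, order);
  }
  async load(orderId: string): Promise<unknown | null> {
    return this.docs.get(orderId) ?? null;
  }
}

class RelationalOrderStore implements OrderStore {
  private rows = new Map<string, unknown>();
  async save(orderId: string, order: unknown): Promise<void> {
    this.rows.set(orderId, order);
  }
  async load(orderId: string): Promise<unknown | null> {
    return this.rows.get(orderId) ?? null;
  }
}

// The ordering service might use one, the reporting service the other; each
// database choice, and its complexity, stays inside its own service.
const orderingStore: OrderStore = new DocumentOrderStore();
const reportingStore: OrderStore = new RelationalOrderStore();
```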

To recap. This is a great book, The Mythical Man-Month by Fred Brooks. I wish I had read it earlier. Mostly, I regret it because it's the kind of book that everyone talks about all the time, and I couldn't really ever participate in the discussions.

It has a lot about the history of programming and what it used to be like when you had these giant systems and the big rooms with air conditioning and stuff. Pretty cool.

Yeah. Essential complexity, accidental complexity. Not the best way to model it. I think you should split it into three.

You can find this episode and all the past episodes on lispcast.com/podcast. You'll find links to subscribe and links to social media. You'll also find video, audio, and text for all the episodes on there. Take care and rock on.