Summary: “Prefer data over functions” is a common adage in Clojure circles. It is poorly debated because it is a terse statement in generalities. A valuable perspective is that data is transparent at runtime, while functions are not. This perspective gives a firm ground for discussion and design.
There’s a design rule of thumb in Clojure that says we should prefer functions to macros and data to functions. People talk about why, people react by saying that functions are data, etc. It’s all true, but it’s all missing the point. It doesn’t get at the fundamental, structural difference. And I think the discussion breaks down because people speak in generalities and not much is made measurable and concrete. But the progression from macros to functions to data increases, in my opinion, along one important axis, and that’s availability at runtime. Discussions should hinge on whether availability at runtime is desirable, which of course needs to be determined on a case-by-case basis.
Macros in Clojure are simply not available at runtime. By definition, they are expanded at compile time. It wouldn’t even make sense to pass macros around as runtime values. Try to pass a macro as an argument to a function. You can’t.
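To make that concrete, here’s a quick REPL sketch (my-when is a made-up macro standing in for any macro):

```clojure
;; A macro works fine in call position, at compile time:
(defmacro my-when [test & body]
  `(if ~test (do ~@body) nil))

(my-when true :yes) ;=> :yes

;; But macros are not values. Try to pass one to a function
;; and the compiler rejects it outright:
;; (map and [true false] [true true])
;; => CompilerException: Can't take value of a macro: #'clojure.core/and
```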
Further, once a macro has been expanded, it is no longer clear that its expanded code even came from that macro. Macros are opaque at runtime.
Functions are first-class values. They can be passed around to functions, stored in maps, etc. They are totally available at runtime to be called.
But, calling them is about all you can do with them (besides building new functions with them, like with composition). You can’t even easily inspect the code that’s inside them, nor get at the environment they have closed over. And that’s kind of the point. The function is a useful unit of computation. It’s not a unit of semantics.
A function, too, is opaque in its way. A function, at runtime, is a black box. What does it do? You can’t tell. You can’t even tell how the function got there. Was it a fn defined in code? Or was it the result of function composition using comp? That information is not available at runtime.
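A small sketch of that opacity, using two functions that behave identically but came from different places:

```clojure
;; One written directly, one built by composition:
(def f (fn [x] (+ x 2)))
(def g (comp inc inc))

;; They compute the same thing:
(f 3) ;=> 5
(g 3) ;=> 5

;; But at runtime both print as opaque object handles,
;; something like #object[user$f ...]. Nothing tells you
;; that g was composed from two incs. You can call them,
;; and that's about it.
```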
In Clojure, when people say data, they are really talking about the language’s immutable data structures. Just to be concrete, let’s narrow the definition of Clojure data down to edn.
Edn data is available at runtime. It’s first-class in the way that functions are and macros are not. But it is also transparent. The structure of the data is completely available at runtime, unlike the structure of the function. This is why data is preferable.
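Here’s a sketch of that transparency, using a made-up validation rule (the keys :field, :required?, and :max-length are invented for illustration):

```clojure
;; A rule as data: every part of it can be inspected at runtime.
(def rule {:field :email
           :required? true
           :max-length 254})

(keys rule)          ;=> (:field :required? :max-length)
(:max-length rule)   ;=> 254

;; The equivalent rule as a function: correct, callable, but a black box.
(def rule-fn
  (fn [m] (and (:email m)
               (<= (count (:email m)) 254))))
;; Nothing about rule-fn's internals can be recovered at runtime.
```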
Because data is available at runtime, you can do many things with it. Does the data describe a computation? Well, write an interpreter. Interpreters are much easier to write than compilers (macros) because the interpreter runs at runtime in the dynamic environment of the program. Compilers (and macros) separate out the two phases of compile-time and runtime. You have to keep track of the difference in your code. And Lisp is totally well-suited for writing interpreters. The first Lisp was defined in itself, for crying out loud.
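Writing such an interpreter really is a small job. Here’s a minimal one for an invented arithmetic DSL written in edn vectors:

```clojure
;; Expressions look like [:add 1 [:mul 2 3]] -- plain edn, no macros.
(defn evaluate [expr]
  (if (vector? expr)
    (let [[op & args] expr
          vals (map evaluate args)]
      (case op
        :add (apply + vals)
        :mul (apply * vals)))
    expr)) ; numbers evaluate to themselves

(evaluate [:add 1 [:mul 2 3]]) ;=> 7
```

It runs entirely at runtime, in the dynamic environment of the program, with no compile-time/runtime phase distinction to track.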
And since it’s just edn, it can also be manipulated using all of the tools available in the language. Maps can be assoced to. Sequences can be iterated. Transferring it over the wire is easy. Storing it to disk in a way that can be read back in is easy. Try pretty-printing a function. But pretty-printing a data structure? Easy.
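For instance, a round trip through a string (the same string could go over a wire or to disk), using a made-up config map:

```clojure
(require '[clojure.edn :as edn])

(def config {:retries 3 :timeout-ms 500})

;; Manipulate it with ordinary functions:
(assoc config :retries 5) ;=> {:retries 5, :timeout-ms 500}

;; Print it, read it back, and nothing is lost:
(= config (edn/read-string (pr-str config))) ;=> true
```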
And, finally, the other thing about data is that it can be interpreted in different ways by different interpreters. I’m not trying to say that you might implement two algorithms for doing the same thing. What I’m getting at is that you can compute from it, or analyze it, or algebraically transform it, etc. It has become a semantic system in its own right.
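A sketch of that idea: one piece of data, two interpreters. The filter DSL below (with :and, :>, and := operators) is invented for illustration:

```clojure
(require '[clojure.string :as str])

(def filter-desc [:and [:> :age 18] [:= :country "US"]])

;; Interpreter 1: run the filter against a record.
(defn matches? [[op & args] record]
  (case op
    :and (every? #(matches? % record) args)
    :>   (> (get record (first args)) (second args))
    :=   (= (get record (first args)) (second args))))

;; Interpreter 2: render the same data for humans.
(defn describe [[op & args]]
  (case op
    :and (str/join " AND " (map describe args))
    :>   (str (name (first args)) " > " (second args))
    :=   (str (name (first args)) " = " (pr-str (second args)))))

(matches? filter-desc {:age 30 :country "US"}) ;=> true
(describe filter-desc) ;=> "age > 18 AND country = \"US\""
```

Neither interpreter is privileged; the data is the shared semantic artifact.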
Why not work with code, which is data in Clojure?
You can argue that since code is data, why should I use data structures like maps and vectors instead of actual code, represented as lists? This is actually a very valid point, and I think this is one of the better arguments. Lisp was defined in terms of an interpreter for data structures, which together provide a powerful programming model. It would be foolish to discard this power and define our own for no reason.
The best reason I can think of is that our data is usually a very restricted form of code, often not even Turing complete. Turing-complete code is provably impossible to analyze in general (that’s the halting problem). But our restricted data model is powerful in exactly the way we need it (for our specific problem) while not generally powerful (as in Turing complete). So we can design it to be analyzable.
If we can restrict the power to be less-than-Turing-complete, we can analyze it at runtime. If analyzing it at runtime is desirable, then it is desirable to represent it as data.
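Here’s a sketch of what runtime analysis can look like for a restricted DSL. Using a tiny invented filter language (just :and, :>, and := forms), we can compute every field a filter touches without ever running it, something that is impossible for an opaque predicate function:

```clojure
;; Collect every field referenced by a filter description.
(defn referenced-fields [[op & args]]
  (if (= op :and)
    (set (mapcat referenced-fields args))
    #{(first args)})) ; leaf forms like [:> :age 18] name one field

(referenced-fields [:and [:> :age 18] [:= :country "US"]])
;=> #{:age :country}
```

This analysis terminates for every possible filter precisely because the DSL has no loops or recursion of its own.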
The semantics of data is vague, while code is well-defined. Why should we use data instead of code?
Ok, this is also a good point. Clojure code, in theory, is well defined. At the very least, it is defined as whatever the compiler does. Most of the time, it is well-documented and well-understood. But your ad hoc data structure, which represents some computation, has all sorts of assumptions baked in, like what keys are valid when, that are undocumented, produce poor error messages, and may hide corner cases.
Wow, such a good point. When you’re designing a DSL, this is always a challenge. But, just like restricting your power to below Turing complete can make your analysis way easier, keeping your semantic model simple and well-defined is the key to making it worthwhile. If the semantic model is simple, it could be beneficial to have it available at runtime. For instance, you could create a custom editor for it. If it’s just functions, that’s out.
Hasn’t this discussion happened many times before? I mean, Ant started off ok, but then it became its own programming language and it sucked.
This is very true. People have talked before about the difference between internal and external DSLs, and how external DSLs eventually lose because they need all sorts of conditionals and loops, which the internal DSLs had by default. In my experience, this is true.
My personal guideline is that I only prefer data after I have bound the problem to a well-understood domain. That means that I have to write the program first in code, using functions, before I realize that, yes, this could be described very succinctly and declaratively as data. This took a long time and lots of mistakes to understand. I essentially will only refactor to a data-driven approach after I already have it written and working.
I’ve been coding in Lisp for a long time, so I’ve internalized this idea of data-driven programming. It’s the main idea of Paradigms of Artificial Intelligence Programming, one of the best Lisp books out there, and a big influence on me. What the guideline of “Prefer data over functions” means to me is that when it’s beneficial, one should choose data, even if functions are easy to write. Data is more flexible and more available at runtime. It’s one of those all-things-being-equal situations. But all things are rarely equal. Data is often more verbose and error-prone than straight code.
But there is a sweet spot where data is vastly superior. In those cases, it makes your code more readable. It is more amenable to analysis. It can be passed over the wire. When I find one of those cases, you bet I’ll prefer data over functions.
I think that doing data-driven programming is one of the things Clojure excels at, even more than other Lisps, because of its literal data structure syntax. Data-driven programming is one of the deep experiences that I wish everyone could have. And it’s the primary goal of LispCast Introduction to Clojure, my 1.5 hour video course filled with visuals, animations, exercises, and screencasts. Check out the preview.