Product and sum types are collectively known as ‘algebraic data types’. These are two ways of putting types together to make bigger types. Product types multiply their states, while sum types add them. With these two ‘operations’, we can precisely target a desired number of states. In this episode, we explore how we can eliminate corner cases with algebraic data types.
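To make the state arithmetic concrete, here is a small sketch in TypeScript (names like `RemoteData` are my own illustration, not from the episode). A product type multiplies the states of its fields, which admits corner cases like "loading *and* errored"; a sum type adds the states of its variants, so only the meaningful combinations exist:

```typescript
// Product type: states multiply. boolean x (number | undefined) x (string | undefined)
// allows corner cases such as { loading: true, error: "..." }.
type RemoteDataProduct = { loading: boolean; value?: number; error?: string };

// Sum type: states add. Exactly one variant at a time -- the corner cases
// above simply cannot be constructed.
type RemoteData =
  | { kind: "loading" }
  | { kind: "success"; value: number }
  | { kind: "failure"; message: string };

function describe(d: RemoteData): string {
  switch (d.kind) {
    case "loading":
      return "Loading...";
    case "success":
      return `Got ${d.value}`;
    case "failure":
      return `Error: ${d.message}`;
  }
}
```

Because the compiler checks the `switch` against every variant, handling a `RemoteData` forces you to address each state and no others, which is the "precisely target a desired number of states" idea in miniature.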
Thoughts on Functional Programming Podcast
An off-the-cuff stream of Functional Programming ideas, skills, patterns, and news from Functional Programming expert Eric Normand.
I prefer Clojure to Haskell. It’s probably mostly due to accidents of history: getting into Lisp at an earlier age, spending more time with Lisp, and having only one Haskell job. But I do have some philosophical differences with the Haskell philosophy as it relates to Haskell as an engineering tool. In this episode, I go over four of them and try to illustrate each with anecdotes.
Denotational Design is an abstraction design process created by Conal Elliott. I like it because it really asks you to step back and design the meaning of the abstractions before you implement them. In this episode, I talk about why I like it, what it is (step-by-step), and why it’s not about static types.
Business rules are different from your domain model. What goes where? I hope to tease apart this important yet subtle distinction in this episode.
Nil punning does give power to Lispers. But where does the power come from? Is it that nil really is a great value? Or is it more about the design choices made? In this episode, we explore this question.
Nil punning is a feature of Lisp. It began when nil was both the empty list and the false value. Two meanings for the same thing led to the idea of punning. Lispers liked it. And now, Clojure carries the tradition forward. In Clojure, nil has different meanings in different contexts. Is it good? Is it bad? We explore it in this episode.
What happens when your language is so powerful that small, independent teams can solve their problems without libraries? Does everyone flock to it? Or do you just get a lack of libraries?
Structure and Interpretation of Computer Programs talked about abstraction barriers as a way to hide the intricacies of data structures made out of cons cells. Is this concept still useful in a world of literal hashmaps with string keys? I say, yes, but in a much more limited way than before. In this episode, we go into what abstraction barriers are, why (and why not) to use them, and the limits of their usefulness.
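The core of an abstraction barrier is a small set of constructor and accessor functions that callers use instead of reaching into the data structure directly. A minimal sketch, with names of my own choosing (not from SICP or the episode):

```typescript
// The representation lives behind the barrier. Callers only use
// makeAccount/deposit/balance, so we could later swap this object
// for a tuple, a map, or a class without breaking any caller.
type Account = { readonly balance: number };

function makeAccount(initial: number): Account {
  return { balance: initial };
}

function deposit(acct: Account, amount: number): Account {
  // Returns a new account rather than mutating -- callers never
  // need to know the update strategy either.
  return { balance: acct.balance + amount };
}

function balance(acct: Account): number {
  return acct.balance;
}
```

With literal hashmaps and string keys, code often reads the structure directly instead, which is exactly why the barrier's usefulness is more limited than it was in the cons-cell era.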
I’ve gotten several questions about how to do X or Y in the Onion Architecture. It seems like giving the architecture a name has miscommunicated how simple it is. It’s just function calls that at some point are all calculations. In this episode, I try to deconstruct what makes the onion architecture work. Spoiler: it’s just function calls.
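As a rough sketch of that "just function calls" shape (hypothetical names, my own example): the outer layer does the I/O, and once it calls inward, everything is pure calculation.

```typescript
// Core: pure calculations. No I/O anywhere below this point,
// so these functions are trivial to test.
function applyDiscount(total: number, rate: number): number {
  return total * (1 - rate);
}

function orderTotal(prices: number[], discountRate: number): number {
  const subtotal = prices.reduce((a, b) => a + b, 0);
  return applyDiscount(subtotal, discountRate);
}

// Edge: the only impure layer. It gathers inputs and performs effects;
// the call into orderTotal is where "at some point it's all calculations".
function checkout(prices: number[], discountRate: number): number {
  const total = orderTotal(prices, discountRate); // pure from here down
  console.log(`Charging ${total}`); // side effect stays at the edge
  return total;
}
```

There is no framework or special wiring in the middle: the "layers" are just the point in the call stack where effects stop and calculations begin.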