Why I Love Functional Programming

A personal reflection on moving from OOP-first thinking to functional programming: pure functions, recursion, pattern matching, lazy evaluation, and why these ideas hold up in real systems.

  • Functional Programming
  • OOP
  • Elixir
  • Haskell
  • Concurrency
  • Software Design


Introduction and Motivation

For most of my early CS career, I thought OOP was just the right way to write software. Classes, inheritance hierarchies, getters and setters, design patterns with names like “Abstract Factory”. I kind of believed in it.

Then I took a Comparative Programming Languages course and got introduced to functional programming. And honestly, it scrambled my brain a little. In a good way.

It wasn’t that OOP suddenly became useless. It was more that functional programming showed me there were other ways to think about code: simpler, cleaner, and, in a weird way, more honest. Instead of building these elaborate object models and worrying about who owns what state, I started thinking more about data, transformation, and composition. That shift stuck with me.

This isn’t going to be a tutorial, and I’m definitely not going to pretend I can explain monads without making everyone miserable. I just want to talk about why functional programming clicked for me, and why it changed the way I think about solving problems.

The OOP Hangover

Before I get into what I love about functional programming, I should probably be honest about what started turning me off from pure OOP in the first place.

You’ve seen this kind of code before. A UserManager depends on a SessionService, which talks to a DatabaseFactory, which reaches into some global AppState, and now you’re three hours deep into a bug hunt trying to figure out who changed what. The worst part is that nothing looks obviously broken. Something just got mutated somewhere, by some object, at some point, and now your whole program feels haunted.

That was the part that started wearing me down.

It wasn’t that I woke up one day and decided OOP was terrible. I don’t think that. It was more that I kept running into systems where state was smeared all over the place, and every piece of code seemed to know a little too much about every other piece. Nothing was cleanly isolated. Nothing was easy to reason about. And testing? Forget it. Sometimes it felt like writing one unit test meant building a tiny fake universe first. Mock this interface, stub that service, set up four dependencies just to call one method. After a while, I started to feel like I was spending more time managing the shape of the code than solving the actual problem.

So What’s the Actual Difference?

When you first hear “functional programming,” it sounds almost aggressively abstract. No loops. No mutation. Functions are values. Okay, man. Cool words. But none of that really landed for me at first. What made it click was a much simpler idea we talked about in class:

The distinction is in what the programmer is required to think about, and what the language hides behind the scenes.

What finally drove it home was realizing that the machine hasn’t changed. Under the hood, it’s still all imperative. Haskell still compiles down to machine code. The CPU is still doing CPU things. None of that magic goes away.

What changes is the level at which you get to think. As the programmer, you can stop obsessing over how the iteration happens and focus on what transformation you’re actually trying to express. That sounds like a small distinction until you feel it in practice.

That’s really the declarative vs. imperative split. In C++, you write for (int i = 0; i < n; i++) and you’re spelling out the whole procedure: start here, stop there, increment like this. In Haskell or Elixir, you write something like map f list and move on. You describe the transformation, and the language handles the mechanics.

I didn’t realize how much mental clutter the loop version carried until I saw the alternative. Once you’ve felt that shift, it’s hard to unsee.
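To make that contrast concrete, here’s a minimal sketch of the same doubling transformation in both styles (the imperative half is shown as a comment):

```elixir
# Imperative, C++-style: spell out the whole procedure yourself.
#   for (int i = 0; i < n; i++) { out[i] = xs[i] * 2; }
#
# Declarative Elixir: describe the transformation, let the runtime iterate.
doubled = Enum.map([1, 2, 3, 4], fn x -> x * 2 end)
IO.inspect(doubled)  # => [2, 4, 6, 8]
```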

Who Needs Loops Anyway?

Let me talk about recursion, because this is where a lot of people mentally leave the room, and honestly, I think that’s a mistake.

Yes, in functional programming you replace loops with recursion. And yes, the immediate reaction is usually, “wait, doesn’t that just blow the call stack?” I had the same reaction. But this is where tail recursion really changed how I thought about it.

If the recursive call is the last thing the function does — the tail position — the compiler or runtime can reuse the current stack frame instead of creating a new one. So the stack doesn’t keep growing. It runs like a loop.

The Elixir example that finally made this feel concrete for me was a factorial function:

defmodule UserMath do
  def fac(num), do: fac(num, 1)
  defp fac(0, prod), do: prod
  defp fac(num, prod), do: fac(num - 1, num * prod)
end

What I like about this accumulator version is that it carries the running product as an argument. That’s the trick. Because of that, the recursive call becomes the very last operation, which means the runtime can optimize it into iteration. So you get code that still feels clean and declarative, but without giving up the efficiency you’d expect from a while loop. That was a huge moment for me.
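Tracing it by hand (my own sketch, not from the course) shows why the stack stays flat; the module is repeated here so the snippet stands alone:

```elixir
defmodule UserMath do
  def fac(num), do: fac(num, 1)
  defp fac(0, prod), do: prod
  defp fac(num, prod), do: fac(num - 1, num * prod)
end

# Each call is the last thing the function does, so the frame is reused:
#   fac(4)     -> fac(4, 1)
#   fac(4, 1)  -> fac(3, 4)
#   fac(3, 4)  -> fac(2, 12)
#   fac(2, 12) -> fac(1, 24)
#   fac(1, 24) -> fac(0, 24)
#   fac(0, 24) -> 24
IO.puts(UserMath.fac(4))  # => 24
```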

Once that clicked, recursion stopped feeling like this weird ceremonial thing FP people do instead of writing loops. It started to feel natural. More than that, it felt elegant. You define a base case, then the recursive case, and the whole problem starts to look smaller and more structured.

And pattern matching makes it even nicer. Each clause handles one case, cleanly and explicitly. There’s something satisfying about that. It feels less like wrestling control flow and more like stating the shape of the problem out loud.

Pure Functions Changed How I Think

A pure function is one of those ideas that sounds academic until you actually use it and realize how much pain it saves you. Same input, same output, every time. No sneaky reads from global state, no side effects, no weird little “cheeky arbitrary effects” — which, hilariously, is almost exactly how my course slides described them.

What I love about this has nothing to do with purity in some philosophical sense. I’m not interested in code as moral discipline. I like it because it makes life easier.

Testing gets dramatically simpler. If a function just takes an input and returns an output, testing it is almost boring in the best possible way. You pass in known values, check the result, and move on. No mocking half your application. No setup script. No teardown. No wondering whether your database fixture is lying to you.
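As a tiny illustration (the module and function names here are mine, not from any real codebase), checking a pure function needs nothing but inputs and expected outputs:

```elixir
defmodule Pricing do
  # Pure: the result depends only on the arguments, nothing else.
  def apply_discount(price, rate), do: price - price * rate
end

# "Testing" is just calling it and comparing -- no mocks, no setup, no teardown.
true = Pricing.apply_discount(20.0, 0.25) == 15.0
true = Pricing.apply_discount(100.0, 0.1) == 90.0
```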

Concurrency gets a lot less scary too. Most race conditions come from shared mutable state. Two threads touch the same thing at the same time, and now you’re in hell. If your data is immutable and your functions don’t have side effects, a whole class of bugs just disappears. You stop spending so much mental energy on locks, synchronization, and who might be mutating what behind your back. There’s a reason Elixir, running on the Erlang VM, can lean so hard into concurrency. WhatsApp famously handled 2 million connections on a single server with Erlang.

And then there’s composition, which might be the most satisfying part of all. If f takes an A and returns a B, and g takes a B and returns a C, then g(f(x)) just works. No shared state to break the chain.
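In Elixir, that chaining shows up naturally as the pipe operator; a minimal sketch:

```elixir
# If f : A -> B and g : B -> C, then g(f(x)) just works.
# The pipe operator makes the chain read left to right:
words =
  "Hello Functional World"
  |> String.downcase()
  |> String.split(" ")

IO.inspect(words)  # => ["hello", "functional", "world"]
```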

Pattern Matching is Actually Addictive

I honestly don’t know how to talk about pattern matching without sounding a little converted, but I’ll try.

In Elixir, when you write:

def fac(0), do: 1
def fac(n), do: n * fac(n - 1)

you’re really defining two different clauses of the same function. Elixir just tries them in order and picks the one that matches the input. fac(0) hits the first one and returns 1. Everything else hits the second. No if n == 0 check, no branching logic cluttering your function body.

You can pattern match on the shape of a list, on custom data types, even inside nested structures. And once you get used to writing logic that way, a lot of ordinary if/else chains and switch statements start to feel strangely clumsy. Not wrong, exactly. Just heavier than they need to be.
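For example (a sketch of my own, not from the course material), matching directly on the shape of a list:

```elixir
defmodule Shapes do
  # Each clause states the shape of data it handles; Elixir tries them in order.
  def describe([]), do: "empty"
  def describe([_only]), do: "one element"
  def describe([head | _tail]), do: "starts with #{head}"
end

IO.puts(Shapes.describe([]))         # => empty
IO.puts(Shapes.describe([42]))       # => one element
IO.puts(Shapes.describe([1, 2, 3]))  # => starts with 1
```

Note that clause order matters: the single-element clause has to come before the head-and-tail clause, or it would never be reached.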

Because at some point you realize you’re not really trying to describe control flow. You’re trying to describe the shape of the data and what should happen in each case. Pattern matching lets you do that directly, which still feels kind of magical to me.

Lazy Evaluation: Only Compute What You Need

One of Haskell’s big ideas is lazy evaluation: expressions don’t get evaluated until their values are actually needed. That sounded kind of abstract to me at first too, until I saw what it lets you do.

Because of laziness, you can work with data structures that are conceptually infinite. You can define an endless list of numbers and only ever evaluate the slice you actually use. That still feels a little illegal the first time you see it. And more importantly, it changes how you think about performance. In a strict language, some of these operations would feel expensive or awkward right away. In Haskell, the language just doesn’t do the work unless something actually asks for it.

Elixir has its own version of this idea with Stream, and I really like how practical it feels there. You build lazy, composable pipelines with Stream.map and Stream.filter, and data only gets pulled through the transformations when an eager Enum call at the end, like Enum.take or Enum.reduce, actually asks for it. You end up with pipelines that read cleanly without feeling wasteful. That combination of readability and efficiency is a big part of why this stuff grabbed me in the first place.
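Here’s roughly what that looks like: Stream.iterate/2 gives you a conceptually infinite sequence, and nothing is computed until Enum.take/2 asks for it.

```elixir
# A conceptually infinite stream of natural numbers: 1, 2, 3, ...
naturals = Stream.iterate(1, &(&1 + 1))

# Lazy pipeline: no element is squared or filtered until something consumes it.
result =
  naturals
  |> Stream.map(&(&1 * &1))
  |> Stream.filter(&(rem(&1, 2) == 0))
  |> Enum.take(3)

IO.inspect(result)  # => [4, 16, 36]
```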

Real-World Use Cases

Functional programming shows up in places where correctness actually matters.

Blockchain

Cardano — which is probably one of the most academically serious blockchains out there — is written in Haskell. And the reason makes perfect sense if you’ve bought into any of the stuff I’ve been talking about: when a mistake can cost absurd amounts of money, you start caring a lot more about guarantees.

Haskell gets used where a single mistake might cost billions of dollars, and that’s exactly the kind of environment where its strengths stop sounding theoretical. The big one is formal verification. Because the language is pure and mathematically well-behaved, you can do more than just test whether a function seems correct. You can actually prove properties about how it behaves. For a financial system that’s responsible for billions in value, that’s not some academic flex. That’s the whole point.

And Cardano doesn’t just use Haskell around the edges, either. Plutus, its smart contract language, is embedded in Haskell. Marlowe, the DSL they use for financial contracts, is built on top of that. So this is a real example of FP ideas making it all the way into production systems where failure is incredibly expensive.

I really don’t think you can build a serious financial system on top of a language where state can quietly shift under your feet. At some point that stops being a design preference and starts being a liability. The blockchain world seems to be slowly figuring that out too.

Distributed Systems & High Concurrency

Elixir runs on the Erlang VM, which came out of telecom, a world where downtime is basically unforgivable. It was built for systems chasing nine nines of uptime, which is such an absurd standard that you kind of have to respect it.

What makes that possible is the model underneath it. You have lightweight processes, the actor model, and immutable message passing. Instead of a bunch of threads poking at shared state and hoping nothing catches fire, processes communicate by sending messages. That sounds simple, but it changes everything. You can spin up huge numbers of concurrent processes without inheriting all the usual nightmares.
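A minimal sketch of that model, using nothing beyond what ships with Elixir (the Counter module is my own toy example): each process owns its state and the only way in is a message.

```elixir
defmodule Counter do
  # The process's state lives only in this recursive loop --
  # nothing outside can mutate it; you can only send messages.
  def loop(count) do
    receive do
      {:increment} ->
        loop(count + 1)

      {:get, caller} ->
        send(caller, {:count, count})
        loop(count)
    end
  end
end

pid = spawn(Counter, :loop, [0])
send(pid, {:increment})
send(pid, {:increment})
send(pid, {:get, self()})

receive do
  {:count, n} -> IO.puts("count is #{n}")  # => count is 2
end
```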

That’s a big part of why Elixir ended up being attractive to companies like Discord when they needed to handle millions of concurrent users. The model just sidesteps a lot of the shared-state pain that makes highly concurrent systems in languages like Java or Python so exhausting to reason about. When you don’t have to constantly ask “who else might be touching this right now?”, the whole problem gets a lot more manageable.

Finance and Security

Haskell also shows up in more places than people think. It’s not loud about it, but it keeps turning up in corners of industry where correctness matters more than having the biggest ecosystem or the trendiest developer mindshare.

AT&T has used it for network security automation. Intel has used it in multicore parallelism research. NVIDIA has internal tooling built with it. And once you start looking through the Haskell in industry list, you realize it’s a lot longer than most people would guess.

That pattern is the interesting part to me. Haskell tends to show up where bugs are expensive, where “probably correct” isn’t good enough, and where being able to reason about code formally is worth more than having twenty different web frameworks. It’s not the language everyone reaches for by default, but in the places that care deeply about correctness, it keeps earning its way in.

Data Pipelines

Even if you never touch Haskell or Elixir in production, the functional style still leaks into a lot of the tools people use every day. Once I noticed that, it stopped feeling like some niche programming subculture and started feeling more like a general way of thinking.

map, filter, and reduce are everywhere. They’re baked into Spark, Pandas workflows, and pretty much every stream-processing system worth mentioning. So even if you never write a “pure FP” codebase, you’re still working with FP-flavored abstractions all the time.

And I think understanding why those abstractions exist actually matters. They’re useful because they let you describe transformations directly, without getting bogged down in state management and control-flow noise. Once that clicks, these tools start making a lot more sense. You stop treating them as random API tricks and start seeing the underlying model. And honestly, that alone makes you better at using them.
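The same three building blocks compose in Elixir too; the order data here is made up for illustration, but the pipeline reads like a description of the transformation:

```elixir
orders = [
  %{item: "book", price: 12, qty: 2},
  %{item: "pen", price: 2, qty: 10},
  %{item: "lamp", price: 40, qty: 1}
]

total =
  orders
  |> Enum.map(fn o -> o.price * o.qty end)     # transform: line totals
  |> Enum.filter(fn t -> t >= 20 end)          # keep only the big line items
  |> Enum.reduce(0, fn t, acc -> acc + t end)  # fold them into one number

IO.puts(total)  # => 84
```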

Closing Thought

The real shift with functional programming isn’t syntax. It’s how you model the problem.

With OOP, you tend to ask: what objects exist, and how do they interact?

With FP, the question becomes: what transformations happen to the data?

That sounds small, but it changes a lot. Code written in that style is usually easier to test, easier to reason about, and easier to parallelize. And once you internalize it, you start bringing it back into other languages too — writing more pure functions in Python, preferring immutable data in Java, reaching for map and filter in JavaScript instead of manual loops.

And that’s probably the simplest way I can put it: I’m not saying everything should be rewritten in Haskell. Ecosystems matter. Hiring matters. Learning curves matter. But once you’ve written code as a chain of clean data transformations, it gets a lot harder to be impressed by tangled state and giant loops.
