Write usable code before you write reusable code

Your code actually runs on one of these, and it does not care about how abstract, elegant, or interesting your code is

I’ve been making software for nearly twenty years now. It’s amazing to reflect back on my early days and to see myself as someone just learning how to code.

The character of that experience is punctuated with moments of confusion about the structure of the software I was writing. I would often stare at the screen while my mind raced trying to think of how the various code modules I was about to write would fit together. I was, in effect, trying to solve big code organization problems before actually writing a single line of code.

Indeed, my story is not all that uncommon in the software world. At university (and from reading various popular programming articles), I was taught to model my application before writing the code. I was told to think of all the classes and how they interact, maybe make one of those Visio diagrams with a bunch of lines connecting some cool-looking boxes, and then sit down to program later on.

And now here I am, in the middle of the biggest project of my life (a new and interesting 2D puzzle platformer game built from scratch with my own game engine), and I can now confidently say that I got where I am by rejecting most of this advice.

As soon as I abandoned many of these practices, my productivity skyrocketed to new levels, and I made more money because I was more efficient with my time.

In this article, I will show you the benefits of avoiding premature design and overengineering. At the end of it, you will understand why it’s valuable to make a program that works before you start thinking about how to structure your code.

I will also provide you with some skills you can immediately apply on your next project. If you practice these skills, you will be able to build bigger and more sophisticated products, achieving your wildest ambitions with much less stress and maintenance woes.

I have profitable apps which I have maintained for ten years, and the only way I can do it is by being ruthlessly dedicated to keeping things simple in this way.

In my opinion, the advice I am about to give you is the most valuable programming advice I can think of. It was originally introduced to me in one of Jonathan Blow’s talks, and I can say without hesitation that my life would have taken a much less productive path had I never been exposed to it.

What’s wrong with designing your application first?

Well, nothing really. At least not immediately. It’s always good to have some rough conception of the direction you’re going, some kind of road map.

So if you’ve got a notebook open and you’ve scribbled some drawings and ideas on the pages, that’s not really what I’m talking about. I do that sort of thing all the time when I’m imagining a new game or app concept. The notebook is more of a way to get the ideas out of my head and onto some kind of medium so I don’t forget them.

The problems start to happen when your rough idea becomes some kind of concrete idea before you’ve ever had a chance to test it out.

One example of that is a common practice which is associated with Object Oriented Programming. People have a rough idea of how the app is supposed to work, and then they go and create a bunch of official-seeming documents listing all of the properties of each object and how they relate to each other. These are usually called UML diagrams.

You might even take it further and start creating a bunch of complicated inheritance hierarchies before you’ve ever written a single line of code.

I think this exercise is a non-productive use of time. It’s non-productive because you’re creating all of these theories about the way your application should work, but you aren’t actually testing them.

You think an Employee object needs a middle name property, but you haven’t created a single piece of user-facing functionality that uses it. How do you know you really need it?

You think Employee inherits from Person and that you definitely need this higher level of abstraction to tie the two together, but can you tell me what functionality will use that? Does it serve a purpose yet?
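To make that concrete, here’s a hypothetical sketch of the kind of speculative hierarchy I’m describing (the names come straight from the example above; nothing outside this snippet uses them):

```cpp
#include <cstdio>
#include <string>

// The speculative version, designed before any feature exists.
// Nothing below ever reads middleName, and no code path ever
// treats an Employee as a generic Person.
struct Person {
    std::string firstName;
    std::string middleName;  // "we might need it someday"
    std::string lastName;
};

struct Employee : Person {
    int employeeId;
};

// What the first real feature (say, a roster screen) actually needed:
struct RosterEntry {
    std::string displayName;  // the UI shows exactly one string
    int id;
};

int main() {
    RosterEntry e = {"Ada Lovelace", 1};
    printf("%d: %s\n", e.id, e.displayName.c_str());
    return 0;
}
```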

This is the trap of creating abstractions before you create functionality. It’s the trap of trying to make your code reusable before you make it usable.

It’s a trap because you have shifted your thinking away from problem solving and have instead decided (perhaps unconsciously) to focus on abstract theory. You’re trying to create a blueprint for a toolbox before you have a single tool.

Let’s say I want to make some pancakes for breakfast. Seems simple, right?

Ah, but you can’t just make some pancakes. You need to design a generic process which could make any pancake with a variety of different ingredients you may or may not have on hand.

Your process for making pancakes needs to handle the presence or absence of buttermilk, eggs, whole wheat flour, all-purpose flour, salt, sugar, and maple syrup. It must take a multitude of leavening agents into account as well.

This clearly isn’t practical, so what happened? We opened the door to a new level of abstraction, and as a result, our seemingly simple notion of making some pancakes exploded with combinatorial complexity.

We took what should have been a simple process and made it harder by trying to make it generic. We didn’t actually need to make any kind of pancake, so we were solving the wrong problem.

If I follow a simple recipe with some built-in assumptions, I can make breakfast without totally losing my mind.

I buy all of the ingredients. I assume I will have those ingredients on hand. I make the pancakes according to the recipe, and I don’t consider any other possibilities. As a result, I get food into my belly efficiently, and I don’t die.
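If you translate the pancake analogy into code (a purely hypothetical sketch; every name here is invented for illustration), the difference looks something like this:

```cpp
#include <cstdio>

// The "generic pancake process": every ingredient is a decision,
// so every call site inherits the combinatorial complexity.
struct PancakeOptions {
    bool useButtermilk;
    bool useEggs;
    bool useWholeWheatFlour;
    bool useAllPurposeFlour;
    float sugarGrams;
    int leaveningAgent;  // which of several leavening agents?
    // ...and so on for every ingredient you may or may not have
};

void MakeAnyPancake(PancakeOptions options) {
    // Every branch here is a recipe variant nobody asked for.
    if (options.useButtermilk) { /* ... */ }
    if (options.useEggs) { /* ... */ }
    // ...
}

// The simple recipe with built-in assumptions: one way to make
// pancakes, zero decisions at the call site.
void MakeButtermilkPancakes() {
    puts("Mix, rest, griddle, flip once, eat.");
}

int main() {
    MakeButtermilkPancakes();  // solves the actual problem: breakfast
    return 0;
}
```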

Human beings are masters of abstraction, but abstractions actually make life more complicated. They are important. Indeed they must be used. But if you can avoid using an abstraction, you should do so.

I don’t need an abstraction that lives at a higher level than “buttermilk pancakes according to Alton Brown’s recipe” because those are the only pancakes I ever plan to make so long as I am alive. I will never make pancakes any other way, which means I can reduce my cognitive load by removing that possibility from my life.

This, as it turns out, is also how you make great software. You focus on the few problems the software needs to solve very well, and you ignore everything else.

Build products one feature at a time, one line of code at a time

I have been on several projects where people try to make a “skeleton” of the entire application using some monolithic design pattern like Model View Controller or Model View View Model. After they make the skeleton, they do another pass and fill it with “models” which are basically glorified structs, and those are usually taken from some UML diagrams someone made six months ago.

There are a number of problems with this approach.

The most glaringly obvious of these is how difficult it is to tell which features have been built and which have not been built. Sure, you’ve laid the tentative groundwork for the entire application, but now that I see all of these entities in there, I can’t really tell what you’re currently working on.

It’s actually more confusing to have a mix of code which is in use and code which is not being used. I find myself wasting precious time searching and searching to differentiate between the two.

In my experience, you can boost your team’s productivity by applying a simple rule: only code that is actively in use makes it into the project’s codebase. No tentative features. No models or stubs that don’t do anything. No commented-out code.

When you establish a simple standard like this, nobody needs to question whether a function or piece of code they see is in use. If it’s in there, you can assume it’s being used.

I advocate building products one feature at a time. That way, you always have some testable unit when you’re done with that feature.

Oftentimes, building one feature means you’ll have to do a ton of groundwork to get it in, and that’s fine.

The groundwork for the feature is the feature. Don’t try to separate that out into some other kind of work unit. How can you test the work without building the feature? The feature is the test.

For example, in a 2D game, I might need to draw a colored rectangle on the screen.

There’s actually a ton of work that goes into doing something like that (at least when you’ve decided to make your own game engine). You need to set up a rendering pipeline, map the screen’s coordinates onto some kind of projection matrix, set up your vertex and fragment shaders, and set up vertex buffers for triple buffering.

All of that goes into getting a single 2D rectangle on the screen. It’s actually an astonishing amount of work.

You will naturally break that work into different sub-units as you’re working on it, and you may come up with some small tests to make sure you’re doing everything correctly along the way. So maybe you put in some breakpoints and affirm that you’re getting the right drawing coordinates before you send them off to the GPU.

That said, the ultimate thing you can test is whether you see a 2D rectangle with the color of your choosing. That’s the thing your paying customers want. It’s what you’re really being graded on.

I see teams get into places of great confusion whenever they shift their attention away from the customer and instead service their own metrics.

This is one area where I take issue with Scrum as it is implemented in many organizations. Because everyone is deathly afraid to take on a single large feature that could take a month (because you’re laying the groundwork for it), nobody ends up doing bold and courageous pieces of work which serve as the foundation for much of the work to come after them.

That’s sad because those big bold moves are how you grow as a software engineer. The more of those you do, the more you start to see applications as a cohesive whole. You lose your timidity when it becomes common to edit thousands of lines of code in a single day.

A single feature could take a month, or it could take a few hours. It really doesn’t matter. The point is to focus on doing one thing at a time and in an order that makes sense.

If it’s a game, first get a 2D rectangle on the screen. Then make it so the rectangle moves as a result of pressing some buttons on your game controller. Then add in a world around the game character for it to explore. Then add in collision detection, then add in physics.

Don’t try to make “the engine” in some abstract sense because that’s a huge monolithic task. Don’t even try to make the various “layers” of the engine, although the layers will naturally shake out if you’re designing your game to be cross-platform and need to create a separation between platform-dependent and platform-independent code.

I’ll put it this way. In the course of making a game, I first think that I need to get a rectangle on the screen. But in order to do that, I recognize that the game needs a platform layer for the platform it runs on. So I make the platform layer in order to accomplish the larger goal, which is the feature.
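To sketch what that might look like (the names are my own, loosely in the handmade-engine style; no real engine prescribes this exact interface):

```cpp
#include <cstdint>

// The platform-independent side of the boundary. The game sees only
// these types; it has no idea which OS it is running on.
struct GameInput { bool left, right, jump; };
struct GameOffscreenBuffer { uint32_t *pixels; int width, height; };

static void GameUpdateAndRender(GameOffscreenBuffer *buffer, GameInput *input) {
    // ...game code reads input and draws into buffer->pixels...
    (void)buffer; (void)input;
}

// The platform-dependent side: owns the window, the pixel memory,
// and the input devices. Here it's just a stub for one frame.
int main() {
    static uint32_t pixels[640 * 480];
    GameOffscreenBuffer buffer = {pixels, 640, 480};
    GameInput input = {};
    GameUpdateAndRender(&buffer, &input);
    return 0;
}
```

The boundary exists because the rectangle feature needed it, not because a diagram said so.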

If you follow this style of building products, the first features will be the largest because those are the ones where you’re figuring out the groundwork, and then the features will gradually get smaller and smaller as you realize you can reuse more of the code you’ve already written.

How does this affect teams?

This is usually the point in the lecture where someone mentions teams. How can you have a team working on a project in separate git branches when one person is making some huge foundational change? Wouldn’t there be a bunch of merge conflicts if someone is doing something as fundamental as writing a renderer that will serve as the foundation for the entire product?

There certainly would be problems with that, and I think that’s a place where you have to ask if it makes sense for there to be multiple people working on the project at that time.

Ultimately, teams are supposed to serve the project and not the other way around. That is to say, you should mold the team around the work that needs to be done on the project. You shouldn’t mold the project around the team you have selected.

If deciding to do a feature means one or two people won’t be busy, don’t try to keep them engaged on this project. Have them do something else while this big piece of work is being done. Then, once the project has the kind of structure where it makes sense to have multiple people working on multiple paths, you can start bringing people back on.

At any one point in time, the people on your team should each be working on a totally separate and fully testable feature. You might go years before ever bringing on another developer, but that’s a price worth paying if it means nobody is stepping on anyone else’s toes and you aren’t wasting time having people work on small, unimportant bits while waiting for someone else to finish something truly important.

Everyone on your team should feel like they are contributing in a major way, and oftentimes the best way to accomplish this is to have people on a totally different project where they can contribute fully.

How to make useful code reusable

I just spent the first half of this article dragging top-down abstractions through the mud, but the thing is, I’m not against abstraction or planning. Both are important and necessary parts of building software. It’s how you do them that matters. Are you a slave to them, or do they serve you?

All good reusable code comes from code that was specific at one point in time.

If you look at your code and you notice several places where you’re repeating yourself, that’s a place where you can remove the repetition by turning those repeated lines of code into a function.

This isn’t something you do before the fact. You always do it after the fact.

You solve problems first, and then you look at your solution to see if there are any common themes. If you spot any themes, you make those themes into reusable functions or data structures to reduce the amount of code in your project.

Casey Muratori calls this “semantic compression,” and I really like that idea. I think it may be the most useful idea I have encountered in my entire programming career.
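Here’s a small sketch of what one compression pass can look like (the rectangle example and the PushRect name are my own invention, not taken from Casey’s writing):

```cpp
#include <vector>

struct Vertex { float x, y; };
static std::vector<Vertex> gVertices;  // stand-in vertex buffer

// After writing the same six lines three times, the repeated block
// gets compressed into one function. Nothing here was designed up
// front; the function exists because the repetition already existed.
static void PushRect(float minX, float minY, float maxX, float maxY) {
    // Two triangles per rectangle, six vertices.
    gVertices.push_back({minX, minY});
    gVertices.push_back({maxX, minY});
    gVertices.push_back({maxX, maxY});
    gVertices.push_back({minX, minY});
    gVertices.push_back({maxX, maxY});
    gVertices.push_back({minX, maxY});
}

int main() {
    // What used to be three copies of the same block is now:
    PushRect(0,   0,  32, 32);   // player
    PushRect(64,  0,  96, 32);   // enemy
    PushRect(0,  64, 256, 80);   // platform
    return 0;
}
```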

Structure is not something you impose on your code. Structure is what naturally evolves out of each small decision you make to solve the problems your software is meant to solve.

At its core, software is just procedures or a long list of instructions. It will always be that way because the hardware is procedural and imperative. One instruction follows another, and they get executed one by one.

There is nothing object oriented or purely functional about the hardware, so even if you’re modelling your program’s logic with those high level approaches, the thing you’re making ultimately turns into something procedural, and that’s the code which actually runs on the hardware.

Although it is nice to think you can impose some big structure on your code, in practice you need to get the CPU/GPU to do some work by literally telling it what to do and in what order.

You can pretend that the code is declarative or object oriented, but if there’s a big problem you need to solve, you will probably end up dropping down to the procedural/imperative level to solve it anyway.

Therefore, all grandiose and dogmatic software architectures are an approach taken in bad faith. They’re what you get when you turn your attention away from problem solving and attempt to glorify abstractions for their own sake. None of them are natural, and they don’t feel natural to work with because you end up having to shoehorn your problem into their way of doing things.

In my experience, this bottom-up approach is eminently natural and downright cozy. On a human level, it feels good to work this way because I’m not fighting some big system which is imposed on me. I’m gradually building my own systems in ways that make solving my problems convenient for me.

As I work, I build up my own library of functions which come out of repeated actions my program needs to do to solve its core problems.

If something is a function, it’s a function because it is used in more than one place. When I read my code and I see a function, I don’t have to guess whether it gets called in more than one place. I already know it does by virtue of the fact that it is a function.

Abstractions are better when they are based on concrete things your program needs to do, and they are counterproductive when they are imposed on your program before you’ve ever solved a single problem.

The key to using abstractions is to wait until you’ve solved a few problems first. Once you see patterns start to emerge, you take those patterns and turn them into functions. Then, the next time you see the same pattern, you can just call the function you already have.

Over time you’ll have a library of functions that can do the bulk of the work your product does, and you’ll just end up using those instead of starting fresh.

How to start

If you’re new to this, you might not understand where to begin with it. Admittedly, I have been writing about it in a somewhat abstract way. Concretely, how does this actually look in practice?

Let’s say your goal is to get a 2D rectangle drawn onto the screen.

The first thing I would tell you to do is remove any thought of code architecture from your mind. Just think about what the CPU and GPU need to do to draw that rectangle.

That’s probably some long list of instructions in a specific order. It’s something like the following.

1. Set up a vertex buffer
2. Set up a rendering pipeline in the graphics library of my choosing
3. Set up vertex and fragment shaders
4. Fill the vertex buffer with the triangle vertices which represent the rectangle you want to draw
5. Tell the shader program how to fill those triangles with a color or texture
6. Issue the draw command

Granted, that’s an overly simplified way of looking at it. There’s actually much more work. But the thing I want to highlight is the fact that this is just a list of instructions. It’s some procedures which need to happen in a specific order.

Once you know you need to get the CPU/GPU to do these things, you start with the first one and just work down the list. So you’re probably setting up the vertex buffer in your program’s main function, then setting up the rendering pipeline right after that.
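As a rough sketch of what working straight down that list looks like, here’s a minimal main function with OpenGL and GLFW as stand-ins (nothing about this approach depends on the particular API, and a real version would check for shader compile errors, use a function loader on some platforms, and so on):

```cpp
// Minimal sketch: work straight down the list, in order, in main().
// Assumes a platform where the GL functions link directly (Linux/Mesa).
// Build: g++ rect.cpp -lglfw -lGL
#define GL_GLEXT_PROTOTYPES
#define GLFW_INCLUDE_GLEXT
#include <GLFW/glfw3.h>

static const char *vsrc =
    "#version 120\n"
    "attribute vec2 pos;\n"
    "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
static const char *fsrc =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0, 0.4, 0.2, 1.0); }\n";

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "rectangle", 0, 0);
    if (!win) return 1;
    glfwMakeContextCurrent(win);

    // 1. Set up a vertex buffer
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // 2 & 3. Set up the "pipeline" and the shaders (in old-style GL,
    // these two steps blur together into one linked program)
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsrc, 0);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsrc, 0);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    // 4. Fill the vertex buffer with the two triangles of the rectangle
    float verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,   0.5f,  0.5f,
                      -0.5f, -0.5f,   0.5f,  0.5f,  -0.5f,  0.5f };
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    // 5. Tell the program how to read those triangles (color lives in fsrc)
    GLint pos = glGetAttribLocation(prog, "pos");
    glEnableVertexAttribArray((GLuint)pos);
    glVertexAttribPointer((GLuint)pos, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glUseProgram(prog);

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        // 6. Issue the draw command
        glDrawArrays(GL_TRIANGLES, 0, 6);
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```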

Literally get the damn rectangle onto the screen first, and then look over your code and see if there are any structural themes or repeated blocks of code. Take those repeated blocks of code and “compress” them into functions you can call multiple times.

That’s it. That’s how you allow your software architecture to evolve naturally.

If you’ve been programming for quite some time and were as unfortunate as I was to be indoctrinated in big programming ideologies like Object Oriented Programming, this way of doing things might feel the way it felt to write programs when you were a naive kid just playing around and experimenting.

And it should feel that way because that is how programming is supposed to feel when it is uncorrupted by ideological forces trying to make it conform to some elegant mathematical or academic theory.

Good code is unremarkably simple. It’s not showy, fancy, or overly abstract. It’s not a toy demo of some new programming fad. It’s just some rather boring looking instructions, one after the other, structured in a way that solves the problem.

If you think your code or software architecture is interesting, you haven’t absorbed the message. Code isn’t supposed to be interesting. What we do with the code is supposed to be the interesting thing.

The most productive and wealthy programmers write simple unremarkable code that solves problems. They don’t think it’s all that special. It’s just a list of instructions the CPU needs to execute.

Once I truly took this message to heart, I noticed a huge productivity boost in my own work. It actually became a problem in my job because I would get work done so quickly that people couldn’t figure out what to do with me.

I’m sure I will get roasted for saying this, so bring out the flamethrowers.

I believe it’s possible to be a 10x developer when you’re truly focused on the things that matter. It isn’t that you’re literally programming at ten times the speed, but rather that the speed boost comes from not doing all of the other distracting things which have nothing to do with the core problems you’re trying to solve.

When abstractions become your master, they distract you from what programming actually is.

Programming is problem solving.

Programming is telling a computer what to do, instruction by instruction.

Programming is not modelling. Programming is not abstract navel gazing.

Programming is a concrete practice by which you take an idea of how something should work, and you translate it into a literal set of instructions computer hardware must execute to accomplish your goal.

Effective programmers create abstractions which naturally evolve out of the problems their program solves. They do not mold their program around a set of abstractions they have pre-decided they need to use.

The computer is your slave, and you are the master of abstraction. You invert this relationship at your peril.

Game designer and engine programmer. https://tedbendixson.itch.io/cove-kid