Nov 1, 2016
Written by M. Scott Ford

Throwaway Code

Over the years, I’ve heard a lot of different attitudes regarding code that’s going to be thrown away. Let me be clear here. I’m not talking about code that we think might get thrown away. I’m talking about code that we know will get thrown away.

As you may already know, at Corgibytes, we approach software development very much like remodeling a house. Consequently, I’m constantly looking for ways the metaphor breaks down, which means I pay a lot of attention to the older houses that I walk past or through. Anytime I see visible construction on an existing structure, I keep my eyes open for practices the crew is following that might not have an analogue in the software world.

One particular practice I’ve noticed time and again in recent months is building structures that are later removed. Here’s a photo I took recently that I think is a good example of what I mean.

Temporary Support

I’m not an architect, so I can only guess at what’s being built. But, to me, it looks like a temporary support that was used to hold up the roof of this porch while the column was being worked on. I do know that the structure wasn’t there when I walked past it a few hours later, and it was also hard for me to tell what work had been completed.

Are there equivalent things that we create on our software projects? Should there be?

In my own work, I do implement software practices that have a similar pattern.

Temporary Acceptance Test

If you’ve read my post on the pyramid of automated tests, then you’ll remember that I’m not a big fan of a test suite that is built entirely of acceptance tests. Exhaustive acceptance test suites are incredibly slow, they test the exact same behavior over and over, and they discourage developers from running the full test suite.

One of my favorite places to employ acceptance tests is when I’m hunting a bug. I like to use the bug report to write an acceptance test that demonstrates the bug. I’m a big fan of taking the time to write these; they save me a lot of time that I would otherwise spend clicking through the application to force the bug to happen. It also fits in with my “lazy programmer chant” (“three or more, use a for”). When working on a bug, I’m guaranteed to run through the reproduction steps at least twice: once to reproduce the bug and once more to make sure I’ve fixed it. I can count on one hand the number of times I’ve been able to fix a bug with only two manual passes through the app. More often than not, I have to click through it many more times.
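
To make that concrete, here’s a minimal sketch of what such a bug-report-driven acceptance test might look like, assuming a hypothetical web app at localhost:8000, pytest as the runner, and Selenium for browser automation. The URL, element IDs, coupon code, and expected discount are all invented for illustration; in practice they would come straight from the bug report.

```python
# A minimal sketch of an acceptance test written from a hypothetical bug
# report: "Applying coupon SAVE10 shows a $0.00 discount at checkout."
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_coupon_discount_shown_at_checkout():
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8000/checkout")
        driver.find_element(By.ID, "coupon-code").send_keys("SAVE10")
        driver.find_element(By.ID, "apply-coupon").click()
        # This assertion fails while the bug is alive, which is the point:
        # right now the test is a reproduction script, not a regression test.
        assert driver.find_element(By.ID, "discount").text == "-$10.00"
    finally:
        driver.quit()
```

Running this once replaces an entire manual click-through, and it fails for exactly the reason described in the report.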

Failing Test

Once it’s in place, I use this failing test to guide my investigation. That might mean time spent in the debugger, or it might mean adding logging statements to provide more insight into what’s going on. I like to think of this as “hunting” for a bug. Saying that always conjures up images of Elmer Fudd stomping through the forest hunting for a “wabbit.”
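
Here’s a hedged sketch of the kind of temporary instrumentation I mean, continuing the hypothetical coupon example. The function and its collaborators are invented; the logging calls and the breakpoint() builtin are standard Python.

```python
import logging

logger = logging.getLogger(__name__)


def apply_coupon(order, code):
    # Hypothetical code path under suspicion; all names are invented.
    coupon = order.store.find_coupon(code)
    # Temporary breadcrumb while hunting: confirm what the lookup returned.
    logger.debug("apply_coupon: code=%r -> coupon=%r", code, coupon)
    if coupon is None:
        # Suspected branch; pause here while running the failing test.
        breakpoint()
        return order
    return order.with_discount(coupon.amount)
```

Like the temporary support in the photo, these breadcrumbs come out again once they’ve done their job.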

Finding the Bug, Second Failing Test

I eventually find the bug, and I’m able to identify the change that needs to be made to fix it. Most devs I’ve worked with over the years stop there: they have a failing test and a commit that fixes it. Job done. Time to move on. I think this is a missed opportunity. It’s hard for me to imagine a bug that can’t also be reproduced via a unit test or an integration test. What I prefer to do is add a second failing test at a lower level of the pyramid, and then make sure that the fix I’m applying makes both the higher-level acceptance test and the lower-level test pass.
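
Continuing the invented example, suppose the hunt revealed that the coupon lookup was sensitive to case and stray whitespace while users type codes freely. A lower-level companion test might look like this sketch, with find_coupon standing in for the real code under test:

```python
# Hypothetical production code. The .strip().upper() normalization is the
# eventual fix; without it, the test below fails just like the acceptance test.
COUPONS = {"SAVE10": 10}


def find_coupon(code):
    return COUPONS.get(code.strip().upper())


def test_find_coupon_ignores_how_the_user_typed_it():
    # Red before the fix, green after. Unlike the acceptance test, this one
    # stays in the suite as a fast regression test.
    assert find_coupon("save10") == 10
```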

Along the way, I usually discover a category of related bugs that could also pop up, so I write failing lower-level tests for those too. I don’t spend time writing additional acceptance tests, though, because that would be superfluous: at this point, the acceptance test provides no extra value over the lower-level tests.
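
Sticking with the same invented example, pytest’s parametrize makes it cheap to cover the whole category at the lower level instead of writing more slow acceptance tests:

```python
import pytest

COUPONS = {"SAVE10": 10}


def find_coupon(code):
    # Same hypothetical lookup as above, after the fix.
    return COUPONS.get(code.strip().upper())


@pytest.mark.parametrize("code", ["SAVE10", "save10", "Save10", " save10 "])
def test_find_coupon_tolerates_user_input(code):
    # Each variant is one cheap unit test, not another slow acceptance test.
    assert find_coupon(code) == 10
```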

I then delete the acceptance test from the project. I don’t need it anymore; it was a troubleshooting tool, and it did its job. I now have a lower-level test that covers the specific change I made.

Temporary Support

I like to make sure that the acceptance test is committed to source control once I have it failing. Then I create a separate commit later on to delete it, and I don’t usually squash my commits. This ensures that I’ll be able to find that specific acceptance test in the source control history if I ever need it again, while anyone who looks at the combined diff for the resulting pull request won’t see that the acceptance test was ever there. And, in that way, I think these temporary acceptance tests are perfect analogues for the structure I captured in the picture above.
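
If I ever do need the deleted test back, git can dig it up. Here’s a sketch with a hypothetical file path: --diff-filter=D finds the commit that deleted the file, and the ^: syntax reads the file from that commit’s parent.

```sh
# Find the commit that deleted the (hypothetical) acceptance test file.
git log --diff-filter=D --oneline -- tests/acceptance/test_coupon_bug.py

# Print the file as it existed just before that commit deleted it.
git show <deleting-commit>^:tests/acceptance/test_coupon_bug.py
```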

I’m curious to learn whether anyone else follows practices that result in code that intentionally gets thrown away. Please leave a note in the comments if any come to mind.
