Testing Considered Evil

pieterh wrote on 04 Oct 2011 12:53


Here's a provocation: the more you test software, the worse it will be. To understand why, I need to explain how we (as a profession or industry) actually make good software. Very few people understand this, and we use techniques like unit tests for support, not illumination.

I'll keep this brief and brutal. How do we make good software? It's a very hard question to answer. I'll start by defining "good" as large-scale, relevant, and useful in the long term. My two bridges parable explains this. We make good software by first discovering what the real problems are, and only then solving them in careful incremental steps. So, assertion one:

1. It is impossible to know what software we need to make before we start making it.

Because only by throwing some starting solution into the economy of interests that represents our customer base can we start to understand what they really need. Once we understand the problem, we start to solve it. However, we're still mostly wrong. Only by repeated attempts at probing the problem space can we learn enough to make useful long-term solutions. Each attempt needs to be as minimal, as cheap, and as disposable as possible. So, assertion two:

2. When you think you know what the problem is, you are still wrong.

Now, the problem with small groups of experts is that they are usually stupid. Like laser light, they think coherently, with the same education, culture, language, motives. They can solve the wrong problems perfectly, over and over. And most software teams (especially commercial ones) are like this. Perfect idiots. So, assertion three:

3. Only a diverse crowd with conflicting perspectives can really identify the problem.

Only once you've identified the real problem - say, "we need an operating system capable of running a planet of a hundred billion devices" - can we start to solve it. However, constructing software of any size is enormously risky. Software doesn't solve one problem; it solves thousands of them, millions even, and most of those are wrong, irrelevant, or unimportant. Someone I know once spent a week optimizing a piece of code that turned out to be the OS's idle loop. Which brings us to assertion four:

4. If you don't have that crowd of people fully engaged in every stage of your software development process, it will go wrong in unpleasant ways.

And essentially, therefore, the challenge of large-scale, long-term software engineering comes down to a simple question: "how do I get people across the world to join in this project as co-owners, co-thinkers, fully bonded participants who wake up at 3am worrying they might not have done the best they could?"

The answer is fairly complex and I've explained it in my work-in-progress, Culture & Empire, chapter 2. You don't need to read it, but if you do it will enlighten you.

And part of that answer is, don't spend effort testing. Sure, write test cases. You don't want to look like a fool. But looking like a fool is better than being a fool. I've often made stupid mistakes in public, laughed about them, and used that to encourage others to get over their shyness.

It should be clear by now that I'm assuming open source. Closed-source software is a failure, no matter how long it might appear to work. It fails, period, because it can't attract the knowledge economy it needs to survive over time.

So you release open source, and you spend weeks testing it first. Forget the 80-20 rule, which says that if you spend equal effort everywhere, you're wasting 80% of it. What's worse is that by releasing perfect code, you prevent others from helping you. I'm an expert in making this mistake.

Which brings us to assertion five:

5. Releasing buggy, immature (open source) software is an essential part of building a community.

And since testing is all about bringing software to maturity before releasing it, it's safe to say that testing is harmful.

Postscript

If you take this advice literally without understanding the whole model, you're going to get hurt. You will also need: rapid response when a bug is reported; rapid release cycles matched to the maturity of the product; increasing cynicism towards changes as the product matures; more test-driven development as the product matures; smooth learning curves for new participants; and so on.

So if you try to submit a patch to a stable ZeroMQ version, I'll ask you for a before and after test case, a Jira issue, and a third person acting as problem owner. If you try to submit a patch to an unstable version, I'll accept it without pause, and release it without testing. I hope you understand the difference.
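To make "before and after" concrete, here is a minimal sketch of such a test case, written against the CZMQ-style API. The bug it probes - an inproc round-trip dropping empty strings - is invented for illustration, not a real ZeroMQ issue:

    #include <czmq.h>
    #include <assert.h>

    int main (void)
    {
        //  Wire up two PAIR sockets over inproc (bind before connect)
        zctx_t *ctx = zctx_new ();
        void *writer = zsocket_new (ctx, ZMQ_PAIR);
        void *reader = zsocket_new (ctx, ZMQ_PAIR);
        zsocket_bind (writer, "inproc://test");
        zsocket_connect (reader, "inproc://test");

        //  Send an empty string and read it back
        zstr_send (writer, "");
        char *string = zstr_recv (reader);

        //  Fails before the (hypothetical) patch, passes after it
        assert (string && streq (string, ""));
        free (string);

        zctx_destroy (&ctx);
        return 0;
    }

The test documents the problem as much as the patch fixes it: anyone can run the same program against the tree before and after the change, and watch the assertion flip from failing to passing.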

Postpostscriptum

People have pointed out that I use test-driven design of APIs myself. This is true. Perhaps for APIs this is a valid approach. I'm not totally convinced. I've used this extensively for CZMQ, but that API was built out of immediate necessity and old pieces of code. The problem was accurately known. And still, the project got very few contributors. Perfection precludes participation.
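For the curious, here is roughly what that looks like in the CZMQ house style: the self-test is written first, as the specification of how the class should feel to use, and the implementation follows. The "zstack" class below is hypothetical, invented for this sketch; CZMQ's real classes each carry a self-test of much this shape:

    #include <assert.h>
    #include <stdlib.h>
    #include <stdbool.h>

    typedef struct _zstack_t zstack_t;

    //  The API we wish we had, designed by writing its first user
    static zstack_t *zstack_new (void);
    static void zstack_destroy (zstack_t **self_p);
    static void zstack_push (zstack_t *self, int value);
    static int zstack_pop (zstack_t *self);
    static size_t zstack_size (zstack_t *self);

    //  The self-test comes first; if it reads badly, the API is
    //  wrong, no matter what the implementation does
    static void
    zstack_test (bool verbose)
    {
        (void) verbose;             //  Unused in this sketch
        zstack_t *self = zstack_new ();
        assert (zstack_size (self) == 0);
        zstack_push (self, 42);
        assert (zstack_size (self) == 1);
        assert (zstack_pop (self) == 42);
        zstack_destroy (&self);
        assert (self == NULL);      //  Destructor nullifies the reference
    }

    //  Minimal implementation, written only after the test
    struct _zstack_t {
        int items [256];            //  Fixed size is fine for a sketch
        size_t size;
    };

    static zstack_t *
    zstack_new (void)
    {
        return (zstack_t *) calloc (1, sizeof (zstack_t));
    }

    static void
    zstack_destroy (zstack_t **self_p)
    {
        assert (self_p);
        free (*self_p);
        *self_p = NULL;
    }

    static void
    zstack_push (zstack_t *self, int value)
    {
        assert (self->size < 256);
        self->items [self->size++] = value;
    }

    static int
    zstack_pop (zstack_t *self)
    {
        assert (self->size > 0);
        return self->items [--self->size];
    }

    static size_t
    zstack_size (zstack_t *self)
    {
        return self->size;
    }

    int main (void)
    {
        zstack_test (false);
        return 0;
    }

Note the conventions the test enforces before a line of implementation exists: every constructor pairs with a destructor, and the destructor takes a reference and nullifies it.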
