If you’re trying to sell a quality improvement technology or process, you’ll eventually be asked to calculate the Return on Investment (ROI). Management wants a way to monetize the pros and the cons of the change you’re proposing. Converting your proposal to dollars is seen as an easy way to put the trade-offs into a universal language.
The problem with ROI and quality improvement is that you’re measuring the dollar value of things you’re trying to prevent. And if you prevent them you can’t prove they would have happened or that your quality initiative is what prevented them.
This is hard to do with a quality improvement process such as Test Driven Development. By any account, TDD increases the amount of code written for a given piece of functionality. Estimates of the increase vary, 1:1 versus 2:1 for example, but you’re writing tests as well as functional code, so more code gets written. On the other hand, one IBM/Microsoft study showed a 40%–90% reduction in bugs at code completion. If those bugs never happened, how do you justify doubling or tripling the code?
If the teams have a history you can estimate the overall cost of defect remediation and reduce that amount by 40% (the low end) or 65% (the middle reduction). Still, there are occasional defects that cost significantly more than what’s typical. You also have potentially unaccounted costs if defect remediation is built in as part of business as usual. (Consider that all defect fixes go into the next release which is tested as a whole build by the QA team. You may not be able to separate the portion of the QA effort in re-testing bug fixes vs. testing the new functionality.) And your whole argument may be lost if a critical defect does make it into production even though TDD may have kept 10 others from ever being created.
Even a 15%–35% improvement in development speed (from the same study) isn’t a killer argument. A team that has been working together for a while will improve its productivity anyway, so you can’t be certain TDD was responsible. In the end, your ROI calculation is fairly easy to cook depending on what result you need to deliver. More importantly, it’s very difficult to prove true or false.
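To see just how easy the calculation is to cook, here is a minimal sketch. The cost figures are entirely hypothetical (the $500K annual remediation cost and $200K of extra development effort are my assumptions, not numbers from the study); only the 40%–90% reduction range comes from the study cited above. Simply choosing a different point in that range swings the result from break-even to strongly positive.

```python
# Hypothetical inputs -- pick your own and the answer changes accordingly.
annual_defect_cost = 500_000  # assumed: what defect remediation costs today
extra_dev_cost = 200_000      # assumed: added cost of writing the tests

# Apply the study's low, middle, and high reduction figures.
for reduction in (0.40, 0.65, 0.90):
    savings = annual_defect_cost * reduction
    roi = (savings - extra_dev_cost) / extra_dev_cost
    print(f"{reduction:.0%} reduction -> ROI {roi:+.1%}")
```

The same spreadsheet supports whatever conclusion you brought to it: the pessimist quotes the 40% row, the advocate quotes the 90% row, and neither claim can be proven true or false.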
So how should you approach quality initiatives if not via ROI? In many cases you have to look at the ROI of development as a whole and stop attributing ROI to specific development practices. You don’t put an ROI on using Java or C# instead of FORTRAN, do you? The question should be this: Are we delivering quality software at a cost that is profitable to the company? Will this change reduce that profitability?
The reason this isn’t the same as asserting a positive ROI for TDD is that all development practices interact. TDD may improve your confidence in refactoring your code. The refactoring may be the greatest single contributor to improved performance, but TDD enabled fearless refactoring. Which one had the positive ROI? Similarly, TDD may significantly reduce the cost of your functional test automation. Functional test automation may allow you to run complete regression suites on each build. The rapid feedback from continuous regression may be the key to fewer defects making it into the QA verification cycle. Which piece had the positive ROI?
Don’t get me wrong. I’m a big fan of putting proposals into terms C-level people can understand. But I’m not a fan of cooking results or hiding information in overly broad metrics. If you have to calculate an ROI, there are numbers on the Internet that can help you do that. But helping your executives understand the real value of quality initiatives is a much better answer.