Automated tests – a brief business case

The following is a short rundown of automated software tests and their potential payoff, along with a simple formula I use when deciding whether or not to automate a given test.

Note: Automated software tests are often referred to as “unit tests”, although unit tests are just one of several possible automated test types.

Why write automated software tests?

In short, automated tests can make you money by giving you competitive advantages. They can do so directly by reducing labor costs, reducing bug frequency, increasing product stability and increasing sprint velocity; they can do so indirectly by increasing client satisfaction and increasing your company’s credibility.

There is a wealth of empirical studies indicating that writing automated tests has a positive effect on finding and preventing errors, and thus reduces development costs in the long run.

If you want to read them, you can search for “test driven development studies”. The findings in these studies include:

  • Software written without the aid of automated tests has 2.4 – 4.6 times as many errors as software for which automated tests are written as part of the development process.
    Evaluating the Efficacy of Test-Driven Development: Industrial Case Studies (2006) by Thirumalesh Bhat and Nachiappan Nagappan (Microsoft Center for Software Excellence & Microsoft Research)

  • Software written with automated tests as part of the development process has 40% – 90% fewer errors when it reaches production, i.e. more errors are found during development and testing – when it’s cheapest to fix them.
    Realizing quality improvement through test driven development: results and experiences of four industrial teams (2008) by Nachiappan Nagappan, E. Michael Maximilien, Thirumalesh Bhat and Laurie Williams

If you know of other studies, whether they’re supportive or contradictory, please leave a link in the comments!

What does a software error cost?

How many bugs does it take to make a client abandon your product or your services as a consultant? How much revenue is lost when that happens? These are complex questions which I won’t try to answer here, but it’s important to keep them in mind when deciding on an appropriate level of quality assurance for a given product or service, i.e. how often and how thoroughly to perform tests.

The cost of a software error depends highly on the software lifecycle phase in which said error is found and fixed. The (iterative) lifecycle of most software I work on can roughly be described as follows:

  • Phase 1: A developer writes and tests (whether automated or manually) the software on a workstation. During this phase the cost of bug fixing is minimal, as there are generally only a few parties and environments involved in the process.

  • Phase 2: The software is deployed to one or more test environments, and tests are performed by developers, dedicated software testers or similar domain experts. It is typically considered acceptable to find errors in this phase, as the cost of change is still low compared to errors which find their way into a production environment. Still, errors found by clients negatively impact their view of you and/or your company — if you don’t take my word for it, just ask them.

  • Phase 3: The software is made available to end users, e.g. by being deployed to one or more production environments. It is typically considered unacceptable to find errors in this phase. This is based on multiple factors, including but not limited to: increased costs due to otherwise unnecessary reiteration of the product lifecycle, lost work effort and loss of credibility to your clients.

Not surprisingly, it’s significantly cheaper to fix errors early rather than late, and catching errors early is exactly what automated tests excel at.

How often will a test be conducted?

In general, software must be tested every time it changes. The greater the change, the more parts of the software must be tested. This is called regression testing, an activity often neglected when it has to be performed manually.

Thorough manual regression tests are neglected because they require (a lot of) focus and (a lot of) time. They also require adequate test plans to be available, e.g. descriptions of the expected start and end states of a given user story.

Thorough automated regression tests, on the other hand, are performed very often, as they require very little focus and time. The developer simply presses a button and sees a checkmark or an error a few seconds later. They too require adequate test plans; in fact, automated tests force the developer to draw up a test plan as part of the development process, as the plan is what directs the automated test itself.
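To make the “press a button, see a checkmark” workflow concrete, here is a minimal sketch of what such an automated test might look like in Python. The `validate_pin` function and its four-digit rule are hypothetical stand-ins for real login logic, not taken from the portal described later in this post.

```python
def validate_pin(pin: str) -> bool:
    """Hypothetical login check: a valid PIN is exactly four digits."""
    return len(pin) == 4 and pin.isdigit()

# Each test function encodes one line of the test plan:
# the input state and the expected outcome.
def test_valid_pin_is_accepted():
    assert validate_pin("1234")

def test_short_pin_is_rejected():
    assert not validate_pin("123")

def test_non_numeric_pin_is_rejected():
    assert not validate_pin("12a4")
```

A test runner such as pytest discovers and executes these functions with a single command, which is what makes re-running the whole regression suite nearly free.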

What does a software test cost?

Over the years I’ve had several discussions about the direct and indirect value of automated tests.
I’ve found that it’s best to avoid the parts of these discussions which haven’t been quantified; hence we’ll focus only on labor costs, since that’s something most companies can quantify.
It’s up to you to determine whether other factors play a role for your particular situation, e.g. the qualitative benefits of software tests when applied to you, your team, your product or your company.

The following formulas assume that maintaining test plans and test code costs about the same. I’ll emphasize again that the formulas make no assumptions regarding whether code produced with automated tests is indeed of higher quality, or has any other benefits.

  • Total price for manual test = hourly rate of staff * time per test run * number of test runs

  • Total price for automated test = hourly rate of staff * time per test run * number of test runs + estimated development effort

For an automated test to be worthwhile, the savings in time per test run have to cancel out the initial development overhead (assuming testers and developers are paid the same).
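The two formulas above, plus the break-even condition, can be sketched as a few small helpers. The function names are illustrative, not from any existing library:

```python
def manual_cost(hourly_rate: float, hours_per_run: float, runs: int) -> float:
    """Total price for a manual test: rate * time per run * number of runs."""
    return hourly_rate * hours_per_run * runs

def automated_cost(hourly_rate: float, hours_per_run: float, runs: int,
                   development_effort: float) -> float:
    """Same as manual, plus the one-off development effort for the test code."""
    return hourly_rate * hours_per_run * runs + development_effort

def break_even_runs(hourly_rate: float, manual_hours: float,
                    automated_hours: float, development_effort: float) -> float:
    """Number of runs after which automation becomes cheaper
    (assuming testers and developers are paid the same rate)."""
    saving_per_run = hourly_rate * (manual_hours - automated_hours)
    return development_effort / saving_per_run
```

Once the break-even number of runs is below the number of runs expected over the product’s lifespan, automation pays for itself.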

Example calculation

This example is based on my previous work on the login feature of a self-service portal, which was (and still is) used by three clients and about 250,000 end users.

At the time, the login feature supported the following use cases:

  • Members can log in using two-factor authentication.
  • Members can log in using a PIN.
  • Super users can log in using a PIN.
  • Super users can “impersonate” a member for various administrative and test purposes.

The above was tested thoroughly before each release, for each of three clients.

During the first year of development, seven major 3rd party software updates were released by Microsoft and Google, which would potentially impact the login feature of the site (in my current job, this would be Microsoft and Sitecore instead).

The test automation ROI calculation was as follows:

  • Hourly rate of staff: $70/hour
  • Time per manual test run: 6 min (it took about four minutes for a developer with domain knowledge to dig up suitable test data, and about two minutes to test all four scenarios – assuming the software was running and ready to go)
  • Time per automated test run: 10 sec
  • Number of test runs: 7 updates/year * 4 test scenarios * 3 clients = 84/year
  • Estimated development effort for automated tests: 8 hrs = $560

This resulted in the following labor costs:

  • Manual test costs: $7/test run
  • Automated test costs: $0.20/test run + $560

On a year-by-year basis the following labor costs were predicted, purely for test runs required due to 3rd party software updates:

  Year   Manual test cost   Automated test cost
  1      $588               $576.80
  2      $1,176             $593.60
  3      $1,764             $610.40
  4      $2,352             $627.20
  5      $2,940             $644.00
Break-even would be reached in a little under a year, after about 82 test runs. Since the software was expected to have a lifespan of at least 3 years, management agreed that the tests should be automated.
