Developing software involves a great deal of repetitive work. Building the code, updating configuration files, creating database change scripts, unit testing, deploying, and integration testing are all tasks repeated many times during development.

Whatever anyone tells you, none of these tasks is exciting; in fact, most of them are tedious and prone to error. That’s why so many developers automate them: if they run correctly once, they’ll keep doing so. Remember, computers are faster and more reliable than people. Automating these parts of the development process is a solved problem.

Why is it, then, that the tests run by so many testers are manual?

It’s not too hard to see how this situation might arise: maybe your team has always had separate testers who have little to do with developers, or perhaps your QC staff have neither the skills nor the time to automate their testing. These problems can all be overcome through training, research by your testers, or even by lending a developer or two to your test department for a while.

Whether you’re using waterfall or agile development methods, there are very few situations in which your testers should find themselves working through an Excel spreadsheet for every new release of the system.

I’ve spoken to many testers, and the most dedicated ones always want to automate their tests. It increases their efficiency, prevents regression errors, improves their accuracy, and gives them more scope to create tests that are actually useful. It also gets them away from stepping through interminable spreadsheets, manually filling out results as their morale slowly dies. Automated tests can be written or configured to produce custom reports for every run.

It follows that any team whose QC testing isn’t automated should be working towards it.

The problem is that test managers seem to be divided between those who agree that automating everything is the way forward, and those who believe that the only “real” way to validate a system is to do it manually. Sometimes this belief is based on previous bad experiences, sometimes on a lack of confidence in automated tests, and sometimes just on the feeling that people carry out testing better than machines.

Whatever the reason for the reluctance, it pays to analyze things a little more closely. There are two main costs associated with automated testing. The first is the initial set-up of the automation environment: a one-off cost that is amortized over the life of the project. Since the set-up is often part of configuring the build system, it should take a matter of hours at most.

The second is the cost of writing and maintaining the tests themselves. Let’s assume every new test takes half an hour to write. If a manual test takes a minute to run on average, and an automated test takes 5 seconds, then every automated run saves 55 seconds. To recoup the time it took to write the test (1800 seconds) will take:

1800 s ÷ 55 s ≈ 32.7 runs

Assuming your tests run on every check-in and your development team produces four new builds a day (a reasonable average), the payback time is just over eight days. And as more of your tests are automated, the reduced execution time lets your team produce more tested releases per day, shortening the payback time further.
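The arithmetic above can be sketched in a few lines of code. This is a minimal model using the figures assumed in the text (30 minutes to write a test, 1 minute per manual run, 5 seconds per automated run, four builds per day); the constant names are illustrative, not from any real tool.

```python
# Payback model for automating a single test.
# All figures are the assumptions stated in the text.
SETUP_COST_S = 30 * 60   # 30 minutes to write one automated test
MANUAL_RUN_S = 60        # average manual run: 1 minute
AUTO_RUN_S = 5           # automated run: 5 seconds
BUILDS_PER_DAY = 4       # assumed builds (and hence test runs) per day

saving_per_run = MANUAL_RUN_S - AUTO_RUN_S           # 55 seconds saved per run
runs_to_break_even = SETUP_COST_S / saving_per_run   # ~32.7 runs
payback_days = runs_to_break_even / BUILDS_PER_DAY   # ~8.2 days

print(f"Break even after {runs_to_break_even:.1f} runs, "
      f"or about {payback_days:.1f} days")
```

Plugging in your own team’s numbers, say a longer set-up time or more frequent builds, shows quickly how sensitive the payback period is to build frequency.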

To summarize:

It’s easy to put off automating your QC tests, but for any non-trivial system the investment pays back quickly. You’ll also end up with higher morale in your QC team and a set of tests that can run before anyone arrives at work in the morning. Once you’ve got a few tests running, you’ll wonder why you didn’t do it sooner.