If you follow the undercurrents of software testing, you have likely read a few posts about the distinction Michael Bolton draws between testing and checking. Testing describes an activity that involves learning, observing, and judging, among other skills. Checking answers an objective yes-or-no question.
Testing and automated checks are different in some practical ways you should consider when talking to managers.
Why Wasn’t This Bug Found?
Programs do exactly what we tell them to, the same way, every single time. They are usually just as straightforward when we use them for testing.
In the largest UI automation project I have worked on, a typical nightly run reveals some tests have failed because of timing issues or known bugs. Every once in a while, I find that a new change introduced a problem, but that is unusual.
Other types of automation are a little more stable. You get fewer test failures that don’t point anywhere, but they are just as unlikely to find new problems.
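Many of those timing failures come from checks that race the application: the check asserts before the page or process has finished changing. A minimal sketch of the usual remedy, an explicit polling wait, is below; the helper name and signature are my own, not from any particular framework.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll a zero-argument callable until it returns truthy or the
    timeout expires. Returns True on success, False on timeout, so the
    caller decides how to report the failure.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    # One last check at the deadline before giving up.
    return bool(condition())
```

A check built on a wait like this fails only when the application truly never reaches the expected state, rather than whenever it is momentarily slow.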
Usually, I find most new bugs during the process of writing the automated check. This part of the process involves more complicated workflows, and the observant tester will often notice problems without specifically looking for them.
Automate All The Things
Many people want automation, and they want it to perform nothing short of magic. There are a few companies that do mostly technical testing — Netflix, Google, and to some extent Facebook — but they are dealing with problems of scale that most people will never see in their lifetimes. They also have to deal with the occasional drawbacks inherent to testing at that level.
When an automation strategy is driven by the idea that every test case and every bug found should be added to the test suite, we quickly reach a point where running the tests takes an unreasonable amount of time and effort. Maintaining that cumbersome suite for old, hopefully more stable, features eats into the time available for testing important new features.
Usually, when people say they want 100% automation, they just want faster feedback or some peace of mind that changes don’t introduce new problems. Automation might be part of the solution, but there are often easier ways to address these concerns.
Why Is This Taking So Long?
Test automation, particularly in the user interface, takes a long time. It takes time to create and manage the environment; time to develop each test; time to update the tests that are broken because of product updates; and time to work on the test framework itself.
For anything that isn’t really small or a one-off, test automation is a development project with a lot of dependencies. As with development projects for production code, some bits go quickly and some take more time.
Another source of delay is the tests themselves. UI test suites can take several hours once they get big. Eventually people begin running them overnight, and then parallelizing sets of tests across different servers. That means getting feedback on a build the next day at the earliest.
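Parallelizing usually starts with sharding: splitting the suite into roughly equal groups, one per server. A simple round-robin split, sketched below with invented names, is often good enough as a first cut.

```python
def shard(tests, num_shards):
    """Split a list of tests into num_shards roughly equal groups,
    round-robin, so each server gets a balanced share of the suite."""
    groups = [[] for _ in range(num_shards)]
    for i, test in enumerate(tests):
        groups[i % num_shards].append(test)
    return groups
```

In practice teams refine this by weighting shards by historical test duration, since a few long UI tests can otherwise leave one server running hours after the rest have finished.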
Even unit tests can become slow. For one client, I saw a set of unit tests with hooks into the database and dependencies three deep. These builds went from taking minutes to hours in about a year.
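Tests like that are slow because each one pays the cost of a real database round-trip. The usual fix is to fake the dependency at the boundary; the function and query below are invented for illustration, using Python's standard `unittest.mock`.

```python
import unittest
from unittest import mock

def active_user_count(db):
    """Function under test: counts active users through a database
    handle. `db` and its `query` method are hypothetical stand-ins
    for a real data-access layer."""
    return len(db.query("SELECT id FROM users WHERE active = 1"))

class ActiveUserCountTest(unittest.TestCase):
    def test_counts_active_users(self):
        # Replacing the database with an in-memory fake keeps the
        # check fast and independent of external state.
        fake_db = mock.Mock()
        fake_db.query.return_value = [1, 2, 3]
        self.assertEqual(active_user_count(fake_db), 3)
```

A suite built this way stays in the seconds range as it grows, and the handful of tests that genuinely need the real database can be separated out and run less often.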
Setting reasonable expectations based on the differences between testing and checking can mean the difference between a happy manager and someone who is constantly dissatisfied and confused about why things are the way they are.