Load it, check it, quick-rewrite it, plug it, play it, automate it. Automation is one of today's biggest trends across almost all industries, and it's quickly spreading into every part of daily life. And it's great! Just think how it benefits your day: the alarm that wakes you up, the central heating in your home triggered by time or temperature, the map app on your mobile phone that reroutes you based on live traffic reports, the computer backup that has finished by the time you reach the office, and Amazon Alexa remembering all the things we can't! Even my bills are paid on time without any onus on me to be more organised, thankfully! With technological advances continuing to make our daily lives easier, it would be an opportunity missed if we didn't utilise the same approach in our work. The manufacturing industry has certainly done so to great effect, but how about software testing?
There are a lot of benefits, but also a lot of misconceptions about those benefits. Let’s look at a few that I’ve seen highlighted in the ongoing debate:
Machines are certainly faster at running tests, sometimes completing in an hour a whole regression suite that might take a couple of weeks to run manually. So, 1 - 0 to automation, right? Well, can machines also identify and write the tests faster? No, they can't do this at all, and they require a huge investment of resources to reach a point where they can be considered 'automated'.
There’s an argument that machines aren’t prone to human error and thus produce higher-quality outputs. Was that a little chuckle I heard?! The premise is accurate, but in reality, we all know machines fail. And the automated tests are written by humans in the first place, so in fact they’re only as good as the person who wrote them. I would agree, however, that automation is more consistent: if you want something tested at the same time, in the same way, and repeatedly, automate it.
Some think that automated tests are more thorough and catch bugs that humans may miss. Maybe I’m confused, but this is the opposite of my understanding! Automated tests run a limited, predefined set of checks; they don’t explore edge cases, they don’t wonder ‘what if…’, and they don’t possess a creative, inquisitive nature.
We also hear that the tests are ‘reusable’ and so once they’re written, their associated costs flatline. Perhaps on a stable, fixed system, but then maybe that’s because they would be redundant after a successful run (if nothing changes, why would you keep running the same tests?). They need to be maintained, expanded as the software is developed, ported to handle new technologies and integrations, and extended to cover areas where bugs have been identified. And the skill level required to write automated tests is much higher than for manual testing. So, yes, they are reusable, but they will still require further investment.
I’ve even heard the question: is QA dead because of test automation? That would be a fundamental misunderstanding of the difference between testing and QA and what each achieves. Testing can be automated, but it is only a measure of quality that feeds into quality assurance. QA is so much more than testing; to assure quality we need to understand so many aspects and variables, most of which are human!
I think one of the greatest demonstrations of automation's strengths and weaknesses is the recent trials of self-driving vehicles. We’ve seen that they can be beneficial and have achieved some success, but they’ve also caused fatal accidents. One accident was caused by an unexpected set of circumstances, which is exactly why the human ability to identify an ‘unhandled exception’ is so important. Automation is fantastic in a controlled environment with a specific objective, but it’s not going to think outside the box, it’s going to test the same thing over and over.
Automation is also often suggested as being most suitable for regression testing, i.e. a long, arduous, time-consuming set of tests to be repeated regularly. Again, I agree with the premise, but I also believe there are better ways. At Ghyston, the developers do write automated tests, but we run them as part of our build and continuous integration process, building in regression testing every time we make changes and rebuild the software. This approach means we can deliver stable builds into QA that have already passed all the existing tests, leaving our testers free to explore the software, focus on usability, discover new use cases and journeys, and provide a comprehensive test approach. We also build confidence by running smoke tests and sanity tests pre-release to ensure there is no sign of fire anywhere in the application and that no unforeseen errors have crept in.
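To make the smoke-test idea concrete, here is a minimal sketch of the kind of fast, automated pre-release check a CI step can run. The health-check shape, dependency names, and helper function are all hypothetical illustrations, not our actual suite:

```python
# A rough sketch of a pre-release smoke test. The /health response is
# stubbed here; in a real pipeline you would fetch it from the deployed app.

def service_health(status_code: int, body: dict) -> bool:
    """A build is 'smoke-clean' if the app answers 200 and reports
    its core dependencies (hypothetical names) as up."""
    required = ("database", "cache")
    return status_code == 200 and all(body.get(dep) == "up" for dep in required)

# pytest-style checks; a CI step would run these against every build.
def test_healthy_deployment_passes():
    assert service_health(200, {"database": "up", "cache": "up"})

def test_degraded_dependency_fails():
    assert not service_health(200, {"database": "up", "cache": "down"})

def test_error_status_fails():
    assert not service_health(500, {"database": "up", "cache": "up"})
```

The point of keeping these checks this small and deterministic is speed: they run identically on every build, so a failure signals 'fire' immediately without holding up the release pipeline.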
Ultimately, it’s about knowing that everything has a place and utilising it appropriately. Automated testing helps us cover the repeated tests, deliver stable builds, and provide efficiency within the project without sacrificing quality. Manual testing provides the human touch and flexibility; it reaches the areas that scripted tests cannot and provides a platform for evaluation from the perspective of users, which is critically important to understand. As with most things in life, it’s about finding a balance.