Test All the Things part 2: What are all the things?

By Fraser Powell · 04-05-2018 · 7 minute read

You know the feeling. Just as you sit down with satisfaction at having done all the things, something else crops up. Typical. So up you get to do it, only to find it leads on to even more things you need to do. And so on and so forth into the never-ending cycle. This is the story of owning a house…no, sorry, this is the story of how my understanding of testing has developed over the years. Catch up on part 1 here.

Everyone is responsible for quality. Not everyone is responsible for software testing. However, the concept of testing is not exclusive to the test function in an organisation. Testing is simply a measurement of something, such as ‘quality’. Testing doesn’t directly implement quality, performance, security, usability or anything else, but it indicates how well these things have been implemented, allowing us to take action to improve the implementation where necessary. We can all benefit from testing the different elements of our work to understand, and improve, the quality we are producing.

My learning correlates with my job progression and the exposure to new challenges, coupled, admittedly, with a few mistakes and hard lessons along the way.

Test all the test scripts

As a wet-behind-the-ears graduate in my first job as a software technician/tester/general dogsbody (always trying to catch up with all the things!), I was fed test scripts to execute, and my only objective was to test all the things on the script. Job done. However, when issues were raised in areas that had not been tested because they were missing from the test scripts, I realised the scripts themselves needed to be tested: reviewed by the relevant people to ensure they covered as much as possible. Had I been a brighter spark I might have realised a few other things too, but we’ll get to that. Instead of voicing this valuable, though far from inspired, idea, I moved on to a new role as a test analyst at another company so I could voice it there and be hailed as a genius, thinking I knew all the things.

Test all the systems

Unfortunately, I was the only test analyst, and my test script review brainwave was less impressive, and less pertinent, owing to the new set of challenges I faced. I certainly didn’t know all the things. Paired with the only developer as a delivery team, we were tasked with a complete overhaul of the technology in use whilst maintaining and enhancing the software itself: an epic request. Testing the software functionality alone was not enough; many aspects could not be tested that way. We needed to test the database directly, a new installer program, release management, and a variety of hardware such as EPOS systems and hand-held devices, all across different operating systems and technical infrastructures. The learning curve was steep. However, the vast majority of these new test items still related directly to the software. It was only in my next role, as part of a test team within a more mature organisation, that I started to see the big picture.

Test all the processes…and the testers

To begin with, the test team was small and we had some PR issues: parts of the organisation thought we were the bottleneck choking the workflow, that we asked too many questions and caused too many problems. Were we too busy reviewing each other’s test scripts to actually execute any tests? No. We had to test the wider process. This led us to realise that the functional specification documents provided to us were being misinterpreted at various points in the software development process, or contained gaps, errors, or sometimes made no sense at all! We set up reviews of these documents, including a representative from each group, to ensure we tested the documents and achieved a shared understanding. We still had some issues, so we tested the process again and identified a need for kick-off meetings to ensure everyone working directly with each specification had the right understanding.

It was often down to me to represent the QA team, but did I have all the necessary skills? I wondered if anyone else in the team would be better suited. So we built a skills matrix of the team to identify what we had and what we needed: we tested the test team! This resulted in us stealing some of the support team to improve our domain knowledge, if not our reputation within the business!

Test all the things!

Then I arrived at Ghyston as the only member of the QA team but, fortunately, not the only ambassador for quality in the office. The culture here is one that strives for quality code and a quality user experience at all times, and one that tests whether we really are producing the quality required. As you may be aware, we recently rebranded, and as part of that process all elements of our approach to branding were tested, along with our approach as a business and how that affects, and should be reflected in, our brand. We test our relationships with our clients to find out whether we are achieving what we think we are, understand what we need to achieve and identify what we can do to get there. We test our sales process and our marketing effort, we test the interfaces we build with end users in their real-world context, we test our internal infrastructures and processes, we even test whether we’re buying the right biscuits! Testing can be applied to pretty much everything. It can manifest in many forms, from high to low levels, from processes to deliverable items and through to people. Sometimes we even talk to inanimate objects to test ourselves (we have rubber ducks to bounce ideas off). It sounds mad, but if you haven’t tried it you should test it out, otherwise you can’t say you’ve tested all the things!
