Testing is an art. But did you know that it can be compared to… mixed martial arts, where rounds are like sprints? In both cases, you have to plan well and draw conclusions from the current situation. Imagine yourself as an MMA fighter and your project as your opponent. What would that fight look like?
It has begun. In two weeks’ time, another project will have to be tested. You want to be well prepared for the fight, so you start your research. You look for other, similar projects to get familiar with the strategies used previously and the bugs found. You read the specification and ask people involved in previous projects for feedback. Based on that, you plan your test strategy. You write down exactly what you’d like to test: boundary values, checklists, BDD scenarios – all written and ready for use. Finally, your test plan and test cases are ready. You are ready to fight.
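The boundary values mentioned above are a classic test-design technique: probe just below, on, and just above each limit. A minimal sketch, assuming a hypothetical `validate_age()` as the system under test:

```python
# Boundary-value analysis sketch. validate_age() is a hypothetical
# system under test that accepts ages 18-99 inclusive.
def validate_age(age: int) -> bool:
    """Hypothetical rule: valid ages are 18-99 inclusive."""
    return 18 <= age <= 99

# Probe just below, on, and just above each boundary.
boundary_cases = [
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (19, True),    # just above the lower boundary
    (98, True),    # just below the upper boundary
    (99, True),    # upper boundary
    (100, False),  # just above the upper boundary
]

for age, expected in boundary_cases:
    assert validate_age(age) == expected, f"boundary case failed for age={age}"
```

Six small, deliberate punches per boundary instead of flailing at random values – that is the whole point of the technique.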
The fight has started. You are confident and ready for anything. It’s not the first project of this kind you’ve worked on. You run your prepared tests and begin the attack. A few minutes later you find the first bug! So far, so good – you can test on. Test Case 1, Test Case 2, Test Case 10… Suddenly it turns out that the project is more complicated than you first thought. You have used all your testware, but only 10% of your plan is done. Time is running out and you are exhausted. It’s break time.
You sit down and have a moment to think about what happened. You are doing a retrospective. You know that specification-based testing will be ineffective here. You have to change your strategy ASAP, and you decide to try experience-based testing – namely, exploratory testing. Unfortunately, in your panic, you don’t even write down the areas you’d like to check or how. The next round is starting.
Now your testing is chaotic – a little bit here, a little bit there, without rhythm or reason. At first it seemed effective and a few bugs were found, but moments later you are not able to find anything new. You hear the gong. The sprint is over. It’s time to rest.
During the break, your technical leader tells you to get a grip. The two of you sit down together and you explain the course of events. Two rounds are behind you, and there’s a lot to talk about. After a short conversation, you decide to test based on context. Critical areas, well described in the specification, need to be tested according to the test plan and test cases. Exploratory testing will be used in the other areas, but you have to find fitting methodologies and describe them with regard to the system under test. The previously checked areas will be covered by regression – the same sequence of tests/punches will be repeated to check whether other bugs appear. Finally, you will focus on the areas where you found lots of bugs with relatively little effort – remember bug clustering!
You hear the gong. It’s time for the last sprint. Armed with knowledge and experience, you approach the tests calmly and steadily. You know that time is running out, but you have to keep your cool. You strike further blows carefully and steadily, watching what happens to your opponent. You increase test coverage where it is needed; other areas are just checked with ad hoc tests. You have more experience now, so you aim better. From time to time you come back to regression tests, attacking already-verified areas. You know that an attack in one place can cause an error in another. The round is over. Now the final score will be announced by the judges – the customers.
The fight is over
You are tired but happy, and you go back to your corner. You know that you did what you could. You eagerly await the verdict, but UAT continues. It doesn’t matter that you reported 100 bugs; if the judges are unhappy with the final result, it will all have been for nothing. Even if you found 200 bugs, if all of them are low-priority, low-severity issues and you missed one critical defect, the project is in trouble. Finally, the verdict is announced. The judges are happy and thank you for a job well done. It’s time to rest. Soon, the next fight will come…
This story shows how complicated and time-consuming a good testing process is. It all starts with planning: writing down the test strategy and test plans. However, when the time comes to use them in practice, it turns out that things are not always so pretty. Projects can be (and not so rarely are) far more complex and complicated than they appear at the beginning. You often have to take a step back to be able to move forward later.
The mixed martial arts analogy is no coincidence. I have been training MMA much longer than I have been testing, and over all these years it has repeatedly turned out that the two have a lot in common. I hope my experience will help you become a better fighter/tester.
When you choose exploratory testing as your approach, remember that exploration is not chaotic testing of random things. You always have to think about the areas you want to test and how you want to check them.
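One common way to keep exploration from turning chaotic is to write short session charters before you start: what to explore, how, and for how long. A minimal sketch with hypothetical, made-up areas and timeboxes:

```python
# Hypothetical session-based charters: even exploratory testing starts
# with a written note of WHAT to explore, HOW, and within what timebox.
charters = [
    {"area": "checkout",
     "how": "invalid coupon codes, expired cards, double submits",
     "timebox_minutes": 60},
    {"area": "search",
     "how": "unicode input, very long queries, empty query",
     "timebox_minutes": 30},
]

# During the fight, each session is run against one charter, and findings
# are logged next to it so the retrospective has something to work with.
for c in charters:
    print(f"Session: explore {c['area']} via {c['how']} ({c['timebox_minutes']} min)")
```

The exact fields don’t matter; what matters is that every session has a stated mission and a time limit, so you can tell afterwards which areas were covered and which punches landed.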
Don’t disregard retrospectives. Not all meetings with managers are a waste of time 😉 Feedback is very important: knowing what went well or badly, and why, is key to improvement. Based on that, you can choose better strategies for the next sprints/projects.
There is no such thing as the “best approach” to testing. Sometimes BDD works, sometimes TDD, and in yet another case you may need waterfall instead of Agile. The CDT school (context-driven testing) can often be a very good idea – but bear in mind that this approach is for experienced testers only. One thing that will work each and every time is TDT – thinking-driven testing. In the end, we make all the decisions. Good software testing is a challenging intellectual process.
Pay attention to regression testing. Keep in mind that errors may not occur immediately, but over longer periods of time. A vulnerability found in one place can cause a rash of errors in another.
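In practice, the cheapest regression “punch” is to keep the test that reproduced a fixed bug and rerun it every round. A minimal sketch, assuming a hypothetical `apply_discount()` whose rounding bug was supposedly fixed earlier:

```python
# Regression-test sketch. apply_discount() is a hypothetical system
# under test; the bug it once had is invented for illustration.
from decimal import Decimal

def apply_discount(price: Decimal, percent: int) -> Decimal:
    """Hypothetical: price after a percentage discount, rounded to cents."""
    return (price * (100 - percent) / 100).quantize(Decimal("0.01"))

def test_discount_rounding_regression():
    # Reproduces a (hypothetical) previously fixed rounding bug.
    # Rerunning this every round catches the bug if it ever comes back.
    assert apply_discount(Decimal("0.99"), 10) == Decimal("0.89")
    assert apply_discount(Decimal("100.00"), 25) == Decimal("75.00")

test_discount_rounding_regression()
```

An attack in one place (say, a refactor of the pricing module) can reopen a wound in another; a growing suite of such tests is what lets you notice it the same round it happens.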
Last, but not least: the most important thing. Remember that all the work may be wasted if the customer does not approve the project. The customer is the measure of whether the project was done well or badly.