How testing has changed

Date posted
16 February 2016
Reading time
11 minutes
Irek Pastusiak


With agile methodologies reaching maturity and becoming the standard delivery approach, testing as a discipline has had to evolve. Testing is no longer a separate concern, an action performed at the end of a gated phase of software development. With continuous integration, testing is performed at all times, at all levels. Equally, the role of the tester has had to adapt to the new normal.

In a 'traditional' waterfall context, developers and testers sit in separate teams, with communication limited to interaction via that lesser-known social media site, HPQC (or another defect tracking tool). With clearly separated roles - developers develop the thing, testers test the thing - testers are rarely allowed to interact directly with application internals. As a result, most (if not all) testing effort targets the UI layer. Testing is understood to be a manual affair, with acceptance tests based on extensive business rules defined at the outset of the project, and the testers' effort during the 'build' phase goes into writing manual tests to cover most eventualities. If there happens to be an allowance for test automation, it is generally UI test automation - either HP QTP or Selenium. Traditional UI test automation is expensive to create, slow to execute, provides feedback late, and its maintenance cost is exorbitant. The typical outcome of UI-heavy test automation is a constantly red set of execution results that requires manual effort to analyse. Overall, testing is very rarely perceived as a value-add: confidence in automation is limited, and the time and people costs in the later phases of a project are high.

Thanks to the agile concept of cross-functional teams, testing is no longer perceived as a siloed activity associated only with UI interactions.
By implementing a test pyramid approach, test maintenance costs are reduced, test execution time is shortened and trust in quality is restored. Let's look at how.

Perception vs Reality

User story design: user stories should contain acceptance criteria, accompanied by more detailed test cases if required. In a perfect world, these are in place before the story is discussed by the team during refinement sessions; at the latest, acceptance criteria (and test cases) should be included in the user story before sprint planning starts. This implies testing is very much a forward-thinking role, focused on understanding how new functionality fits into the existing system and how that functionality should be tested. As a result, doubtful requirements can be questioned early, the overall number of low-value tests should decrease, and edge cases, gaps and inconsistencies in the requirements should be identified, both within and between user stories. For obvious reasons, user stories tend to be ready closer to the end of the sprint than to its beginning.

Test automation: developers write code. Their job is to solve user and business problems with software, and coding is very much a part of this. On agile projects, writing code also means writing tests for that code; developers in a scrum team participate in test automation by default. A tester can add value here not only by pairing with developers to write test code, but also by understanding what is being tested, and by making suggestions and improvements through participation and review. This improves throughput at the end of the sprint and removes a delivery bottleneck.

Exploratory testing: unit or API testing is simple in nature - there is a predefined set of input data and expected results, and an expected result can always be defined for a given input.
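To illustrate such deterministic checks with a hypothetical example (the function and its names are mine, not from the article): the inputs and expected results can be laid out as a table and driven through a single test.

```python
# Hypothetical example: a discount calculator as the unit under test,
# with its checks laid out as a table of inputs and expected results -
# the kind of deterministic case that is cheap to automate.
def discounted_price(price, code):
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - rates.get(code, 0.0)), 2)

# Each row: predefined input data and the expected result.
CASES = [
    (100.0, "SAVE10", 90.0),
    (100.0, "SAVE25", 75.0),
    (100.0, "BOGUS", 100.0),  # unknown code: no discount applied
]

def test_discounted_price():
    for price, code, expected in CASES:
        assert discounted_price(price, code) == expected

test_discounted_price()
```

With a framework such as pytest, the same table would typically be fed to `@pytest.mark.parametrize`, so each row reports as its own test case.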
UI testing presents different challenges: because the ways users can engage with a UI are vast, it is virtually impossible to write automated tests covering every use case. It's crucial to realise that not every test can be automated, especially at the UI level. This is where manual exploratory testing adds value. Feedback from projects where the focus shifted from test automation alone to test automation aided by exploratory testing shows that exploratory testing quickly detects a significant number of defects that existing test automation had not caught.

Non-functional testing: agile is very much about delivering value to users and customers regularly, so product increments may be delivered every few weeks, typically once a fortnight. Every such increment should be suitable for production deployment. This implies non-functional testing is no longer a separate testing phase; it is a core part of the delivery process and must fit into the sprint. There is no reason why non-functional requirements would have to be verified through a purely manual testing process. These tests should be executed early, provide feedback quickly and be automated as much as possible - in other words, they should be part of your continuous integration server configuration.

Continuous integration as a process requires frequent commits to trunk and is hard to imagine without feature toggles. Using feature toggles has implicit requirements on the number of complete environments available to the team. If, for whatever reason, provisioning or deploying to that many environments is a challenge, the benefits of CI may be questionable. Even so, I can't think of any reason not to use a CI server to run your tests as part of a deployment pipeline. Automated, robust test builds (unit, API, end-to-end) should be triggered by commits. In an ideal scenario, the same would apply to non-functional tests.
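To make the feature-toggle idea above concrete, here is a minimal sketch (all names are hypothetical, not from the article): incomplete code can be merged to trunk behind a flag that keeps it dark in production, while commit-triggered builds still exercise both paths.

```python
# Minimal feature-toggle sketch (all names hypothetical). A real system
# would read flags from configuration or a toggle service rather than a
# hard-coded dict.
TOGGLES = {
    "new_checkout": False,  # incomplete feature, kept dark in production
}

def is_enabled(name):
    return TOGGLES.get(name, False)

def checkout_total(cart_total):
    if is_enabled("new_checkout"):
        # New code path: merged to trunk and tested by CI, but not yet live.
        return round(cart_total * 0.95, 2)  # e.g. a new loyalty discount
    # Existing behaviour stays the default until the toggle is flipped.
    return cart_total
```

Tests can flip the flag to cover both branches, which is what lets partially finished work live on trunk without blocking frequent commits.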
Tests which by definition take a long time to execute, and would therefore 'lock' the pipeline for too long, may be run periodically instead, either nightly or weekly. This is a typical scenario for soak tests or for resilience testing following a chaos-monkey approach. Even if these tests are not executed straight after new code is delivered, running them frequently and early in the lifecycle helps to deliver a working solution earlier.

Testing is no longer the responsibility of one; it is the responsibility of all. Testers are champions of quality, and should share their skills across their team to make it truly cross-functional.

About the author

Irek Pastusiak