Since the emergence of Continuous Delivery and “Agile” development practices, organizations have become far more reactive to changing user needs, both in how they collect and document requirements and in how they communicate them to testers.
This was necessary: consumers now demand greater speed and quality than ever before, and gravitate to whichever company can offer them the best user experience. The ability to deliver high-quality services in response to changing user needs is therefore a major differentiator, while brand loyalty has been replaced by experience loyalty.[1]
However, with development now driven by a constant stream of change requests, a challenge has emerged that is not found in traditional Waterfall environments: testers have to piece together a multitude of disparate user stories into a complete, coherent system.
When performed manually, this typically leads to the twin pain points of over-testing and under-testing: gaping holes in a system’s logic go unnoticed among the unconnected user stories and test scenarios, while certain functions are tested over and over because of unspotted overlap.
Negative testing is especially neglected, as the stories gathered through interactions with users tend to focus exclusively on desired functionality. It is then the responsibility of the BAs, developers and testers who design the system to consider what should happen when this expected behaviour does not occur.
This is extremely difficult when user stories are stored in isolation from one another and testers are left to connect the dots. They must consider, for example, what should happen when a trigger or constraint in a user story does not occur – does this lead the system down another “happy path”, made up of a series of overlapping user stories, or should it never be possible for the system to behave in this way? As a consequence, much of a system’s logic goes untested, especially the negative scenarios that most often cause a system to fail.
When adopting this approach to requirements gathering and test case design, testers are already modelling the system – albeit implicitly. We believe that formalized modelling can therefore solve the problems discussed: it fits within such “Agile” approaches, ensuring complete testing without disrupting the tester’s ability to react to change.
Formal modelling “connects the dots” between user stories in the same way that testers currently must do themselves, but far more systematically. Flowchart modelling, for example, connects user stories into a single diagram, where shared test steps become a single block in the flow, with multiple arrows (edges) going in and out of it. This forces the modeller to think in terms of the system’s logic, decisions and constraints, moving towards the completeness that disparate user stories lack, and to ask: “What happens when this trigger is not present?”
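To make the idea concrete, the sketch below shows how two user stories might be merged into a single flowchart, represented here as a simple directed graph. The step names, conditions and dict-based representation are illustrative assumptions of ours, not the format of any particular modelling tool.

```python
# Illustrative sketch: user stories merged into one flowchart (directed graph).
# A shared step ("Enter credentials") becomes a single node with several
# incoming and outgoing edges; the decision node carries an explicit negative
# branch that the original user stories never mentioned.
flowchart = {
    "Start":                [("Enter credentials", "always")],
    "Enter credentials":    [("Validate credentials", "submit")],
    "Validate credentials": [
        ("Show dashboard", "credentials valid"),        # happy path from story A
        ("Show error message", "credentials invalid"),  # negative branch the stories omitted
    ],
    "Show error message":   [
        ("Enter credentials", "retry"),                 # loops back to the shared step
        ("End", "user abandons"),                       # negative path also terminates
    ],
    "Show dashboard":       [("End", "always")],
    "End":                  [],
}

# Prompt the modeller wherever a decision has only one modelled outcome:
# "what happens when this trigger is not present?"
for step, edges in flowchart.items():
    if len(edges) == 1 and edges[0][1] != "always":
        print(f"Only one outcome modelled for '{step}' - is there a negative path?")
```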
The modeller can then return to the user, verifying what should happen when something is not specified in the user stories. This shortens the feedback loop, and increases the likelihood that software will deliver the desired user experience first time round. Flowcharts present the further advantage of being accessible to users and BAs, who are already familiar with Visio and BPM diagrams, for example. The user can therefore verify the use cases stored in a flowchart, and can even specify the desired functions in flowchart form themselves, bridging the gap between user, tester and developer.
From the complete model, test cases can be systematically derived in a way not possible with unconnected user stories. Each path through the flowchart is equivalent to a test case, meaning algorithms can be applied to identify every possible test. Because a certain piece of logic might feature in several user stories, optimization algorithms can be applied to create the smallest set of paths which cover 100% of the logic specified by the user stories. These include negative tests, so that testers can guarantee that the developed software delivers on the desired user experience.
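As one possible realization of this, the sketch below enumerates every path through the illustrative flowchart above and then applies a greedy set-cover pass to pick a small set of paths that exercise every edge. The function names are our own, and greedy set cover is only one of several optimization strategies a real model-based testing tool might use.

```python
def all_paths(graph, start="Start", end="End", path=None):
    """Depth-first enumeration of every simple path from start to end.
    Each path corresponds to one candidate test case."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for nxt, _condition in graph.get(start, []):
        if nxt not in path:            # skip cycles in this simple sketch
            yield from all_paths(graph, nxt, end, path)

def minimal_covering_paths(graph, paths):
    """Greedy set cover: keep choosing the path that exercises the most
    not-yet-covered edges until every edge (piece of logic) is covered.
    Tools may instead target node, branch or pairwise coverage."""
    uncovered = {(a, b) for a, outs in graph.items() for b, _ in outs}
    chosen = []
    while uncovered:
        best = max(paths, key=lambda p: len(uncovered & set(zip(p, p[1:]))))
        gained = uncovered & set(zip(best, best[1:]))
        if not gained:                 # remaining edges lie only on cyclic paths
            break
        chosen.append(best)
        uncovered -= gained
    return chosen

paths = list(all_paths(flowchart))     # every path is a test case, negative branches included
for test in minimal_covering_paths(flowchart, paths):
    print(" -> ".join(test))
```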
[1] https://blogs.ca.com/2015/07/06/why-it-operations-must-now-combine-agility-with-stability/