As a Next-Gen Mainframer, I’ve always recognized the importance of Quality Assurance (QA) as an integral part of any development process. The phrase “Customers don’t want to be treated as QA” could not be more apt; however, developers are human, after all, and they can sometimes overlook steps within this crucial task. In any product, especially a complex one, developers cannot plan for every possible issue that may arise after its release.
The obvious answer to this problem is thorough testing, but that raises another question: how comprehensive should this testing be? If time spent in the testing phase is the investment, and a clean, working product or feature is the return, where is the line that yields the best return on investment without extending deadlines?
During my time this summer working with the z/Raiders development team at Broadcom, it became apparent to me that this question is much more complicated than it appears, yet many experienced developers learn to draw this line confidently by following a clearly defined, strong, and efficient development process. I began learning how to minimize investment and maximize return using two main strategies: Agile development practices, which allow for close customer communication and better adaptability, and comprehensive automated tests, which increase testing speed, improve accuracy, and broaden coverage.
Why Automate QA?
Before my time at Broadcom, a customer reported an issue while using Detector, one of Broadcom’s mainframe products, which provides comprehensive monitoring and analysis of Db2 databases. A customer might need to store information in a database for a variety of reasons, but the mechanisms with which they add, read, modify, delete, or generally manage their records can significantly impact the performance of their products. Detector allows the customer to view an extremely large number of metrics related to these operations, as well as to manage and consolidate these metrics through the use of specific time intervals, thresholds, profiles, datastores, and so on.
In this reported case, the customer navigated to a panel in Detector intending to view certain metrics. Unfortunately, Detector was not working as intended for this specific use case, so the customer was greeted with a panel of zeros in place of the metrics they had expected to see.
To prevent this issue from recurring in future Detector releases, I wrote an automated test that navigates to this panel and validates its values. Performed manually, the same check takes more than twice as long to execute. The gifs below show this example performed manually versus programmatically in real time:
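To give a rough idea of what the programmatic version can look like, here is a minimal sketch using the open-source py3270 library to drive a 3270 terminal session. The host name, navigation commands, and screen coordinates are hypothetical stand-ins, not the actual Detector panel layout or the test framework the team uses.

```python
from py3270 import Emulator

# Hypothetical connection details and panel coordinates -- stand-ins only,
# not the real Detector panel layout.
HOST = "mainframe.example.com"
METRIC_ROW, METRIC_COL, METRIC_LEN = 10, 40, 12


def test_panel_metric_is_nonzero():
    em = Emulator()  # drives a headless s3270 session (requires x3270 tools)
    try:
        em.connect(HOST)
        em.wait_for_field()

        # Navigate to the panel under test (hypothetical command sequence).
        em.send_string("DETECTOR")
        em.send_enter()
        em.wait_for_field()

        # Read the metric value displayed at a known screen position.
        value = em.string_get(METRIC_ROW, METRIC_COL, METRIC_LEN).strip()

        # The regression showed up as a panel full of zeros, so the check
        # asserts that the displayed metric is present and non-zero.
        assert value, "metric field is empty"
        assert float(value) != 0.0, f"expected a non-zero metric, got {value!r}"
    finally:
        em.terminate()
```

The real test is more involved, but the structure is the same: connect, navigate, read the screen, assert, and let the suite repeat it on every build far faster than a person stepping through the panels by hand.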