Rally Software

  • 1.  How do your teams handle UAT in Rally?

    Posted Aug 29, 2022 10:01 AM
    Many of our teams still run a separate UAT Cycle in addition to testing stories within the iteration.  The teams have been adding test cases to each user story and testing them (manually or with automation) within the iteration, but they also keep a separate UAT Cycle that runs its own test scripts before a release to production.

    In these cases, the QA team's practice has been to open a user story separate from the other stories and attach all of their UAT Scripts to it.  The example I saw last week had over 1,000 test cases on that UAT Story alone.  They then add it to the milestone that represents that production release, which slows down (if not completely times out) any milestone dashboards they have set up.
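
    (If you want to quantify that load for your own teams, a rough pyral sketch like the one below will count the test cases hanging off such a UAT story.  The server, API key, and FormattedID are placeholders, not real values.)

        from pyral import Rally

        # Placeholder credentials and identifiers; substitute your own.
        rally = Rally('rally1.rallydev.com', apikey='_YourAPIKey',
                      workspace='Your Workspace', project='Your Project')

        # Count the Test Cases attached to the UAT story (the FormattedID is made up).
        response = rally.get('TestCase', fetch='FormattedID,Name,LastVerdict',
                             query='WorkProduct.FormattedID = "US12345"',
                             projectScopeDown=True)
        print(f'Test cases on the UAT story: {response.resultCount}')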

    I wanted to reach out to this community to see what your teams are doing if they still require a separate UAT Cycle.  Are they creating a user story for UAT?  Is each test case reflected again on that story?  Something else?  I want to get to a better practice for these teams while we work on fully automating testing for their flow and moving all of the testing (functional, user, etc.) into the iteration.

    Thanks for your input!


  • 2.  RE: How do your teams handle UAT in Rally?

    Posted Aug 30, 2022 08:01 AM
    We have a regression test cycle prior to each release. We keep our test cases in Rally, and for each regression cycle we create a new test set into which we copy the test cases. QA is done in a beta environment, which is separate from our development environment. If defects are found, they are reported against the specific test case and retested after development fixes the defect. QA signs off on the release when all the test cases pass.
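
    If anyone wants to script that flow, here is a rough sketch using pyral (the Python toolkit for the Rally WSAPI).  The credentials, tag, names, and build string are placeholders, and addCollectionItems needs a reasonably recent pyral version, so treat this as a starting point rather than exactly what we run:

        from datetime import datetime, timezone
        from pyral import Rally

        # Placeholder credentials; substitute your own subscription details.
        rally = Rally('rally1.rallydev.com', apikey='_YourAPIKey',
                      workspace='Your Workspace', project='Your Project')

        # Pull the regression Test Cases (the tag/query is illustrative).
        cases = list(rally.get('TestCase', fetch='FormattedID,Name',
                               query='Tags.Name = "Regression"'))

        # One new Test Set per regression cycle, linking those Test Cases.
        test_set = rally.create('TestSet', {'Name': 'Regression - 2022 R4 (beta)'})
        rally.addCollectionItems(test_set, cases)

        # Record a result; raise a Defect against the specific Test Case on failure.
        failed = cases[0]   # purely illustrative
        rally.create('TestCaseResult', {
            'TestCase': failed.ref,
            'TestSet':  test_set.ref,
            'Build':    '2022.4.0-beta',
            'Date':     datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S.000Z'),
            'Verdict':  'Fail',
        })
        rally.create('Defect', {
            'Name':     f'Regression failure in {failed.FormattedID}',
            'TestCase': failed.ref,
        })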

    Hope that helps!

    ------------------------------
    [JobTitle]
    [CompanyName]
    ------------------------------



  • 3.  RE: How do your teams handle UAT in Rally?

    Posted Sep 02, 2022 06:08 AM

    We have 4 Releases per year, and we run a round of Regression testing in a dedicated (in-house, alpha) environment before the Release is greenlit to proceed to deployment.

    We have amassed a large (frankly overlarge!) 'Regression Library' of Test Cases, which are copies dissociated from the original Work Items so that the 100% passing status at the time of Acceptance is preserved, regardless of any subsequent Test Case failures that might occur for whatever reason (this is not Best Practice!).

    In recent years we have tried various approaches to streamlining which of the Test Cases in that Library actually need to run every time, and which should run for a given Release based on the areas of the codebase that have changed that time around (sketched below).
    I would personally like to see a proportion of the Test Cases excluded by that streamlining run in rotation as well, but we don't have the capacity to prioritise that at present.
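
    Something like the following is the shape of that selection in pyral terms.  The tag names and credentials are invented, and this is an illustration of the idea rather than a description of our actual tooling:

        from pyral import Rally

        # Placeholder credentials; the 'Regression-Core' and 'Area-*' tags are invented.
        rally = Rally('rally1.rallydev.com', apikey='_YourAPIKey',
                      workspace='Your Workspace', project='Your Project')

        # Areas of the codebase touched in this Release (however you determine that).
        changed_areas = ['Billing', 'Reporting']

        # Always-run core Test Cases, plus anything tagged with a changed area.
        core = rally.get('TestCase', fetch='FormattedID,Name',
                         query='Tags.Name = "Regression-Core"')
        targeted = [rally.get('TestCase', fetch='FormattedID,Name',
                              query=f'Tags.Name = "Area-{area}"')
                    for area in changed_areas]

        # Deduplicate by FormattedID to get the run list for this Release.
        to_run = {tc.FormattedID: tc for resp in [core, *targeted] for tc in resp}
        print(f'{len(to_run)} Test Cases selected for this round of Regression')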

    Although we create automated Test Cases as much as possible during the development of new Features, there is also an ongoing effort to increase the proportion of the Regression Library's core Test Cases that are automated. 

    For each round of Regression, we variously create/reuse/copy-and-rework TEST SETS (I think I've just about emphasised that strongly enough) linking those Test Cases that we are going to run.
    We spent the first several years after our adoption of Rally with Test Sets' functionality not quite matching the requirements of our legacy processes, but things eventually came into line, and Test Sets then proved a massive benefit.
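
    On the copy-and-rework side, the equivalent pyral sketch (again with invented names, and assuming your pyral version has addCollectionItems) is to start from the previous round's Test Set and link its Test Cases into a fresh one:

        from pyral import Rally

        # Placeholder credentials and Test Set names; adjust to your workspace.
        rally = Rally('rally1.rallydev.com', apikey='_YourAPIKey',
                      workspace='Your Workspace', project='Your Project')

        # Previous round's Test Set, fetched as a single instance.
        previous = rally.get('TestSet', fetch='Name,TestCases',
                             query='Name = "Regression - 2022 R3"', instance=True)

        # New Test Set for this round, reusing the previous round's Test Case links.
        new_set = rally.create('TestSet', {'Name': 'Regression - 2022 R4'})
        rally.addCollectionItems(new_set, list(previous.TestCases))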

    Defects are raised against the relevant Test Case as appropriate, but we largely find that anything with Test Case coverage is unlikely to suffer Regression Defects.  The majority of the Defects we find come from general off-script work: navigating the system, creating or manipulating content to meet the Pre-Conditions of the Test Cases, and so on, outside the area of functionality the Test Cases are intended to test directly.  Those are raised as standalone Defects.

    QA signs off on the Release when a stakeholder review has confirmed that only trivial or extremely 'niche'-use-case Defects remain.  The POs prioritise these in a Defect Backlog and farm them out to the dev teams' bug-fixing timeboxes (5-10% of a Sprint), alongside their main work of Feature development, in subsequent Releases.