Each time we apply maintenance to z/OS or OPS/MVS, we perform a verification/validation process to ensure our products didn't break with the new maintenance. Currently I look at a few different things, but since we use SSM, which relies on OPS/MVS pretty heavily, I have mostly been treating "SSM still functions properly" as my main test of the maintenance. That said, I am not comfortable with that being my only metric. What I would like is a REXX exec or process I could run that would put OPS/MVS through its paces and exercise the majority of the product's functionality. I have a few tests in mind: using the OPSJES2 function, enabling and disabling a rule, etc. I was wondering if anyone else does something similar, or has ideas for tests I could include to make it more comprehensive. Ideally this would be automated and would write a report at the end that could be reviewed and kept as proof that the product didn't break.
I run a suite of REXX execs to test all of the OPS/MVS functions after we upgrade either z/OS or OPS/MVS, write the results to global variables, and then write them out to a DSN.
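A minimal sketch of that pattern. All names here are hypothetical, and the details are assumptions: OPSVALUE is used to record each result in an OPS/MVS global variable (the 'U' update subfunction is assumed), OPSJES2 is assumed to return a non-negative value when the JES2 command is accepted, and EXECIO writes the report to a preallocated REPORT DD.

```rexx
/* REXX - hypothetical maintenance-verification driver (sketch) */
results. = ''
results.0 = 0

/* Example check: issue a JES2 command through OPSJES2; a       */
/* non-negative return is assumed to mean success here.         */
call RunCheck 'OPSJES2', OPSJES2('$DSPL') >= 0

/* Write the report lines to a preallocated DD, e.g.            */
/* //REPORT DD DSN=...,DISP=SHR                                 */
"EXECIO" results.0 "DISKW REPORT (STEM results. FINIS"
exit 0

RunCheck:
  parse arg name, passed
  status = 'FAIL'
  if passed then status = 'PASS'
  /* Also keep the result in an OPS/MVS global variable so it   */
  /* survives the exec ('U' = update, assumed subfunction).     */
  x = OPSVALUE('GLOBAL.MAINTTEST.'name, 'U', status)
  n = results.0 + 1
  results.n = left(name, 16) status
  results.0 = n
return
```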
That's exactly what I am looking for. When you say you test "all of the OPSMVS functions" do you mean the OPS*() functions or ALL the functionality of OPS?
Yes, all OPS*() functions as well as other functionality.
Thanks for the clarification. I've started creating a REXX exec to do some testing. It occurred to me that all I need to do is go through the manual and code a scenario for each function, host environment, command processor, etc. It might take a while, but it will hit just about everything OPS can do. Were there any snafus or speed bumps you ran into when creating your execs?
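One host-environment scenario might look like the sketch below. ADDRESS OPER is one of the OPS/REXX host environments; the COMMAND('...') form, the specific console command, and the pass/fail test are all illustrative assumptions, not a confirmed recipe.

```rexx
/* Illustrative host-environment check: issue a console command */
/* through ADDRESS OPER and confirm a response came back on the */
/* external data queue.                                         */
address OPER "COMMAND('D T')"     /* display system time        */
if rc = 0 & queued() > 0 then
  say 'ADDRESS OPER check: PASS'
else
  say 'ADDRESS OPER check: FAIL, rc='rc
/* ...repeat the pattern for the other environments you use     */
```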
If you have an exec that will loop through and test all of the parameters (OPSVALUE for example), make sure you clear the external data queue before each call to the function.
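Draining the queue takes only plain REXX; a loop like this before each call discards whatever a previous OPS*() call left behind (the surrounding loop and the GLOBAL. variable name are hypothetical):

```rexx
/* Hypothetical parameter-sweep loop: drain the external data   */
/* queue before each OPSVALUE call so leftover lines from the   */
/* previous call aren't mistaken for this call's output.        */
do i = 1 to tests.0
  do while queued() > 0
    pull .                        /* discard one queued line    */
  end
  result = OPSVALUE('GLOBAL.MAINTTEST.PROBE', tests.i)
  say 'OPSVALUE' tests.i 'returned:' result
end
```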