Hello Lizette,
Without more details about your environment (SSM controlled or not, what tools are currently in place, number of regions on each system, etc.) I can't give suggestions specific to your setup, so I'll provide a high-level overview of how we would fit this DDF cycle request into our current automation structure 'if' we needed to perform this request, assuming you have 'many' DB2 regions across all your systems. Perhaps you can take some of this as you create/modify your application. The trigger of the primary pgm (DB2DDF) would depend on your configuration (from some SSM UP_DB2DDF or ACTMODE=DB2DDF action, or from a focal pgm on each system). Basically, the root of the logic is to initiate the DDF 'process' for all regions and then wake up and display 'where the process is for all regions' as set in monitor rules. This approach would of course expedite the process if you do have 'many' regions.
Here is an outline of how we would incorporate this DB2 DDF cycle request into our existing automation environment.
We would add DB2DDF options to our focal point-of-control OPS/MVS ISPF utility (an OPS/REXX pgm adapted from the OPSINQRY sample OPS/REXX available in yourhlq.CCLXSAMP), which is triggered from the TSO/E foreground. We have made various additions to the logic of this template utility, including passing a Scope option to direct the request across a system, plex, group of systems, or all MSF systems. We would probably create DB2DDF options, having something like:
TSO OPSI DB2DDF BEGIN S=ALL
Trigger the DB2DDF BEGIN process on all systems. The primary logic here would be to initiate the 'actual' process/application against all DB2s on all systems. This could be done by setting SSM action UP_CYCLEDDF for all DB2s and then driving the actual DB2DDF OPS/REXX from the action TBL, or by triggering the DB2DDF program via OPSRMT to each system, where it would start the process for all DB2s on that system: "OI P(DB2DDF) ARG("region" BEGIN)"…
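As a rough sketch of the OPSRMT fan-out variant (the system list and the DB2DDF pgm name are assumptions for illustration, and the exact OPSRMT/OI syntax should be verified against your release):

```rexx
/* Focal-point fan-out: trigger the DB2DDF BEGIN process on every  */
/* system. SYSLIST and the pgm name DB2DDF are illustrative only.  */
syslist = 'SYSA SYSB SYSC'         /* or derive from your MSF defs */
do i = 1 to words(syslist)
  sys = word(syslist,i)
  /* ship the OI trigger to each system; DB2DDF there begins the   */
  /* process for all DB2s on that system                           */
  address TSO "OPSRMT" sys "OI P(DB2DDF) ARG(BEGIN)"
end
```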
TSO OPSI DB2DDF STATUS S=ALL
Show status info for the DB2 DDF process for all DB2 regions on all systems. This info would come from unique DB2 DDF status variables set by predefined rules or by dynamic rules created within the 'BEGIN' routine at the start of the DB2DDF pgm, as mentioned below.
The primary DB2DDF OPS/REXX pgm would perform the overall process of all the required DDF actions - SET LOG SUSPEND, SET LOG RESUME, STOP DDF, and START DDF. The logic would be performed in unique subroutines, selected by an argument passed to the pgm, such as BEGIN, RESUME, STOP, START, ALLDONE, TIMEOUT.
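A skeleton of that dispatch, as one hedged sketch (the argument format, subroutine names, and command-issuing mechanism are assumptions; the DB2 command is shown with '-' as the subsystem command prefix, which varies by region):

```rexx
/* DB2DDF: primary pgm; each invocation performs one phase.        */
parse upper arg region phase .    /* e.g. "DB2A RESUME" (assumed)  */
select
  when phase = 'BEGIN'   then call do_begin
  when phase = 'RESUME'  then call do_resume
  when phase = 'STOP'    then call do_stop
  when phase = 'START'   then call do_start
  when phase = 'ALLDONE' then call do_alldone
  when phase = 'TIMEOUT' then call do_timeout
  otherwise say 'DB2DDF: unknown phase:' phase
end
exit

do_resume:
  /* next action once LOG SUSPEND has been confirmed by its rule   */
  address OPER "-"region" SET LOG RESUME"
return
/* the other do_* routines follow the same pattern                 */
```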
The BEGIN phase/routine would enable a max-time-allotted TOD monitor rule with a spec of *+x mins, where x is some number of minutes this process 'should' take to complete. The logic of this dynamic TOD rule would invoke this primary DB2DDF with a TIMEOUT option ("OI P(DB2DDF) ARG("region" TIMEOUT)"). Logic within this BEGIN routine would also enable dynamic MSG rules to monitor and set OPS variables with the 'status' of the process. I haven't looked at all the cmds, but it looks like DSNL004I and DSNL006I indicate that DDF is started/stopped. Thus, dynamic MSG rules on these would be created with logic to set some unique GLVTEMPx variable such as GLVTEMP1.DB2DDF.region.STATUS to a value representing the msg event (DDF STARTED/DDF STOPPED). Logic in each of these monitor rules would also trigger the primary DB2DDF pgm with the 'next' action as an argument. So in the LOG SUSPEND monitor rule, set the GLVTEMP1 status variable to 'SUSPENDED' and then "OI P(DB2DDF) ARG("region" RESUME)" to cause the next action cmd of SET LOG RESUME to be issued. Sure, you can do the next action right in the dynamic MSG monitor rule if desired, but we like to keep the 'actions/processes' self-contained in one focal OPS/REXX pgm. Up to you. The BEGIN routine would then issue the first SET LOG SUSPEND command to begin the process.
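For illustration, the dynamic MSG rule watching for DDF stop completion (DSNL006I) might carry logic along these lines - how the rule source is built and enabled, and how the region name is derived (shown here from the job name), are site-specific assumptions:

```rexx
)MSG DSNL006I
)PROC
/* DDF stop complete: record status, then drive the next phase.    */
region = msg.jobname               /* assumed way to map msg->DB2  */
GLVTEMP1.DB2DDF.region.STATUS = 'DDF STOPPED'
address TSO "OI P(DB2DDF) ARG("region" START)"
return
```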
The 'All Done' logic (firing on the DDF STARTED msg) would of course disable all of the monitor rules, including the monitor TOD rule. Note - you were looking at sysplex variables. Sure, this is another option you can use if desired and if it fits better in 'your' final design. Just a note: with plex vars you would have to trigger an OPS/REXX pgm from the monitor MSG rule (or update the var in DB2DDF when it is retriggered), because OPSVASRV() cannot be done in rules.
Issuing TSO OPSI DB2DDF STATUS S=ALL as stated above would of course have the code loop through all the systems and present a formatted view (System, Region, DDF status) as obtained from each GLVTEMP1.DB2DDF.region.STATUS variable on all systems.
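The display side could be as simple as a loop over the collected values - this sketch assumes the per-system statuses have already been shipped back into local GLVTEMP1 variables with the system name as an extra node (the collection mechanism over MSF is not shown, and syslist/reglist are illustrative):

```rexx
/* Format the focal status view: System, Region, DDF status.       */
say left('System',10) left('Region',10) 'DDF Status'
do s = 1 to words(syslist)               /* syslist is assumed     */
  sys = word(syslist,s)
  do r = 1 to words(reglist.sys)         /* regions per system     */
    region = word(reglist.sys,r)
    say left(sys,10) left(region,10) GLVTEMP1.DB2DDF.sys.region.STATUS
  end
end
```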
Good Luck!
Dave
Original Message:
Sent: 09-28-2020 09:25 PM
From: Lizette Koehler
Subject: Best Practice for managing CICS/DB2/MQ on the mainframe with OPS/MVS
I have 3 plexes (sandbox, dev, prd). There is no access to any plex from any other plex, save via MSF functions.
------------------------------
Lizette Koehler
Original Message:
Sent: 09-28-2020 09:19 PM
From: Dave Gorisek
Subject: Best Practice for managing CICS/DB2/MQ on the mainframe with OPS/MVS
Sysplex variables cannot be controlled currently from 4.8. OPSVASRV() is used to manipulate sysplex variables (within each unique sysplex). If you are expanding the focal view/control application across multiple sysplexes, storing status information about the process (DDF stopping? stopped? starting?, etc.) for each DB2 can also be done effectively in GLVTEMP vars. Are all your systems in a single sysplex or across multiple sysplexes? Is DB2MSTR in SSM, or all DB2 components?
Original Message:
Sent: 09-28-2020 05:20 PM
From: Lizette Koehler
Subject: Best Practice for managing CICS/DB2/MQ on the mainframe with OPS/MVS
Basically
DB2 SET LOG SUSPEND/SET LOG RESUME - just stops DB2 processing and then starts it back up. It does not shut down DB2
DB2 STOP DDF/START DDF - just halts distributed thread connections and then allows them to proceed. It does not shut down the DIST task in DB2
MQ Channel STOP/START - MQ is left up, but specific channels are halted and then started
I was looking for more info on the SYSPLEX variable to see how that could help. But in 4.8 I do not see any SYSPLEX VARIABLE in there.
------------------------------
Lizette Koehler
Original Message:
Sent: 09-28-2020 05:06 PM
From: Dave Gorisek
Subject: Best Practice for managing CICS/DB2/MQ on the mainframe with OPS/MVS
Hello Lizette,
Basically you have an automation 'process' performing the local 'work' (DDF recycle of a DB2 or DB2s on the local system, stopping of all CICS regions, etc.) and then an automated focal-point application that initiates and displays status info of your initiated action (DDF recycled good/bad/in the middle, CICS down, stopping?) from some focal system. First you need to start with the local 'work' logic pgms that cycle DDF, stop CICS, cycle MQ channels, etc. Are your onlines in SSM control? If yes, for the DB2 regions do you have the DIST address space in the SSM tables or just the DB2 Master asid? Also, does cycling of DDF in DB2 result in the DIST asid of DB2 ending? I'm not a DB2 expert but feel that is the case. I can blab some thoughts about this whether you use SSM or not; I just want to make sure what you have configured first, so I can recommend the best approach around your current setup in order to create an effective process template for doing some local actions. For the focal point-of-control aspect, a TSO/E utility pgm that triggers actions to other systems and displays the info in a scrollable ISPF display is simple and effective. The OPSINQRY sample OPS/REXX pgm is a good template you can take and add some code to make it initiate and display info across one or more systems. Let me know how you have your current onlines set up. SSM or not?
Dave
Original Message:
Sent: 09-15-2020 02:05 PM
From: M Tyrone Lastoria
Subject: Best Practice for managing CICS/DB2/MQ on the mainframe with OPS/MVS
Hi Lizette,
This sounds like a good use for the Sysplex Variables in OPS/MVS, IF all the LPARs are in the same Sysplex. And even if there are multiple Sysplexes involved, you would be able to decrease the 'shipping' of saved information using MSF from Sysplex to Sysplex rather than to all LPARs. So the use of Sysplex Variables would take the place of your 'Master LPAR', and the same saved information would be available to all LPARs in the Sysplex.
Tyrone
Original Message:
Sent: 09-14-2020 10:54 PM
From: Lizette Koehler
Subject: Best Practice for managing CICS/DB2/MQ on the mainframe with OPS/MVS
I have a very old application that does some very minimal functions. But due to routing to lots of LPARs it has its challenges.
In OPS/MVS R13.5 and above, what is the better way to do the following? Currently we just MSF Ship commands all over the place from one Master LPAR. I would like to see if there is a better way. Note: we collect all the DB2/CICS/MQ names from all LPARs and bring it back to the Master for OPS/MVS. Then OPS/MVS from the Master LPAR will issue commands back to the other LPARs.
I am hoping to find out there is a better way to manage this process
The Commands needed for all DB2 are
SET LOG SUSPEND
SET LOG RESUME
STOP DDF
START DDF
DISPLAY LOG
DISPLAY DDF DETAIL
And then I need OPS/MVS to tell us the actions were completed and when.
Currently the commands are issued but OPS/MVS thinks that the command has not completed, so we have to manually validate that DB2/CICS/MQ is in the state we need.
For CICS - I need to have CEMT P SHUT I on all CICS regions, state when they are down, then at the prescribed time bring them back up.
For MQ we need to stop/start specific channels. And then state when they are back up.
Any guidance appreciated.
Lizette
------------------------------
Lizette Koehler
------------------------------