We have a need to execute a local mode reporting program (written in house) and have it check, in real time, a flag in a control record that is updated by the operator using a CV task.
The process we had in place worked, but not reliably. The problem was that unless the page containing the control record was flushed from the local mode buffer pool, the program consistently got the record with the stop flag turned off, even though the operator had gone online and told the program to stop processing. Eventually the program either completed, or read the right mixture of records to cause the control record's page to be flushed, so that it was read again and the program saw that it was time to stop.
We (the DBAs) wrote a program that would run in CV mode to retrieve a program's control record and return the record to the calling program for decision making.
This program dynamically changed the SYSCTL file to another name, bound a rununit, and then reset the SYSCTL file to the default. The called routine managed its rununit itself; when the rununit was found to be gone, say after a timeout, it would rebind and then continue doing its retrievals on behalf of the calling program.
This was all fine in test but failed on its first execution in production due to a security violation on the SYSCTL dataset. It seems only the production batch id, the system programmers, and the DBAs have access to the SYSCTL dataset in production. Learned something that day!
Okay, so I came up with an alternative method that I don't like.
I coded up an IDMSOPTI module for each of our CVs, linked it with IDMS and called it another name (it was a polite one!).
I then cloned the protocol for BATCH and changed the CALL to IDMS to CALL the name I just created.
I changed my program to use this new protocol and everything worked as before, except that no SYSCTL file was used.
On the surface this is not a big issue, but here are the potential problems I have come up with:
1) If CA changes IDMS and issues an APAR, I have to remember to relink my alternate IDMS module in order to get the APAR into the special interface.
2) If we change SVCs or CV#s for a CV, I have to change my IDMSOPTI parameters, reassemble and link the special interface module.
So, my question boils down to this: does anyone do anything like this, multiple rununits, one or more local and one or more CV, all in the same execution step of a batch job? And if so, what did you do to get the CV mode rununit to go to CV and, more importantly, to the right CV?
Yes, I know, this is crazy, but it uses all documented features of IDMS and nothing based upon any outside knowledge, and it works!
The site I used to work at did something like this quite reliably.
The way they did it was to code a special IDMSOPTI module with a different sysctl-ddname, e.g. SYSCTLCV, and then link edit that only with the program you want to execute in CV mode. Make sure that this program is only called dynamically from the programs you want to run in local mode. Then in your JCL you code only a SYSCTLCV DD statement, pointing to the target CV's SYSCTL dataset. The run units created by your other programs will run locally (as there is no SYSCTL dataset specified), except that of the program linked with your special IDMSOPTI (which uses SYSCTLCV).
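As a sketch of the JCL side (the dataset and program names here are invented for illustration), the batch step would carry only the alternate DD:

```
//RPTSTEP  EXEC PGM=RPTDRVR
//*  No SYSCTL DD is coded, so run units from the local mode
//*  programs stay local. Only the program linked with the
//*  IDMSOPTI specifying sysctl-ddname=SYSCTLCV finds this DD
//*  and routes its run unit to the target CV.
//SYSCTLCV DD  DISP=SHR,DSN=PROD.CV01.SYSCTL
```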
Although we'd regenerate the IDMSOPTI module with each release, we never found it necessary to go back and recompile the programs that linked it in the previous version. It was just something we tested with each release.
I'm not sure if this is quite what you are after, but I hope it helps.
Thanks Stephen, that’s exactly what we want and (almost) exactly what we (almost) implemented.
Our routine did not use IDMSOPTI; it used IDMSIN01 to change the SYSCTL DD name when called to do its BIND RUNUNIT.
After the BIND RUNUNIT was complete, it called IDMSIN01 to reset the SYSCTL DD name back to the default name.
This functioned PERFECTLY! in the test LPARs. However, when it got to production, the jobs that called the service routine failed with an S913 security error, as the users executing the jobs were not authorized to read the production CV’s SYSCTL dataset.
So, to get around that, and to open a less flexible hole, I coded up an IDMSOPTI module, linked it with IDMS under a different name, cloned the BATCH protocol under another name, and changed the CALL statement to call the new version of IDMS with the OPTI linked in. This works in the test CVs. I am confident it will work fine in PROD, but we are in a code freeze for end of month and end of quarter, so we won’t be able to implement it in production until around the 15th of April.
The problems with this method are:
1) When we move to a new release, we toggle between our two SVCs (don’t ask, in place before I got here)
2) If we change CV #s, things break until someone remembers to update the IDMSOPTI parameters.
3) If CA should ever issue a patch to IDMSSTUB, then our special module will need to be relinked.
All things considered, unless I can get the security issue resolved (which I doubt will happen), we have the best solution at the moment.
What would really be the best thing other than using SYSCTL would be for CA to support passing an IDMSOPTI module during the BIND RUNUNIT.
COBOL lets one load a module, without calling it, and makes its address available to you. That address could then be used to base an 01-level definition in the linkage section, and that 01-level name could then be passed to the IDMS interface. I know this is possible, as I have just written the routine to do it, but I have no means of pushing the loaded IDMSOPTI module to the IDMS interface.
There is a macro in the IDMS MACLIB, #CONN, that does a GOTOCV function which, if you look at the expansion and the comments in the macro, is a BIND59 (BIND RUNUNIT), and it expands to have an optional third parm of IDMSOPTI. Hoping it would work for the IDMSSTUB module, I manually coded a call passing the address of the IDMSOPTI module I had loaded with COBOL, but the interface doesn’t recognize it as an IDMSOPTI parm. I also tried passing the address of the address, just in case I had misread the #CONN macro, but that didn’t work either.
Thanks for the idea.
Charles (Chuck) Hardee
Senior Systems Engineer/Database Administration
EAS Information Technology
Thermo Fisher Scientific
300 Industry Drive | Pittsburgh, PA 15275
DB-Syncro from cogito<?> will allow local mode buffers to stay sync’d to CV buffers; shared cache will not extend to local mode buffers (my ideation was rejected ☹).
Thanks Chris, right idea, wrong cost ($’s).
It would never be purchased for our use.
Your idea of shared buffers between CV and batch as done with CV to CV would be an excellent solution. Too bad it was not accepted.
Maybe in the future.
One can only hope.
Did you try running the control record fetch program in local mode? If the fetch program did a BIND RUNUNIT, OBTAIN, FINISH each time it was called, I would expect (reasonably or not, I don’t know) that it would get its own set of DMCL buffers separate from the reporting program, that the buffers would be initialized at each BIND RUNUNIT, and thus the fetch would pick up the latest copy of the control record from disk on each invocation. This obviously hinges on how mini-CV handles DMCL buffers for multiple run units.
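In COBOL DML terms, each invocation of such a fetch routine would look roughly like this (the record, area, and field names are invented for illustration; the verbs are the standard IDMS navigational DML set):

```cobol
      * Sketch of a bind-per-call fetch of the control record.
      * A fresh BIND RUN-UNIT should give this run unit freshly
      * initialized buffers, so the OBTAIN reads the page from disk
      * rather than from a stale local mode buffer.
           MOVE WS-CTL-KEY TO CTL-ID.
           BIND RUN-UNIT.
           READY CTL-AREA USAGE-MODE IS RETRIEVAL.
           OBTAIN CALC CONTROL-REC.
           MOVE STOP-FLAG OF CONTROL-REC TO WS-STOP-FLAG.
           FINISH.
```

The cost, as noted below, is a full run unit startup and shutdown on every check, which is why throttling the calls with a timer was suggested.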
It would work, but I think the overhead of creating a new run unit every time would not be acceptable. Maybe if combined with a timer, so the call is made only once every 5, 10, 30, or 60 seconds. For the situation where it is used it would likely work, but there may be other situations where the timing is more critical.
It crossed my mind that it might be possible to force IDMS to re-read the record by invalidating the buffer?
I'm thinking that it should be possible and fairly easy to invalidate the current buffer. This could be done in assembler by passing the subschema-control and then locating the current BME: SSCVIBA points to the VIB, and then CURBME to the current BME. Obviously you'd also check that the addresses and eye catchers are valid. You could also pass the expected record's page number and validate that BMEPAGE is correct. Resetting BMEVALID might be enough to force IDMS to reread the page, but it might take more than that.
So in your source program you would:
- Obtain your record. It should now be in the current buffer.
- Call the aforementioned program to invalidate the current buffer.
- Obtain the record again. Hopefully with the invalid buffer it is reread from disk and you then get any recent update done in CV.
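Put together, the calling sequence would look something like this, where INVLBUF is a purely hypothetical name for the assembler routine described above:

```cobol
           OBTAIN CALC CONTROL-REC.
      *    The control record's page is now in the current buffer.
           CALL 'INVLBUF' USING SUBSCHEMA-CTRL, WS-CTL-PAGE.
      *    Hypothetical routine: chases SSCVIBA to the VIB and
      *    CURBME to the current BME, checks BMEPAGE against the
      *    passed page number, and resets BMEVALID.
           OBTAIN CALC CONTROL-REC.
      *    With the buffer invalidated, the page should be reread
      *    from disk, picking up any update committed under CV.
```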
I'd test this out, but I currently don't have the means to do so. I do have a program that pulls data from the current buffer, but I've not tried invalidating the buffer.
Hope this helps,