Does anyone besides me have a need for Global Type Sequencing down to the processor group level? Our IMS applications have type COBOL divided into DRV (drivers), SUB (subprograms), IMSO (IMS online) and IMSB (IMS batch) processor groups. I'm told IMS programs are happiest when the subprograms are statically linked, so the developers maintain a list outside Endevor of which subprograms are called by which drivers and build their package SCL accordingly, so everything is compiled in the correct order (and obviously why they can't use AUTOGEN). If processor groups were included in the type sequence table, it would eliminate a lot of manual effort on the developers' part, and perhaps allow them to utilize AUTOGEN. Side note: the drivers and subprograms don't necessarily live in the same system/subsystem.
Yes! I would like to see this as an enhancement idea, if it hasn't already been submitted.
Thanks Dana, I hadn’t yet, wanted to put a feeler out first. Glad I’m not the only one needing this!
First, for all compiler types and the assembler type, always save the object modules. If some subroutines are sometimes called dynamically (maybe because you utilize a 4GL compiler) and sometimes called statically, then you may want to define processor groups that create both an object module and a stand-alone NCAL load module. Define a separate binder type to create load modules from the main-routine object modules (the object libraries are the inputs). This way the subroutine and main-routine object modules are generated before the main-routine load modules (one of the stages after the entry stage could be configured to require packages and generate again, to take full advantage of the type sequencing). Second, for COBOL only, you can force the NODYNAM compiler option and define a type COBOLSUB for the subroutine object modules, followed by type COBOL for the main routines, which both compiles and binds. This eliminates the need to generate the main routines twice (the second time using the binder-only type).
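As a rough sketch of the binder-only type described above (all dataset and member names here are hypothetical), the bind step takes the main routine's object module on SYSLIN and puts the object libraries on SYSLIB, so the binder's automatic call resolution statically links in the subroutine objects:

```jcl
//* Binder-only type: input is the main routine's object module;
//* SYSLIB supplies the subroutine objects for static resolution.
//LINK     EXEC PGM=IEWBLINK,PARM='LIST,XREF,MAP'
//SYSLMOD  DD DSN=MY.LOADLIB,DISP=SHR
//SYSLIB   DD DSN=MY.OBJLIB,DISP=SHR
//OBJLIB   DD DSN=MY.OBJLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSLIN   DD *
  INCLUDE OBJLIB(DRIVER1)
  ENTRY DRIVER1
  NAME DRIVER1(R)
/*
```

In an actual Endevor processor the DD statements would of course use symbolics rather than hard-coded dataset names; this just shows the shape of the step.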
For best-quality software change management (emphasizing reducing the risk of bad outcomes), I think static calls, and therefore object-module subroutines with a separate binder type for creating load modules, are preferable, because static calls prevent calling routines from abruptly being affected by changes to their called routines without having been tested.
Okay, so if I want to move all dynamically-called subroutines over to Type SUB to make sure they all get processed before the driver programs, what are the implications? There is no output library difference but every footprint would then be wrong and would not match the load. Wouldn't ACMQ also work incorrectly without a valid footprint? It seems there is no easy way to make this type-sequence processing change after initial load.
When dynamic calls are coded with variables, it is difficult to identify all of the main routines that should be tested when a subroutine is modified. If you have good reason to be confident that all of the affected main routines will be tested each time a load-module subroutine is updated, then dynamic calls and generating NCAL load modules all of the time could be worth considering.
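To illustrate the difference (program and data names made up): a CALL through a literal can be resolved statically under NODYNAM, but a CALL through a variable is always dynamic, and the target program name never appears in the object module, which is exactly what makes impact analysis hard:

```cobol
       WORKING-STORAGE SECTION.
       01  WS-SUB-NAME       PIC X(8) VALUE 'SUBPGM1'.
       01  WS-PARM           PIC X(80).
       PROCEDURE DIVISION.
      *    Literal call: static under NODYNAM, the binder sees
      *    and resolves the reference to SUBPGM1 at bind time.
           CALL 'SUBPGM1' USING WS-PARM.
      *    Variable call: always dynamic, the target is only
      *    known at run time, so neither the binder nor an
      *    impact-analysis scan of the load can see it.
           CALL WS-SUB-NAME USING WS-PARM.
```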
Under the worst-case scenario for converting to static calls, I think everything may need to be regenerated, plus revised or new processors, processor groups and types, and maybe new libraries (for example .load instead of .loadlib), which also requires changing JCL. The details of how to get from here to there, and what, if anything, can remain as is, depend on the interdependencies and other details of your context. By defining types whose processors always generate object modules from source code, and another type whose processor always inputs object modules and outputs load modules, you can *guarantee* that the type sequencing will be correct for statically linked load modules, while also supporting dynamic calls if they are still sometimes needed.
The example processors provided by CA are not always good. They may, for example, read from the same library that they write to. For type sequencing it is best to avoid using the same library as both input and output, within a single processor and across different processors of the same type, or of different types with the same type-sequence priority. Types COBOL and COBOLSUB with NODYNAM may be a feasible compromise, but NODYNAM is only possible with COBOL. Types ASM and ASMSUB, etc., for technical reasons, IMO, will not work well, especially if you activate duplicate blocking.
I don't want to start a long discussion but someone is feeding you a load of nonsense as regards IMS programs and subroutines. If you're up to it, what you really need is a "redesign" effort in terms of how you are going about doing your compiles and linkedits.
The reason IMS likes statically linked load modules is that you can optimize their performance and keep them smaller than with unrestricted dynamic calls. As we all know, dynamic calls allow anyone to call anyone. With limited CPU time allocated to IMS regions, you want to be able to "control" that, and developers need to structure their internal subroutine calls, as well as their DB calls, more rigidly to ensure residency time is not exceeded.
Subroutines can be compiled and then that's it. The object created out of the compile should be stored and then processing stopped. A separate source member that creates the LOAD member that brings together all the subroutines under one load can then be invoked to create the statically linked load module. IMHO the beauty of this approach is that a subroutine can be compiled in isolation and then the load member created outside of recompiling a mainline (which arguably is really just another "subroutine").
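For example (member names hypothetical), that "separate source member" can be nothing more than binder control statements, kept as its own Endevor element, so the composite load module is rebuilt by re-generating this one element, without recompiling the mainline or any subroutine:

```jcl
  INCLUDE OBJLIB(MAINLINE)
  INCLUDE OBJLIB(SUBRTN1)
  INCLUDE OBJLIB(SUBRTN2)
  ENTRY MAINLINE
  NAME COMPLOAD(R)
```

Because this element's processor only reads object libraries and writes a load library, it naturally sequences after the compile-only types.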
All of this goes back to a paper I wrote years ago about statically linked modules and Endevor. I actually wrote about it in my blog a while ago, but if you want a Word doc, send me a note and I'll send it to you. Composite Load Module Creation – "in-approval"
I hope you are well.
Your request could be an enhancement, but I tend to agree with Mathew about the processor way of life, which depends strongly on how the site is managed.
Reviewing and redefining the technical types seems to be the best practice, since GTS already applies at the type level and a lot of customers are already doing it this way. CA Technologies uses such an organization. I do not say that your idea doesn't make sense; that's why I'm curious to read more feedback and comments from the community. Then, if after reading this you think you will reach the limits of GTS, and can explain why, you may attempt to convert this into an ideation.
I hope this helps.
There is a lot of conflated discussion here. We are not interested in managing load modules/objects as input components at our site. Further, it is not feasible for us to split the dynamic and driver programs out from under a single type after initial load. A simple workable solution is to provide an option similar to how Global Type Sequencing is set up: give the admin a table where they can manage the order of execution under a type. A processor group missing from the table can be handled just like a new type missing from GTS - tack it on the end. This is still a valid enhancement idea.
Eloquently put Dana, I'll mix and match our needs and submit an "idea". We are not currently in the position to re-tool what was set up 20 years ago, but rather take an existing table and tweak a smidge to make the mess we have manageable.