We are converting an in-house SCM tool to Endevor.
Currently the in-house tool adds include statements to a link step if there are any. We have created a type INCLUDE and updated our processor to perform a 'print' step; if this step successfully locates an include card, we perform a CONWRITE of the card and append it to the source.
Our issue is that we want to show the include card as an input component. We're currently writing some REXX to create a CONRELE statement, but I was wondering how others might approach this problem?
Our processors have a CONSCAN step followed by a REXX exec step followed by a CONRELE step to add run-time input components to the program elements. You are not asking about run time and you do not need the CONSCAN in this context, but you could code this the same way. With a REXX you can potentially do various verifications followed by TSO or email notifications when there are issues, and you can look up the migration path for the input component if you enforce a library naming standard. Be aware there is a processor execution slowdown with CONSCAN, REXX, and CONRELE steps.
Although there is a counter-argument for using the SYSLIB only for vendor libraries, I suggest relying on SYSLIB for all of the input libraries and eliminating the INCLUDE cards as much as possible, making include cards optional for the cases that need them. The binder will identify what is being called and include it automatically. If you have both CICS and DB2 libraries, the CICS libraries will need to appear ahead of the DB2 libraries because same-named aliases point to different load modules.
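A hedged sketch of that SYSLIB ordering (the dataset names here are illustrative, not from the post): the binder step's concatenation puts the CICS library ahead of the DB2 library, with the application object libraries after the vendor libraries.

```jcl
//* Illustrative names only. With autocall through SYSLIB the binder
//* resolves external references itself, which is what makes most
//* INCLUDE cards unnecessary.
//SYSLIB   DD DISP=SHR,DSN=CICSTS.SDFHLOAD         CICS first
//         DD DISP=SHR,DSN=DSN.SDSNLOAD            DB2 after CICS
//         DD DISP=SHR,DSN=APPL.&C1STAGE..OBJLIB
//         DD DISP=SHR,DSN=APPL.PROD.OBJLIB
```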
I have processors with a step that copies to a pre-allocated temporary library. The copy step will sometimes "fail" because there is nothing to copy, but the MAXRC ignores such failures. A later step then reads from the temporary input library, which may be empty.
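A hedged sketch of that pattern (the DD and dataset names are my illustrations, not from the post): an IEBCOPY step whose "nothing to copy" return code is tolerated via the processor MAXRC keyword, feeding a possibly-empty temporary library to a later step.

```jcl
//* MAXRC=4 lets the processor continue when the member is absent.
//COPYINC  EXEC PGM=IEBCOPY,MAXRC=4
//SYSPRINT DD SYSOUT=*
//IN       DD DISP=SHR,DSN=APPL.INCLUDE.LIB
//OUT      DD DISP=(OLD,PASS),DSN=&&TEMPINC        pre-allocated temp PDS
//SYSIN    DD *
  COPY INDD=IN,OUTDD=OUT
  SELECT MEMBER=&C1ELEMENT
/*
```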
You might benefit from reading an article I posted on "Composite Load Module Creation", which is what I am assuming you're doing if you use INCLUDE statements in your link decks.
As for your original request about tracking them... you don't really need to, since ACM will automatically track which library and which member came from where during the link step... and once you get comfortable with Endevor, you'll find you actually don't even need those INCLUDE statements... but refer to the article.
Composite Load Module Creation – "in-approval"
I think there is no need for the copy step to populate the PDSE object library. For assembler, COBOL, and C/C++ we always write directly into the PDSE object library by specifying the member name (which we require to always match the source code name). Some people who read the "Composite Load Module Creation" article may benefit from being told that the binder retains the original link-editor program name, IEWL. I hope everyone is using the binder by now.
A benefit of type LINK paired with types COBOL, ASM, C, etc., instead of a type ASMSUB paired with type ASM (assuming the former type always creates object modules and the latter always creates load modules), is that an unnecessary recompile is not executed on every second-step generate action, as John explains in his article. But I think the primary benefit is that the subroutines are always available as object modules when the main routine load modules are created. This reduces the risk that the subroutines will be dynamically called load modules, which in turn reduces the risk that impacts on main routines from changes to their subroutines go untested before those changes affect the main routines in production. For type COBOL it is possible to achieve a similar risk reduction without type LINK by imposing compiler option NODYNAM and reserving a type COBOLSUB to generate the subroutine object modules. However, developers can bypass that by placing a DYNAM in their source code, so a scan step is also needed to fail the generate action if CBL DYNAM is found in the source code.
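A minimal REXX sketch of that scan step, assuming the source is available under a SRCIN DD (the DD name and the return codes are my assumptions): fail the generate action when a bare DYNAM compiler option is found on a CBL/PROCESS card.

```rexx
/* Read the source and fail (RC 12) if CBL/PROCESS DYNAM appears. */
/* NODYNAM is allowed; a bare DYNAM is not.                       */
"EXECIO * DISKR SRCIN (STEM src. FINIS"
do i = 1 to src.0
  u = translate(src.i)                    /* upper-case the record */
  if pos('CBL ', u) > 0 | pos('PROCESS ', u) > 0 then
    if pos('DYNAM', u) > 0 & pos('NODYNAM', u) = 0 then do
      say 'Generate failed: DYNAM found in record' i
      exit 12
    end
end
exit 0
```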
Particularly for developers that do not use AUTOGEN, there is a negative impact of type LINK from their perspective. The developers must generate their main routines twice, once as object code and again as type LINK, whenever their main routines are modified. Whereas with types ASMSUB and ASM, the developers generate their modified main routines once, as type ASM. Even with AUTOGEN, the developers need to make themselves aware of what AUTOGEN did to take full advantage of it. I think risk reduction should take priority over what is most convenient/easiest for the developers. But it can be difficult to convince management, particularly if they are busy with other things and do not understand or prioritize SCM, and the developers lobby management for what is easiest for them.
"But it can be a difficult to convince management, particularly if they are busy with other things and do not understand or prioritize SCM and developers lobby management for what is easiest for them."
It's been my experience that generally "management" doesn't care about the technical details and developers "overreact" to what's being asked of them due to lack of understanding or "Endevor education". While I am more than willing to explain the "technical details", sometimes it's just simpler to explain: "COBOL/ASM will compile your program. You need to GENERATE the LKED types that ACMQ identified to create your load module, and you have to GENERATE your BIND member to bind your plan/package." This is the basic essence of what I have always referred to as "process normalization".
Alternatively, you "do it all for them"... in which case, as Endevor administrator, you will always be wrong... and will inevitably create a spider's nest of customizations...
For those interested, an article on "process normalization"..... https://johndconsulting.wordpress.com/2015/10/19/process-normalization/
It is nice to be implementing Endevor SCM for the first time that way, where there are no user preconceptions and expectations. But if the developers have been using Endevor SCM for many years configured without type LINK, then try to convince them and management that we now need a type LINK for creating the load modules and that they need to generate main routines twice, without arguing technical details about how type LINK is safer, while also trying to convince them that what is safer for the business takes priority over what is easier for them.
Your expert commentary in this community forum and on your web site is helpful. The people who install and configure their SCM tool (and it does not matter which tool it is) need to understand SCM, their business, and their SCM tool to implement the SCM properly, so there is a lot to think about and consider. I think there is a tendency to focus on the tool and on the business, and the SCM part tends to get short-changed; SCM will not win a popularity contest.
Superficially, developers can be given a choice with types COBOL and COBOLSUB by allowing either object or load modules to be created using type COBOL, so that developers can switch the processor group for type COBOL and create a corresponding type LINK program element to avoid combining the bind with a compile. And on that day those developers will discover they cannot switch their processor group without deleting their existing load module from production, because there is a conflict with output-type duplicate blocking. But deleting the load module from production could be disruptive. Separate assembler/compiler and binder types arguably maximize developer choice, which can be counter-intuitive because the separation restricts developers to type LINK to create their load modules. "Imposing a restriction = more choice" is an equation that is often false.
One way to avoid this trap for developers is to create more parallel migration paths, but doing that increases the number of sync-error failures. Some main routines may not have subroutines, so this argument for type LINK does not currently apply in those contexts; but subroutines could later be added to main routines that currently lack them, trapping the developers again into a compile and bind when a bind alone could be better. I think it may be better to have documentation that goes into the technical details about the trade-offs and why the recommendation is to encourage use of type LINK for creating load modules, and to avoid processor groups for the other types that only create load modules (or that create both object and load modules), insofar as it is feasible to do that. There are a large number of potential factors that affect this decision; it is not simple.
I find "freedom to choose" an interesting challenge. On the one hand, SHOULD the developers (who are not "sophisticated") have complete "freedom of choice" when it comes to specifying compile, linkedit, and bind parms? Outside Endevor... well.... that's beyond my scope of responsibility.
But inside Endevor? I would argue "that's why it's called 'source and configuration MANAGEMENT'". There are compile, link-edit, and bind parms that could do great harm to a system if a module generated under them made it to production. Now, you could argue "that's their problem"... unless you're responsible for a service bureau with multiple customers, where that bad module could affect not just the original customer but other customers as well. I'm just not willing to take that chance.
I don't think I'm being paternalistic. On the contrary, I believe I'm trying to elevate them from a blind "but this is how we do it in test with a tool written by a sysprog oh-so-many-years-ago and I have no idea what it really does" to a proper understanding of how a z/OS program on a z/OS machine is configured and works. The site I'm at has literally tens of thousands of lines of customizations that have hindered Endevor's exploitation for decades and they have wisely chosen to ctl-alt-del and re-engineer. A BIG part of the re-engineering effort is (re)education and I'm fortunate no one (so far) has objected to being re-educated!
Here is another alternative that does not require you to write any code: use IBM's IEBUPDTE. It can search a concatenated list of datasets, and you can use MONITOR=COMPONENTS on them. If it finds your member in the concatenated list of datasets, it writes it out and returns 0. If not found, the return code is 4.
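A hedged sketch of how that might look in a processor step (the dataset names are illustrative, and the ./ REPRO control statement is my assumption about how the member lookup is driven):

```jcl
//* RC 0 if &C1ELEMENT is found in the SYSUT1 concatenation, 4 if not.
//GETINC   EXEC PGM=IEBUPDTE,MAXRC=4
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=APPL.STG1.INCLUDE,MONITOR=COMPONENTS
//         DD DISP=SHR,DSN=APPL.PROD.INCLUDE,MONITOR=COMPONENTS
//SYSUT2   DD DISP=(NEW,PASS),DSN=&&INCCARD,UNIT=SYSDA,
//            SPACE=(TRK,(1,1,1)),DCB=(RECFM=FB,LRECL=80)
//SYSIN    DD *
./ REPRO NAME=&C1ELEMENT
/*
```

A later step can then test the return code, and concatenate &&INCCARD into the link-edit input when the member was found.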
I inherited a similar setup. I have some applications that have a "GETLINK" trigger in their processor groups. Our type is called LINKCTL, and its elements have the same member name as the COBOL element being processed. If GETLINK is Y, CONWRITE searches for the matching LINKCTL element and writes it to a pre-allocated temporary file. In case &&LINKCTL is empty, it is concatenated to the end of //SYSLIN. Similar to Dan's IEBUPDTE suggestion, you can also use File-AID with ALLOC=LMAP and MONITOR=COMPONENTS:
//DD01   DD DSN=&LNKINCLB,DISP=SHR,
//          ALLOC=LMAP
//DD01O  DD DUMMY
//SYSIN  DD *
$$DD01 COPY MEMBER=&C1ELEMENT
//*
I've read through all of this and there are a great deal of really good ideas.
I was thinking, if you are already writing a REXX to generate CONRELE records (cards), then why not do the following.
Have developers add the following to the source code, starting in CC 7:
#BEGIN_COMPILE_OPTIONS   <------ PLACE COMPILER OPTIONS AFTER THIS RECORD
APOST DYNAM=YES
#END_COMPILE_OPTIONS
#BEGIN_CICSCMPL_OPTIONS  <------ PLACE CICS COMPILER OPTIONS AFTER THIS RECORD
CICS OPT1
CICS OPT2
#END_CICSCMPL_OPTIONS
#BEGIN_DB2CMPL_OPTIONS   <------ PLACE DB2 COMPILER OPTIONS AFTER THIS RECORD
DB2 OPT1
DB2 OPT2
#END_DB2CMPL_OPTIONS
#BEGIN_LINKEDIT_OPTIONS  <------ PLACE LINK-EDIT OPTIONS AFTER THIS RECORD
RENT
INCLUDE etc.
#END_LINKEDIT_OPTIONS
#BEGIN_SOURCE
In your JCL, allocate to a PDS member, for example:

//SRCEIN  DD ...                             <- this is the output of the CONWRITE
//CMPLOUT DD DISP=OLD,DSN=hlq.&C1stage.CMPLREC(&C1ELEMENT),MONITOR=COMPONENTS
//CICSOUT DD DISP=OLD,DSN=hlq.&C1stage.CICSREC(&C1ELEMENT),MONITOR=COMPONENTS
//DB2OUT  DD DISP=OLD,DSN=hlq.&C1stage.DB2REC(&C1ELEMENT),MONITOR=COMPONENTS
//LINKOUT DD DISP=OLD,DSN=hlq.&C1stage.LINKREC(&C1ELEMENT),MONITOR=COMPONENTS
//SOURCE  DD ...                             <- a temporary dataset to be read into the compile
Have the REXX do an EXECIO DISKR on the file in the SRCEIN DD statement, then use EXECIO DISKW to put the compile records in CMPLOUT, CICS in CICSOUT, DB2 in DB2OUT, and link-edit in LINKOUT. Of course, you discard the # records, and beginning with the record after #BEGIN_SOURCE you write into //SOURCE, which is read into the compiler.
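A minimal REXX sketch of that split (the marker names and DD names come from the idea above; everything else, including error handling, is left out):

```rexx
/* Route each record of the CONWRITE output (SRCEIN) to the DD    */
/* that matches the current #BEGIN_ section; drop the # markers.  */
"EXECIO * DISKR SRCEIN (STEM in. FINIS"
out = ''                                /* no target until a marker */
do i = 1 to in.0
  rec = in.i
  select
    when pos('#BEGIN_COMPILE_OPTIONS',  rec) > 0 then out = 'CMPLOUT'
    when pos('#BEGIN_CICSCMPL_OPTIONS', rec) > 0 then out = 'CICSOUT'
    when pos('#BEGIN_DB2CMPL_OPTIONS',  rec) > 0 then out = 'DB2OUT'
    when pos('#BEGIN_LINKEDIT_OPTIONS', rec) > 0 then out = 'LINKOUT'
    when pos('#BEGIN_SOURCE',           rec) > 0 then out = 'SOURCE'
    when pos('#END_', rec) > 0                   then out = ''
    otherwise
      if out <> '' then do
        queue rec
        "EXECIO 1 DISKW" out
      end
  end
end
ddlist = 'CMPLOUT CICSOUT DB2OUT LINKOUT SOURCE'
do d = 1 to words(ddlist)               /* close every output DD */
  "EXECIO 0 DISKW" word(ddlist, d) "(FINIS"
end
```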
Optionally, the same could be done for Assembler or any other language. Additionally, if no options exist, you could create a comment record stating that none existed, and that could be included in the program listing for documentation purposes.
Just another idea, like the other great ones.
OKAY, so reviewing my own comments: using the same premise, but with output records instead of the # records, for COBOL put an * in CC 7; for Assembler, an * in CC 1 (these are simply comment cards); /* etc. for Java:
* BEGINOPT:
* COMPOPT: NOCMPR2 RENT RES
* CICSOPT: OPT1 OPT2 etc.
* DB2OPT: OPT1 OPT2 etc.
* LINKOPT: RENT REUSE INCLUDE ABC(xxxx)
If you pass the REXX the element type, it would know how to format the options before writing them to the output file. Since the asterisk (*) comment card is being used, no records need be thrown away. Just read it all into the compiler (assembler), which will ignore the comments and move on; the information is also left in the source for the next person working on the code.
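A hedged sketch of that dispatch (the comment conventions are the ones listed above; passing the type as an argument and the COMPOUT output DD are my assumptions):

```rexx
/* Choose the comment-card style for the element type, then use it */
/* when writing each option record to the output file.             */
parse upper arg eltype .
select
  when eltype = 'COBOL' then prefix = copies(' ', 6)'*'  /* * in CC 7 */
  when eltype = 'ASM'   then prefix = '*'                /* * in CC 1 */
  otherwise                  prefix = '/*'               /* Java etc. */
end
queue prefix' COMPOPT: NOCMPR2 RENT RES'
"EXECIO 1 DISKW COMPOUT"                /* hypothetical output DD */
"EXECIO 0 DISKW COMPOUT (FINIS"
```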