Post ship scripts for Package Shipment were introduced in V16 to manage things like binds and phase-ins and other quirks that occasionally need to be done involving things other than straight PDS to PDS copies.
Is anyone actively exploiting this new functionality yet? Are there any "gotchas", or best practices that we should be aware of?
Yes, there are customers who use the post scripting for more than just binds. There is a web page for V17 and V18 that will point you to a wiki:
In each release there is a doc for setting up the post ship. If you go to the product CA Endevor you will see V17 & V18. Select the one you want, then put in a search for "post ship" and it will take you directly to the doc for setting this feature up.
Hope this helps.
I'm glad to hear that the functionality is actively being used. The documentation seems biased towards support for Natural post-shipment processing, but maybe that is where the main need is.
I'd still be interested in end-user experiences of it, good or bad (even if as mundane as "it does what it says on the tin"), even for DB2 binds :-) and to understand what (if any) are the current limitations of it.
Reading the documentation, I appreciate the ability to do symbolic substitution, but (for example) can you do this at multiple levels of indirection? Also, it seems as though everything is resolved before it hits the target environment??
We are developing pre and post ship script processes for our shop. We already have a phase-in process in place, as well as DB2 binds, inside our processors. We are moving to a SYSPLEX environment, where we will have to do the phase-ins and binds within the shipment process.

The biggest challenge for us has been building the job streams from the flat files passed with the ship process. We build script files for both phase-ins and DB2 binds. The script files are basically PDSEs with member-specific entries. When those files are copied across, they are flat files. We wrote programs to process the flat files and wrap JCL around the steps that needed to be executed.

Another challenge we currently face is the ability to "capture" output from those jobs. Our current environment of shared DASD allowed us to store the phase-in and bind information in ENDEVOR listing files for future reference. We haven't conquered the challenge of doing that in a package ship environment with a hard wall separating PRODUCTION from our ENDEVOR environment.

We have experimented with the symbolic substitution, and you are correct that the resolution happens before reaching the target environment. We have not found a way around that. Our thought process, at one point, was to build a script file containing the values needed for substitution during the actual script process, but we haven't come up with a guaranteed method given the multiple changes our environment is going through.

Best advice: don't try to "duplicate" the processes in your current environment before going into production. Chances are it has taken years of refinement to get your current processes where they are today, and the same will hold true for any package ship process you are developing and implementing.
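The "wrap JCL around the flat script file" step described above could be sketched in TSO REXX roughly as follows. This is only an illustration, not the poster's actual code: the dataset names, the record layout (element name, processor group, Db2 flag), and the bind parameters are all assumptions.

```rexx
/* REXX - sketch: turn a shipped flat script file into a bind job
   and submit it via the internal reader. All names are examples. */
"ALLOC F(SCRIPTIN) DA('HLQ.SHIP.SCRIPTS.DB2BIND') SHR REUSE"
"EXECIO * DISKR SCRIPTIN (STEM rec. FINIS"

queue "//DB2BIND  JOB (ACCT),'POST SHIP',CLASS=A"
do i = 1 to rec.0
  parse var rec.i element pgroup db2flag .   /* assumed layout */
  if db2flag = 'Y' then do                   /* Db2 members only */
    queue "//BIND"right(i,3,'0')" EXEC PGM=IKJEFT01"
    queue "//SYSTSPRT DD SYSOUT=*"
    queue "//SYSTSIN  DD *"
    queue "  DSN SYSTEM(DSN1)"
    queue "  BIND PACKAGE(COLL1) MEMBER("element") ACTION(REPLACE)"
    queue "  END"
    queue "/*"
  end
end
queue "//"

/* write the queued JCL to the internal reader to submit it */
"ALLOC F(INTRDR) SYSOUT(A) WRITER(INTRDR) REUSE"
"EXECIO" queued() "DISKW INTRDR (FINIS"
"FREE F(SCRIPTIN INTRDR)"
```

In practice the record parsing would need to match whatever layout your processors actually write into the script files.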
These are interesting comments.
Have you raised any ideation items that may simplify things for those that follow in your trail-blazing footsteps?
As for capturing output, there is an SDSF API that can be called from REXX to get the output of a job step; that could run within your pre/post implementation job stream.
It might be worth investigating this simple and effective option.
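A minimal sketch of that SDSF REXX approach might look like this. The job name mask and what you do with the captured lines are assumptions for illustration:

```rexx
/* REXX - sketch: capture spool output of a job via the SDSF API */
rc = isfcalls('ON')                /* enable the SDSF host command  */
isfprefix = 'SHIPJOB*'             /* assumed job name filter       */
isfowner  = '*'
Address SDSF "ISFEXEC ST"          /* run the status (ST) panel     */
do i = 1 to JNAME.0                /* loop over matching jobs       */
  /* SA action allocates the job's spool datasets to this session */
  Address SDSF "ISFACT ST TOKEN('"TOKEN.i"') PARM(NP SA)"
  do d = 1 to isfddname.0          /* read each allocated DD back   */
    "EXECIO * DISKR" isfddname.d "(STEM out. FINIS"
    /* out.1 .. out.out.0 now hold the spool lines: append them to
       a listing dataset, check return codes, mail them, etc.      */
  end
end
rc = isfcalls('OFF')
```

The ISFEXEC/ISFACT host commands populate stem variables (JNAME., TOKEN., isfddname.) that the loop then walks; error handling is omitted for brevity.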
We have been in constant contact with CA regarding some of the issues we are encountering. Nothing has yet jumped out that would require an ideation, since our environment seems to be so unique.
As for output capture, in our environment the output will be generated on our PRODUCTION lpar, and we will likely route it back to our DEVELOPMENT lpar, which is where the ENDEVOR environment will live. By the time the output gets back to where we need to add it to the listing files, all pre and post ship processing will already be completed on the PRODUCTION lpar. In our case, a utility external to ENDEVOR to append information to a specific listing file member would be helpful. Not sure if others would need that capability.
Your idea about SDSF REXX for capturing output is a very good one. I have several SDSF REXX routines that get the start and end times and the return code, open individual DD statements and read them to capture information, and even generate HTML emails and send them to clients using the SMTP server.
Let me address a question that was not answered:
The documentation seems biased towards support for Natural post-shipment processing, but maybe that is where the main need is.
I have done some research and found that the only thing pointing to Natural is the heading:
this will be changed to:
The post scripting can be used in all aspects of the ship and not just for Natural.
Sorry about the confusion.
The possibilities of the post shipment are (I would say) kind of unlimited.
I have done post-shipment processing for more than 10 years already (even when it was not yet available from Endevor itself), everything running under the altid (now it is enabled by Endevor from V17). Db2 binds and CICS new copies are quite obvious. I also did IDMS updates (new copies, IDD updates, generations, etc.). I also created jobs in the post shipment to be executed in the scheduler in Prod. Another one I did was FTP job submission to the prod lpar and retrieving the job output automatically (the FTP feature). I also created a connection between RA (Release Automation, aka Nolio) and Endevor to have the deployment done by RA instead of Endevor; RA then takes care of updating the Endevor instance with the ship date/time and failure/success.
If you need different JCLLIB statements per destination of more than 2 or 3 lines, you need to do some tweaking in the Endevor ISPF skeletons (mainly) and 2 or 3 panels. With this you can have as many JCLLIB ORDER statement lines as you like.
The way I work is to generate members in what I call Shipinfo datasets. They are created in the Endevor processors and shipped as script files. Basically each contains the element name, the processor group, and an indication of whether it is Db2 (or not). Flattening this script file allows you to sort (and include) special types/processor groups and the Db2 modules to be bound. Some customers create the bind statements already in a processor and ship those.
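As an illustration of the flatten-and-sort idea, selecting just the Db2 members from a flattened Shipinfo file could be done with a DFSORT step like the one below. The column layout (element name in 1-8, processor group in 10-17, Db2 flag in 20) and the dataset names are assumptions about the record format, not a documented sample:

```jcl
//SELDB2   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=HLQ.SHIPINFO.FLAT,DISP=SHR
//SORTOUT  DD DSN=HLQ.SHIPINFO.DB2,
//            DISP=(NEW,CATLG),SPACE=(TRK,(5,5)),
//            RECFM=FB,LRECL=80
//SYSIN    DD *
* SORT BY ELEMENT NAME, KEEP ONLY THE DB2-FLAGGED RECORDS
  SORT FIELDS=(1,8,CH,A)
  INCLUDE COND=(20,1,CH,EQ,C'Y')
/*
```

The selected output could then feed a bind-generation step such as the programs described earlier in this thread.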
Thanks for your answer. My question was not about post-shipment activity in general - like you, I had been doing post-shipment processing for a long time before CA Endevor provided Post Ship Scripts in version 16 - but specifically about people's experiences of using this new Post Ship Script functionality that CA introduced.
I'm not sure from your reply which you implemented as bespoke customization pre version 16, and what you implemented using the new Post Ship Script functionality. Have you ever converted any of your pre version 16 customizations to use the "out of the box" Post Ship Script mechanism?
I used the V17 post-ship scripting as well. It also offers virtually unlimited possibilities. Depending on what your needs are, sometimes it needs a bit of tweaking (like the multiple JCLLIB datasets).