Endevor

  • 1.  Blue-green deployment

    Posted May 15, 2021 10:08 AM
    Edited by Mathew Goldstein May 15, 2021 11:14 AM

    See https://en.wikipedia.org/wiki/Blue-green_deployment for an explanation of blue-green deployment.

    Is anyone configured for blue-green deployment or something similar to "blue-green" deployment? How is your ENDEVOR configured? Do you have blue-green logical partitions with their own, yet identically named, libraries? Do you have "blue-green" systems and corresponding but differently named "blue-green" libraries? Separate "blue-green" environments or ENDEVOR installations?

    Our ENDEVOR team was told to write a plan (in less than a day!) to reconfigure ENDEVOR SCM to support blue-green deployment "without a procurement". We were also told that IBM can deliver this for us via "IBM GITHUB" as a replacement for ENDEVOR SCM. I consider the no-procurement restriction impractical in the IBM mainframe context: at a minimum we would need more DASD to support twice as many libraries, and technically we would also need to split our production logical partitions into blue-green pairs to retain the existing library names (which I assume the "no procurement" restriction rules out). Management has set an end-of-year deadline for the blue-green reconfiguration to be completed.

    It is being clearly asserted that this is to operate on a current-year and next-year processing basis and that both years are to be deployed to "production" simultaneously. The motive, however, is not clearly stated, so I can only infer it. One motive appears to be security scanning: the idea seems to be that we scan the production applications, in what will be the production libraries, before they become operational (perhaps to better assure that there are no application updates after the scan has completed successfully). Another motive appears to be the ability to fall back quickly to prior-year processing (there are several reasons we sometimes need to do that). We have cross-system load libraries, which makes it impractical to rely on separate current-year and next-year systems.

    We proposed adding a stand-alone "prior year" stage, populated by Transfer actions from the current production stage, and adding a "next year pre-operational" stage in front of the production stage, populated by promotion packages that would subsequently move the same program elements into operational production. This retains the current production library names.
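    As a rough illustration only, the transfer into the proposed stand-alone "prior year" stage might look something like the SCL below. The environment, system, subsystem, type, element, and CCID names are placeholders rather than our actual inventory structure, and the exact statement syntax should be confirmed against the ENDEVOR SCL Reference.

        TRANSFER ELEMENT 'PAYPGM01'
            FROM ENVIRONMENT 'PROD'    SYSTEM 'FINANCE' SUBSYSTEM 'AP'
                 TYPE 'COBOL'          STAGE NUMBER 2
            TO   ENVIRONMENT 'PRIORYR' SYSTEM 'FINANCE' SUBSYSTEM 'AP'
                 TYPE 'COBOL'          STAGE NUMBER 1
            OPTIONS CCID 'YE2021'
                    COMMENT 'Prior-year snapshot from production' .

    The "next year pre-operational" stage would be populated by ordinary promotion packages, as described above.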



  • 2.  RE: Blue-green deployment

    Broadcom Employee
    Posted May 17, 2021 09:08 AM
    Edited by Joseph Walther May 17, 2021 09:35 AM
    Mathew,

    One low-impact and likely quick solution:
    • Use Package shipping instead of doing Transfers or Moves through new stages. Shipments execute faster than Transfer actions, and much less change to your current configuration is required. New shipping definitions could be established, one for a Green destination and another for a Blue destination (a sketch of the shipment SCL follows this list). No new environments, Master Control Files, etc. are required.
    • Automated processes such as DB2 binds, CICS newcopies, LLA refreshes, etc. can be made to accompany package shipments automatically. There is a good chance that processor changes are not required.
    • Shipments can be local or remote - depending on how your libraries are managed and details around the "server swap" process.
    • Shipments can be initiated manually or automatically. Likely the "server swap" process can include an Endevor swap for an automated shipment destination.
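    If the shipment route were taken, the blue and green legs might be driven by SCL along these lines. The package ID and the destination name 'GREENDST' are invented for this sketch, the destinations themselves would first have to be defined by the Endevor administrator, and the exact SHIP statement syntax should be verified against the SCL Reference Guide:

        SHIP PACKAGE 'REL2021R2'
            TO DESTINATION 'GREENDST' .

    The Blue leg would be the identical statement with a 'BLUEDST' destination; typically only the currently idle side would be shipped to ahead of the swap.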



  • 3.  RE: Blue-green deployment

    Posted May 17, 2021 11:00 AM

    Yes, local shipment destinations would be a relatively quick and easy approach. However, I prefer creating stages over creating shipment destinations, because with shipments anyone who wants to know in any detail what is in the blue or green libraries must first obtain the footprint information from those libraries and then use that information to compare those elements with, or view them in, our production stage in ENDEVOR.
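    For what it is worth, the stage side of that reconciliation can at least be scripted. A hypothetical sketch (the wildcarded inventory names and DDNAME are placeholders, and the syntax should be checked against the SCL Reference) that lists the production stage inventory the harvested footprints would then have to be matched against:

        LIST ELEMENT '*'
            FROM ENVIRONMENT 'PROD' SYSTEM '*' SUBSYSTEM '*'
                 TYPE '*' STAGE NUMBER 2
            TO DDNAME ELMLIST .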




  • 4.  RE: Blue-green deployment

    Posted May 26, 2021 01:57 PM

    Hi Mathew, if I am understanding correctly, I might have an Endevor configuration that is similar. I was required to build a configuration to support an online application for CICS/KIX. The goal was to install new code without interrupting the live service. We call this High Availability (HA).


    I have one defaults table and configured a Core path and a High Availability path. The entry points of both Endevor paths, stage-1 and stage-2, contain the same "output/executable libraries", but the Endevor database files, environment names, and stage names used to hold the source are different.


    On the HA path, our Member Testing Facility (MTF) environment is where the High Availability begins. I have a stage-3 and a stage-4 defined, each with its own "output/executable libraries".

    Stage-3 is the first leg of the MTF install; its regions are connected to LPAR1/CPU1 and its "output/executable libraries" end with the number 1, for example pgmlib1, jcllib1, cardlib1.


    Stage-4 is the second leg of the MTF install; its regions are connected to LPAR2/CPU2 and its "output/executable libraries" end with the number 2, for example pgmlib2, jcllib2, cardlib2.


    Our Production environment is configured right next to MTF on the Endevor HA path and contains a stage-5 and a stage-6. It works very similarly, with different production naming standards for the "output/executable libraries", which again end with the number 1 and the number 2. The first leg of the Prod install, stage-5, is also connected to LPAR1/CPU1, and stage-6 is connected to LPAR2/CPU2. Both LPARs reside in the same Plex.


    On our Core Endevor path we only have one stage for MTF and one stage for Prod, because these applications are not HA.           


    Our batch job scheduler for the HA side has a setup concatenation that pulls only from the Prod jcllib2 and the MTF jcllib2. Within the batch job scheduler, we designed HA install schedules that are automated to execute console commands that move the live online traffic.

    During an MTF install, the live online and web services traffic is set to point only to stage-4 (the current code) while we promote the new code into stage-3. At this point stage-4 on LPAR2/CPU2 is handling all of the live traffic while we install the new code to stage-3. Testing is run against the new code in stage-3 on LPAR1/CPU1.

    When we are satisfied with testing, the live traffic is pointed to stage-3, now running the new code. We then install the new code to stage-4, leaving a copy of the new code in stage-3 by using "bypass element delete". I want to note that we install the new programs and control cards only in the first leg, to stage-3; the batch job scheduler picks up JCL from the stage-4 jcllib2.

    When the install to stage-4 is complete, we have a copy of the new programs and control cards in both stage-3 and stage-4. The new JCL and JCL procedures (PROCs) exist only in stage-4, and we then normalize the online and web traffic. There is no outage or interruption to the live traffic after installing the new code.
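    For reference, the "bypass element delete" step in the second leg is a standard option on the Move action. A rough SCL sketch follows, with made-up element, inventory, and CCID names, and assuming stage-3 corresponds to stage number 1 of an MTF environment on the HA path; the exact syntax is in the SCL Reference:

        MOVE ELEMENT 'ONLPGM01'
            FROM ENVIRONMENT 'MTF' SYSTEM 'ONLINE' SUBSYSTEM 'HA'
                 TYPE 'COBOL' STAGE NUMBER 1
            OPTIONS CCID 'HA2021R2'
                    COMMENT 'Second leg - copy kept in stage-3'
                    BYPASS ELEMENT DELETE .

    The BYPASS ELEMENT DELETE option is what leaves the copy of the new code behind in stage-3 while the element moves forward to stage-4.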


    I hope this helps.