Endevor

  • 1.  When is a "Duplicate" not the same?

    Broadcom Employee
    Posted Feb 05, 2016 03:56 PM

    Recently I had a challenge where I had to find "Duplicate" elements across my Endevor inventory - mostly where the same element name exists in multiple Subsystems (for SandBox/Parallel Development) - in order to check generate dates, processor groups etc., and I found it hard!  Maybe it's my old eyes, but I was looking at Endevor element selection lists, using ESort, EFind and EOnly to filter them into some sort of coherence, but my eyes kept twisting out of focus when I tried to compare/match element names where the only difference was a number (suffix) at the end.

     

    I'm thinking there has to be a better way...

    Has anybody done anything like this before...

    Sure I'm thinking the easy way is to get a CSV report, import it into EXCEL and then set up my masking/filter rules etc...

    But that way still leaves me with the challenge of copy/pasting back into Endevor when I've identified the element that needs to change processor group.

     

    And, while the principle sounds simple (just find all the matching element names...), what to do with elements that are only duplicates up the map?

    Is an element with the same name a duplicate if it exists in another System or Subsystem but has a different processor group there, etc.?

     

    Or, turning it on its head, what about the unique elements - elements that don't exist anywhere else, but maybe should...

    The example I'm thinking of here is when you compile your programs to Object code and then use a LINKINC member to perform the actual link-edit, and/or a BINDPARM to control DB2 Bind...

    In those cases searching for UNIQUE elements would help me find the programs that don't have a corresponding Element LINKINC member.

     

    I'm feeling an Ideation coming on - an enhanced "super" EONLY that allows you to search for Duplicates or Unique Elements - but before I go too far down that path I wanted to check if others have found themselves in similar situations and how they resolved it.

     

    Please share your hints/suggestions.

     

    Thanks,

    Eoin



  • 2.  Re: When is a "Duplicate" not the same?

    Posted Feb 06, 2016 05:05 PM

    Dealing with duplicates - accurately identifying them where they should not exist, and identifying missing same-named program elements of different types where they should exist - can substantially improve the quality of software change management.  I am experiencing difficulty with this issue also.  I would very much like to see the algorithm that CA uses for global duplicate blocking on matching output type names be enhanced.  Currently, if it finds two program elements with the same names and output types, and the migration path of a newly added program element does not encounter the previously added program element, then it blocks the add action because that is identified as a "duplicate".  But it may not actually be a duplicate; it may instead be two parallel migration paths that eventually merge to the same system and stage somewhere further up the two migration paths.  So the global duplicate blocking algorithm should look up both migration paths for a potential merge stage and, if the system is also the same, allow the add action.

     

    Anyway, I wrote a REXX program to find duplicates based on same name and output type name for particular types across different systems (since we cannot utilize the built-in ENDEVOR global duplicate blocking).

     

    First, a JCL job saves report 03 for the various ENDEVOR environments.  Then the JCL saves a CSV of the Processor Groups for the production stage, and CVSTRIM is executed on that result to pick out the types, drop the delete and move processors, and select the processor group name and output type name columns.  Then the REXX exec is executed.  It does the following:

    - It runs an edit macro to eliminate the header and the duplicates from the processor group CSV (%DELDUPS 1 40).
    - It reads the processor group names and output type names into stem variables.
    - It runs an edit macro on report 03 to eliminate unwanted information and remove "duplicates" of the sort that need to be disregarded (%DELDUPS 2 9 67 74).
    - It reads the edited report 03 information one line at a time in a do while loop, using 'parse var' to assign values to variables and 'if then' logic to identify the duplicates based on matching program element names, mismatching system names, and matching type names or output type names associated with the processor group name.
    - Whenever it finds a duplicate it sends an email.

    It is about 40 lines of REXX, plus a 23-line edit macro and a 9-line edit macro, plus the JCL.



  • 3.  Re: When is a "Duplicate" not the same?

    Posted Feb 07, 2016 11:24 PM

    Correction (edit is not working):  The edit macro on report 03 executes %FINDDUPS 2 9 67 74 to locate the potential duplicates (not %DELDUPS).  These are two useful freeware edit macros.



  • 4.  Re: When is a "Duplicate" not the same?

    Broadcom Employee
    Posted Feb 10, 2016 07:19 PM

    Wow Mathew,

     

    It took me a while to parse all of that - but yes, I get that reading the System report to get the TYPE definition's 'duplicate' logic is probably the most "comprehensive" method.  I was, however, looking for something more immediate/interactive.

     

    Maybe I was starting too much with the end in mind, but I wanted it to work directly in the element selection list, rather than having to resort to batch reports.  I'm playing with EFind as a basis to see if that can still be accomplished.  Will update you soon.  But thanks for the great ideas.

     

    Eoin



  • 5.  Re: When is a "Duplicate" not the same?

    Posted Feb 10, 2016 08:23 PM

    A partial solution can be useful, and anything that is useful is good.  However, a partial solution by itself is not enough; a comprehensive solution is needed, and I think a comprehensive solution requires reading in generated reports with conditional logic that can be customized to the needs of each implementation, as I described.  That said, if CA enhanced the cross-system duplicate blocking algorithm so that it did not prematurely, and therefore sometimes incorrectly, block output type name "duplicates" as it does now, then I think that would be comprehensive for many implementations, and a better solution, because it would accurately block all output type name duplicates rather than report on them after the fact.



  • 6.  Re: When is a "Duplicate" not the same?

    Posted Feb 08, 2016 05:53 AM

    Hello Eoin,

     

    As a first step you might eliminate element duplicates in the same Endevor System/Subsystem from your CSV dataset.  This can be achieved by SORT with control statements similar to those below:

    (attached image dupelim.jpg: SORT control statements)

    where the sort fields correspond to the element name, type, system and subsystem, but not the environment or stage.

    With SUM FIELDS=NONE, SORT outputs only one record per sort key.
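
    For illustration, a minimal sketch of what such control statements might look like (the column positions are assumptions and have to be adapted to the layout of your CSV extract):

      * Sort key = element name, type, system and subsystem
      * (environment and stage are deliberately not part of the key)
        SORT FIELDS=(1,10,CH,A,21,8,CH,A,31,8,CH,A,41,8,CH,A)
      * With SUM FIELDS=NONE only one record per key is kept
        SUM FIELDS=NONE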

     

    Next, you can count records with the same element name and type to identify duplicates (count > 1) by using SORT with control statements similar to those below:

    (attached image dupcnt.jpg: SORT control statements)

    where the sort fields correspond to the element name and type in the record,

    the record length of the input dataset here is 324, and each record is extended by a constant 8-byte field containing "1".

    You can sum up these "1"s to get the number of records with the identical key specified in "SORT FIELDS ..."; summary records with a value higher than 1 in columns 325-332 indicate a duplicate.
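
    Again just a minimal sketch (the name and type positions are assumptions; only the 324-byte record length is taken from above):

      * Extend each 324-byte record with a constant 8-byte count of 1
        INREC OVERLAY=(325:C'00000001')
      * Sort key = element name and type only
        SORT FIELDS=(1,10,CH,A,21,8,CH,A)
      * Sum the counts per key; a total greater than 1 in columns
      * 325-332 marks an element name/type that occurs more than once
        SUM FIELDS=(325,8,ZD)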

     

    Of course your duplicate selection and recognition is very complex.  The above samples are just an indication of how to eliminate duplicate records, or how to count duplicate records, with the standard SORT utility.  Maybe this could - hopefully - help ...

    regards,

    Josef



  • 7.  Re: When is a "Duplicate" not the same?

    Broadcom Employee
    Posted Feb 11, 2016 03:51 AM

    Thanks Josef - I keep meaning to read ALL of the SORT book, but it's just so big!  This is a great tip for dealing with many types of problem that I'd not considered - just extend the record!  Neat!