Endevor


Endevor & SonarQube

  • 1.  Endevor & SonarQube

    Posted Apr 03, 2018 11:53 AM

    Hello Endevor people,

    Has anyone integrated SonarQube (a code-quality inspection product) with Endevor, via the compile processor or another method? Your feedback would be very much appreciated.

    Thanks,
    Phil Gineo



  • 2.  Re: Endevor & SonarQube

    Posted Apr 03, 2018 07:02 PM

    Hi Phil,


    I don’t believe it is possible as part of a single processor.


    You need to get the source into a location that SonarQube can scan, such as a shared drive.


    We do a loop of add then move, trigger a download to a server for scanning, then move again via API if the scan is good.


    Does that help any?


    Stuart



  • 3.  Re: Endevor & SonarQube

    Posted Apr 04, 2018 08:43 AM

    Thanks Stuart!  I was thinking that in the generate and/or move processor we could possibly FTP the source to the SonarQube server?



  • 4.  Re: Endevor & SonarQube

    Posted Apr 04, 2018 09:13 AM

    Sending the source to the SonarQube server via FTP is possible, but it would be tricky to feed the results back to Endevor whilst the job is running, because of the requirement for copybooks etc.

    I think you need to understand the requirements and the infrastructure around the SonarQube installation at your site to create an informed solution.

    A conversation with marva09 might be useful; the ALC background there could help.

     

    It certainly is a journey :-)

    Stuart



  • 5.  Re: Endevor & SonarQube

    Posted Apr 04, 2018 08:44 AM

    Hi Phil! 

     

    I haven't integrated SonarQube but I do have experience to share regarding using a code quality inspection product within a processor. 

     

    Years ago, I used a product named "Pinpoint" which provided a McCabe cyclomatic-complexity score for your COBOL source. In essence, it gave you a score between 1 and 100, with low numbers meaning really ugly programs and higher numbers meaning good structure. It also analysed IF-THEN-ELSE statements in the program and any GO TO variable labels you might have hidden. The stuff of really imaginative programmers....

     

    At any rate, we decided to execute the tool within our GENERATE processors and then *FAIL* any program that scored 70 or less. When we rolled it out, all was fine until some really old legacy programs were generated that scored 45 and less.... so developers came to us and said "If you think we're going to mess with that code to bring it up to 70, you've got another think coming...".

     

    So we backed off and came up with another idea. You see, after the execution of the tool, we were capturing every program's score so that we could have a complete inventory record of our site's "quality". Every program's score was saved and inserted into a DB2 table by the processor. So what we decided was that, after the tool was executed, we would fetch the program's PREVIOUS score, compare it to the CURRENT score, and only *FAIL* if the developer had made the score drop.
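    The compare-with-previous idea can be sketched roughly like this, using an in-memory sqlite3 table as a stand-in for the site's DB2 score table (program names and the table layout are made up for illustration):

    ```python
    import sqlite3

    # In-memory stand-in for the DB2 score table described above.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE scores (program TEXT PRIMARY KEY, score INTEGER)")

    def record_and_check(program: str, score: int) -> bool:
        """Record the new score; return False only if it dropped below the
        previously stored score for the same program."""
        row = conn.execute(
            "SELECT score FROM scores WHERE program = ?", (program,)
        ).fetchone()
        conn.execute(
            "INSERT OR REPLACE INTO scores (program, score) VALUES (?, ?)",
            (program, score),
        )
        return row is None or score >= row[0]

    print(record_and_check("PAYROLL1", 45))  # no prior score: True (pass)
    print(record_and_check("PAYROLL1", 48))  # improved: True
    print(record_and_check("PAYROLL1", 40))  # dropped: False (would *FAIL*)
    ```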

     

    Nope. That didn't fly with development either. They needed the right to make bad programs worse.

     

    So in the end, we integrated the tool, saved the scores on a DB2 table, and just used the information as a metric for information purposes. 

     

    Automating "quality" was just not in the cards when faced with really old code and tight deadlines.



  • 6.  Re: Endevor & SonarQube

    Posted Apr 04, 2018 08:52 AM

    Thanks John, very interesting and informative story!



  • 7.  Re: Endevor & SonarQube

    Posted Apr 04, 2018 09:07 AM

    This is a good point John.

    No project is ready to expand its scope to include cleaning up "smelly" code. It also increases the testing scope.

    Rules in SonarQube can be amended so that you stop the code from getting worse than it was before the code delta, which bypasses that particular issue.

     

    Code coverage is another factor that SonarQube can help with for code quality, to ensure that all paths through the code have been tested.

     

    I guess it really depends on what your organisation wants to get out of the Sonar products.

     

    Stuart



  • 8.  Re: Endevor & SonarQube

    Posted Apr 10, 2018 01:39 PM

    One of the Sonar products, planned to be used here, is an Eclipse plug-in named SonarLint. It can give immediate feedback to developers as they enter their code.

     

    A second Sonar plan is to support batch on-demand Sonar scoring by application. The pilot application is heavy in COBOL and, like many others, it uses copybooks, macros and DCLGENs that it does not own. These belong to other applications or to the DBAs. The collection of these items was resolved by the development of an Endevor batch procedure. It collects the inventory for a selected application, and then uses Endevor's ACM information to identify and copy those other items not belonging to the application. Everything collected is then given a file extension and transmitted via FTP, and the Sonar scoring is initiated for the entire application.
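    A rough sketch of the rename-and-transmit step of such a procedure. The type-to-extension map, host, directory and credentials are all hypothetical placeholders for your site's values:

    ```python
    import io
    from ftplib import FTP

    # Hypothetical mapping of Endevor element types to file extensions,
    # so SonarQube's language detection can classify each member.
    EXTENSIONS = {"COBOL": ".cbl", "COPYBOOK": ".cpy", "JCL": ".jcl", "ASMPGM": ".asm"}

    def staged_name(element: str, el_type: str) -> str:
        """File name the member should carry in the staging directory."""
        return element + EXTENSIONS.get(el_type.upper(), ".txt")

    def transmit(members: dict, host: str, directory: str) -> None:
        """FTP each collected member (name -> bytes) to the staging directory."""
        with FTP(host) as ftp:
            ftp.login("scanuser", "********")  # site credentials
            ftp.cwd(directory)
            for name, data in members.items():
                ftp.storbinary(f"STOR {name}", io.BytesIO(data))
    ```

    Members the application does not own (the copybooks, macros and DCLGENs gathered via ACM) would simply be added to the same dictionary before transmitting.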

     

    The next logical progression of this approach would be to kick off the batch process automatically on behalf of a package - sending only the outputs of the package before initiating the Sonar scan. 



  • 9.  Re: Endevor & SonarQube

    Posted Apr 01, 2019 03:20 PM

    Dan,

     

    We have started a POC with both SonarQube and SonarLint.  Could you share more information on how you implemented your SonarQube code scanning interface?  I would also like to hear your plans to automatically initiate a code scan.  We want to do the same thing at some point in the life cycle: provide the developer the capability to initiate a code scan at any time, but force a code scan prior to deploying to SIT or UAT.  We also want to automatically check whether the scanning 'score' is getting better or worse than the previous scan.

     

    Thanks



  • 10.  Re: Endevor & SonarQube

    Posted Apr 03, 2019 06:31 AM

    Hello Bsquared.

     

    There is a simple SonarQube procedure that has been implemented at this Endevor site, and another more robust method is being considered. Likely neither method requires the installation of any new products. If there is interest, additional details and/or examples can be provided at this Ideation site.

     

    Today’s SonarQube process at this Endevor site is a simple JCL, submitted manually. The JCL uses a run-time parameter that names an Endevor system. The job builds inventory lists and gathers input component information for the named system. Then having assigned each element type a file extension, the collected items are transmitted to the SonarQube staging directories. When SonarQube is initiated, the entire application is scored.

     

    A more robust SonarQube process would automate the SonarQube scoring for an application, following the execution of an Endevor package. This method would engage an Endevor exit, a called Rexx routine, a JCL “model”, Endevor’s CSV, IBM’s FTP (or equivalent), and IBM’s REXEC utility. REXEC executes commands on a remote host, is documented in the same IBM manuals where FTP is documented, and would be responsible for initiating the SonarQube scoring.  Many of the objects of this solution exist as components of other solutions. The simple JCL of today’s SonarQube process would be tailored into the “model” for this automated process.
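    The remote kick-off could be sketched like this, with ssh shown as a stand-in for the REXEC call described above (host, staging directory and project key are hypothetical):

    ```python
    import subprocess

    def scan_command(project_dir: str, project_key: str) -> str:
        """Command to run on the SonarQube host to score the staged application."""
        return (f"cd {project_dir} && sonar-scanner "
                f"-Dsonar.projectKey={project_key} -Dsonar.sources=.")

    def start_scan(host: str, project_dir: str, project_key: str) -> int:
        """Execute the scan command on the remote host; return its exit code."""
        return subprocess.run(
            ["ssh", host, scan_command(project_dir, project_key)]
        ).returncode
    ```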

     

    Please understand that this approach is not being presented as an endorsement over others (engaging Jenkins, Brightside, etc.); rather, it is just one approach. Additionally, it offers nothing to determine whether the score is getting better or worse.
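    For completeness, one possible way to close that last gap would be for a later pipeline step to read scores back from SonarQube's web API (GET /api/measures/component) and compare them with stored values. The base URL, project key and metric below are illustrative; the parser assumes the documented response shape:

    ```python
    import json
    import urllib.request

    def metric_value(payload: dict, metric: str):
        """Pull one metric's value out of an /api/measures/component response."""
        for m in payload.get("component", {}).get("measures", []):
            if m.get("metric") == metric:
                return m.get("value")
        return None

    def fetch_metric(base_url: str, project_key: str, metric: str = "sqale_rating"):
        """Ask the SonarQube server for the project's current metric value."""
        url = (f"{base_url}/api/measures/component"
               f"?component={project_key}&metricKeys={metric}")
        with urllib.request.urlopen(url) as resp:
            return metric_value(json.load(resp), metric)
    ```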



  • 11.  Re: Endevor & SonarQube

    Posted Apr 05, 2018 06:58 AM

    To give our take on this, we decided not to use ENDEVOR as the driver for this.

    Realistically on our DevOps pipeline this process would sit under and be managed by an orchestration tool.

     

    So, we instead carved this into the area where our "capability uplift" products sit (which are all distributed).

    To tie in, we've written our own code analyzer which is great as it runs in realtime with our IDE and adheres to our standards. The problem with Sonar is that it didn't have that kind of realtime interface (this may have changed now).

    A lot of value is created from instant quality feedback on code as it's being written.

     

    So we have built an uploader via the IDE tooling which is a button press to zip the source and automatically fill in the upload parameters. To start a Sonar analysis, it's a separate button press (hence why you can see an orchestration tool would be useful here), as it's dependent on the upload completing, and that task may take some time.
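    The "zip the source" half of that button press might look something like this (the directory layout is purely illustrative):

    ```python
    import io
    import zipfile
    from pathlib import Path

    def zip_source(src_dir: str) -> bytes:
        """Bundle a source directory into an in-memory zip, ready to upload."""
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in sorted(Path(src_dir).rglob("*")):
                if path.is_file():
                    # Store members relative to the source root.
                    zf.write(path, path.relative_to(src_dir))
        return buf.getvalue()
    ```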

     

    I particularly like John's story, which resonates here: "They needed the right to make bad programs worse".



  • 12.  Re: Endevor & SonarQube

    Posted Apr 06, 2018 01:51 PM

    Thanks so much for your feedback Stuart, John & Steve.  Your feedback is a big help.  This post is an excellent example of the power of CA communities.  Regards, -Phil 



  • 13.  Re: Endevor & SonarQube

    Broadcom Employee
    Posted Apr 06, 2018 04:38 PM

    Hi Phil,

     

    I totally agree with Stuart & Steve on this - since the static analysis tool (SonarQube) lives off host, a processor isn't really a natural fit to drive the scans.  I would drive it from the pipeline - you can have the pipeline orchestrator driving the Endevor tasks and fit in a call to SonarQube somewhere as part of that.  The Endevor processor could definitely include the FTP step to get the code to the scan machine but the invocation of the scanning would be best done later in the pipe. 

     

    Another approach (if for instance there is no pipeline orchestrator in play), would be to FTP in the processor, and then after the processor, trigger the scan with a post-generate webhook.



  • 14.  Re: Endevor & SonarQube

    Posted Feb 09, 2019 01:33 PM

    We started our mainframe DevOps journey early last year with several independent processes not 'orchestrated' together.  Orchestrating with Jenkins seemed like a lot to bite off all at once. 

     

    We use IDz as our IDE.  We stood up a SonarQube server, installed SonarLint as an IDz plugin and created a rudimentary way to start a code scan.  The developer clicks a couple of buttons and a process kicks off in the background to execute a Rexx script to gather all components needed for a Cobol program scan and FTPs them to the SonarQube server.

     

    This 'pilot' got a few dozen programmers familiar with the tools.  It works but is not robust enough to roll out on a large scale.

     

    Now we want to take it up a level and introduce Jenkins and improve our Endevor-to-SonarQube interface and add more automation.

     

    This thread was started in April of last year.  Have any of you improved on your solutions/recommendations since then?

     

    Thanks.



  • 15.  Re: Endevor & SonarQube

    Broadcom Employee
    Posted Feb 10, 2019 06:02 PM

    One thing you may wish to check out that is new since this thread was started is CA Brightside.  Brightside is a command line interface that can script the fetch of elements from Endevor (plus dependencies) for scanning by SonarQube, and drive the Endevor pipeline as well.  Here is an example of a script written in Python to do that:

     

    GitHub - chipset/bright: Automation for Brightside 

     

    Here are some other links that describe the possibilities:

     

    https://www.youtube.com/watch?v=91yf1fioaZA - Brightside Getting Started