Hi Phil!
I haven't integrated SonarQube, but I do have some experience to share about using a code quality inspection product within a processor.
Years ago, I used a product named "Pinpoint" which produced a structure score for your COBOL source based on McCabe's cyclomatic complexity. In essence, it gave you a score between 1 and 100, with low numbers meaning really ugly programs and high numbers meaning good structure. It also analysed the IF-THEN-ELSE statements in the program and any GO TO variable labels you might have hidden. The stuff of really imaginative programmers....
At any rate, we decided to execute the tool within our GENERATE processors and then *FAIL* any program that scored 70 or less. When we rolled it out, all was fine until some really old legacy programs were generated that scored 45 and less... so developers came to us and said "If you think we're going to mess with that code to bring it up to 70, you've got another think coming...".
So we backed off and came up with another idea. You see, after the tool ran, we were capturing every program's score so that we could have a complete inventory record of our site's "quality": every program's score was inserted into a DB2 table by the processor. So what we decided was that, after the tool was executed, we would fetch the program's PREVIOUS score, compare it to the CURRENT score, and only *FAIL* if the developer had made the score drop.
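If anyone wants to picture how that "no-regression" gate worked, here's a minimal sketch of the idea in Python. To be clear, this is not what we ran (ours lived inside the GENERATE processor, with the scores on DB2); the table, the quality_gate function, the FLOOR value, and the use of SQLite so the example runs standalone are all hypothetical stand-ins for illustration.

```python
import sqlite3
import sys

# Hypothetical stand-in for our DB2 inventory table of program scores.
SCHEMA = """
CREATE TABLE IF NOT EXISTS program_scores (
    program  TEXT PRIMARY KEY,
    score    INTEGER NOT NULL
)
"""

FLOOR = 70  # the hard threshold from our first attempt


def quality_gate(conn, program, current_score, mode="ratchet"):
    """Return True if the generate may proceed, False if it should *FAIL*.

    mode="floor"   -> fail anything scoring 70 or less (our first attempt)
    mode="ratchet" -> fail only if the score dropped below the previous score
    """
    row = conn.execute(
        "SELECT score FROM program_scores WHERE program = ?", (program,)
    ).fetchone()
    previous_score = row[0] if row else None

    if mode == "floor":
        ok = current_score > FLOOR
    else:
        # Ratchet: a program with no prior score always passes;
        # otherwise only block the developer from making it worse.
        ok = previous_score is None or current_score >= previous_score

    # Record the current score either way, so the site inventory stays complete.
    conn.execute(
        "INSERT OR REPLACE INTO program_scores (program, score) VALUES (?, ?)",
        (program, current_score),
    )
    conn.commit()
    return ok


if __name__ == "__main__":
    conn = sqlite3.connect("scores.db")
    conn.execute(SCHEMA)
    # Imaginary program name and score; in the processor these came from the tool's output.
    if not quality_gate(conn, "PAYROLL1", 45):
        sys.exit(8)  # non-zero return code fails the processor step
```

The point of the ratchet flavour was that an untouched 45 kept generating fine; it only *FAIL*ed when a change made the score worse than it already was.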
Nope. That didn't fly with development either. They needed the right to make bad programs worse.
So in the end, we integrated the tool, saved the scores on a DB2 table, and just used the data as a metric, for informational purposes only.
Automating "quality" was just not in the cards when faced with really old code and tight deadlines.