ESP dSeries Workload Automation

  • 1.  Recent issue with latency

    Posted Oct 19, 2020 01:05 PM
Hi Everyone

     I am hoping other WLA dSeries users can help us out and give us their opinion.  About a week ago we had an issue with latency when submitting commands to active workload.  If we tried to resubmit a failed job, the response back in the desktop client took anywhere from 30 seconds to 10 minutes.  Any new workload that came into the system ran fine, since we didn't have to take any actions against that particular workload.
     Here is some more history: we did go P1 with Broadcom, we had our network team review everything and they were able to clearly show it was not network related, we tried a recycle of the WLA dSeries services (FYI, when we ran on the secondary server the latency almost doubled), we recycled the WLA servers, and then finally recycled the database.  None of those tasks resolved the issue.
     We were finally able to get the latency into acceptable limits by running "purgecompletedjobs 'now less 2 days'".  We did try "purgecompletedjobs 'now less 7 days'" first, but that did not help.  We do have this built into our HOUSEKEEPING application as "purgecompletedjobs 'now less 14 days'" and have had it like that for roughly 4 years without any issues.
     Before we ran the purge completed jobs command we looked to see how many records we had in the table, and it was roughly 1.6 million.  We are currently running the application on AIX and our database is Oracle.  Now I am not a DBA by any means, but 1.6 million records does not seem to be a lot for an Oracle database.
     So I guess what I am looking for is to see how many records other shops have in their ESP_WSS_APPL and ESP_WSS_JOB tables.  Is over a million records too much, or is our database maybe not sized correctly?  What do other users keep within their monitor view for completed applications: 2 days, 1 week, 1 month, etc.?
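     For anyone who wants to compare numbers, a plain count is all we used, in sqlplus or whatever SQL client you prefer, run as whichever schema owns the WLA tables:

     select count(*) from ESP_WSS_APPL;
     select count(*) from ESP_WSS_JOB;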
     Also, when troubleshooting our issue we noticed we have applications that show PROCESSING even though all the jobs in the application have completed.  Is there an easy way to locate all such applications by running some type of query?
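     Something along these lines is the shape I am picturing, though I am only guessing at the column names (APPL_NAME, APPL_ID, STATE) and the status values, so it would need to be checked against the actual schema:

     -- applications still marked PROCESSING whose jobs have all completed
     select a.appl_name
       from ESP_WSS_APPL a
      where a.state = 'PROCESSING'
        and not exists (select 1
                          from ESP_WSS_JOB j
                         where j.appl_id = a.appl_id
                           and j.state <> 'COMPLETED');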
     We are looking for best practices for the database/application so we do not run into a latency issue again.

    Appreciate everyone's time


  • 2.  RE: Recent issue with latency

    Posted Oct 25, 2020 12:39 AM
    Edited by SHARON SHIMANEK Oct 25, 2020 12:39 AM
    We run purge completed jobs defaulted to older than 6 days, and older than 3 days for most of our frequent interval jobs.
    You should also check how large your status messages are.
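    An easy way to see how big things have gotten is to ask Oracle directly.  Assuming the WLA tables all start with ESP (the two mentioned above do), something like this shows the largest segments, though it needs select access on dba_segments:

    -- largest ESP* tables and indexes by size, in MB
    select segment_name, round(bytes/1024/1024) as mb
      from dba_segments
     where segment_name like 'ESP%'
     order by bytes desc;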


  • 3.  RE: Recent issue with latency

    Posted Nov 12, 2020 01:38 PM
    Hi Sharon
       Do you have an example of your 3-day purge for interval jobs?  And maybe your overall HOUSEKEEPING application.  Currently we do:
    purge completed jobs (now less 7 days) (all)
    purge logs job (now less 7 days)
    PURGE_AET_DATA (now less 7 days)
    Delete_Clilogs (home grown script)
    Move_history_data (now less 15 days)



  • 4.  RE: Recent issue with latency
    Best Answer

    Posted Nov 13, 2020 02:06 PM
    We do:
    purge completed jobs (now less 6 days) (all)
    purge completed jobs (now less 3 days) (P* applications, as most of our frequent interval applications start with P*)

    purge logs job (now less 4 days)
    PURGE_AET_DATA (now less 1 year)
    Delete_Clilogs (home grown script) - I don't think we do this, or our hosting UNIX team set up a default process.  I will have to look into it.
    Move_history_data (now less 33 days) 

    We also run
    deletestatusmessages - older than 'DELETESTATUSMESSAGES threshold("%APPL.date")'
    deleteapplicationversions - older than 13 months
    and we run SQL to delete from H_APPLICATION, default older than 370 days.
    We run an extreme number of frequent interval applications, so some of these we grouped to delete older than 35 days, 90 days and 180 days.
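    For what it's worth, the H_APPLICATION delete is roughly the shape below.  The date column name is a guess on my part, so check it against your schema before running anything:

    -- trim history rows older than 370 days (END_TIME is assumed, verify first)
    delete from H_APPLICATION
     where end_time < sysdate - 370;
    commit;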



  • 5.  RE: Recent issue with latency

    Posted Nov 30, 2020 01:24 PM
    This has been a big help, Sharon.  Thank you for all the suggestions.