Automic Workload Automation

  • 1.  XRO_REPORTS in UC_CLIENT_SETTINGS: No record if report not stored in DB?

    Posted Mar 19, 2018 02:16 PM
    Hi, we're trying to reduce the volume of job output reports we store in the database. We want to retain most output reports directly on the host, and not have them read into the DB. If I understand correctly, this requires that we uncheck "Database" and check "File" on the Unix tab. Then of course we'll need to build our own cleanup mechanism for the files on the servers, and we assumed that's obviously what the XRO table is for, since it contains the file sizes and locations.

    So we tried it, and unless "Database" is checked on the Unix tab, no records are created in the XRO table. Is this really working as designed? I don't want to reduce the report block size or number of blocks, because we really do want to store a reasonable amount of log in the database for certain jobs.

    How can I track my report file sizes and locations without uploading the reports into the DB?


  • 2.  XRO_REPORTS in UC_CLIENT_SETTINGS: No record if report not stored in DB?

    Posted Mar 19, 2018 03:09 PM
    Hey Jessica,

    I'm not sure about the design of the XRO table, but I would look into using the agent variables for the job report.

    For example, UC_EX_PATH_JOBREPORT could be used for this, I would think. The only requirement would be creating an agent group for each type of agent.

    :SET &HND# = PREP_PROCESS_AGENTGROUP("AGENTGROUP_WINDOWS","WIN*",ALL)
    :PROCESS &HND#
    :   SET &AGENT# = GET_PROCESS_LINE(&HND#,1)
    :   P Agent: &AGENT#
    :   SET &JOBREPORT# = GET_VAR('UC_EX_PATH_JOBREPORT','&AGENT#')
    :   P Job report location: &JOBREPORT#
    :ENDPROCESS
    As for size, you would probably need to write an OS-level command for that. I don't think I would trust the built-in Automic file size script functions.
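    For example, on Linux something like this returns the size in bytes (the path is just an illustration, not where your agent actually writes reports):

    # report file size in bytes (GNU stat; path is hypothetical)
    stat -c %s /opt/automic/agent/out/0001234567.TXT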



  • 3.  XRO_REPORTS in UC_CLIENT_SETTINGS: No record if report not stored in DB?

    Posted Mar 20, 2018 03:27 AM

    Hi,

    The behavior of the XRO table you observed is correct: the purpose of the Open Interface to Output Management Systems is to list or unload reports stored in the database.

    However, a list of the reports stored on the file system is available. When you open the Report window for a job, you can see it on the Directory tab:

    [Screenshot of the Report window's Directory tab: https://us.v-cdn.net/5019921/uploads/editor/wm/8vdv8gsu1l8q.png]

    So one way to determine the files that need to be cleaned up is to query the RH table.
    Here is an example; it lists the agent name, client, RunID, end time, and file name for tasks that ended before 2018 and have a report stored on the file system:

     

    SELECT
        AH_HOSTDST,
        RH_CLIENT,
        RH_AH_IDNR,
        RH_TIMESTAMP4,
        RH_FILEFULLPATH
    FROM
        AH,
        RH
    WHERE
        AH_IDNR = RH_AH_IDNR
        AND RH_TIMESTAMP4 < to_date('2018-01-01','YYYY-MM-DD')
        AND RH_FILEONAGENT = 1
        AND RH_FILEFULLPATH IS NOT NULL

     

    Note: The entries in the RH table are cleaned up by the reorganization process, so you need to create the list before reorganization runs.

    KR, Josef

     



  • 4.  XRO_REPORTS in UC_CLIENT_SETTINGS: No record if report not stored in DB?

    Posted Mar 20, 2018 05:05 AM
    Hey Jessica,

    do you have specific requirements for cleaning up reports?

    Here's what we do: We simply run "find" statements over the agent directory and delete any reports or other cruft older than a set number of days.

    The cutoff differs per directory; it's parameterized and kept in version control (svn). The cleanup rules can be inspected by users in a web interface I made, and my surrounding framework logs heavily what gets deleted, but it essentially boils down to simple access-time-based cleanups.
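    In essence, one of those find statements looks like this; the directory, file pattern, and cutoff are only illustrative, since the real values are parameterized:

    # delete report files not accessed for more than 14 days; -print logs what is removed
    find /opt/automic/agent/out -type f -name "*.TXT" -atime +14 -print -delete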

    Hth,
    Carsten


  • 5.  XRO_REPORTS in UC_CLIENT_SETTINGS: No record if report not stored in DB?

    Posted Mar 20, 2018 12:46 PM
    Hi all,

    Thanks for the suggestions so far. They are all helpful and I may end up using Carsten's basic framework.

    For context, we have been instructed to keep logs on disk instead of in the database, for performance reasons. If I understand Josef correctly, the purpose of the XRO system is to allow users to develop their own algorithms for deciding which logs to keep (in the database) and which to delete. Essentially, I want to do the same thing, but with the physical files.

    The most helpful tool for this would be a DB table that contains the SIZE of the output file on the host, in addition to the location, so that we can initiate disk cleanup jobs only when needed (but then immediately).

    We have over 6000 agents, many of which are idle for days at a time, and some of which have thousands of log files. For both of those reasons, we don't want to constantly scan them all physically. The agents are mostly grouped by application, and we want to give the (dozens of) application owners the ability to set custom retention settings, such as number of days or number of logs. But we might want to override their settings (for example, remove the oldest x GB of files) if disk is getting too full.

    We have some large shared hosts running diverse applications with distinct requirements, so we'd prefer a job-based cleanup algorithm to a host-based algorithm. Having the file size tied to the job object would also allow us to identify which applications are using more than their share of disk on the shared hosts.

    I considered a post-process step to insert the run ID and file size into a variable object, but that seems like too much overhead, given our volume.


  • 6.  XRO_REPORTS in UC_CLIENT_SETTINGS: No record if report not stored in DB?

    Posted Mar 20, 2018 01:32 PM
    A few remarks:

    1 - To allow job-based cleanup, you will need to rename the log files to include the job name; by default, a calculated name based on the RunID is used. This can be done in post-processing, using an include object to define one common rule for it. You could also add the "application" name to the file name to manage cleanup on an "application" basis.

    2 - Cleanup based on job name can be performed with a default retention rule for most jobs; then you only have to manage the ones that need a specific setting (in a variable object, perhaps). Use an "application" rule if you have the application included in the log file names.

    3 - Running cleanup on a large number of agents can be done using agent groups that include most agents of the same type in your system, and/or grouped by application for more flexibility in scheduling the cleanup. The cleanup job will then be executed on all active agents of the agent group with the same processing: the default rule for all files, except those explicitly defined for special management.

    4 - To clean up oversized files, you need to add log file size detection at the end of the job, e.g. in the post-processing (see the sketch after this list). Then, if the size is over a defined default or specific threshold, clean up the log file immediately, or register it for the next global cleanup process for oversized files (which runs every X hours, say, on all agents, but only for the registered oversized log files).

    5 - Anyway, you certainly have file system monitoring in place, with alerts when usage reaches a critical level. You can then activate an "emergency" cleanup process that uses a stricter rule on the affected agent: it removes more files than the default rule and reduces the retention period for the specific files on that agent by x%. This should allow you to keep enough free space in all log directories for all agents.
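    As an illustration of point 4, here is a minimal Post Process sketch; the file path pattern, the VARA name, and the 100 MB threshold are purely examples, not fixed names:

    ! determine host, RunID and report file (derived from the renaming rule of point 1)
    :SET &HOST#  = GET_ATT(HOST)
    :SET &RUNID# = SYS_ACT_ME_NR()
    :SET &FILE#  = "/var/uc4/reports/&$NAME#_&RUNID#.log"
    ! read the file size on the agent, in MB
    :SET &SIZE#  = GET_FILESYSTEM('&HOST#', '&FILE#', FILE_SIZE, MB)
    :IF &SIZE# > 100
    !  over threshold: register the file for the next global oversize cleanup run
    :  PUT_VAR VARA.OVERSIZE_REPORTS, '&RUNID#', '&HOST#', '&FILE#', '&SIZE#'
    :ENDIF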

    Note: Don't forget to manage the log files of the non-OS agents, such as SQL or FTP agents. They can also be very large, or present in very large numbers per agent.

    Hope this gives you some ideas for solving your problem.