| Customer Programs: | Good morning, everyone, and thank you for joining us. Please feel free to start asking your DE questions here in the chat box. |
| steve: | when I add a new agent to the topology, I have to add it manually, one at a time. Is there a way to add many via a script of some kind? |
| Nitin Pande (CA): | @steve You can add the agent from the CLI. We have an 'ADDAGENT' command. It can be scripted if necessary. |
| Loren Watts: | I posted this earlier this week but no hits yet. We use FILE_TRIGGER extensively, and OVERDUE with them. I need my overdue time to be relative to the time at which the predecessor of the FILE_TRIGGER completes. I only see time references, REALNOW, ESPAHH, which refer to the Event being triggered, not the completion of the job in my appl. |
| steve: | what manual is the ADDAGENT described in? And how do I script it? |
| Srinivas(CA): | @Steve - we have a CLI guide that details all commands. |
| Srinivas(CA): | @Steve - the command name is CREATEAGENT |
| Nitin Pande (CA): | @steve Yes, it is CREATEAGENT, not ADDAGENT. You can type 'HELP' in the CLI. It will list all the supported commands. You can also get help on individual commands by entering 'HELP CREATEAGENT'. To script it, you can run the CLI command from the server side. |
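As a rough sketch of the scripting approach Nitin describes: generate one CREATEAGENT command per agent from a simple host list, then feed the output to the CLI. The parameter names (ADDRESS, PORT) and the agent list format here are assumptions for illustration; check 'HELP CREATEAGENT' in your own CLI for the real syntax.

```shell
# Hypothetical sketch -- the CREATEAGENT parameter syntax below is an
# assumption; verify it with 'HELP CREATEAGENT' in your CLI.
emit_createagent_cmds() {
  # Reads "name address port" triples from stdin and prints one
  # CREATEAGENT command per agent, ready to pipe into the CLI client.
  while read -r name address port; do
    [ -z "$name" ] && continue
    printf 'CREATEAGENT %s ADDRESS(%s) PORT(%s)\n' "$name" "$address" "$port"
  done
}
```

Usage (cli_client is a placeholder for your actual server-side CLI invocation): `emit_createagent_cmds < agents.txt | cli_client`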
| Nitin Pande (CA): | @Loren Watts Are we looking at File Trigger events? |
| Loren Watts: | FILE_TRIGGER object within my appl / member |
| Nitin Pande (CA): | @Loren Watts We do have an Overdue in the File Trigger jobs. If you right-click on it, you will see it under Time Dependencies. |
| Loren Watts: | @Nitin I'm aware. My issue is that I need a relative time for declaring overdue. It must be, let's say, one minute after the FILE_TRIGGER predecessor. I need that time. |
| steve: | we want to convert CAWA authentication to LDAP. The userid/password will be in LDAP, and the groups are still in CAWA? The userids are not automatically added to a group; we still have to manually add userids to a group? |
| Loren Watts: | @Nitin after the predecessor to the FILE_TRIGGER I mean |
| Nitin Pande (CA): | @Loren Watts Are we discussing an ESP manager job or a dSeries job? |
| Loren Watts: | @Nitin - let me explain my application. Job 1 processes data. It writes tmp files when finished, indicating that between zero and seven successor queues have work to do. No tmp file indicates no work for queue three, for example. I'm using a FILE_TRIGGER to determine if a queue has work. If no file is detected by the FILE_TRIGGER, I have ESP complete that queue. |
| Loren Watts: | @Nitin For Prod environment monitoring, I am not allowed to have the FILE_TRIGGER job fail. I need it to go overdue, and then have ESP Complete the file trigger job and its successor. |
| Srinivas(CA): | @steve yes, the group permissions are specific to the product, not LDAP. We need to define them once in the product and associate the users with that group. |
| sharon: | we would like to schedule the CLI command to delete status messages weekly. However, it doesn't appear that command will take OLDERTHAN; it only takes specific dates. Are there plans to change this so it takes OLDERTHAN like purge completed jobs and movehistorydata? |
| Nitin Pande (CA): | @Loren Watts In dSeries, you can define a "Monitor Continuously using an Alert". That way the File Trigger will continue until the file condition has been met. |
| Nitin Pande (CA): | @sharon The DELETESTATUSMESSAGES command will only accept a date as the threshold. It will delete all the records older than that date. |
| Jeff Fitzpatrick: | @sharon Example: Delete the Status Messages Older than a Specified Date
The following example deletes all the status messages from the dashboard that were generated before April 22, 2010 at midnight:
deletestatusmessages threshold("2010-04-22 00:00:00") |
| sharon: | @nitin, that is what we thought, so each week we have to update the date. Thanks |
| Loren Watts: | @Nitin That doesn't sound like a fit. I'll make note of it, though. I do not need to wait for the file. As soon as the FILE_TRIGGER begins, the file is there right then. The presence of the file right then indicates my successor should run. No file right then indicates that queue does not need to execute, and I have ESP Complete the jobs in that queue. |
| Nitin Pande (CA): | @sharon You can use JavaScript to fill in the dates for you. |
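Nitin's suggestion is to compute the date in JavaScript inside the product. As one alternative sketch, if the weekly run is a server-side shell script wrapping the CLI, the threshold can be computed with GNU date so it never needs manual editing. The seven-day window and the plain command output below are illustrative assumptions; pipe the generated line into your actual CLI invocation.

```shell
# Sketch of a weekly wrapper: compute a threshold seven days back so the
# date in the DELETESTATUSMESSAGES call never needs manual editing.
# Assumes GNU 'date' (-d with relative dates); adjust the window to taste.
threshold=$(date -d "7 days ago" +"%Y-%m-%d 00:00:00")
printf 'deletestatusmessages threshold("%s")\n' "$threshold"
```

The printed line matches the form of Jeff's example above; in a real job it would be fed to the CLI client rather than just printed.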
| Michael Woods: | @Steve - There is a PIB B5EO112 that documents how to import users and groups |
| sharon: | @nitin, I can look into the JavaScript. Thanks |
| Nitin Pande (CA): | @sharon Thanks and great to hear from you. |
| sharon: | is there a way to trigger an entire subappl? We can see in the monitor that subappls are easily controlled, like completing or bypassing, hold, release. We would like an easy way to trigger a subappl. |
| Nitin Pande (CA): | @sharon You can define an event for the subapplication. Set the subappl jobs as root jobs. |
| Guillermo: | hi, is it possible to attach a spool file to an email when a job fails? |
| Marco Villasana: | hello, could you tell me which versions are supported for Informatica jobs? |
| Srinivas(CA): | @Guillermo While defining the job, we can define a Notification, select the "Failed" monitor state, and choose the "Attach Spool File" option. |
| Marco Villasana: | how many Informatica jobs can run in parallel? |
| sharon: | @nitin, in an application of 200+ jobs in order to choose what and entire supappl doesn't that entale everyone knows excatly what jobs are in that subappl. when choose jobs there we cannot tell what subappl the jobs are part of |
| Michael Woods: | @Marco - Can you give us some more detail on your questions? Are you asking what versions of Informatica we support and if you can run those jobs in parallel? |
| Marco Villasana: | for example, can I run Informatica PowerCenter version 9 jobs? And what is the maximum number of jobs that can run at the same time? |
| Michael Woods: | @Marco - we support 64-bit PowerCenter 9.0.1 and 9.1 |
| sharon: | @marco I think the max number of jobs running together would depend on Informatica, not dSeries. We have SOLIX, and that tool cannot handle more than 1 job submitting at the same hhmmss. |
| Michael Woods: | @Marco - The limit would depend on how many jobs the agent can handle at a given time (memory, CPU, ...). We do not impose a limit within the agent. |
| Nitin Pande (CA): | @sharon There is no limitation on the number of jobs within an application or sub-application. But if you have 200+ within a sub-application, then you may want to consider putting them in a separate application with its own event. |
| Loren Watts: | @Nitin Anything additional regarding file trigger and timing? |
| Customer Programs: | Okay everyone, we have to wrap up. Thank you for taking the time to join us today. We hope you found it valuable. |
| Customer Programs: | Are there any other questions before we wrap up? |
| Loren Watts: | Thank you for the opportunity |