We recently faced issues with our remote agents stopping far more frequently than they used to. After we opened a ticket, the support team found the following issue:
"java.lang.OutOfMemoryError: unable to create new native thread
java.lang.OutOfMemoryError: Java heap space
This indicates that we need to increase the java heap size when we start the Agent. Could you please stop and then start your Agent with the following command: java -d64 -Xrs -Xmx2048M -jar ucxjcitx.jar disable_cache"
Our JVM heap size was already set to 2048M (-Xmx2048M), so we went with 3072M. This helped somewhat but did not quite solve the issue. We then decided to go all in and raised the value to 6144M. Since doing this, we have stopped having issues.
It might be overkill, but if hardware resources aren't an issue, what would be considered the upper limit beyond which you can start having issues with your system? 8192M, or more? Has anybody experienced similar JVM issues?
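Before raising -Xmx further, it can help to confirm what heap ceiling the running JVM actually picked up. A minimal sketch using the standard Runtime API (the class name HeapCheck is my own; run it with the same -Xmx flag you pass to the agent):

```java
// Sketch: report the JVM's configured and current heap usage.
// Runtime.maxMemory() is the ceiling the JVM will try to use (driven by -Xmx);
// totalMemory() is the heap currently committed; freeMemory() is unused
// space within the committed heap.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        System.out.println("Max heap  (MB): " + rt.maxMemory() / mb);
        System.out.println("Committed (MB): " + rt.totalMemory() / mb);
        System.out.println("Used      (MB): " + (rt.totalMemory() - rt.freeMemory()) / mb);
    }
}
```

If "Max heap" does not match the -Xmx value you think you set, the flag is not reaching the JVM (e.g. a wrapper script overriding it), which would explain why raising it seemed to have no effect at first.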
Thanks for any feedback!
That's a common issue on Java agents, caused by high workload or large report sizes. The easiest solution is granting more memory, as you did.
Another helpful idea is checking which job(s) ran prior to the java.lang.OutOfMemoryError: Java heap space message. If it's always the same job causing the error, it would be worth analyzing that job and its report to see whether anything is consuming a lot of memory.
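Scanning the agent log for the lines just before each heap-space error is one way to spot the recurring job. A minimal sketch (the log lines and file name are invented for illustration; your agent's actual log format and location will differ):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: keep a small window of recent log lines and dump it whenever
// an OutOfMemoryError line appears, so the preceding job shows up as context.
public class OomContext {
    static final int WINDOW = 5; // how many preceding lines to keep

    public static List<String> contextBefore(List<String> logLines, String marker) {
        List<String> window = new ArrayList<>();
        List<String> hits = new ArrayList<>();
        for (String line : logLines) {
            if (line.contains(marker)) {
                hits.addAll(window); // lines leading up to the error
                hits.add(line);
            }
            window.add(line);
            if (window.size() > WINDOW) {
                window.remove(0);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Invented sample log lines -- replace with a read of your real agent log.
        List<String> sample = List.of(
            "20:15:01 - Job 'JOBS.BIG.REPORT' started",
            "20:17:44 - java.lang.OutOfMemoryError: Java heap space");
        contextBefore(sample, "OutOfMemoryError").forEach(System.out::println);
    }
}
```

The same idea works with `grep -B 5 "OutOfMemoryError" <logfile>` if you prefer the command line.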
What could also be done is limiting the agent's resources, so that only a certain number of jobs can execute at the same time.
Regarding your question, there is no "hard" advice to give, because different shops use different Java agents with different jobs at different frequencies.
In our company we mostly use 256MB; some agents got 512MB.
Is there anywhere we store the agent cache, whether it's a Linux agent or an FTP agent?