Hello community, have a great 2019
I have a problem with an EM that is going near OOM every 15 minutes and also in the log I find:
1/10/19 12:03:34.956 PM CET [WARN] [master clock] [Manager.Clock] Timeslice processing delayed due to system activity. Combining data from timeslices 103141212 to 103141214
1/10/19 12:17:46.994 PM CET [WARN] [master clock] [Manager.Clock] Timeslice processing delayed due to system activity. Combining data from timeslices 103141269 to 103141271
This causes the harvest time to climb to 30 seconds or more every 15 minutes.
Do you have any idea what is happening to the EM?
It is running with 9 GB of heap and 200 agents connected, which is the same configuration as 4 other EMs that are working fine.
Depending on the EM release, as long as it is a recent one using Java 8, is it configured to use G1GC ?
The log messages are probably due to excessive garbage collection activity.
Introscope Enterprise Manager Troubleshooting and - CA Knowledge
> 3. If you are using JVM 1.8, ensure that G1 GC is in use.
Look at the perflog for details. See what type of metrics, CLW reports, etc. are being processed on the server. Also don't forget to check third-party applications (e.g. anti-virus).
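If you want to confirm that GC is the cause of those delays, you can also enable GC logging on the EM's JVM and correlate pause times with the 15-minute spikes. A sketch of the standard Java 8 HotSpot flags (the log path is an example, adjust to your install):

```
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:logs/em-gc.log
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=5
-XX:GCLogFileSize=10M
```

Long full-GC pauses lining up with the "Timeslice processing delayed" warnings would confirm memory pressure rather than I/O or CPU contention.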
Hello. Thanks for the reply.
Yes, every collector is using the G1GC algorithm.
I noticed that the 15-minute interval matches the procedure the EM runs to clean up greyed-out metrics.
This is reported in the EM startup log file.
What version of APM is the EM running?
Is the EM a collector or a MOM?
What is the -Xms -Xmx in the Introscope_Enterprise_Manager.lax?
What version of Java is the EM running under?
Is the EM part of an EM Cluster?
If you do have an EM cluster, how many EMs are there, and are you using loadbalance.xml to direct agents to specific Enterprise Managers?
How many agents, applications, and metrics is the EM gathering?
My first guess is that your EM is overloaded and does not have enough resources for the application/metric/trace load.
You may need to trim your metrics; see if there are any agents that are hitting the metric or historical-metric clamps.
My second guess would be the same as the others': check that the JVM 1.8 memory settings for G1GC are tuned, since G1 manages memory better than the default garbage collector for how APM uses it.
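For reference, a sketch of what the JVM options line in Introscope_Enterprise_Manager.lax might look like with G1 enabled. The property name and flag values here are examples based on a typical lax-launcher setup, not recommendations for your environment:

```
lax.nl.java.option.additional=-Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=45
```

Setting -Xms equal to -Xmx avoids heap resizing pauses; the G1 pause-time and occupancy targets are standard HotSpot options you would tune against your own GC logs.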
Hope this helps,
Hello community. Trying to respond...
I took a heap dump while the JVM was approaching max heap and discovered that memory was full of historical metrics.
So I solved the problem by cutting historical metrics and keeping them lower than 1000k.
Now it is working fine.
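For anyone wanting to reproduce this kind of analysis, a heap dump can be captured with the standard JDK tools; a sketch, where the PID and output path are placeholders you would substitute for your own EM process:

```
# Find the EM's JVM process id
jps -l

# Capture a heap dump with jcmd (Java 7u4+); <pid> is the EM process id
jcmd <pid> GC.heap_dump /tmp/em-heap.hprof

# Or with jmap; "live" restricts the dump to reachable objects
jmap -dump:live,format=b,file=/tmp/em-heap.hprof <pid>
```

The resulting .hprof file can then be opened in a tool such as Eclipse MAT to see which object types dominate the heap.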
Thanks to everyone.