Can someone tell me how long errors are held in the event database? We are using APM 10.1. A user was looking at a graph of Errors Per Interval for a Tomcat JVM and saw some errors on the graph, but when they click on the Errors tab they do not see anything. This is for a time period about 8 days ago. When I look at any other JVM, I do not see any errors for this time period either.
The default is 14 days, unless you changed it during or after installation.
Hiko has answered your question. Please let us know if this thread can be marked as closed or if there are follow-up questions.
You may also want to open a case on the underlying issue with Tomcat.
Which parameter sets that retention period?
Wouldn't Errors Per Interval be stored as a regular metric in the SmartStor database? If Error Detector is enabled, then the data from the error snapshots (not the Errors Per Interval metric) would be stored in the traces database, and that is what the 14-day default maximum applies to.
Is this the parameter in question?
# How many days to store traces
introscope.enterprisemanager.transactionevents.storage.max.data.age=7
Yes, that is the property in question. In a default installation it is 14. However, reading the documentation, it states that the error snapshot data captured when Error Detector is enabled stays for 14 days (or, with your setting above, 7). To me, Errors Per Interval is a regular metric, but I want to confirm that. Let me check with a few colleagues.
OK, I got confirmation from Engineering. Errors Per Interval is a metric, so it is stored in your SmartStor database and is bound to the SmartStor tier settings in the EM properties file.
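For reference, the SmartStor tier settings live in IntroscopeEnterpriseManager.properties. Here is a sketch of what they typically look like; the frequency values are in seconds and the age values in days, and the specific numbers below are illustrative defaults, so verify them against your own file:

# SmartStor tier settings (illustrative values; check your own EM properties file)
# Tier 1: full-resolution metric data
introscope.enterprisemanager.smartstor.tier1.frequency=15
introscope.enterprisemanager.smartstor.tier1.age=7
# Tier 2: data older than tier 1 age, rolled up to a coarser resolution
introscope.enterprisemanager.smartstor.tier2.frequency=60
introscope.enterprisemanager.smartstor.tier2.age=23
# Tier 3: long-term storage at the coarsest resolution
introscope.enterprisemanager.smartstor.tier3.frequency=900
introscope.enterprisemanager.smartstor.tier3.age=335

In other words, metric data ages through the tiers at progressively coarser resolution, so total metric retention under typical tier settings extends well beyond the 14-day trace limit.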
The error snapshot data comes from Error Detector; it is stored in the traces database and is bound to the introscope.enterprisemanager.transactionevents.storage.max.data.age setting.
So the metric above should have historical data from before 7 days ago, depending on what your tier settings are. If you are using Error Detector, the snapshots will only be kept for 7 days per the setting you quoted.
Let me know if this helps.
Thanks Matt. I appreciate the response. That explains what we are seeing.