Hello, has anyone ever seen this before? Running the vmware probe version 6.82. I got a disk alert on the box and was surprised to see that the cause was the vmware probe itself. I checked, and there's a discovery folder that is 40 GB.
What controls this, and why is it growing like this? I have the discovery attach and get queues set up on this box to the hub it reports up to, and the queues are empty.
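For reference, the queue pair I mean is a standard attach/get setup in hub.cfg: an attach queue on this hub and a matching get queue on the parent. The names, subject, and address below are just a sketch of a typical setup, not copied from my actual config:

On this hub:
<queues>
   <probeDiscovery>
      active = yes
      type = attach
      subject = probe_discovery
   </probeDiscovery>
</queues>

On the parent hub:
<queues>
   <probeDiscovery_get>
      active = yes
      type = get
      remote_queue_name = probeDiscovery
      address = /Domain/RemoteHub/remotehubrobot/hub
   </probeDiscovery_get>
</queues>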
Thank you for the post.
Unfortunately, we have not been able to identify what is causing the problem.
To work around the problem, please use the parameter called "discovery_server_version" in the vmware probe.
Here is the KB article for the parameter:
How to eliminate ‘probeDiscovery’ queue backlog
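For reference, the parameter is set in the probe's <setup> section via Raw Configure. A minimal sketch, where the 8.42 value is only an example and should match the discovery_server version in your environment (please check the KB above):

<setup>
   discovery_server_version = 8.42
</setup>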
Hope it helps.
Hello Yu. I already have that variable set on all my 6.82 vmware probes, so it neither helps with nor prevents this bug.
<setup>
   loglevel = 3
   multi_tenant_source = no
   multi_tenant_path = none
   use_instance_uuid = yes
   ds_target_include_folders = no
   show_networks = yes
   perf_request_batch_size = 100
   disable_resource_pool_perf_metrics = no
   disable_vm_system_perf_metrics = no
   disable_host_system_perf_metrics = no
   disable_array_metrics = no
   use_guest_name_for_source = no
   pobc_default_template_enable = true
   major_version = 6
   logsize = 25000
   autorefresh = yes
   subjectname = VMWARE
   discovery_server_version = 8.42
   periodic_full_publish_interval = 24
   include_summary_perf_metrics = yes
   utf8_cipher = true
   self_monitoring_alarm_severity = 2
   enable_self_monitoring_alarm = true
   enable_self_monitoring_alarm_aggregation = false
</setup>
Thank you very much for your response.
I'm afraid I must ask you to open a support case for this problem.
Already did, last week. I just opened this on the community as well to see if anyone else has hit this...
At this point they know it's a bug, because it is widespread across many of our different sites.
Has anyone else had this problem and come up with a solution? I am at the same point as Daniel Blanco. We have the exact same config and we are seeing the exact same issue: that discovery folder is slowly filling the machine's hard drive.
Hi Rob, rtirak
So in working with support, this may be attributed to the discovery_server's "probeDiscovery" queue starting to back up. If you check the queue status on your primary hub, the "probeDiscovery" queue there backs up from time to time in my environment, and I think that's when the remote hubs start building up this backlog. Just a guess at this point.
I then noticed that when I have this backup, if I cycle the "qos_processor" probe, my "probeDiscovery" queue magically empties out.
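If you want to script that cycle instead of clicking through Infrastructure Manager, something like this should work with the pu utility. Just a sketch: the address and credentials are placeholders, and you should verify the controller probe_deactivate/probe_activate callbacks against your UIM release:

pu -u administrator -p <password> /Domain/PrimaryHub/primaryrobot/controller probe_deactivate qos_processor
pu -u administrator -p <password> /Domain/PrimaryHub/primaryrobot/controller probe_activate qos_processor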
So what I did was put a timer on my qos_processor and have it run at 15-minute intervals all through the day. So far this has prevented any discovery_server queue backups. Also, I checked the remote hubs today and, for the first time over this weekend, I did not find a "discovery" folder filled with GBs of queue files. It was empty.
> Change the qos_processor's right-click "Edit" dialog settings to the following:
Range From: 00:00
Range To: 23:59
Execution Interval [x] : 15 Min
> This has so far seemed to prevent any discovery_server "probeDiscovery" queue backups,
> which in turn seems to prevent the vmware or cisco_ucs "discovery" folders from filling up as well.
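> To verify it is holding, I just watch the size of the probe's discovery folder on each remote hub. On Linux, something like the following works; the path is an assumption based on a default install, so adjust for your install directory (on Windows, check the equivalent folder under the Nimsoft probes directory):

du -sh /opt/nimsoft/probes/application/vmware/discovery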
Thank you very much for the valuable input.
The qos_processor has a specific role in UIM, and there is no impact even if it is deactivated, as long as you have:
- No interest in QoS data enrichment.
- No interest in automatic origin change reflection.
- No interest in saving QOS_BASELINE data into the UIM DB.
Here is the doc:
qos_processor - CA Unified Infrastructure Management Probes - CA Technologies Documentation