Idea Details

Disk latency in CDM probe

Last activity 06-13-2019 10:08 AM
08-23-2016 09:41 AM

Seeing as how the CDM probe is supposed to give essential and critical information regarding performance, it would be really nice if you would consider implementing some kind of monitoring of crucial disk latency counters too. This data should of course be shown in UMP together with all the other CDM data.


12-07-2018 04:21 AM

Sorry, I made a mistake. Here:



Iostat Average Wait Time

The average time for I/O requests issued to the device. This includes the time spent by the requests in the queue and the time spent servicing them.
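For reference, iostat derives this per-device await value from the delta between two /proc/diskstats samples: total milliseconds spent on reads and writes divided by the number of completed I/Os. A minimal sketch of that calculation (the counter names and sample numbers below are illustrative, not from the probe):

```python
def await_ms(prev, curr):
    """Average wait time (ms) per completed I/O between two
    /proc/diskstats samples for one device. Each sample holds the
    cumulative counters: reads, read_ms, writes, write_ms."""
    ios = (curr["reads"] - prev["reads"]) + (curr["writes"] - prev["writes"])
    ticks = (curr["read_ms"] - prev["read_ms"]) + (curr["write_ms"] - prev["write_ms"])
    return ticks / ios if ios else 0.0

# Made-up samples: 200 I/Os completed in the interval, 1000 ms spent total.
t0 = {"reads": 1000, "read_ms": 5000, "writes": 500, "write_ms": 2000}
t1 = {"reads": 1150, "read_ms": 5600, "writes": 550, "write_ms": 2400}
print(await_ms(t0, t1))  # 5.0 ms average wait
```

Because the counters exist per device, the same calculation works per disk, which is exactly what commenters below are asking for.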


12-07-2018 04:18 AM

On a Linux system, how about QOS_IOSTAT_AWAIT?




Metric: Iostat average wait time
Unit: Milliseconds (ms)


12-06-2018 02:00 PM



How is this example helpful at all?

This box has ~15 disks and there is no way to figure out the latency on the one I want.

Nice try but this is NOT DELIVERED.

12-06-2018 12:46 PM

Agreed. I have systems with 300+ disks/volumes, and knowing the average or total latency of them all is useless. Any outliers you'd be interested in knowing about get lost in the combination with the rest. Similarly, if a system is a database server using a single drive/volume but has a bunch of other drives, one might only be interested in the single drive the database server is using.
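The point about outliers getting lost is easy to demonstrate: averaging latency across hundreds of disks swamps a single bad one (the disk names and latency values here are hypothetical):

```python
# Hypothetical per-disk await values (ms): 299 healthy disks, one outlier.
latencies = {f"disk{n}": 2.0 for n in range(299)}
latencies["disk299"] = 500.0  # one disk in serious trouble

avg = sum(latencies.values()) / len(latencies)
worst = max(latencies, key=latencies.get)
print(round(avg, 2))            # 3.66 ms -> the fleet "looks healthy"
print(worst, latencies[worst])  # disk299 500.0 -> visible only per disk
```

A 500 ms disk barely moves the fleet-wide average, so any threshold set on the aggregate value will never fire for it.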


As an aside, with respect to this whole idea process: communication is way too uni-directional. The one thing I will say CA excels at is misunderstanding customer needs. I can count on one hand the number of ideas I can remember where there was any kind of response from product management or engineering to clarify the request/idea or to ask if a given approach would be satisfactory. There really needs to be more of that back and forth. Everyone would benefit.


As a similar example of a "delivered feature" that's useless, consider the alert listing the processes when memory usage is breached: it fires for physical memory but not for total or swap (I might have these mixed around - sorry if so). If you have a SQL Server system, SQL Server is allocated all the available RAM and it stays resident, so this alert is always triggered. There is nothing, though, about the processes when you run out of total memory.


And who out there asked for EMS? Or for the fact that it still can't run in any kind of officially supported HA environment (granted, I've not tried the 9.x version, so maybe that function is there now). Crazy in this day and age that a critical core piece of an enterprise-wide system would intentionally be built with a single point of failure.

12-06-2018 11:45 AM

Hi danst04, thanks. How would you monitor this on a Linux box?

12-06-2018 11:44 AM

This, then, is useless. We're trying to figure out which disks have high latency, and on a production box with > 10 disks all we get is a very high double-digit-million-ms number that is a cumulative total; this is not at all helpful.


Broadcom UIM PM's: This is NOT delivered in a useful way. 

11-29-2018 07:01 PM

I totally agree. It needs to be on a per-disk basis. In the meantime, you could use ntperf, which takes a bit of configuration work, but you can monitor each disk and configure the QOS accordingly. You can select the individual disks (instances), set thresholds, and alarm on the values.


"Physical disk performance object -> Avg. Disk sec/Read counter" - shows the average read latency.
"Physical disk performance object -> Avg. Disk sec/Write counter" - shows the average write latency.
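The per-instance thresholding described above can be sketched generically, independent of which probe collects the counters (the disk instance names, readings, and threshold here are hypothetical):

```python
# Hypothetical per-instance latency readings, in seconds, as the
# Avg. Disk sec/Read and Avg. Disk sec/Write counters report them.
readings = {"0 C:": 0.004, "1 D:": 0.035, "2 E:": 0.002}
THRESHOLD_S = 0.020  # alarm when average latency exceeds 20 ms

def alarms(readings, threshold):
    """Return (instance, value) pairs that breach the threshold."""
    return [(disk, v) for disk, v in sorted(readings.items()) if v > threshold]

for disk, v in alarms(readings, THRESHOLD_S):
    print(f"ALARM {disk} {v * 1000:.1f} ms")
```

The essential point is that the threshold is evaluated per instance, so a single slow disk raises its own alarm instead of disappearing into a system-wide aggregate.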

11-29-2018 03:53 PM

Hello, I'm looking at 6.30 and no, it does not. I want to enable disk latency against just one disk on a system, but the Disk Latency values are collected as a whole across the overall system. This doesn't help. We need a way to enable it at a per-disk level and set a threshold on each accordingly.

08-23-2017 03:40 PM

Delivered as of 6/17. Please take the probe for a spin and advise if it doesn't meet expectations.