
Event: Device Performance has deteriorated. I/O Latency increased

  • 1.  Event: Device Performance has deteriorated. I/O Latency increased

    Posted Sep 13, 2011 07:41 AM

    Hi,

    Since upgrading to vSphere 5 I have noticed the following errors in our Events:

    Device naa.60a980004335434f4334583057375634 performance has deteriorated. I/O latency increased from average value of 3824 microseconds to 253556 microseconds.

    This is for different devices and not isolated to one.

    I'm not really sure where to start looking, as the SAN is not being pushed hard; these messages even appear at 4 AM when nothing is happening.

    We are using a NetApp 3020C SAN.

    Any help or pointers appreciated.



  • 2.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Sep 13, 2011 07:57 AM

    Can you log in to Data ONTAP and see if there are any error messages on the filer (cache, etc.)?



  • 3.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Sep 13, 2011 08:03 AM

    The filers show no errors and the status is normal.

    One thing I forgot is that we recently enabled ALUA on our iSCSI group and changed the path selection to Round Robin. The SATP shows correctly on all hosts as VMW_SATP_ALUA.



  • 4.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Sep 29, 2011 08:18 AM

    I am having the same problem; however, I don't have a SAN attached to the host. The datastore is on local disks.



  • 5.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Sep 29, 2011 09:10 AM

    I have a support case open with VMware at the moment, so hopefully they can shed some light on the issue.

    I will report back if and when they find anything.



  • 6.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Sep 29, 2011 06:27 PM

    I'm getting similar errors in the event list on my ESXi host, but it's referring to a SAS tape drive connected to a dedicated SAS controller.  I'd be interested to find out what you discover, since it might just be related to the errors I'm seeing.



  • 7.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Oct 01, 2011 12:47 PM

    Turns out we can ignore these errors, as they are warnings which were introduced in vSphere 5.

    The highest lag we had was equivalent to 10 milliseconds, and this was during our peak hours, when users were logging in and our backup window was running.



  • 8.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Oct 26, 2011 01:48 PM

    Device latency is the "average amount of time, in milliseconds, to complete a SCSI command from the physical device". Now, the term "physical device" represents not only the disk, but also any hardware between ESXi and that disk. A storage network can include storage adapters, switches, and arrays (or their equivalents in Ethernet storage networks).

    If you are investigating these messages, you may also want to broaden your investigation to the storage network adapters (ESXi and Array side if applicable) and the switch firmware/configuration.  You may also want to read up on the storage network best practices and compatibility from the vendor.

    Here are some references that define "Device Latency":

    http://communities.vmware.com/docs/DOC-11812

    http://pubs.vmware.com/vsphere-50/topic/com.vmware.wssdk.apiref.doc_50/disk_counters.html
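
    (For reference, esxtop on the host shows the same counters live; this is just a general sketch, not specific to any array:)

    # esxtop
    (press 'd' for the disk adapter view or 'u' for the disk device view; the columns to watch are DAVG/cmd for device/array latency, KAVG/cmd for kernel/queueing time, and GAVG/cmd for the total the guest sees)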



  • 9.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Nov 30, 2011 07:10 PM

    Is there a simple way to just turn off the diagnostic messages?  I am seeing this message as well... I have a very simple environment setup and don't care as much about trying to maximize the I/O performance - my disks are just fine... just busy...

    I am just wanting to get rid of the messages...

    Thanks!

    Doug



  • 10.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Nov 30, 2011 08:15 PM

    These messages are informational and not a source of error or any system malfunction.

    This gives me a clear idea of how my storage array is behaving with the given workload, and it is very useful from an administrative perspective.

    If I keep seeing these messages, I can go and fix my backend storage and move things around.

    I think it's not going to hurt you but help you design a better storage layout and deliver better latency to applications on the VMs. I won't turn it off (which I don't think you can do today).



  • 11.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Dec 01, 2011 01:46 AM

    totally understand - as I stated, I am fully aware of the fact that my storage is getting really busy in its current design state... this is OK for my particular usage of ESXi.

    What I'd like to be able to do is to filter out the messages - so that what messages do appear in my log are potentially more of a concern to me... think "filter warnings, show errors only" kind of feature...

    Is there no way to suppress informational messages?

    Doug



  • 12.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jan 26, 2012 03:00 PM

    Will these warnings keep appearing if the I/O demand is high and stays at a constant value?

    I have a customer who recently began to put some real demand on I/O, after his VMware ESXi 5 host had not been doing anything heavy for a month.

    I know the hardware is fine, but I just keep receiving the warning messages.

    " Device naa.5000c5000b36354b  performance has deteriorated. I/O latency increased from average value  of  1875 microseconds to 140800 microseconds."

    Any suggestion besides just leaving the message there is greatly appreciated.



  • 13.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jan 27, 2012 12:21 AM

    I am 99.9999% sure that my HW is just fine as well... and, for grins, I turned off my hourly builds... so the HW is pretty much idle. I still see this message (not as much, but still occasionally) pop up, seemingly for no real reason. Nothing seems to break from it (all my VMs hum right along without issue); it's just annoying to see and it pollutes my log files IMO.

    I am beginning to wonder if there is some sort of short-lived live-lock condition in ESXi 5.x... I never saw this message in 4.x. My HW is unchanged and has been working flawlessly as far as I can tell. I have a stock Dell PowerEdge T710.

    Doug



  • 14.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Feb 28, 2012 07:37 PM

    I am seeing this as well on an array that is basically unused. The times the messages are reported seem to be random and during very low usage times like 6 AM or 10 PM. There are no errors on the array end, only on the ESXi side. This has to be something coming up in ESXi 5, as I am not seeing any actual performance issues and my Windows servers connecting to the SAN are not reporting any problems either. I really wish VMware would provide a little more information about this beyond the "your storage device/network is overloaded" article they have posted. That clearly is not the case here.



  • 15.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Feb 29, 2012 01:36 AM

    Ditto... I would simply like to filter it out... I really don't care if it's an overly sensitive counter/sensor within ESXi (I know my HW and VMs don't seem to be having any issues whatsoever...) - I just don't like it polluting my logs with extraneous information... I'd rather just see "the more serious stuff"... :-)

    Anyone from VMware care to provide an update on this? It would be great to know the skinny (or better, when/how we might be able to filter it...)

    Thanks,

    Doug



  • 16.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Feb 29, 2012 05:05 PM

    I have discussed this event with others internally, and I have not been informed of a method of filtering or throttling these events. The request for this feature has been submitted. The feature request submission, review, approval, and development process is not public. We cannot make any public-facing statements or share any details as to whether the feature will be included in a future version. If you feel strongly about this feature request, please reach out to your account management team to provide use cases and help prioritize it. Thank you.



  • 17.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 01, 2012 05:16 AM

    Hi Daniel - thanks so much for the response - appreciate VMware having a look at the thread.   A filter feature would be a great addition if time/resource permits for you guys! Thanks again. Doug



  • 18.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Feb 07, 2013 06:35 AM

    Hi folks, I had the same problem on local storage on a brand new Dell server.
    I found this article
    http://www.vmdamentals.com/?p=2052
    from Erik Zandboer and changed Disk.DiskMaxIOSize from the default of 32767 KB to 128 KB
    - and that was it!
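
    (For reference, the same advanced setting can also be read and changed from the ESXi shell; this is just a sketch using the 5.x esxcli advanced-settings namespace, and the value is in KB:)

    esxcli system settings advanced list -o /Disk/DiskMaxIOSize
    esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 128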



  • 19.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 09, 2012 06:06 PM

    I do not believe these messages are false. These messages occur because there IS high latency occurring, although very briefly. I have confirmed that there is a problem with ESXi 5 and the software iSCSI initiator. I purchased VMware from Dell with Dell R710 servers and must get my support through Dell.

    Look at this:

    What this shows is esxtop for 2 ESXi hosts accessing the same datastore. For some reason, the host that is 'inactive' seems to pause or lag, and the latency can spike anywhere from 50-2000 milliseconds. In this example, it's 561 ms on the inactive host and 1.8 ms on the active host. When I run IOMeter on VMs that run on the datastore, the average performance is normal, but IOMeter does show the Max I/O Response time jumping to the high latency numbers reported in the vCenter event log. These are also shown in the performance graphs for the hosts.

    The reason most people say to ignore this, I believe, is that having high latency on the inactive host and not the active one means the applications and VMs will generally perform as expected. However, with a high workload, it can actually trigger the inactive host to lose the connection altogether. Also, if both hosts try to access the same datastore at the same time, the actual VMs or applications CAN lag significantly because of this.

    There is clearly a problem with software iSCSI. I have completely different datastores, different physical and virtual hardware, and completely separate drivers for different hardware. The only thing in common is software iSCSI and ESXi 5 (it does not happen on 4.1).

    This seems to be a weird locking issue or something like it between different hosts accessing the same iSCSI datastore. I can reproduce this on any iSCSI initiator and for completely different iSCSI datastores. This is a VMware bug that needs to be addressed, not ignored. I wish I could deal with VMware directly, but I have to go through Dell. For those who reported a similar issue with non-software iSCSI, it looks like it really is your hardware/setup, and that just hides this particular problem.

    VMware... this is reproducible. Something is wrong with software iSCSI in ESXi 5.

    Message was edited by: TrevorW201110 to correct grammar.



  • 20.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 09, 2012 06:18 PM

    This is basically what I noticed as well. The times I see these messages in the event log are usually during hours where there is almost no traffic on the SAN and/or VM hosts, which is why I think most people are saying to either ignore the warnings or that they are not true. There clearly is a problem of some sort, though. I have been working with Dell to alleviate high latency times reported on the SAN group itself (we have EqualLogics), and their suggestions have definitely helped in that regard, but these warnings in vSphere still remain. I think I might submit a ticket with VMware just because I pay for support and I want to make sure they are seeing reports of this instead of hoping they read these forums.



  • 21.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 16, 2012 02:47 PM

    Yeah, I have been trying everything I can think of to resolve this issue myself, but haven't been able to.

    Again, I didn't notice this until I migrated to vSphere 5, and with the number of people reporting this issue, I definitely think VMware needs to be taking a look at this.



  • 22.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 02, 2012 03:31 AM

    I'm seeing these alerts as well, but there don't seem to be any alarms in vCenter for these, correct? I wonder why that is? Shouldn't everything be created as a vCenter alarm? So how would I change the alerts to get email notifications if I wanted those? I'm not seeing anything in the vCenter alarms for devices.

    Thanks



  • 23.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 20, 2012 01:17 AM

    I've noticed the same issue on my FAS2040, but my events happen only when running SMVI (NetApp VSC) backups and when snapshots are being removed (they usually happen together).

    Check to see if you can correlate the events with high NetApp CPU usage. If so, check whether you have large numbers of snapshots, possibly in combination with dedupe. Also check your NetApp and SMVI snapshot schedules.



  • 24.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 27, 2012 10:38 PM

    This is interesting.

    I was on a call for a vCenter Server issue, and the VMware tech who was looking at the logs on one of my hosts noticed the I/O latency increased errors in the logs. The funny thing is they didn't start appearing until after I updated to 5 Update 1. I checked my other host and it had the same errors on LUNs that are presented to multiple ESXi hosts. Prior to the upgrade I was receiving iscsi_vmk: iscsivmk_ConnReceiveAtomic: Sess [ISID:  TARGET: (null) TPGT: 0 TSIH: 0] errors. This has something to do with EqualLogic LUNs that aren't configured for access from multiple ESXi hosts, so that is okay. I don't see them happening anymore after the 5 Update 1 upgrade... odd.

    There was a networking bug in 5 that could affect iSCSI connections, but I don't know if it is the same one being discussed in this thread. It has been fixed in 5 Update 1. See the links below.

    http://vmtoday.com/2012/02/vsphere-5-networking-bug-affects-software-iscsi/

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2008144

    Can anyone who had the latency errors prior to upgrading confirm whether this issue persists in 5 Update 1?



  • 25.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 01:45 PM

    I ended up putting a support call into VMware about this message. For me, I wasn't actually noticing any problems with my setup, I was just worried about the messages. After the rep verified all of my settings were optimal, they pulled a developer onto the call who basically said that these messages don't necessarily indicate a problem in my case. They appear when the latency changes by 20 or 30% (can't remember which). I'm seeing messages like latency changed from 1286 microseconds to 24602 microseconds, which is still only 24 ms. This happens for a second and then it drops back down again, so it isn't even that high to begin with and is only for a second. And they confirmed that these messages are new to version 5, so for people who only started seeing them after upgrading from 4 to 5, that's why. Anyway, I wish they would change the logic on these messages so they would only appear if the percentage changed by a certain amount AND the overall latency was over a certain threshold as well.



  • 26.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 03:49 PM

    After what seems like 1,000 tests and an eternity, Dell has finally decided to involve the VMware support team. This really does seem like a problem with software iSCSI in ESXi 5. The problem has not been resolved in any updates. Like my image from above shows, this is a quick burst of high latency that occurs, and it always occurs on the hosts that are not the active path, but the latency can leak over to the active path. I think there is either a bug in monitoring latency (i.e. the monitoring functionality within VMware itself is the cause of the latency) or, I believe, there is a flaw in software iSCSI with something like locking or pathing that causes the issue.

    I would ignore this, except my tests show that even a small burst of latency can cause the ESXi host to disconnect from the datastore, granted it takes a serious workload to get that result.

    I will post back with the results of the VMware support testing (hopefully, I have a repeatable environment where they can isolate the problem). I am pulling my hair out - over $100K in equipment has been on hold for months over this.



  • 27.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 04:05 PM

    Thanks for the info.  Please keep us updated!



  • 28.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 09:06 PM

    Another FYI: I have now proven that if all the software iSCSI sources have at least one active path, the latency disappears and all performs well. I get the WORST latency (by far) when the paths are inactive and the hosts are doing essentially nothing. Look at the image below. The latency plunged the moment I started activity on all paths. I can repeat this over and over. The latency to the left occurred with nothing going on with the hosts; they were idle... then I started IOMeter with multiple threads on clients of each software iSCSI source (you would think it would get worse) and the latency goes back to normal.



  • 29.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 10:04 PM

    How are you determining if/when paths are active?  Do you have multiple hosts accessing the volume?  The graph you show is for a particular host, no?  Does it show the same on all hosts?

    I have 4 hosts and 2 datastores that are on all 4 hosts. 1 datastore has VMs on it that are moderately busy, and the other was just created a few weeks ago and only has a couple of VMs on it. Over the last hour (that's all the real-time graph shows) the latency numbers on the almost unused datastore look like:

    host 1: max 5ms, average .1ms

    host 2: max 5ms, average .033ms

    host 3: max 0ms, average 0ms

    host 4: max 3ms, average .022ms

    These are obviously low usage numbers for this volume, and I am not seeing the numbers that you are seeing. Also, your graph doesn't show it, but is that read or write latency that is spiking?



  • 30.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 10:17 PM

    Yes, there are multiple (two) hosts, and yes, I see the same result on both hosts. I use esxtop to monitor the latency as well as the built-in performance charts. I know which path I will set active by which datastore contains the VM and by choosing the host that will run the VM. I only have this issue on ESXi 5, not 4.1. I have completely distinct software iSCSI sources (i.e. a Dell MD3200i and a FalconStor NSSVA). Both these sources have multiple datastores. Both use different physical connections (i.e. different network cards, different switches, etc.). There are no common drivers or physical connections, yet both have the same problem.

    Again, if I have all my iSCSI data sources active, then the latency goes back to normal. If any path is inactive (no VMs performing any read/write activity), I get bad latency.



  • 31.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 10:20 PM

    So if you were to create a new volume and put nothing on it, would you see this behavior? I'm trying to replicate it here but so far have been unable to. I am only running 5.0 Update 1 here, never had 4.x installed.



  • 32.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 28, 2012 10:32 PM

    It's not about the volume, it's about the iSCSI paths (not physical paths). Say I have a Dell MD3200i with two datastores and two hosts. If I just power on VMs and let them sit idle, I get horrible latency. If I have one host access a datastore, I still have high latency. If both hosts access the datastore, the latency drops to normal.

    I have had my connections, my drivers, my setup, etc. all reviewed by numerous engineers. VMware wants to blame the vendor of the iSCSI product (i.e. Dell and its MD3200i). Dell and FalconStor have performed hundreds of tests, and it just doesn't make sense that both have the same issue at the same time. I have gone so far as to completely start fresh installs of ESXi 5 with the latest updates, reloading and configuring everything.

    I don't think this is a fixed bug with ESXi 5 for all users. There is something about particular conditions for particular users with software iSCSI. I just think it's a VMware problem under certain conditions. It is driving me nuts.



  • 33.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Mar 30, 2012 11:44 PM

    This "may" be premature, but I found some references to DelayedAck - a setting that can be setup at mutiple levels for software ISCSI. I edited the advanced settings for the software ISCSI adapter, turned off DelayedAck (at the highest level - all software ISCSI sources would not use it) and rebooted each host. So far (knock on wood) the latency issue has vanished and I am getting normal (low latency) performance.

    We will see what happens over the next few days.



  • 34.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Apr 02, 2012 02:39 PM

    Dell recommends that Delayed ACK be disabled for most, if not all, of their iSCSI devices.
    Below is a message that was sent to me for a performance ticket I had open with Dell.

    I've disabled Delayed ACK at the whole iSCSI initiator level, as I didn't want to have to do it for each connection.

    TCP Delayed ACK

    We recommend disabling TCP Delayed ACK for most iSCSI SAN configurations.

    It helps tremendously with read performance in most cases.

    WINDOWS:

    On Windows the setting is called TcpAckFrequency and it is a Windows registry value.

    Use these steps to adjust Delayed Acknowledgements in Windows on an iSCSI interface:

    1. Start Registry Editor.

    2. Locate and then click the following registry subkey:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface GUID>

    Verify you have the correct interface by matching the ip address in the interface table.

    3. On the Edit menu, point to New, and then click DWORD Value.

    4. Name the new value TcpAckFrequency, and assign it a value of 1.

    5. Quit Registry Editor.

    6. Restart Windows for this change to take effect.

    http://support.microsoft.com/kb/328890

    http://support.microsoft.com/kb/823764/EN-US  (Method 3)

    http://support.microsoft.com/kb/2020559
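
    (If you prefer the command line, the same registry value can be created with reg.exe; the interface GUID below is a placeholder you must look up for your iSCSI NIC:)

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface GUID>" /v TcpAckFrequency /t REG_DWORD /d 1 /f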

    ---------------------------------------------------------------------------------------------------------------------

    ESX

    For ESX it is actually called TCP Delayed ACK, and it can be set in 3 ways:

    1.  on the discovery address for iSCSI (recommended)

    2.  specific target

    3.  globally

    Configuring Delayed Ack in ESX 4.0, 4.1, and 5.x

    To implement this workaround in ESX 4.0, 4.1, and 5.x use the vSphere Client to disable delayed ACK.

    Disabling Delayed Ack in ESX 4.0, 4.1, and 5.x
    1. Log in to the vSphere Client and select the host.
    2. Navigate to the Configuration tab.
    3. Select Storage Adapters.
    4. Select the iSCSI vmhba to be modified.
    5. Click Properties.
    6. Modify the delayed Ack setting using the option that best matches your site's needs, as follows:

    Modify the delayed Ack setting on a discovery address (recommended).
    A. On a discovery address, select the Dynamic Discovery tab.
    B. Select the Server Address tab.
    C. Click Settings.
    D. Click Advanced.

    Modify the delayed Ack setting on a specific target.
    A. Select the Static Discovery tab.
    B. Select the target.
    C. Click Settings.
    D. Click Advanced.

    Modify the delayed Ack setting globally.
    A. Select the General tab.
    B. Click Advanced.

    (Note: if setting globally you can also use vmkiscsi-tool
    # vmkiscsi-tool vmhba41 -W -a delayed_ack=0)


    7. In the Advanced Settings dialog box, scroll down to the delayed Ack setting.
    8. Uncheck Inherit From parent. (Does not apply for Global modification of delayed Ack)
    9. Uncheck DelayedAck.
    10. Reboot the ESX host.

    Re-enabling Delayed ACK in ESX 4.0, 4.1, and 5.x
    1. Log in to the vSphere Client and select the host.
    2. Navigate to the Advanced Settings page as described in the preceding task "Disabling Delayed Ack in ESX 4.0, 4.1, and 5.x"
    3. Check Inherit From parent.
    4. Check DelayedAck.
    5. Reboot the ESX host.

    Checking the Current Setting of Delayed ACK in ESX 4.0, 4.1, and 5.x
    1. Log in to the vSphere Client and select the host.
    2. Navigate to the Advanced Settings page as described in the preceding task "Disabling Delayed Ack in ESX 4.0, 4.1, and 5.x."
    3. Observe the setting for DelayedAck.

    If the DelayedAck setting is checked, this option is enabled.
    If you perform this check after you change the delayed ACK setting but before you reboot the host, the result shows the new setting rather than the setting currently in effect.

    Source Material:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1002598

    http://www.vmware.com/support/vsphere4/doc/vsp_esx40_vc40_rel_notes.html

    http://www.vmware.com/support/vsphere4/doc/vsp_esx40_u2_rel_notes.html

    http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html



  • 35.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Apr 07, 2012 02:54 PM

    I tried setting the DelayedAck per the previous post; however, my particular instance is not using iSCSI targets - all of my drives are simple SATA drives directly connected to the host. I continue to see the warning messages, and ESXi did complain about not finding the appropriate iSCSI stuff when I tried to force-set my config...

    On a whim, I upgraded my ESXi host to the latest patch set that VMware has: 5.0 Update 1. It seems to work great as an update, but I still see the log entries even in this latest patch set.

    Doug



  • 36.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Apr 12, 2012 04:56 PM

    I am still updating the continuing progress/saga here. With the DelayedAck change, I do get substantially better performance (better latency). However, I still have two weird issues.

    1) If I have at least one virtual machine actively doing something on an iSCSI datastore, I get this kind of performance:

    It is what I would expect with the hardware involved. However, I STILL get events in the event log that the "performance has deteriorated". The event lists a datastore, a time, and the values that triggered the event. The problem is that I was watching during that time: I was monitoring with esxtop, and I was monitoring with IOMeter. There WAS NO LATENCY ON THAT DATASTORE AT THAT TIME! It was not in the vCenter performance log, nor did it display in esxtop, nor did it show in IOMeter. Clearly, there is a major bug with the code that triggers this event.

    2) Now, my SECOND issue. If I do NOT have at least one active virtual machine (reading and/or writing data), i.e. the VMs are powered on but essentially sitting idle, then I get significantly worse latency and many, many more events in the event log reporting latency errors. Here is a sample:



  • 37.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jun 05, 2013 06:52 PM

    I did the same thing to address the issue, but the I/O latency issues have returned. The host reboot is most likely what alleviated it, but for what reason I am not entirely sure. It could be a failover issue in my iSCSI port groups that gets corrected when the connections are refreshed.



  • 38.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Oct 09, 2013 09:16 AM

    I too have been experiencing this error, recently in a VMware 5.1 environment, and it started after the systems were brought back up after a power outage.

    What I noticed is that the messages largely refer to SATA-based LUNs, and there is hardly any traffic on them.

    I see this discussion thread has been active for some time, with no concrete conclusion other than the subtle suggestion of ignoring the alerts.



  • 39.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 11, 2012 10:02 PM

    I recommend setting the NMP policy for all of your ESX hosts' datastores to 'Round Robin' to maximize throughput, because Round Robin uses automatic path selection that rotates through all available paths and distributes the load across those paths. The default is the Fixed setting. Hopefully, this will resolve the errors relating to latency.

    "Device naa.60a980004335434f4334583057375634 performance has deteriorated. I/O latency increased from average value of 3824 microseconds to 253556 microseconds."



  • 40.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 21, 2012 02:06 PM

    If you do use VMware Round Robin, you will need to change the IOPS-per-path value from 1000 to 3. Otherwise you will not get the full benefit of multiple NICs.

    For EqualLogic devices, you can use this script to set all EQL volumes to Round Robin and also set the IOPS value to 3. You can modify it for other vendors.


    esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_EQL ; for i in `esxcli storage nmp device list | grep EQLOGIC|awk '{print $7}'|sed 's/(//g'|sed 's/)//g'` ; do esxcli storage nmp device set -d $i --psp=VMW_PSP_RR ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -I 3 -t iops ; done

    After you run the script you should verify that the changes took effect.
    #esxcli storage nmp device list

    This post from VMware, EMC, Dell, and HP explains a little bit about why the value should be changed.

    http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html
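
    (If you only want to check or change one volume rather than run the whole script, the per-device Round Robin IOPS setting can be viewed and set like this; <naa.id> is a placeholder for your device:)

    esxcli storage nmp psp roundrobin deviceconfig get -d <naa.id>
    esxcli storage nmp psp roundrobin deviceconfig set -d <naa.id> -I 3 -t iops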

    Another cause of the latency alerts is having multiple VMDKs (or Raw Device Mappings) on a single virtual SCSI controller. You can have up to four controllers in each VM, and assigning a unique SCSI adapter greatly increases IO rates and concurrent IO flow. As with a real SCSI controller, it will only work with one VMDK (or RDM) at a time before selecting the next VMDK/RDM. With each having its own, the OS is able to get more IOs in flight at once. This is especially critical for SQL and Exchange, so the logs, database, and C: drive should all have their own virtual SCSI adapter.

    This website has info on how to do that. It also talks about the "Paravirtual" virtual SCSI adapter, which can also increase performance and reduce latency.

    http://blog.petecheslock.com/2009/06/03/how-to-add-vmware-paravirtual-scsi-pvscsi-adapters/

    Regards,

    Don



  • 41.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 03:21 AM

    I am having the same problem with FC storage, which has caused hosts to disconnect from the VC server. That has only happened since I installed SRM 5.

    I checked my path profile and several were on Fixed instead of Round Robin, so I changed them. I am still getting the latency messages, although the hosts are not disconnecting.

    The main culprit is an RDM attached to a Linux VM that actually has 7 RDMs attached.

    All the RDMs are on the one datastore. What can I do to improve the performance? Should I consolidate the RDMs, or split them over different datastores?



  • 42.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 11:15 AM

    Make sure that the IOPS value on all your volumes isn't at the default value. The default is 1000, which won't leverage all available paths fully. For iSCSI I use 3. A similar low value should work well with Fibre Channel as well. The script I posted would need slight modification to work with FC.

    Also, on that Linux VM, how many virtual SCSI controllers are there? I suspect only one: "SCSI Controller 0", with the drives at SCSI(0:0), SCSI(0:1), etc. under the Virtual Device Node box on the right-hand side.

    If so, you need to create additional SCSI controllers. You can have up to four virtual SCSI controllers per VM, so you'll need to double up on a couple of RDMs in your case. But any VMs that have multiple VMDKs or RDMs need to have this done if they are doing any significant IO.

    Shut down the VM and edit settings. Select the VMDK/RDM you want to move to another controller and, under "Virtual Device Node", change the ID from SCSI(0:2) (for example) to SCSI(1:0) using the drop-down button, scrolling the list until you see SCSI(1:0). Repeat until you have done this for all the busiest RDMs. You'll need to double up some, so your boot drive at SCSI(0:0) should share a controller with the least busy RDM you have, and that would be set at SCSI(0:1). The two remaining would also need to be on different SCSI adapters, again pairing the next least busy RDMs, so they'd be at SCSI(1:1) and SCSI(2:1).

    Then boot the VM. You should notice a big difference.

    If you have problems with this procedure, let me know. I have a draft of a doc that I put together on how to do this, including screenshots, etc.

    Regards,

    Don



  • 43.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 11:25 AM

    Just remember, as in a Windows MSCS setup with RDMs, if your Linux guest is reserving those LUNs you might end up with problems when using RR...



  • 44.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 12:21 PM

    I am also seeing these "performance has deteriorated..." messages on vSphere 5 Update 1 with local disks. Except in my case there is a real I/O problem. This is a fresh build with just one VM, copying a few GB of data, and I can get I/O latency as high as 2500 ms... yes, 2500 ms, yes, 2.5 seconds!

    In addition to these types of messages, vmkernel.log also has lots of suspicious-looking vscsi reset log entries...

    The hardware vendor (Cisco, UCS C210) cannot find anything wrong, we have replaced the RAID card, all drivers and firmware check out as supported, and VMware also cannot find anything wrong...

    I see this across two distinct servers too, both on vSphere 5 Update 1, so I can only assume a driver/firmware issue at this point, even though both Cisco and VMware say it is all supported.



  • 45.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 01:37 PM

    Hello,

    If each IO had an average of 2.5 secs then the server/VM would completely stop. Is that what's happening?

    I would check the cache setting on the controller. It sounds like it's set to WRITE-THROUGH instead of WRITE-BACK. What's the status of the cache battery? Some controllers will periodically drain the battery to ensure it actually has a full charge.



  • 46.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 02:27 PM

    Hi Don, thanks for the input. The VM does not stop, I/O just slows from the 100+MB/sec to anywhere down to a few hundred KB/sec.

    esxtop shows bad DAVG values going up/down anywhere from 50 - 600 and beyond.

    The cache settings are configured on the virtual drive, and the Write Cache Policy is set to Write Through.

    The adapter has no battery installed.

    Standby...looking into changing the cache setting....



  • 47.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 02:34 PM

    Re: Cache. Write Through is almost assuredly your issue. You really need a battery-backed RAID controller card so you can run write-back; that makes a HUGE difference on writes. Also, since writes aren't cached, writes tend to have higher priority than reads, and therefore reads get blocked by the writes. Also, without write cache the adapter can't do "scatter-gather" and bring random IO blocks together before writing to disk. That greatly improves write performance, since when you go to one area of the disk, you write out all the blocks for that address range. It helps sequentialize random IO loads.

    If you can test with a VM that's not production, on a server with write-back enabled (even without a battery), I think your errors will go away or drop significantly.

    Then set it back to WT when using production VMs.
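
    (For what it's worth, if the controller is LSI MegaRAID based, which the UCS C210 typically uses, the cache policy can usually be checked and switched with MegaCli; this assumes MegaCli is installed and the "all logical drives / all adapters" selectors fit your setup:)

    MegaCli -LDGetProp -Cache -LAll -aAll
    MegaCli -LDSetProp WB -LAll -aAll
    MegaCli -LDSetProp WT -LAll -aAll

    (The first command shows the current policy per logical drive; the other two switch to write-back or back to write-through.)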

    How many drives and what RAID level are you using on that card?

    I suspect maybe Cisco offers another RAID card with battery?   

    Regards,

    Don



  • 48.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 02:58 PM

    OK, I changed the cache policy to always write back, and performance has gone through the roof. On a Linux guest I can now see consistent 450+ MB/sec writes, over 1000 IOPS, and the DAVG values are not going over 2. The worst recorded latency was 30 ms.

    Stressing a Windows guest as far as I can with multiple large file copies, the performance is less stellar, but still over 150 MB/sec, DAVG seeing up to 50 or so, latency maxed out at 80 ms.

    Now to get some batteries so I can leave it like this...

    Thank you Don for pointing out what I had overlooked!



  • 49.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 03:00 PM

    The config is 6 drives, 300GB SAS, single RAID 5.

    Apparently the battery is an option... wtf? Who makes a RAID battery an option? Also, just for grins, they don't tell you about this "option" when you order the server. Silly me for assuming a RAID card would come with a battery...



  • 50.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 03:05 PM

    You are VERY welcome!!  Glad I could help out.

    I don't recall the last RAID card that came without batteries. Until you get them I would not leave it in WB. Very risky.

    Windows copy is not very efficient; each copy is single-threaded. Using Robocopy, or better yet RichCopy, yields better results.

    Regards,



  • 51.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 01:33 PM

    That reservation issue applies when you use SCSI-3 Persistent Reservations. By default Linux doesn't use them (outside of clusters). MSCS has used them since Windows 2003.

    I run RH, Ubuntu, Mint, Debian, SuSE with RDMs using RR and Dell EQL MEM w/o any issues.



  • 52.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 23, 2012 11:13 PM

    Thanks for the reply, this will really help. The only question is, how do I change the IOPS for FC? I can't see the option anywhere.

    As for changing the SCSI controllers, I will have to schedule an outage etc. as these are production systems. However, you have shown me there is light at the end of the tunnel!



  • 53.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted May 24, 2012 01:57 AM

    Earlier in this thread I posted a script to change the IOPS value. There's no GUI option to do so.

    #esxcli storage nmp device list

    When you run the above command you'll get a list of your current devices and their path policy, and for volumes with the RR policy, IOPS=1000.

    I'm not sure what FC storage you are connecting to, but it will have a VENDOR ID. On EQL volumes that ID is EQLOGIC. If yours is EMC then you need to change the line in the script from EQLOGIC to EMC.

    esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_EQL ; for i in `esxcli storage nmp device list | grep EQLOGIC|awk '{print $7}'|sed 's/(//g'|sed 's/)//g'` ; do esxcli storage nmp device set -d $i --psp=VMW_PSP_RR ; esxcli storage nmp psp roundrobin deviceconfig set -d $i -I 3 -t iops ; done

    After you run the script you should verify that the changes took effect.
    #esxcli storage nmp device list

    Regards,

    Don



  • 54.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jul 17, 2012 06:23 PM

    I see that most of you say you just want a way to deactivate the messages, but in my case I am having degraded performance in one of my VMs, and packet loss in that VM. It is not just the message; I have other symptoms.



  • 55.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jul 17, 2012 06:32 PM

    Are you connecting to a Dell/Equallogic array?    That's what I'm most familiar with.

    Common causes of performance issues that generate that alert are:    (Most will apply to all storage)

    1.)  Delayed ACK is enabled.

    2.)  Large Receive Offload (LRO) is enabled

    3.)  MPIO pathing is set to FIXED 

    4.)  MPIO is set to VMware Round Robin but the IOs per path is left at default of 1000.  Should be 3.

    5.)  VMs with more than one VMDK (or RDM) are sharing one Virtual SCSI adapter.  Each VM can have up to four Virtual SCSI adapters.

    6.)  iSCSI switch not configured correctly or not designed for iSCSI SAN use.

    If this is a Dell array, please open a support case.   They can help you with this.

    Regards,



  • 56.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jul 18, 2012 12:27 AM

    Lately we've seen huge increases in performance with a few simple iSCSI tuning methods (NetApp FAS2040 - 4x 1GbE).   We've gone from latency alarms several times per day to none at all.  

    I haven't seen these concisely documented anywhere, so here's what we did:

    1. Using bytes=8800 (with Jumbo frames) rather than an IOPS value (or the default)
    2. Make sure the active Path count matches the number of storage adapter NICs on your VM host or Storage system (whichever is less). 
      1. Previously we had iSCSI Dynamic Discovery which added all 4 NetApp paths for each storage adapter vmk (resulting in 16 paths per LUN);  this resulted in "Path Thrashing".   Changed to Static discovery and manually mapped only 1 iSCSI target per vmk.  
    3. Don't use LACP on either side.   LACP completely ruins RR MPIO.
    4. Fix VM alignment.   We had a handful of Windows 2003 and Linux guests with bad alignment.  They didn't do much IO so we ignored them in the past,  big mistake.  (NetApp's performance advisor really helped to nail this down)
    5. Stagger all scheduled tasks.   We found a number of IO-intensive tasks (AV updates, certain backups) all running at the same times in our environment.  

    From this article:  http://blog.dave.vc/2011/07/esx-iscsi-round-robin-mpio-multipath-io.html

    The command we used is:

    esxcli storage nmp device list |grep ^naa.FIRST_8_OF_YOUR_SAN_HERE | while read device ; do
        esxcli storage nmp psp roundrobin deviceconfig set -B 8800 --type=bytes --device=${device}
    done

    Throughput results:

    • Original:  95 MB/s
    • IOPS=1 or IOPS=3:   110-120 MB/s
    • Bytes=8800:   191 MB/s   (hurray!)
      • 4KB IOPS also saw a 3x improvement over the original configuration

    NOTE:   We also changed back from Software iSCSI to the Broadcom NetXtreme II "Hardware Dependent" driver now that the new June 2012 version supports Jumbo frames: https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXi50-Broadcom-bnx2x-17254v502&productId=229
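
    (A quick sanity check before relying on jumbo frames end to end is a do-not-fragment vmkping from the host to the array portal; the IP is a placeholder for your iSCSI target:)

    vmkping -d -s 8972 <array iSCSI IP>

    (8972 = 9000 bytes minus the IP/ICMP headers; if this fails while a normal vmkping works, jumbo frames are not configured consistently on every hop.)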

    If I could do this all over again I would skip iSCSI altogether.   What a complete PITA it has been to get decent performance compared to spending a few grand more for FC.



  • 57.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jul 18, 2012 05:52 PM

    With ESXi 5, Delayed ACK keeps re-enabling itself on my hosts, resulting in high latency on my SAN. It is getting really annoying. Has anyone else experienced this problem? I am disabling it globally on the software iSCSI initiator. I believe a reboot is required when you disable it, so when it re-enables itself I am not sure whether that takes effect immediately or only at the next reboot.



  • 58.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jul 18, 2012 06:04 PM

    Are you at the current build of ESXi v5?

    What I've been seeing is that if you just disable the Delayed ACK, it's not updating the database that stores the settings for each LUN. Any NEW LUNs will inherit the value.

    You can check by going on the ESXi console and entering:

    #vmkiscsid --dump-db | grep Delayed

    All the values should be ="0" for disable.

    I find that removing the discovery address and removing the discovered targets in the "Static Discovery" tab cleans out the DB. Then add the discovery address back in with Delayed ACK disabled, AND make sure the login_timeout value is set to 60 (the default is 5). Then do a rescan.

    Go back to CLI and re-run #vmkiscsid --dump-db | grep Delayed to verify.

    Also you should run #vmkiscsid --dump-db | grep login_timeout to check that setting as well.



  • 59.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jul 19, 2012 04:54 PM

    I am at 5.0.0, 721882

    I got 17 `node.conn[0].iscsi.DelayedAck`='x' results back with only 6 of them reporting a 0 and all the rest 1.

    I have some scheduled maintenance this weekend, so I am going to install the latest ESXi patch and clean out the discovered addresses.

    I also found this article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007829 referencing recommendations for EqualLogic arrays and iSCSI logins. We use EqualLogic, and the article recommends 15, which is what it is currently set at. I am not getting any initiator disconnect errors from the SAN.

    Is 15 too conservative from your experience?

    Thanks



  • 60.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Jul 19, 2012 06:03 PM

    The default in ESXi v5 is 5 seconds; in larger groups with many connections, that timeout will be too short. Setting it to 60 covers all scenarios.

    Also, VMware will be releasing a patch for 4.1 that will also allow the login timeout to be extended from 15 second default to 60.
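
    (On builds that expose it, e.g. ESXi 5.1 or the patched 5.0/4.1 releases mentioned above, the login timeout can also be set per adapter from the CLI. The vmhba name below is an assumption, and the LoginTimeout key may not exist on older builds:)

    esxcli iscsi adapter param get -A vmhba37 | grep -i login
    esxcli iscsi adapter param set -A vmhba37 -k LoginTimeout -v 60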

    Re: Delayed ACK. I've seen that also. Worst case, I've gone to the static discovery and manually modified each target, then repeated it on the other nodes. :-( No fun if you have a lot of volumes.



  • 61.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Nov 29, 2012 06:08 PM

    @IrvingPOP2

    I have been receiving these messages since we first built our solution, which consists of HP blades with 2x 1Gb NICs per server, a Cisco 3120G switch, and a NetApp FAS2040. I've been researching this issue for a long time, and your post has given me hope that there might be a light at the end of the tunnel. I'm planning on implementing some of your same steps, but I'm curious about a few things from your post:

    Make sure the active Path count matches the number of storage adapter NICs on your VM host or Storage system (whichever is less)

    We only have 2 links per server to attach to the network, but the FAS2040 has 4 NICs. The FAS2040's NICs are set up using LACP (Dynamic Multimode VIFs). Are 2 links per server enough for this configuration, or would you recommend more?

    Previously we had iSCSI Dynamic Discovery which added all 4 NetApp paths  for each storage adapter vmk (resulting in 16 paths per LUN);  this  resulted in "Path Thrashing".   Changed to Static discovery and manually  mapped only 1 iSCSI target per vmk.


    When I configured my ESX hosts for Static Discovery, the next time I rebooted those hosts the iSCSI paths were gone.  Have you run into this issue?

    Don't use LACP on either side.   LACP completely ruins RR MPIO.

    NetApp's documentation (TR-3802) discusses link aggregation, and LACP (Dynamic Multimode) looks like the best option on paper, as opposed to EtherChannel (Static Multimode), due to the fact that EtherChannel is susceptible to a "black hole" condition. I'm curious how you configured your storage and switch since you removed LACP. Would you be so kind as to paste the configs from your NetApp and switch?

    Lastly, out of all the changes that you made, which would you say was the most helpful?

    Thanks!



  • 62.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Nov 29, 2012 08:03 PM

    iBahnEST,

    Many months have passed since we implemented these changes, and many more lessons have been learned. Let me summarize them:

    Regarding the NetApp FAS2040:

    Lower-end NetApps give poor throughput (MB/s) compared to "dumber" arrays. However, they give much better IOPS, so the trade-off is yours to make. To summarize what I learned from my many conversations with NetApp:

      1. FAS2040 has a really tiny NVMEM cache (512MB, but only 256MB usable at a time).    Your statit and sysstat output will show huge amount of flushing to disk during write because of "nvlog full"
      2. WAFL is spindle-greedy.   If your aggregate RAID groups are less than the recommended size (16-20 disks) your throughput will suffer badly (like 15 MB/s per disk).  a 2040 only has 12 disks (split among 2 controllers) so the RAID groups are super un-optimized no matter what kind of disk you use.
      3. ONTAP 8 is RAM-greedy, especially with fancy features like Dedupe.    FAS2040 controllers only have 4GB of RAM each,  and NetApp will tell you that only 1.5GB is left to work with once the OS is booted.  See NetApp communities,  people with 4GB RAM filers (2040, 3100) are getting crushed by the upgrade to 8.1 when dedupe is involved.   Remove Dedupe and don't go higher than ONTAP 8.0.4.

    In our case, we shifted our backups (Netbackup direct-style off-host backup) from iSCSI to FC, thinking our iSCSI setup was still sub-optimal.   Sustained throughput (read only) still around 90-110 MB/s.       

    For the math-challenged, that is still comparable to what a single iSCSI gigabit link can achieve with jumbo frames enabled.

    Regarding iSCSI

    • In summary, I would never use iSCSI on another production system.  Ever again.   The amount of effort required to tune and monitor is huge and you STILL get sub-par performance.  Just not worth it.  
      • For NetApps, use NFS.  Even NetApp will tell you that the performance will be much better.
    • The biggest performance improvements we got (in iSCSI were):
      1. Reducing the number of iSCSI paths per LUN.   1-2 is enough, especially if you are storage throughput limited.
      2. 2 physical paths between VM host and storage doesn't mean just 2 iSCSI paths. Because you'll map an iSCSI session per "path" per LUN, you will still have contention on your physical paths between various LUNs.
      3. Definitely don't use LACP with iSCSI MPIO.    Remember that once a mac address pair has been assigned to an LACP channel it is stuck there until that channel goes down.  We found lots of link contention on both the NetApp and VM host side because LACP is dumb in the way it assigns and then never re-balances.   NetApp recommends LACP for NFS only.
      4. We went back from bytes=8800 to iops=1, as we found there were fewer latency spikes during business hours. Because of point #2 above, 2 iSCSI sessions will try to cram 8800 bytes down a single path (causing contention).

    Regarding your static discovery question: are you getting the paths by dynamic discovery and then removing the dynamic entries? It is best to remove all the dynamic stuff, reboot, and then add the entries manually.

    I can share with you a NetApp rc section which simply shows all 4 gigabit interfaces configured for iSCSI only.   Ports going to 2 different switches:

    ifconfig e0a 192.168.15.11 netmask 255.255.255.0 partner e0a mtusize 9000 trusted -wins up
    ifconfig e0b 192.168.15.12 netmask 255.255.255.0 partner e0b mtusize 9000 trusted -wins up
    ifconfig e0c 192.168.15.13 netmask 255.255.255.0 partner e0c mtusize 9000 trusted -wins up
    ifconfig e0d 192.168.15.14 netmask 255.255.255.0 partner e0d mtusize 9000 trusted -wins up

    Sorry for the lengthy post, hope that's helpful.



  • 63.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Sep 27, 2012 09:08 AM

    I too am seeing this on 2 of my 3 hosts. 1 host is hardly doing anything (at the moment); the other 2 are coming up with these messages, mainly out of hours.

    All 3 are the same spec, using local storage: RAID 6, 16 drives.

    ESXi v5.0.0, build 469512.

    Is there a fix?



  • 64.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Oct 09, 2013 04:45 PM

    For Equallogic iSCSI SAN there is now a best practices guide for ESX deployments.

    http://en.community.dell.com/techcenter/extras/m/white_papers/20434601/download.aspx

    Re: Local storage. I would strongly suggest upgrading to a current version of ESXi 5.0 (or better, 5.1). The build numbers are now well over 1 million, compared to the 400K you reported.

    A common cause of latency is having multiple VMDKs on a single virtual SCSI adapter inside the VM. ESXi allows up to 4 virtual adapters per VM. More adapters mean more concurrent IO operations are possible. A single adapter can only talk to one "drive" / VMDK at a time. If you have more than 4 VMDKs, don't put your busiest disks on a single controller; spread them out over the 4 adapters.

    If you search for Virtual SCSI adapters and paravirtual SCSI adapters you will find more info.

    Regards,



  • 65.  RE: Event: Device Performance has deteriorated. I/O Latency increased

    Posted Nov 10, 2014 06:58 AM

    Hi, I'm experiencing the same issue. This problem occurs when copying something from inside the VM. I copy a file, which is 3 GB, from C:\ to C:\Temp inside the OS of the VM. Then the warnings start to appear. I also experienced this when installing/upgrading the VMware Tools in the VM. It is really frustrating, because I see absolutely no slow performance inside the VMs.

    We upgraded firmware, OS, and drivers of all parts. No result.

    Erwin