VMware vSphere

  • 1.  Maintenance Mode Question

    Posted May 06, 2014 01:16 PM

    We currently do not utilize Admission Control. I know, I know. But it's not my call. Well now I have a two-node cluster with each host showing over 90% memory utilization. I'm concerned that I will not be able to put one host into maintenance mode. Is there anything I can do in a situation such as this short of powering off select VMs?
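
    One way to sanity-check a situation like this before trying anything is to pull the actual host memory numbers from vCenter rather than eyeballing the client. Below is a minimal pyVmomi (Python) sketch that prints consumed memory per host; the vCenter address and credentials are placeholders, and this is an illustration, not a tested procedure.

    ```python
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- replace with your own vCenter.
    ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            used_mb = host.summary.quickStats.overallMemoryUsage   # consumed, in MB
            total_mb = host.summary.hardware.memorySize // (1024 * 1024)
            print("%-25s %6d / %6d MB (%.0f%%)"
                  % (host.name, used_mb, total_mb, 100.0 * used_mb / total_mb))
        view.Destroy()
    finally:
        Disconnect(si)
    ```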



  • 2.  RE: Maintenance Mode Question

    Broadcom Employee
    Posted May 07, 2014 06:13 AM

    I have not come across such a situation myself, but without powering down some of the VMs, the host onto which you migrate the powered-on VMs will be heavily loaded and the VMs may not perform well. There can also be an impact on the vMotion operations themselves; they will not be as smooth as they could be. Since admission control is not enabled, vCenter will not stop the host from entering MM because of the memory constraint, but if memory becomes overcommitted, a memory alarm will be raised.

    If this is not a production environment, you can go ahead, put the host into MM, and post your observations (without DRS you will have to evacuate the host manually first). I am sure that in production, admins usually do not allow a cluster to cross 90%.

    You can also enable DRS on that cluster before putting the host into MM; DRS will then automatically start evacuating the host. It is also worth analysing whether you can power off any VMs on either host.
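
    For reference, the MM step itself can be scripted. Here is a rough pyVmomi sketch; the vCenter address, credentials, and host name are placeholders. With DRS in fully automated mode, vCenter migrates the powered-on VMs for you; without it, the task will wait until the host is evacuated manually.

    ```python
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask

    # Placeholder connection details and host name.
    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        host = content.searchIndex.FindByDnsName(dnsName="esx01.example.com",
                                                 vmSearch=False)
        # timeout=0 means no timeout; the task only finishes once every
        # powered-on VM has left the host (DRS in fully automated mode
        # does this for you, otherwise vMotion the VMs off manually first).
        task = host.EnterMaintenanceMode_Task(timeout=0,
                                              evacuatePoweredOffVms=False)
        WaitForTask(task)
        print("%s is now in maintenance mode" % host.name)
    finally:
        Disconnect(si)
    ```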



  • 3.  RE: Maintenance Mode Question

    Posted May 08, 2014 01:57 PM

    Thank you for the reply. A colleague of mine went ahead and proceeded yesterday without powering down any VMs. The memory utilization on the two hosts was 88% and 95%, and all of the VMs vMotioned successfully without issues.

    My concern is that I previously had a similar situation where memory utilization was high, and as I was putting a host into MM, two hosts briefly disconnected from vCenter. I cancelled MM, and the hosts did come back. I'm not sure what would have happened had I let it continue, but the memory percentage was in the red. I had never seen that happen before.



  • 4.  RE: Maintenance Mode Question

    Posted May 10, 2014 10:11 PM

    You have to differentiate between memory consumption and active memory usage. The hypervisor does not have access to the guest OS's free-memory list and does not know about the internal memory mappings of the guests.

    It therefore will usually not reclaim memory that was once claimed by the guest until memory pressure kicks in. So while a VM might show a lot of consumed memory, the actual working set of that VM is lower most of the time.

    This of course needs to be evaluated on a case-by-case basis, as there is no general rule of thumb: whether high memory consumption on the host is an actual issue or just a cosmetic one is extremely workload dependent (especially since ESXi 5.0 uses large pages, so TPS kicks in much later than it did in 4.x).
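
    To put numbers on the consumed-versus-active distinction, vCenter exposes both counters in the per-VM quick stats. A small pyVmomi sketch, again with placeholder connection details:

    ```python
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- replace with your own vCenter.
    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        print("%-30s %12s %12s" % ("VM", "consumed MB", "active MB"))
        for vm in view.view:
            qs = vm.summary.quickStats
            # hostMemoryUsage  = consumed host memory (MB)
            # guestMemoryUsage = active guest memory as estimated by ESXi (MB)
            print("%-30s %12d %12d" % (vm.name, qs.hostMemoryUsage,
                                       qs.guestMemoryUsage))
        view.Destroy()
    finally:
        Disconnect(si)
    ```

    If the active column is far below the consumed column across the board, a host showing 90%+ consumed memory may still absorb the evacuated VMs without heavy ballooning or swapping.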



  • 5.  RE: Maintenance Mode Question

    Posted May 13, 2014 09:14 PM

    That's a good point you bring up, Frank. I was thinking about turning off large pages to help TPS kick in sooner. What are the recommendations for this?