VMware vSphere

  • 1.  Support for new Intel C204 or Q67 chipsets?

    Posted Apr 15, 2011 05:44 AM

    A little background:

    I'm planning on upgrading my old Win2k8 file server and running ESXi 4.1: the file server VM for daily use, plus the extra capacity for learning VMware and some testing. I have two possible builds and was looking for a little advice:

    Supermicro X9SCM-F-O (C204 chipset, 2 Intel NICs, KVM over IP)

    Xeon E3-1230 (3.2GHz, 4 cores, 8 threads)

    8GB of DDR3-1333 ECC

    or

    Intel DQ67SWB3 Motherboard (Q67, 1 onboard Intel NIC, KVM over IP)

    Intel i5-2400 (3.1GHz, 4 cores, no Hyper-Threading)

    8GB of DDR3-1333 (non-ECC)

    Intel Pro 1000 PT dual-NIC card

    The Xeon route is about $115 more expensive in total (mainly due to the ECC memory cost). This server will run 24/7 with two regular VMs: 2008 R2 as a file server, and WHS 2011 for client backups and remote access. (6TB total storage, about 4TB allocated to the file server, 500GB to WHS 2011 for backups, and the rest for "playtime" VMs.)

    Questions:

    Storage will be an Areca ARC-1220 RAID card with 5-6 drives in RAID 5. I have an unused 300GB 10k drive that I was considering using as the ESXi and "OS" drive, attached to the motherboard's SATA controller. Would this be a good idea? Or should I just do everything from the RAID array?

    What does the Xeon/ECC setup buy me that I don't get with the "desktop" processor and motherboard?  It's about $150 more initially, and more when/if I ever want to upgrade the RAM.

    From what I've read online so far, the Q67 and C204 chipsets SHOULD work OK, but are not officially supported yet. It seems like the onboard Intel NIC chipset isn't supported on either board, but there's a way to manually load working drivers. I'd like to get NEW hardware for this build to "future proof" as much as possible. However, if it's going to be nothing but problems until a new version of ESXi comes out, it might not be worth it...

    I'm still learning this whole virtualization thing, so any advice or info would be appreciated.



  • 2.  RE: Support for new Intel C204 or Q67 chipsets?

    Posted Apr 15, 2011 01:14 PM

    Greetings Bandalo,

    For the past year I've been using the Supermicro X8SIL series of motherboards/systems in our VMware Academy, www.cccti.edu/vmware. They work great. The X9SCM, as you've noted, takes advantage of the C204 chipset and uses the LGA 1155 socket. I plan on acquiring the X9SCM myself as soon as I can catch a breath.

    To answer the question regarding the Xeon and ECC memory, the main benefit comes if you ever need to bump up to 32GB. With standard desktop memory the most you can go to is 16GB.

    The other comment/suggestion I would make to protect yourself and, as you put it, "future proof", is to separate your storage from your host. This may be a bit of overkill for a home/personal cloud environment, and it's something you can always consider at a later date.

    Although I cannot absolutely confirm full functionality of the X9SCM motherboard at the moment, I have to believe everything will function out of the box, particularly since it is obvious these boards were built to virtualize!

    I will likely be able to confirm this in about a month, give or take a week or two.

    Pete



  • 3.  RE: Support for new Intel C204 or Q67 chipsets?

    Posted Apr 17, 2011 06:36 AM

    Thanks for the information from both of you! I have a couple of follow-up questions, if you have time.

    For a home lab, I probably won't be going over 16GB of RAM. Based on that, is there any real reason to go with the Xeon/ECC solution over the desktop setup? I know the Q67 apparently supports every virtualization technology Intel has, so I should be OK on that side.

    For separating the host from the storage, are you talking about some sort of SAN-type arrangement (iSCSI or something like that)? All I know about SANs is what I've read on Wikipedia. It sounds like a fairly expensive option for a home setup, though; the small drive enclosures I saw on Newegg were running $800-2000...

    As for DSTAVERT's points:

    So with a 6TB array (on a hardware Areca ARC-1220 controller), I can only give a VM storage in 2TB chunks? My plan was to give the file server about 4TB total. Is there any way to assign more, other than just giving it two separate 2TB drives?

    I thought about the NAS route when I started this project a year or two ago, but I already have all the server hardware. I was hoping not to spend $600-1000 more for a NAS if I can avoid it. I would think running the shares under a full version of 2008 R2 would give more flexibility overall than a NAS.

    As for the RAID controller, it does NOT have the battery backup module, but I do have the server on its own small UPS. The BIOS does allow write-caching, but it warns you that it's a bad idea without the battery backup. If I have the server shut down when the UPS gets to 50%, I should be OK on that count, right?

    Thanks again for your time!



  • 4.  RE: Support for new Intel C204 or Q67 chipsets?

    Posted Apr 17, 2011 09:34 AM

    You can use extents to join two ~1.9TB VMFS volumes together, but then you still need to create two VMDK files and use Windows dynamic disks to create a RAID 0 (spanned) volume to give you your 4TB.

    The problem with this is that you are relying on software RAID to join the VMDKs, and any kind of array bump (like a drive going offline) can cause the software array to corrupt. If it's media files, etc., then it's probably not a huge loss - since I assume you are not going to be backing it up, but rather hoping the RAID is going to provide you with some redundancy.
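    To put rough numbers on that, here is a minimal sketch of the math, assuming the 5-6 drive RAID 5 of 1.5TB disks and the ~2TB-per-VMDK ceiling discussed elsewhere in this thread (the drive count and sizes are just illustrative, not output from any tool):

        # Rough capacity math for the proposed setup (illustrative only).
        # Assumptions: 1.5 TB drives in RAID 5 and a ~2 TB ceiling per VMDK,
        # as discussed in this thread; none of these values come from a tool.

        def raid5_usable_tb(drive_count, drive_size_tb):
            # RAID 5 sacrifices one drive's worth of capacity for parity.
            return (drive_count - 1) * drive_size_tb

        def vmdks_needed(target_tb, max_vmdk_tb=2.0):
            # How many <= 2 TB VMDKs are needed to present target_tb to the guest.
            full, remainder = divmod(target_tb, max_vmdk_tb)
            return int(full) + (1 if remainder else 0)

        print(raid5_usable_tb(5, 1.5))   # 6.0 TB usable from five 1.5 TB drives
        print(vmdks_needed(4.0))         # 2 VMDKs, joined inside the guest

    So the 4TB share would indeed end up as two VMDKs joined by a Windows dynamic disk, exactly as described above.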

    I have a couple of recommendations. Make sure you bought the battery backup unit for that Areca, or you won't be doing anything on that box, as the performance will be horrible.

    Also, if you are not going to be doing backups, then I highly recommend you purchase RAID Edition SATA drives rather than the cheap models. The RAID Edition drives all have the error-recovery timeout values designed for RAID controllers... the other drives now prevent you from changing the timeout, and therefore under any kind of load (and even under no load) drives will mark themselves offline and go into rebuild.

    I have a 3ware controller where I started out with cheap drives and then had to purchase four expensive drives. The expensive RAID drives never have a problem; I am always seeing rebuild errors on the WD Black drives...

    I've experienced this in a couple of different configurations, and the RAID Edition drives (I like the Western Digital ones) have never given me trouble.



  • 5.  RE: Support for new Intel C204 or Q67 chipsets?

    Posted Apr 17, 2011 06:54 PM

    Well, I do routine backups of all of my media files, just in case. All the pictures and family videos are backed up to Carbonite online, and all of the music, TV shows, and movies are backed up to external drives every couple of weeks. You cannot imagine what would happen to me if I lost any of the photos or home videos... :)

    Now, I do NOT have the battery unit for the Areca RAID controller. I'm pretty sure I can enable write-caching without it, though, and that should improve performance. I have a UPS on the box, and I will have it set to shut down before the UPS runs out, so that should take care of that concern. As for the drives, I am using the Samsung EcoGreen 1.5TB drives. They're not super high performance by any means, but I got a good deal on them, so I can't complain. I don't know how much of a performance concern this will really be, since it's mainly just me using it for learning (both VMware and Windows Server/Active Directory). Am I really going to see that much of a performance hit?

    I have been running these drives for about 4-5 months so far, and I haven't seen them drop out or have any problems so far.

    Also, I will probably be installing onto a spare 300GB 10k RPM WD drive, and running the "boot" disks for the VMs from there. (I was thinking an 80GB VMDK for the file server OS, then adding the 2TB VMDKs from the RAID array, so the faster drive is covering the boot volumes.) Good idea? Bad idea?



  • 6.  RE: Support for new Intel C204 or Q67 chipsets?

    Posted Apr 15, 2011 01:42 PM

    Do remember that the largest LUN that ESXi will recognize is slightly less than 2TB. The largest VMDK has the same limitation.
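    For reference, the relevant VMFS-3 maximums in ESX(i) 4.x, as I understand them (worth verifying against VMware's Configuration Maximums document for your exact build), look roughly like this:

        # Approximate VMFS-3 limits in ESX(i) 4.x (my understanding; verify
        # against VMware's Configuration Maximums document for your release).

        TB = 1024 ** 4
        GB = 1024 ** 3

        MAX_LUN_BYTES = 2 * TB - 512  # largest single LUN/extent, just under 2 TB

        # The largest single file (VMDK) depends on the block size chosen when
        # the VMFS-3 datastore is formatted, and it cannot be changed later.
        MAX_VMDK_BY_BLOCK_SIZE = {
            "1MB": 256 * GB,
            "2MB": 512 * GB,
            "4MB": 1 * TB,
            "8MB": 2 * TB - 512,
        }

        for block_size, limit in MAX_VMDK_BY_BLOCK_SIZE.items():
            print(f"{block_size} block size -> max VMDK {limit / TB:.2f} TB")

    In other words, a datastore meant to hold ~2TB VMDKs needs to be created with the larger block size up front.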

    Personally, I would get a dedicated NAS device for the file-serving needs, or two and replicate between them.

    Make sure the individual component models (NICs, disk controllers, etc.) are on the HCL at http://vmware.com/go/hcl, otherwise you will be buying those separately. ESX(i) does not support software-based RAID controllers. A hardware RAID controller without a battery-backed cache module will be painfully slow in a virtual environment; you need the battery to enable write caching.