I know this question has been asked ad nauseam in various ways, I'm sure, but what I need is more of a general answer to it. I can dig up whitebox specs on various web sites; that's not what I'm asking. My actual question will follow, but first, more of my background to help this all make more sense. Please feel free to rip apart any of my assumptions below, I don't mind criticism if it's constructive!
What I want to set up is a real physical (as opposed to virtual) lab. Please no "You can run ESXi on Workstation 8" type responses, I've moved well beyond that... I know enough now to be dangerous lol!
What I've never seen is any kind of answer about setting up a bare-minimum HIGH AVAILABILITY / FAULT TOLERANT hardware-only lab (one that can scale out) specifically for learning all the features vSphere/ESXi 5 offers... vMotion being a big one. I want something where I can pull the plug on one box to test whether I've built something that can handle hardware failures. I also want to break out the storage and learn iSCSI at the (affordable) hardware level.
Right now, so you know where I'm coming from, I'm running the following for a lab:
1. One physical box that is a dedicated Windows 2008 R2 AD server: a typical generic 4GB desktop box. This one stays static so I can tear down the other elements of my lab and rebuild immediately. IP: 192.168.1.5
2. One generic whitebox that is an all-in-one server; all parts are on the VMware HCL:
- 32GB of RAM
- Two 8-core AMD processors
- Supermicro motherboard
- 2TB of internal drives
- Four NICs (though I have issues if I use more than two; not sure why, but they would constantly start/stop under ESXi 4.1)
- One NIC is set aside for the VMs and another for management (though with this setup I guess that's pointless)
- I set this box to IP 192.168.1.2 and joined it to the domain
This whitebox runs the following virtual servers:
- A second Win 2008 R2 AD Server IP: 192.168.1.6
- A Windows 2008 R2 vCenter Server IP: 192.168.1.7
- At any given time, 2-10 assorted Windows 2008 R2 or Linux servers for learning other software
What I'd like to do is move away from the single monolithic ESXi box and get rid of the external physical primary AD server for the lab. I've got money I can spend (to a point), so this is what I was thinking I need:
- 3 identical physical boxes that can each run the ESXi 5 hypervisor. (The number of VMs I want to run aside...) how many NICs should each box have? I'm assuming one is the minimum, and there will be practically no real traffic, so I don't need to team NICs for throughput or worry about "cable failure" scenarios for HA or FT. But having dedicated NICs to separate the management network, the VM networks, and the vMotion network is another matter, as it relates to the configuration-learning side of things (one possible layout is sketched just after this list).
- I'd also like to set up the entire domain in VMs, so I'll need resources to handle the following (spread out evenly, 3-4 VMs per box):
- 2 Win2008 R2 AD Servers that will replicate DNS and DHCP info
- 1 Win2008 R2 Server dedicated to vCenter
- 1 Win2008 R2 Web/SQL server
- 2-6 Additional Win/Linux servers
- Assume each VM is set up expecting 2 cores and 4GB of RAM
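For reference, here's the per-host NIC layout I had in mind when asking about NIC counts above; it's just a quick Python sketch to make the role separation concrete, and the vmnic names and role assignments are purely my assumptions:

    # Hypothetical per-host layout for a 4-NIC box; vmnic names and role
    # assignments are my own guesses, not anything vSphere mandates.
    nic_plan = {
        "vmnic0": "Management network (vmkernel port vmk0)",
        "vmnic1": "vMotion (dedicated vmkernel port)",
        "vmnic2": "VM traffic (VM port groups)",
        "vmnic3": "iSCSI storage (vmkernel port for the software initiator)",
    }

    for nic, role in nic_plan.items():
        print(f"{nic}: {role}")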
I know that with minimal traffic and the way vSphere shares resources I don't need full specs for the RAM/CPU, but I'd like the system to stay 100% live (albeit, I know, slowly) if one box fails and the VMs on that box have to migrate to the other two and strain resources. I hope this makes sense? So would each whitebox having 16GB of RAM and a 4-core Intel or AMD CPU be adequate? Or would a 4-core CPU not be enough to handle 4 concurrent VMs, let alone more, without bogging down to unusable? What if 2 boxes failed? Could one of these boxes still manage all this? (Yes, I know it would be dog slow, but could it?)
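To put numbers on that, here's the back-of-the-envelope check I ran (a minimal sketch: the 10-VM count and 4GB/2-vCPU sizing come from my list above, the 16GB/4-core host spec is the one I'm proposing, and it deliberately ignores hypervisor overhead and memory-sharing tricks):

    # Rough failover capacity check. Assumes 10 VMs at 4GB RAM / 2 vCPUs each
    # (per my list above); ignores hypervisor overhead, transparent page
    # sharing, ballooning, and CPU scheduling details.
    VM_COUNT, RAM_PER_VM_GB, VCPUS_PER_VM = 10, 4, 2
    RAM_PER_HOST_GB, CORES_PER_HOST = 16, 4

    ram_needed = VM_COUNT * RAM_PER_VM_GB    # 40 GB total
    vcpus_needed = VM_COUNT * VCPUS_PER_VM   # 20 vCPUs total

    for hosts_up in (3, 2, 1):
        ram_avail = hosts_up * RAM_PER_HOST_GB
        cores_avail = hosts_up * CORES_PER_HOST
        print(f"{hosts_up} host(s) up: RAM {ram_needed}GB of {ram_avail}GB "
              f"({ram_needed / ram_avail:.2f}x), "
              f"vCPU:core ratio {vcpus_needed / cores_avail:.1f}:1")

If I've done that right, RAM is already oversubscribed (40GB needed vs. 32GB physical) the moment one host dies, and a single surviving 16GB host would be at 2.5x memory overcommit, which is where I'd expect ballooning and swapping to get painful; the vCPU-to-core ratios look survivable for mostly idle lab VMs.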
I'm also looking into something like a Buffalo Technology iSCSI NAS or a self-built OpenFiler iSCSI whitebox NAS. I'd like it to be small but fast; maybe six 128GB SSD drives in a RAID 5 setup? How many NICs should it have? Just 2? 4?
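For clarity, the RAID 5 arithmetic I'm basing that sizing on (six drives is my own assumption; RAID 5 spends one drive's worth of capacity on parity):

    # RAID 5 usable capacity: one drive's worth of space goes to parity.
    drives, drive_size_gb = 6, 128
    usable_gb = (drives - 1) * drive_size_gb
    print(f"RAID 5, {drives} x {drive_size_gb}GB SSDs: "
          f"{usable_gb}GB usable, {drive_size_gb}GB to parity")  # 640GB usable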
I currently have a 16-port unmanaged Gigabit switch that everything runs through. Should I replace this with a managed Cisco switch? More than one? How many ports?
I know this is a ton to ask in one post, so I hope the general overarching question is apparent!
Thank you for any help and advice!
~Michael