Hi All,
I have two ESXi hosts with 12 NICs each, both running ESXi under vSphere 5. My setup looks like this:
[ESXi1] ------------------------+
                                |
[ESXi2] ------------------------+---[Production LAN switch]
                                |
[NetApp SAN]                    |
   [1 SATA shelf] --- vif0 -----+   <-- etherchannel back to switch (CIFS)
   [1 SAS shelf] ---- vif0 -----+   <-- etherchannel back to switch
Please excuse the crude diagram. Currently, each ESXi host has two NICs teamed and connected to our production LAN switch; I have not placed them in a VLAN yet. vif0 on the NetApp's SATA shelf is etherchanneled back to the production LAN switch for CIFS, and vif0 on the SAS shelf is etherchanneled back to the same switch. Both SAN etherchannel connections are in a VLAN.
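For reference, this is roughly how I pull the current vSwitch uplink/teaming config off each host with pyVmomi (a Python sketch; the hostname and credentials are placeholders, not my real ones):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab only: skip certificate verification.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi1.example.local", user="root",
                      pwd="xxxx", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name)
            for vsw in host.config.network.vswitch:
                # pnic keys look like 'key-vim.host.PhysicalNic-vmnic0'
                uplinks = [p.split("-")[-1] for p in vsw.pnic]
                teaming = vsw.spec.policy.nicTeaming
                policy = teaming.policy if teaming else None
                print("  %s uplinks=%s teaming=%s" % (vsw.name, uplinks, policy))
        view.Destroy()
    finally:
        Disconnect(si)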
My question is how to properly connect the ESXi hosts and make the best use of the remaining 10 NICs on each host. I plan to use vMotion. How should I design the VLANs for the ESXi hosts? In short: what is the best-practice way to use these resources, and how would you design it? The SAN is NFS.
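To make the question concrete: if a dedicated vMotion VLAN is the right call, I assume I would script it per host roughly like this (pyVmomi sketch; the VLAN ID, IP, and vSwitch/port group names are placeholders I made up, not settings I have):

    from pyVmomi import vim

    def add_vmotion_port(host, vlan_id=100, ip="10.10.10.11"):
        # Assumes 'host' is a connected vim.HostSystem (see the snippet above)
        # and that a vSwitch named 'vSwitch1' already exists.
        netsys = host.configManager.networkSystem
        # 1. A port group tagged with the (hypothetical) vMotion VLAN.
        pg = vim.host.PortGroup.Specification(
            name="vMotion", vlanId=vlan_id, vswitchName="vSwitch1",
            policy=vim.host.NetworkPolicy())
        netsys.AddPortGroup(portgrp=pg)
        # 2. A VMkernel NIC on that port group with a static IP.
        nic = vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip,
                                 subnetMask="255.255.255.0"))
        vmk = netsys.AddVirtualNic(portgroup="vMotion", nic=nic)
        # 3. Flag the new vmk interface for vMotion traffic.
        host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)

Is a dedicated VLAN (and dedicated NICs) per traffic type the way people usually carve this up?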
Also, my production network is 192.168.5.x. Once I start creating VMs they will need to be on that subnet, so do I just assign them 192.168.5.x addresses as I would for hosts on a physical switch, and does the vSwitch need a 192.168.5.x IP as well?
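My tentative understanding is that the vSwitch is a layer-2 device, so the 192.168.5.x addresses go inside the guests (or on VMkernel ports), not on the vSwitch itself, and all I would add for the VMs is a port group, something like this (pyVmomi sketch with made-up names). Is that right?

    from pyVmomi import vim

    def add_vm_portgroup(host):
        # Assumes 'host' is a connected vim.HostSystem (see the first snippet).
        spec = vim.host.PortGroup.Specification(
            name="VM Network 192.168.5",  # guests attached here get 5.x IPs
            vlanId=0,                     # 0 = untagged; set an ID if the uplink is a trunk
            vswitchName="vSwitch0",       # assumed vSwitch name
            policy=vim.host.NetworkPolicy())
        host.configManager.networkSystem.AddPortGroup(portgrp=spec)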
Any help, suggestions, or insight would be greatly appreciated. I want to design this the best possible way.
Thanks again