I am having a really hard time architecting a solution for an environment I am working on. I have found plenty of documentation covering NFS or iSCSI individually, but I cannot figure out the best way to run both at the same time on the hardware we have. We are dealing with the following:
- 5 x ESXi 4.1 hosts
  - 4 x 1Gb physical NICs per host for VM traffic (not relevant here)
  - 4 x 1Gb physical NICs per host dedicated to storage
- 2 x Catalyst 3750G-E
  - In a stack; there are not enough physical ports for both VM traffic and storage, so the stack is used only for VM traffic
- 2 x Catalyst 2960G
  - Separate switches dedicated to storage traffic
  - No connectivity between the two switches; I could possibly bond 4 x 1Gb interfaces between them as an interconnect if that would help
- 2 x NetApp FAS2050s
  - One is a clustered unit with two heads (a total of 4 x 1Gb physical NICs, 2 per head)
  - The other is a single head (a total of 2 x 1Gb physical NICs)
- 1 x StoreVault S500
We need to use both iSCSI and NFS on the devices above, and I cannot wrap my head around how the vSwitches are going to look. One note: we have Enterprise Plus, so vNetwork Distributed Switches and "Route based on physical NIC load" are available, which sounds really intriguing for the NFS traffic. For iSCSI, I have always been told there should be a 1-to-1 mapping between VMkernel port and physical NIC, and that each path to the SAN should be on a separate subnet, to ensure traffic is sent/received on the expected interfaces and to allow proper multipathing. From what I can tell, NFS cannot be multipathed; the recommendation is to team all the physical NICs into a single VMkernel port and let the NFS traffic balance across datastores (different target IPs).
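To make the iSCSI part concrete, this is roughly how I picture the port binding on each host from the ESXi console (just a sketch based on the docs I have read; the port group names, subnets, vmk numbers, and vmhba33 as the software iSCSI adapter are my assumptions):

```
# One port group per physical NIC, each on its own subnet (names/IPs made up)
esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vswitch -A iSCSI-2 vSwitch1
esxcfg-vmknic -a -i 10.10.1.11 -n 255.255.255.0 iSCSI-1
esxcfg-vmknic -a -i 10.10.2.11 -n 255.255.255.0 iSCSI-2
# ...repeat for the other two subnets, then override the failover order in the
# vSphere Client so each port group has exactly one active uplink.

# Bind each VMkernel port to the software iSCSI adapter (assuming vmhba33)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
```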
Anyway, a couple of the questions I am struggling with are:
- Do I use vNetwork Distributed Switches for storage traffic, or do I need to stick with standard vSwitches for some reason?
- Given that we have 4 paths set up in the iSCSI configuration, how can NFS compete with that unless we have several (at least 4) NFS exports? Maybe I do not understand iSCSI multipathing entirely.
- How would the networking configuration look as far as vSwitches, etc.? I envision a single vSwitch with "Route based on physical NIC load" as the teaming policy: 4 x VMkernel ports for iSCSI, each with a single active physical NIC and each on a different subnet, plus 1 x VMkernel port for NFS with all 4 physical NICs active using the inherited "Route based on physical NIC load". On the FAS/StoreVault side I am still confused about using VIFs with aliases vs. individual interfaces. It seems like individual interfaces make sense for iSCSI, while LACP or multi-mode VIFs make sense for NFS (I have sketched what I mean just after this list).
- Where does LACP come into play? I know the 2960Gs do not do cross-switch LACP, so do I do a 2-port LACP channel from each switch back to each host? (My rough guess at the switch side is also sketched after this list.)
- Am I trying to make a bowling ball fit through a garden hose? Do I need to get the storage traffic onto the 3750 stack and do cross-stack LACP? Do I need to break the NFS and iSCSI traffic out into two separate vSwitches with different physical NICs?
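For the VIF question above, here is the kind of thing I am imagining on one head of the clustered FAS2050 (Data ONTAP 7-mode syntax as I understand it; the VIF name, interface names, addresses, and partner setting are placeholders, not a working config):

```
# Option A: multi-mode (LACP) VIF for NFS, with an alias so datastores can be
# spread across more than one target IP
vif create lacp vif0 -b ip e0a e0b
ifconfig vif0 192.168.100.10 netmask 255.255.255.0 partner vif0
ifconfig vif0 alias 192.168.100.11

# Option B: leave e0a/e0b as individual interfaces on separate subnets for iSCSI
# ifconfig e0a 10.10.1.50 netmask 255.255.255.0
# ifconfig e0b 10.10.2.50 netmask 255.255.255.0
```

With only two NICs per head, it seems like it has to be one or the other per head, which is part of what I am wrestling with.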
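And my rough guess at the switch side of that, assuming both filer ports land on the same 2960G (port numbers and VLAN are made up):

```
interface Port-channel10
 switchport mode access
 switchport access vlan 100
!
interface range GigabitEthernet0/11 - 12
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode active
```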
I am very much open to suggestions and any advice or articles that will help clarify things. I have combed through numerous articles only to find more questions that needed answers; they all seem to cover one protocol or the other, but never both in the same environment.
Any help would be greatly appreciated!
Thanks!