We would appreciate any advice on "best practices" for setting up redundant physical switches that connect our ESXi host(s) to an NFS NAS via 10GbE.
Please see the two attached diagrams. We cannot decide between stacking the switches and leaving them unstacked and separate (using NFS multipathing). I want to make sure either scenario is feasible.
STACKED
The primary downside I see to stacking the switches is that we wouldn't be able to do firmware updates without taking the entire cluster offline, because both switches in the stack would reboot at the same time. Firmware updates are rare, but I want to be able to do them without shutting every VM down. To me this is a significant disadvantage of stacking, and I would like to avoid it for this reason alone.
In a stacked scenario, we would most likely employ LACP on the NAS for redundant links to the switch stack, although I suppose we could still try multipathing with two NAS IPs. I understand that I would probably use IP hash instead of the default NIC teaming (route based on originating virtual port ID) if we set up link aggregation. One thing I've read is that on a standard vSwitch, IP hash only works with a static EtherChannel on the switch side; actual LACP on the host requires a Distributed Switch with a LAG configured.
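If we go the IP-hash route on a standard vSwitch, I believe the host-side change is a one-liner like the following (vSwitch1 and the vmnic names are placeholders for whatever we actually use):

    # hypothetical storage vSwitch; set load balancing to IP hash with both uplinks active
    esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash -a vmnic1,vmnic2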
UNSTACKED
I believe we would have to use multipathing if we do NOT stack the switches and want to keep a single VMK. I don't know which option I would want for the vmnic teaming in this scenario.
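For reference, if we keep the default teaming in the unstacked design, I assume the policy would just be made explicit like this (again, the vSwitch and vmnic names are placeholders):

    # keep the default "route based on originating virtual port ID" with both uplinks active
    esxcli network vswitch standard policy failover set -v vSwitch1 -l portid -a vmnic1,vmnic2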
My understanding is that with NFS v4.1 multipathing (session trunking), ESXi balances traffic across both NAS IPs automatically, so vmnic1 and vmnic2 could each end up carrying traffic to either IP. This is why the switches would need a link between them, so that either vmnic can reach either NAS IP through either switch.
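If it helps anyone sanity-check me: as I understand it, with NFS v4.1 both NAS IPs are supplied at mount time, something like this (the IPs, export path, and datastore name are made up):

    # mount an NFS 4.1 datastore with two server IPs for session trunking
    esxcli storage nfs41 add -H 192.168.10.11,192.168.10.12 -s /export/datastore1 -v nfs-ds1

That's the part I'd most like confirmed, since NFS v3 has no real multipathing equivalent and this depends on the NAS actually supporting v4.1 session trunking.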
This setup would let us reboot either switch without taking down the entire cluster.
Can anyone offer any insight into either the STACKED or UNSTACKED design, or into NFS multipathing in general? Am I overlooking anything important, or should either of these scenarios work?
PS: I think both of these designs cover the case where either switch has an actual outage.