Following various best-practice articles and a bit of our own testing, we settled on the model shown in the figure below. The highlights of the setup are:
- all NICs are Broadcom iSCSI offload (dependent HBA) type
- separate physical switches for storage and VM traffic
- the Broadcom iSCSI adapters are driven by the built-in software iSCSI adapter
- round-robin load balancing is set up across the external NICs
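For reference, on ESXi the round-robin behaviour for iSCSI LUNs is applied as a path selection policy per storage device via esxcli. A minimal sketch (the naa.* device ID below is a placeholder, not from our setup; substitute an ID from your own device list):

```shell
# List storage devices to find the iSCSI LUN identifiers (naa.* names)
esxcli storage core device list

# Set the path selection policy to round robin for one device
# (the naa.* ID below is a placeholder; use an ID from the list above)
esxcli storage nmp device set --device naa.600000000000000000000001 --psp VMW_PSP_RR

# Verify the active policy for that device
esxcli storage nmp device list --device naa.600000000000000000000001
```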
A few concepts we eliminated en route to our deployment:
- jumbo frames: not worth the effort; we saw inconsistent performance and occasional loss of service
- a shared switch with storage and VM traffic separated using VLANs: performance degraded since both still share the same switch fabric
- SFP+ connectors on the server: we were not sure about backward compatibility with SFP slots
To answer your query: yes, it is imperative that you have separate, dedicated NICs for iSCSI traffic. You could get away with VLANs on a single NIC, but that would not be ideal in a production environment.
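For anyone who does go the VLAN route despite the caveat above, it is typically done by tagging the iSCSI port group; with dedicated NICs, the usual counterpart is binding each iSCSI vmkernel port to the software iSCSI adapter. A hedged sketch, where the port group name, vmk interface, VLAN ID, and adapter name ("iSCSI-1", vmk1, 20, vmhba33) are all placeholders, not values from our deployment:

```shell
# Single-NIC approach: tag the iSCSI port group with a VLAN ID
esxcli network vswitch standard portgroup set --portgroup-name "iSCSI-1" --vlan-id 20

# Dedicated-NIC approach: bind the vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1

# Confirm the binding
esxcli iscsi networkportal list --adapter vmhba33
```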
anjanesh babu
www.itgeeks.info