Dear Community,
I was hoping to see if anyone here had thoughts on running multiple iSCSI storage arrays on the same fabric/backbone. I'm trying to figure out whether I'm being fanatical about overusing VLANs or whether I will actually see a performance gain from doing so.
Long story short, my scenario is a three-host vSphere 4.1 HA cluster that will be accessing 3 separate IP-addressable storage arrays for iSCSI datastore/guest traffic.
Specifically I am using:
1 - Dell MD3220i SAN for our dev needs.
1 - Dell MD3000i SAN for production.
1 - OpenFiler iSCSI SAN for backups using Veeam.
The issue I have currently is that my hosts only have 4 NICs I can dedicate to iSCSI traffic at this juncture. The 4 NICs are split across 2 switches, and both switches already have 2 static VLANs assigned to them (VLAN 3 and VLAN 4).
At this point I have basically segregated my VLANs so that the Dell MD3220i (which has 8 ports across 2 controllers) can be accessed by the hosts using MPIO across 4 Ethernet ports at a time, with the other 4 on the second controller (depending on which controller the LUNs prefer).
On the hosts, the same NICs dedicated to iSCSI traffic for the development SAN are also used to access the MD3000i (production SAN), and I have a guest inside my hosts that requires its own iSCSI initiator for Veeam backups to work properly.
My question is: should I, or could I, use separate VLAN assignments on the 4 NICs in my hosts so that each of the 3 storage arrays is accessed on its own segregated VLAN, keeping in mind that at least 2 of my NICs will have to be able to access traffic on any one of the SANs?
For instance, should I take the iSCSI vSwitch I've created and then create an additional VMkernel port per iSCSI VLAN that I want to access? I already had to create a Virtual Machine port group under my iSCSI vSwitch to support the Veeam guest that needs access to its iSCSI LUN; I have not assigned a VLAN ID to it at this point, however.
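To make that concrete, here is roughly what I was picturing on the host side, using the ESX 4.x esxcfg commands. The vSwitch name, port group names, and IP addresses are just placeholders I made up for illustration:

    # additional VMkernel port group on my existing iSCSI vSwitch (calling it vSwitch1 here)
    esxcfg-vswitch -A iSCSI-MD3000i vSwitch1
    esxcfg-vmknic -a -i 10.10.4.11 -n 255.255.255.0 iSCSI-MD3000i

    # and another for the OpenFiler/Veeam backup traffic
    esxcfg-vswitch -A iSCSI-OpenFiler vSwitch1
    esxcfg-vmknic -a -i 10.10.5.11 -n 255.255.255.0 iSCSI-OpenFiler

    # list the vSwitch layout to sanity-check
    esxcfg-vswitch -l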
If I should be creating a VLAN per array that I am trying to access, is that something I should do at the switch level, trunking the switch port behind each NIC so it carries the VLAN ID of every array I am using, or should I simply be assigning the VLAN associations on the arrays and on the port groups of the iSCSI vSwitch?
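In other words, if tagging at the port group level is the way to go, I assume it would look something like the lines below, with the physical switch ports my iSCSI NICs plug into configured as 802.1Q trunks carrying those VLANs. The VLAN numbers and port group names are again only examples:

    esxcfg-vswitch -v 3 -p iSCSI-MD3220i vSwitch1
    esxcfg-vswitch -v 4 -p iSCSI-MD3000i vSwitch1
    esxcfg-vswitch -v 5 -p iSCSI-OpenFiler vSwitch1
    # the VM port group for the Veeam guest would get the backup VLAN as well
    esxcfg-vswitch -v 5 -p Veeam-iSCSI vSwitch1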
As you can see, there are a TON of options here. I'm really just looking to understand whether it really matters that I segregate my iSCSI traffic from itself onto separate VLANs, or if I'm just being fanatical about the whole ordeal. I realize that having separate physical NICs going to the switch/VLAN destined for each array would be the best solution, but I don't have that option at this time. With that in mind, I had figured that by VLANing off the arrays across the separate physical NICs I would be able to spread traffic more evenly across my physical switches and NICs and provide better throughput for all devices on the network.
Any thoughts or suggestions on this are appreciated!!
Thanks!