Hello all, this is my first post here; if it's in the wrong section, please accept my apologies.
I have an issue that I would like some advice on, and if we get to a solution, maybe it can help someone else in the future.
I have 3x Dell R710 servers running VMware ESXi 5.5 Update 2 with the latest patches as of 06/02/15. They are running in a cluster with DRS and HA.
In each server I have 2x QLogic HBAs, and each HBA is connected to a Brocade 300 switch: HBA1 to Switch 1 and HBA2 to Switch 2.
I also have the following storage attached to the Brocade switches:
- 1x Harris JBOD with 12x 300GB Seagate 15K SAS disks (2x HBAs)
- Dell EqualLogic SAN for shared VMDK storage via iSCSI for each of the hosts (no issues here)
On the Brocade switches I have zones set up as follows:
Zone1
- ESX1_HBA1
- JBOD_LUN1 (this alias contains 4x individual Seagate SAS disks; they are not RAIDed in hardware or configurable as a hardware LUN, the alias is just something I created and added the disks to)
Zone2
- ESX2_HBA1
- JBOD_LUN1 (same disks as above)
Zone3
- ESX3_HBA1
- JBOD_LUN1 (again same disks as above)
Zone4
- VirtualServer_VHBA1 (with the NPIV WWPN manually added)
- JBOD_LUN1 ( again same 4x disks that the ESX hosts have above)
The same setup exists on Switch 2, but for HBA2.
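In case it helps anyone reproduce this, the Switch 1 zoning was created along these lines from the Brocade FOS CLI (the WWPNs below are placeholders, not my real ones, and "cfg_sw1" is just an example config name; Zone2/Zone3 are created the same way as Zone1):

```shell
# Brocade FOS CLI on Switch 1 -- WWPNs are examples only
alicreate "ESX1_HBA1", "50:01:43:80:xx:xx:xx:01"
alicreate "VirtualServer_VHBA1", "28:23:00:0c:29:xx:xx:01"
alicreate "JBOD_LUN1", "50:05:07:60:xx:xx:xx:a1; 50:05:07:60:xx:xx:xx:a2; 50:05:07:60:xx:xx:xx:a3; 50:05:07:60:xx:xx:xx:a4"

# One zone per initiator, all pointing at the same JBOD alias
zonecreate "Zone1", "ESX1_HBA1; JBOD_LUN1"
zonecreate "Zone4", "VirtualServer_VHBA1; JBOD_LUN1"

# Add the zones to a config, save and enable it
cfgcreate "cfg_sw1", "Zone1; Zone2; Zone3; Zone4"
cfgsave
cfgenable "cfg_sw1"
```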
What I want to achieve is a virtual server running Windows Server 2012 R2 Standard, with each of those 4 disks in "JBOD_LUN1" attached to the VM via RDM; I will then use Windows Storage Spaces to create a software RAID over those 4 disks.
Then I want to be able to migrate the VM to any of the ESXi hosts and have it keep access to "JBOD_LUN1", so that if I have a host issue the VM won't lose connectivity to the disks.
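For completeness, I created the RDM mapping files roughly like this from the ESXi shell (the NAA ID, datastore name, and file names below are placeholders; I used `-z` for physical compatibility mode, since NPIV is only supported with RDM disks):

```shell
# On the ESXi host: list devices to find the NAA IDs of the four JBOD disks
esxcli storage core device list | grep -i naa

# Create a physical-compatibility RDM pointer file on the shared
# EqualLogic datastore, one per disk (example paths/IDs, not my real ones)
vmkfstools -z /vmfs/devices/disks/naa.5000c500xxxxxxxx \
  /vmfs/volumes/EQL-DS1/Win2012VM/jbod-disk1-rdm.vmdk
```

The pointer files live on the shared iSCSI datastore so that every host in the cluster can see them when the VM migrates.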
The issue I have is that if I assign the RDMs, put them on a new paravirtual SCSI controller (1:0, 1:1, etc.), and save the settings, then when I try to power up the VM it starts, reaches 46%, and never goes any further.
Any help would be appreciated. Am I doing this wrong?
Thanks,
Tom