
6 esx hosts zoning lun

  • 1.  6 esx hosts zoning lun

    Posted Feb 25, 2013 02:41 PM

    hello,

    We have 6 ESX hosts with 2 HBAs each, 2 Brocade 300 FC switches, and one HUS 110 with a RAID 5 (7+1) group.

    For this scenario I need only one datastore for the VMs, or maybe two more.

    On the storage array, should I present the LUNs on one port and then zone each ESX host to that port across the 2 FC switches?

    Or should I create another RAID group or LUN on the other storage port?

    Is there a diagram?
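
    Just to make the setup concrete, here is a rough sketch of the inventory (all names and port labels below are placeholders I made up, not the real WWNs):

        # Hypothetical inventory of the environment described above.
        # Names and port labels are placeholders, not real WWNs.

        hosts = [f"esx{i}" for i in range(1, 7)]          # 6 ESX hosts
        hbas_per_host = ["hba0", "hba1"]                  # 2 HBAs each
        switches = ["brocade300_A", "brocade300_B"]       # 2 Brocade 300 FC switches
        sp_ports = ["SPA0", "SPA1", "SPB0", "SPB1"]       # assumed HUS 110 front-end ports, 2 per controller

        # hba0 of every host is cabled to fabric A, hba1 to fabric B
        cabling = {(host, "hba0"): "brocade300_A" for host in hosts}
        cabling.update({(host, "hba1"): "brocade300_B" for host in hosts})

        print(len(hosts) * len(hbas_per_host), "initiator ports in total")  # 12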



  • 2.  RE: 6 esx hosts zoning lun



  • 3.  RE: 6 esx hosts zoning lun

    Posted Feb 25, 2013 03:25 PM

    OK, but should I make one RAID 5 (7+1) group with 2 or 3 volumes? I need only one LUN for VMs now, and maybe in the future I will use another 3. Or do I have to split the RAID group into two 3+1 groups?

    And should the volume I create from the RAID group be 'connected' to the two SPs in the storage array? As active/passive or active/active?



  • 4.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 08:29 AM

    As a best practice, you should create only one RAID group (RAID 5 or RAID 1+0) with all your volumes; then you can create LUNs of whatever size you want.

    The RAID group should be connected to the two storage processors in the storage array as active/active or active/passive.
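
    For illustration only, a possible layout along those lines (names, sizes and port labels below are made up, not a Hitachi recipe):

        # Illustrative layout only; one RAID 5 (7+1) group with LUNs carved
        # out of it, each LUN presented on front-end ports of both SPs.

        raid_group = {
            "name": "RG0",
            "level": "RAID 5 (7+1)",
            "luns": [
                {"name": "LUN0_vmfs01", "size_tb": 2},
                {"name": "LUN1_vmfs02", "size_tb": 2},  # optional second datastore LUN
            ],
        }

        # Each LUN is mapped to ports on both SPs so every zoned path can reach it.
        lun_presentation = {
            lun["name"]: ["SPA0", "SPA1", "SPB0", "SPB1"]
            for lun in raid_group["luns"]
        }
        print(lun_presentation)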

    Thanks..

    Pramod



  • 5.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 09:09 AM

    So first I create the RAID group, RAID 5 (7+1).

    Is the RAID group connected to the SPs, or the volumes/LUNs I am creating?

    Then, how many LUNs should I create as a best practice for the VMs? Is it good to have 2 LUNs and then 8-10 VMs in each LUN (datastore)?

    Or one LUN with all the VMs on it?



  • 6.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 10:47 AM

    So first I create the RAID group, RAID 5 (7+1).

    Is the RAID group connected to the SPs, or the volumes/LUNs I am creating?

    Your RAID group / LUNs should be connected to both storage processors, and the hosts reach them via zoning.

    Then, how many LUNs should I create as a best practice for the VMs? Is it good to have 2 LUNs and then 8-10 VMs in each LUN (datastore)?

    Or one LUN with all the VMs on it?

    It totally depends on your requirements. How much VMDK space are you allocating to each VM guest? Are you adding any additional disks to the VM guests? 1 or 2 TB is enough for 8 to 10 VMs.
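
    As a rough illustration of that sizing (the per-VM numbers below are just assumptions, not a recommendation):

        # Back-of-the-envelope datastore sizing; per-VM numbers are assumptions.
        vm_count = 10          # upper end of the 8-10 VMs per datastore mentioned above
        vmdk_gb_per_vm = 120   # assumed OS + data disk per guest
        overhead_factor = 1.3  # swap files, snapshots and free-space headroom

        required_gb = vm_count * vmdk_gb_per_vm * overhead_factor
        print(f"~{required_gb / 1024:.1f} TB needed")  # ~1.5 TB, in line with 1-2 TB per LUN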

    Thanks

    Pramod



  • 7.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 12:30 PM

    I have 6 ESX hosts with 2 HBAs each.

    Each HBA is connected to a different FC switch. With zoning, each HBA sees the 2 SPs, so I will create 12 zones?



  • 8.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 01:02 PM

    I think you'll have to create only 4 Zones for your 6 hosts.


    HBA0 to storage processor SPA0
    HBA0 to storage processor SPB1
    HBA1 to storage processor SPA1
    HBA1 to storage processor SPB0

    Let the other guys confirm if I'm wrong..

    Thanks

    Pramod



  • 9.  RE: 6 esx hosts zoning lun
    Best Answer

    Posted Feb 26, 2013 01:24 PM

    You will have to create 4 zones per host, two on each FC switch, so a total of 24 zones for your 6 hosts with one storage array.

    Edit: did a quick and dirty "drawing". Take a look at the attached jpg. When you start at the storage, there are 4 possible ways to reach each server, that are your needed zones.
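
    In the same spirit as the drawing, here is a small sketch that simply enumerates those zones (host and port names are placeholders, use your real aliases or WWNs):

        # Enumerate single-initiator / single-target zones for 6 hosts with 2 HBAs each.
        hosts = [f"esx{i}" for i in range(1, 7)]

        # Fabric A carries hba0, fabric B carries hba1; each SP has one port in each fabric.
        fabrics = {
            "fabric_A": {"hba": "hba0", "sp_ports": ["SPA0", "SPB1"]},
            "fabric_B": {"hba": "hba1", "sp_ports": ["SPA1", "SPB0"]},
        }

        zones = [
            (fabric, f"{host}_{cfg['hba']}", sp_port)
            for host in hosts
            for fabric, cfg in fabrics.items()
            for sp_port in cfg["sp_ports"]
        ]

        print(len(zones), "zones in total")  # 24: 4 per host, 2 per fabric
        for zone in zones[:4]:               # the 4 zones of the first host
            print(zone)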

    Regards



  • 10.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 01:36 PM

    Can I create one zone per HBA on each FC switch, but with the 2 SP ports from the storage array in the same zone? So single initiator to multiple array ports?

    Or is it better to zone each HBA on each FC switch with 1 SP, and the same HBA with the other SP in a separate zone?

    Is there a big difference between these?



  • 11.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 04:33 PM

    You *can* do what you are suggesting (this is called multi-target zoning), but it's *safer* and generally recommended to create zones with only a single initiator and a single target.
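
    For a single HBA the two approaches look roughly like this (placeholder names again):

        # One HBA on fabric A that needs to reach two storage ports.
        initiator = "esx1_hba0"
        targets = ["SPA0", "SPB1"]

        # Multi-target zoning: a single zone containing the HBA and both SP ports.
        multi_target_zone = {"z_esx1_hba0": [initiator] + targets}

        # Single-initiator / single-target zoning (recommended): one zone per target.
        sis_zones = {f"z_esx1_hba0_{t}": [initiator, t] for t in targets}

        print(multi_target_zone)  # 1 zone, 3 members
        print(sis_zones)          # 2 zones, 2 members each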



  • 12.  RE: 6 esx hosts zoning lun

    Posted Feb 26, 2013 02:21 PM

    Thanks schepp for correcting me.. :)