  • 1.  Data Locality in stretched Clusters

    Posted May 24, 2019 03:09 PM

    We have a VMDK in a stretched all-flash cluster. It only needs RAID-1 protection within one site, i.e. no geo-redundancy is required.

    I have set PFTT = 0 and SFTT = 1 for this.

    The question is: do I need to set the site affinity storage policy rule as well, considering we have put the VM in a DRS 'must' rule? The assumption is that the vSAN algorithms will ensure data locality.
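    (As a rough sketch, the rule set I have applied looks like this, expressed as a plain Python mapping. The capability IDs are the SPBM vSAN capability names as I understand them, so treat them as an assumption and verify against your vSAN release.)

        # Sketch of the intended policy, assuming the SPBM vSAN capability IDs
        # hostFailuresToTolerate (PFTT) and subFailuresToTolerate (SFTT).
        policy_rules = {
            "VSAN.hostFailuresToTolerate": 0,  # PFTT = 0: no cross-site protection
            "VSAN.subFailuresToTolerate": 1,   # SFTT = 1: RAID-1 within the site
        }
        print(policy_rules)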

    Thanks in advance



  • 2.  RE: Data Locality in stretched Clusters

    Posted May 24, 2019 03:58 PM

    Hello Seamus,

    Yes, DRS/HA rules should be aligned with data locality. As always with this (unless something requires host-pinning, e.g. passthrough devices), I would advise going with 'should' rules as opposed to 'must' rules: if for whatever reason a VM cannot run in its own site, having it run on the other site of the cluster and access its data across the ISL is better than it not being able to power on at all.
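    If you are scripting this, a 'should' rule is simply mandatory=False on the rule object. A minimal pyVmomi sketch, assuming the VM and host groups already exist on the cluster (the group names here are made up):

        from pyVmomi import vim

        # Minimal sketch: add a 'should' VM/Host affinity rule to a cluster.
        # 'cluster' is assumed to be a connected vim.ClusterComputeResource.
        def add_should_rule(cluster, name, vm_group, host_group):
            rule = vim.cluster.VmHostRuleInfo(
                name=name,
                enabled=True,
                mandatory=False,  # False = 'should' rule; True = 'must'
                vmGroupName=vm_group,
                affineHostGroupName=host_group,
            )
            spec = vim.cluster.ConfigSpecEx(
                rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)]
            )
            return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

        # e.g. add_should_rule(cluster, "PreferredSite-should-run", "PreferredSite-VMs", "PreferredSite-Hosts")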

    Bob



  • 3.  RE: Data Locality in stretched Clusters

    Posted May 28, 2019 08:08 AM

    Hi

    Thank you for your reply, but I was talking about the vSAN storage affinity rule. We have the DRS rules in place, but I was wondering whether we also need the vSAN storage affinity rules used in stretched clusters to ensure that the data is kept at one site or the other.



  • 4.  RE: Data Locality in stretched Clusters
    Best Answer

    Posted May 28, 2019 09:04 AM

    Yes, you will need to specify the locality, otherwise the data may be placed at either site, and when you have multiple objects forming a single VM they could end up in different locations. So define the site where the data needs to reside and ensure it aligns with the DRS rules!
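    For anyone scripting policies rather than clicking through the UI, the site affinity shows up as its own capability in the VSAN namespace. A rough pyVmomi sketch of building those rules; the 'locality' capability ID and its value strings are my assumption based on the stretched cluster policy UI, so verify them against the capability metadata in your environment:

        from pyVmomi import pbm

        # Hedged sketch: build SPBM capability instances for a policy.
        # Capability IDs/values are assumptions -- check them via the
        # profile manager's capability metadata on your own setup.
        def make_capability(namespace, cap_id, value):
            prop = pbm.capability.PropertyInstance(id=cap_id, value=value)
            constraint = pbm.capability.ConstraintInstance(propertyInstance=[prop])
            uid = pbm.capability.CapabilityMetadata.UniqueId(namespace=namespace, id=cap_id)
            return pbm.capability.CapabilityInstance(id=uid, constraint=[constraint])

        rules = [
            make_capability("VSAN", "hostFailuresToTolerate", 0),           # PFTT
            make_capability("VSAN", "subFailuresToTolerate", 1),            # SFTT
            make_capability("VSAN", "locality", "Preferred Fault Domain"),  # site affinity
        ]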



  • 5.  RE: Data Locality in stretched Clusters

    Posted May 28, 2019 10:19 AM

    I will write a short post about it for my blog, just so others can easily find it. Good question!



  • 6.  RE: Data Locality in stretched Clusters

    Posted May 28, 2019 10:32 AM

    Thanks Duncan

    For what it's worth, I set the policy below to achieve this, which gives one copy at the preferred site. This would be useful for Oracle RAC or AD servers.



  • 7.  RE: Data Locality in stretched Clusters

    Posted May 28, 2019 10:40 AM

    Yeah, I will use the below, which is the H5 interface and is different from the Web Client. Thanks, I will use your screenshot as well for those who still use the Web Client.



  • 8.  RE: Data Locality in stretched Clusters

    Posted Jun 05, 2019 09:46 AM

    Great post! I have a few more questions about the stretched cluster setup.

    So, if one creates a policy with "None - keep data on Preferred (stretched cluster)", will the data reside in the fault domain that is set as preferred?

    In our stretched cluster, site A is set as the preferred fault domain.

    We do have a number of virtual machines with policies that "stretch" them between site A and B, i.e. PFTT = 1.

    However, we also have virtual machines with a storage policy set to "None - keep data on Preferred (stretched cluster)", i.e. PFTT = 0. These virtual machines also have a "should" DRS rule that keeps them running on hosts located in site B.

    Two questions based on the above:

    1. Does this mean that those VMs running in site B with "None - keep data on Preferred (stretched cluster)" actually have their reads and writes served from site A?

    2. To expand on the above: for the VMs that are "stretched" between both sites with PFTT = 1, where do their reads get served from? Would it be the site of the host they are currently running on?

    Thanks!



  • 9.  RE: Data Locality in stretched Clusters

    Posted Jun 05, 2019 10:01 AM

    1. My answer would be yes, so I would change the rule to keep those VMs on hosts in the preferred site.

    2. If you have two copies, one in the preferred site and one in the secondary site, and the VM is running in the preferred site, then reads will be served from the replica closest to the hypervisor the VM is running on, i.e. the same site. This only applies to a stretched cluster that is running normally; if there were outages, this might change.

    So, to keep the data at one site only, it is always SFTT = 1 and PFTT = 0, with the locality set to Preferred or Secondary and the appropriate DRS 'must' rule in place.
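    A trivial way to keep yourself honest is to check that the policy locality matches the site the DRS rule keeps the VM in. A toy sketch in plain Python, with hypothetical inputs:

        # Toy check: the policy locality should match the site the DRS
        # rule keeps the VM in; otherwise all I/O crosses the ISL.
        def locality_matches_drs(policy_locality, drs_rule_site):
            return policy_locality == drs_rule_site

        assert locality_matches_drs("Preferred", "Preferred")      # aligned
        assert not locality_matches_drs("Preferred", "Secondary")  # I/O would cross the ISL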