Hello MJMSRI
"regarding the different IP Range at each DC, i am trying to see why that is in place and the incumbent is not around anymore to discuss."
Is there any documentation, or are there other resources (e.g. email threads), that might elaborate on why this design was chosen?
Can you check what the das.isolationAddressX options are set to for HA in this cluster? This may (to an extent) confirm whether the default gateway on each site was configured as the isolation address, and thus why they configured it like this (as opposed to just configuring this with a virtual IP in the local subnet on each site instead of the default gateway).
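If it helps, one way to pull these is via PowerCLI (the vCenter and cluster names below are just placeholders - swap in your own):

Connect-VIServer -Server vcenter.example.local

# List the HA isolation-address advanced options set on the cluster
Get-AdvancedSetting -Entity (Get-Cluster -Name "StretchedCluster") -Name "das.isolationaddress*", "das.usedefaultisolationaddress" | Select-Object Name, Value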
"However this document does allude to this detailing 'IP address on vSAN network on site 1' and 'IP address on vSAN network on site 2'"
My understanding of this has always been that it doesn't need to be (and maybe shouldn't be?) the DG IP, and should instead be an addressable IP in the same subnet as the vSAN hosts on that site - depping or GreatWhiteTec might be able to elaborate on whether this is the case (and/or whether the DG vs. an in-range IP is beneficial/detrimental), as they tend to eat such queries for breakfast.
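Whichever IP gets used, it's worth confirming it actually responds on the vSAN network before relying on it as an isolation address. Something along these lines via PowerCLI (the host name, vmk and target IP are placeholders for your environment):

$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esx01-site1.example.local") -V2

# Ping the candidate isolation address from the vSAN vmkernel interface
# (equivalent to 'vmkping -I vmk1 <ip>' from the host shell)
$esxcli.network.diag.ping.Invoke(@{interface = "vmk1"; host = "172.0.10.1"})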
But going back to your original question: if they are all in the same /16 172.0.x.x network, then there should be no problem putting them all in the same 172.0.[N].x range. That being said, please, please, please (on behalf of GSS and anyone else that tends to fix things when they go sideways!) validate with a single node in Maintenance Mode that switching the network causes no partition, before carefully proceeding one node at a time to do the rest (e.g. I wouldn't advise scripting it to run against all nodes at once).
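For reference, putting the test node into MM via PowerCLI looks something like this (host name is a placeholder; pick the data-migration mode that suits your change window):

$vmhost = Get-VMHost -Name "esx01-site1.example.local"

# EnsureAccessibility keeps the test quick and easily reversible;
# use Full instead if you'd rather evacuate all data from the node first
Set-VMHost -VMHost $vmhost -State Maintenance -VsanDataMigrationMode EnsureAccessibility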
You can even have a plan B when doing this: add a new vmk in the desired IP range, enable vSAN traffic on it, then disable vSAN traffic on the original - then validate that the node stays clustered (you can also check the vmnic + vmk traffic via esxtop network view 'n' to see it switch over). If it doesn't work as expected (and you partitioned the host from the cluster), you can simply re-enable vSAN traffic on the original vmk. (It's advisable to do this with the node in MM - you may not see any/much traffic in esxtop in that state, but you should still see cluster membership change via 'esxcli vsan cluster get' or by monitoring clomd.log for add/remove CdbObjectNode entries.)
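A rough PowerCLI sketch of that plan B on a single host - all names and IPs below are placeholders, and it assumes a standard vSwitch (adjust accordingly for a VDS):

$vmhost = Get-VMHost -Name "esx01-site1.example.local"

# 1. Add a new vmk in the target range with vSAN traffic already enabled
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "vSAN-New" -IP "172.0.50.11" -SubnetMask "255.255.255.0" -VsanTrafficEnabled $true

# 2. Disable vSAN traffic on the original vmk (assumed here to be vmk1)
$oldVmk = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1"
Set-VMHostNetworkAdapter -VirtualNic $oldVmk -VsanTrafficEnabled $false -Confirm:$false

# 3. Confirm the node is still a member - same output as 'esxcli vsan cluster get'
$esxcli = Get-EsxCli -VMHost $vmhost -V2
$esxcli.vsan.cluster.get.Invoke()

# Rollback if the host partitioned:
# Set-VMHostNetworkAdapter -VirtualNic $oldVmk -VsanTrafficEnabled $true -Confirm:$false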
Bob