Hi folks,
I am specifying a new VMware cluster. The core network fabric[1] will be 10gig (or better): two links per host, one to each of a pair of switches for redundancy, with an interlink between the switches. The main Internet links will run over separate 1gig connections.
[1] iSCSI, vMotion, management etc.
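To be concrete about the "two links per host" bit, on the ESXi side I'm picturing both uplinks active on one vSwitch, roughly like this (just a sketch; vSwitch0, vmnic0 and vmnic1 are placeholder names, with each vmnic cabled to a different physical switch):

  # placeholder names; each vmnic goes to a different physical switch
  esxcli network vswitch standard add --vswitch-name=vSwitch0
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
  # both uplinks active, so traffic carries on if one link or switch dies
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1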
Someone put it to me that there may be a cost saving in using DAC links rather than the 10GbE copper (10GBASE-T) I was planning. I understand the latter, but I have never used DAC cables. The only time I've gone near SFP(+) was to plug in (mini)GBICs for fibre.
Apart from the restricted cable length (not a problem, as everything is in one cabinet), are there any subtle pros and cons to using DAC cables and matching switches and network cards in the ESXi hosts? For this discussion, assume I will be using vSAN, so there will be no hardware SAN to worry about.
With SFP+, is it true that I might be able to get the links running at 4x10gig per link with ESXi 6? Sorry, silly question, but this particular technology is new to me.
Ta :)
Tim