
Wednesday May 10, 2017, Blog 3: NVMe over FC Fabrics

By mdetrick posted May 10, 2017 07:00 AM

  


This is my last blog of Dell EMC World 2017. We had yet another momentous show! I’m already looking forward to next year! I leave you with some scholarly words on NVMe over FC fabrics…

 

It wasn’t long ago that FCoE was all the rage at EMC World. Wow, FCoE, wow!... We all know that went bust! I’m a CCIE, and personally I never saw FCoE as viable, for a wide variety of reasons. Here are a few of the big ones:

 

  1. The storage guys don’t go to lunch with the network guys. You know what I mean, right? OK, think about it…
  2. The network guys are going to want to own the DCB Ethernet network that FCoE uses.
  3. The storage guys don’t want to rely on the network guys for anything.
  4. FCoE+DCB was new, unproven technology that came with the added expense of an entirely new network.

 

Anyway, you get the idea. FCoE was a non-starter for most of the world. Still is! You could attribute FCoE's failure primarily to its Converged Ethernet (DCB) network requirement, which the storage guys (and, frankly, many of the network guys) wanted nothing to do with.

 

Now it’s 2017, and we have NVMe over RoCEv2 (RDMA over Converged Ethernet v2). For those who don’t know, RoCEv2 runs on the same DCB network that FCoE was supposed to run on (DCB = Data Center Bridging = Converged Ethernet). Well, this is very interesting! What do you think? Will history repeat itself? Have we not learned from our past mistakes? Revisit items 1-4 above. To me, those same reasons prognosticate that NVMe over RoCE will be about as successful as FCoE or the DeLorean Motor Company. Yep, a total train wreck in the making! Except this second time around, it’s the MythBusters crashing the train just to see if they can do it. There may be some small market adoption, but nothing significant is going to happen with RoCEv2. I wouldn’t bet my enterprise on RoCE! Will NVMe over FC fabrics become a massive thing? You bet’cha! Absolutely it will! Why, you so politely ask? Read on, my wayward son!

 

Existing fabrics

How many FC fabrics are currently in production? A lot, a lot, a lot! FC is the primary storage network deployed on the planet, and for good reason: it works fantastically well! In fact, most enterprise SANs are Brocade; most have been upgraded to Gen5 at this point, and Gen6 is now making great headway.

 

Why is this significant? Primarily because these FC fabrics are already installed, working, and proven. When NVMe HBAs (like the one from Broadcom/Emulex; see the links below) are included in new server sales during the coming years, and those servers proliferate, installed Brocade fabrics will already be NVMe ready. NVMe will coexist with SCSI/FCP for the foreseeable future as the transition progresses; both legacy SCSI/FCP and NVMe will operate across the same fabrics. How else could it be done? I mean, really? No migrations necessary! It’s a simple and natural transition occurring mostly at the end devices, not in the SAN. There’s no need for “those” network guys to manage a brand-spanking-new DCB network. No experimental protocol (I’m referring to RoCEv2); instead, solid, proven, enterprise-class FC and all the tools and robust maturity that go with well-known Brocade fabrics.
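To make the coexistence point concrete: an FC switch forwards frames by destination address and doesn’t much care which upper-layer protocol they carry; the FC-4 TYPE field in the frame header is what tells the end devices whether a frame is SCSI/FCP or FC-NVMe. Here’s a minimal Python sketch (illustrative only, not any vendor’s code) using the FC-4 type codes, 0x08 for FCP and 0x28 for FC-NVMe, to show two protocols sharing one fabric:

```python
# Illustrative sketch: one fabric, two upper-layer protocols. The switch
# forwards by destination ID regardless of protocol; the FC-4 TYPE field
# tells the end devices what a frame carries.

FC4_TYPE_FCP = 0x08    # SCSI over FC (FCP)
FC4_TYPE_NVME = 0x28   # NVMe over FC (FC-NVMe)

def classify_frame(fc4_type: int) -> str:
    """Return the upper-layer protocol implied by an FC-4 TYPE code."""
    return {
        FC4_TYPE_FCP: "SCSI/FCP",
        FC4_TYPE_NVME: "FC-NVMe",
    }.get(fc4_type, "other FC-4 protocol")

# Two frames traversing the same fabric, same switches, same zoning model:
for fc4_type in (FC4_TYPE_FCP, FC4_TYPE_NVME):
    print(f"FC-4 TYPE 0x{fc4_type:02x} -> {classify_frame(fc4_type)}")
```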

 

NVMe over FC HBAs (Emulex and QLogic):

 

 

Known & Proven

You are likely already using Brocade (or even Cisco MDS) SAN products, and NVMe over FC fabrics will be available from both. Do you really want to learn how to configure, deploy, and operate a Data Center Bridging (DCB) Ethernet network? Right, I didn’t think so! Do you really want to relinquish your SAN to the network guys? Most of you don’t, because you know it will be a long, hard road to success if you do, and most people try to avoid failure. I have nothing against network guys; in fact, I’m one (CCIE R&S, to boot), but storage and network are very different cultures with very different endgames. So, the bottom line is: go with what you know! There are ZERO reasons to change from an FC fabric to a DCB fabric just to do NVMe. Trust me, FC fabrics operate far more deterministically and deliver more consistent, higher performance than any DCB fabric ever has.

 

Not Experimental

FC is not experimental! It is proven and deployed ubiquitously. RoCEv2, on the other hand, which is integral to NVMe over RoCEv2, has not been proven, and Converged Ethernet (DCB) is not deployed ubiquitously. Not to mention, there’s no compelling reason to implement NVMe on a DCB network. You may think: I’ll purchase, build, operate, and maintain a single DCB network for my entire data center instead of a traditional Ethernet LAN plus a separate FC network. Hmmm, really? Let’s think this through... First, you don’t want DCB flow control running across your entire data center; what a mess that would make. Second, are you willing to put all your storage eggs into the DC LAN basket? That would scare the hell out of me if the basis of my employment was keeping applications online. So, I find that ill-advised!

 

DC LANs are much more fluid than DC SANs. A small outage in the LAN world often goes unnoticed. A small outage in the SAN world means hours of database recovery, applications being offline, and potentially millions of dollars’ worth of lost revenue, not to mention the many pissed-off customers. It’s just not the same. I know... I’ve been there, done that! And I don’t recommend it, so don’t say I didn’t warn you!

 

Speed & Performance

If it’s NVMe speed and performance you’re most concerned with, then you’ve come to the right fabric! Brocade Gen6 switches NVMe frames at 2.4 µs worst case. If you happen to be on the same ASIC, you’ll get local switching at 0.8 µs, but there’s no requirement or recommendation for that. In fact, that would just take you from fantastic to super fantastic. Fantastic is perfect.
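For a sense of scale, here’s a back-of-the-envelope sketch in Python. The 2.4 µs and 0.8 µs switching figures are from above; the ~100 µs flash read latency is my illustrative assumption, not a measured number:

```python
# Rough sketch: how much a Gen6 switch hop contributes to end-to-end I/O
# latency. Switch figures are from the text; the flash read latency is an
# assumed, illustrative value.

FLASH_READ_US = 100.0    # assumed NAND flash read latency (µs)
SWITCH_HOP_US = 2.4      # Gen6 worst-case switching latency (µs)
LOCAL_SWITCH_US = 0.8    # same-ASIC local switching latency (µs)

for label, hop in (("worst case", SWITCH_HOP_US),
                   ("local switching", LOCAL_SWITCH_US)):
    share = hop / (FLASH_READ_US + hop) * 100
    print(f"{label}: {hop} µs hop = {share:.1f}% of a "
          f"{FLASH_READ_US:.0f} µs flash read")
```

Either way, the fabric is a small single-digit percentage of the I/O; the media, not the SAN, dominates the latency budget.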

 

Gen6 is 32 Gbps FC, which means the frame serialization time onto the wire is crazy fast. Yes, there’s more bandwidth than you’ll likely ever use, but speed is important with NVMe. After all, if speed weren’t important, we’d just stick with good old SCSI/FCP. We’re now using All-Flash Arrays (AFAs), and soon they will be NVMe-enabled AFAs, and you’re going to want a SAN that suits your VMs, hypervisors, NVMe HBAs, applications, storage... wink wink!
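If you want to put a number on “crazy fast,” here’s a rough Python sketch. The assumptions are mine, not from the text above: 32GFC delivers roughly 3,200 MB/s of usable throughput (28.05 Gbaud line rate with 64b/66b encoding), and a full-size FC frame carries a 2,112-byte payload plus roughly 36 bytes of framing:

```python
# Rough sketch of frame serialization time at 32GFC. Both numbers below are
# assumptions for illustration: ~3,200 MB/s usable throughput, and a
# full-size frame of 2,112 payload bytes + ~36 bytes of SOF/header/CRC/EOF.

USABLE_BYTES_PER_SEC = 3_200e6   # ~32GFC usable throughput (assumed)
FRAME_BYTES = 2_112 + 36         # max payload + framing overhead (assumed)

serialization_us = FRAME_BYTES / USABLE_BYTES_PER_SEC * 1e6
print(f"Full-size frame on the wire in ~{serialization_us:.2f} µs")  # ~0.67 µs
```

Under these assumptions, a full-size frame hits the wire in well under a microsecond, which is why serialization delay effectively disappears next to everything else in the I/O path.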

 

This concludes my blogs for this year :-(

Have fun at the concert tonight! It’s been great seeing you all again!

 

Goodbye & Be Good!

Mark Detrick @Extension_Guru

Brocade Principal Solutions Architect


#FLASHArrays
#fc
#NVME
#BrocadeFibreChannelNetworkingCommunity