Brocade Fibre Channel Networking

What Does The Future Hold For Fibre Channel?

By Juan Tarrio posted 04-16-2020 07:59 AM

  
Over the past few weeks, we have explored the set of capabilities that make Fibre Channel the technology best suited to the most demanding storage environments. It’s now time to look ahead and think about what can be expected of the technology going forward. We’re right at the moment when NVMe over Fabrics is about to take off in a very significant way. Fibre Channel switch vendors, both Brocade and Cisco, already support NVMe/FC in our Gen 5 and Gen 6 switches and directors. HBA vendors do too: there are drivers for most major operating systems from both Emulex and QLogic. And all-flash array vendors are starting to release arrays that support NVMe/FC on their front-end host ports, with at least three or four major vendors already shipping solutions and many more to come in the coming months, enabling end-to-end NVMe/FC from server, through fabric, and into the storage array. We have shown with real-world test results, in collaboration with Demartek, Emulex and NetApp, that NVMe/FC can deliver as much as a 58% improvement in IOPS over SCSI/FC and as much as a 34% reduction in latency, so there are real benefits to be gained from adopting the new technology.
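To put those percentages in concrete terms, here is a quick back-of-the-envelope calculation in Python. The baseline figures are purely hypothetical placeholders, not measured results from the Demartek testing; only the 58% and 34% deltas come from the numbers quoted above.

```python
# Hypothetical baseline figures for a SCSI/FC all-flash array (NOT measured data,
# just placeholders to illustrate what the quoted deltas would mean in practice).
scsi_fc_iops = 1_000_000        # baseline IOPS over SCSI/FC
scsi_fc_latency_us = 150.0      # baseline round-trip latency in microseconds

# Deltas quoted in the article: up to 58% more IOPS, up to 34% lower latency.
iops_gain = 0.58
latency_reduction = 0.34

nvme_fc_iops = scsi_fc_iops * (1 + iops_gain)
nvme_fc_latency_us = scsi_fc_latency_us * (1 - latency_reduction)

print(f"SCSI/FC : {scsi_fc_iops:,.0f} IOPS at {scsi_fc_latency_us:.0f} us")
print(f"NVMe/FC : {nvme_fc_iops:,.0f} IOPS at {nvme_fc_latency_us:.0f} us")
# -> NVMe/FC : 1,580,000 IOPS at 99 us, for these hypothetical baselines
```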
Intel 3D XPoint is the basis for Optane

But the performance benefit gained simply by replacing the protocol used to address flash-based storage devices, while interesting and significant, will ultimately pale in comparison to the improvement that will come from a new class of memory technology that is just starting to reach the storage and memory market. Current NAND-based flash will gradually give way to next-generation non-volatile memory products like Intel Optane, based on the 3D XPoint (read as ‘cross-point’) memory technology co-developed by Intel and Micron, in what is coming to be known as Storage-Class Memory (SCM) or Persistent Memory (PM). Which name applies depends on whether the technology is used as faster flash storage connected over a PCIe interface (or networked), or as almost-as-fast-as-RAM non-volatile DIMMs (NVDIMMs) connected directly to the memory bus to complement or replace DRAM. There is a lot that can be said about this new technology and the use cases it will enable, including how applications could be rearchitected to take advantage of much faster storage access or non-volatile memory at nearly the speed of DRAM, but that is a topic for future blog posts. Suffice it to say that even used simply as a replacement for existing NAND-based flash storage, it will provide a performance improvement of the same order of magnitude as the transition from spinning disks to today’s flash. That will place an even bigger performance and reliability burden on whatever network transports it once it makes its way out of the server, which it inevitably will.
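As a rough illustration of why byte-addressable persistent memory changes the access model, here is a minimal Python sketch that memory-maps a file on a hypothetical DAX-mounted filesystem (the /mnt/pmem path is an assumed mount point) and updates it with plain loads and stores instead of block-oriented read/write calls. Production persistent-memory code would normally use something like the PMDK libraries in C for proper cache flushing; this is only a sketch of the idea.

```python
import mmap
import os

# Assumed path: a file on a filesystem mounted with DAX on a persistent-memory
# device (e.g. mount -o dax), so the mapping bypasses the page cache and maps
# the media directly. The path is a placeholder for illustration only.
PMEM_FILE = "/mnt/pmem/example.dat"
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# Map the file and update it with ordinary stores: no read()/write() system
# calls, no block-sized transfers, just byte-addressable access.
with mmap.mmap(fd, SIZE) as buf:
    buf[0:11] = b"hello, SCM!"   # a store straight into the mapped media
    buf.flush()                  # msync; real PMem code would use CLWB + fences
    print(buf[0:11])             # load it straight back

os.close(fd)
```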

Past and future storage technology performance gains

After an initial slow start on the marketing front when it came to making Fibre Channel visible as a transport for NVMe, the industry (not just experts, pundits and vendors, but, more importantly, customers) is starting to realize that Fibre Channel is best positioned to become the dominant transport for NVMe, and attention is starting to shift. It is best positioned not only in terms of performance, reliability, availability and its wealth of storage-specific tools, as mentioned earlier in this article, but also because it provides the smoothest transition: it is supported on the same infrastructure that already runs most organizations’ storage environments, enabling simple deployment and migration without investing in new infrastructure or new skills, which, as I hope is clear by now, aren’t ‘just Ethernet’ skills.


A tale of two Ethernet-based protocols… or is it three?

I discussed NVMe over RoCE at length in a previous article, but as I also mentioned there, the NVMe-oF specification outlined an additional Ethernet-based transport for NVMe: iWARP. iWARP is another RDMA-based technology, and it is essentially to InfiniBand as an HPC protocol what iSCSI is to SCSI over Fibre Channel as a storage protocol: take the ‘native’ protocol (in this case InfiniBand/RDMA) and transport it over a TCP/IP network. iWARP has limited market traction as an alternative to native InfiniBand in HPC environments, where RDMA is actually necessary, and even less for the storage (NVMe) use case. To the best of my knowledge, there is only one vendor (Chelsio) with adapters that support NVMe over iWARP, and it is not exactly a mainstream Ethernet adapter vendor. No storage array vendor has ever expressed an intention to support NVMe over iWARP on their array host ports. For this reason, iWARP is not expected to gain any momentum for NVMe.

Given this picture, it seemed pretty clear that RoCE(v2) would come out on top as the winner among the Ethernet-based alternatives to Fibre Channel for NVMe-oF, and therefore as the official slayer of Fibre Channel (remember that this time it was “for real”). However, things took a dramatic turn in the last couple of years as word of a new Ethernet-based NVMe transport started to emerge. Initially dubbed by some as ‘iNVMe’, because it is to NVMe/FC what iSCSI is to SCSI/FC, NVMe over TCP (officially named NVMe/TCP) came onto the scene backed by vendors such as Facebook and Intel, and later by others like Dell EMC, NetApp and VMware.

The idea behind NVMe/TCP is to transport the NVMe protocol directly over a TCP/IP network while doing away with the RDMA layer entirely. Because, let’s face it, while flash storage is based on memory technology, we are still talking about storage; the RDMA layer is unnecessary (remember that Fibre Channel has supported zero-copy from the start), adds no value and only adds complexity to the protocol stack. Plus, running NVMe directly over TCP means you can use any ‘vanilla’ Ethernet NIC without RDMA support. And that is precisely the point of NVMe/TCP: to be the equivalent of iSCSI in the SCSI/FC world, a cost-effective and ubiquitous connectivity option for workloads that don’t have the performance and reliability requirements that demand Fibre Channel as the transport. It is now generally believed in the industry that plain old TCP, with or without DCTCP, will be Fibre Channel’s biggest challenger for NVMe, while RoCE is widely perceived as an arcane and complex technology for which it is even harder than for Fibre Channel to find people with the right skills to deploy it. This leaves us in pretty much the same situation we have been in for many years, where iSCSI was the main challenger to SCSI-based Fibre Channel networks, and we all know how that story went. There is no reason to believe things will be any different between NVMe/FC and NVMe/TCP.
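To give a sense of how little is needed on the host side, here is a minimal sketch, using Python only as a thin wrapper around the standard nvme-cli tool, of discovering and connecting to an NVMe/TCP target over a plain Ethernet NIC. The IP address, ports and subsystem NQN are placeholders, and the exact steps will vary by distribution and target.

```python
import subprocess

# Placeholder values for illustration; substitute your target's address and NQN.
TARGET_IP = "192.168.10.20"
DISCOVERY_PORT = "8009"   # common default for the NVMe/TCP discovery service
DATA_PORT = "4420"        # common default for NVMe/TCP I/O queues
SUBSYS_NQN = "nqn.2014-08.org.nvmexpress:example-subsystem"

def run(cmd):
    """Run an nvme-cli command and print its output."""
    print("+", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)

# Ask the discovery controller which subsystems it exposes over TCP.
run(["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", DISCOVERY_PORT])

# Connect to one subsystem; its namespaces then appear as local /dev/nvmeXnY
# block devices, just like any other NVMe drive.
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", DATA_PORT, "-n", SUBSYS_NQN])

# Verify that the remote namespaces are now visible to the host.
run(["nvme", "list"])
```

Over Fibre Channel, by contrast, the equivalent connections are typically set up automatically by the HBA driver based on fabric discovery and zoning, which is part of the “no new provisioning workflow” argument made in this series.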

Fibre Channel is therefore very well positioned to take on this revolutionary transition that is starting to happen in the marketplace, and if the flash transition has already boosted Fibre Channel port shipments and revenue over the past few years, we can only expect this trend to accelerate over the next few as NVMe/FC and SCM come to market. Most storage vendors acknowledge this by bringing NVMe/FC to market on their storage arrays before any Ethernet-based alternative, mainly because roughly 60–70% of all their all-flash arrays are already attached to a Fibre Channel SAN, and because supporting NVMe/FC requires much less engineering effort than any other alternative. Plus, they are taking a wait-and-see attitude toward which Ethernet-based NVMe alternative the market favors, because the three options are not interoperable with one another: you can’t have an NVMe/RoCE initiator talking to an NVMe/TCP target, or an NVMe/TCP initiator communicating with an NVMe/iWARP target. Customers are realizing that they can deploy NVMe/FC on their existing SAN environments with near-zero risk, providing the most seamless transition to this new and exciting technology, while not having to learn new ways to provision storage and while leveraging all the management, monitoring, analytics and troubleshooting tools they already know and love (well, perhaps love is a strong word). I’ve said it before, more than once, and I’ll say it again: choice is good. And when customers are presented with a choice for their mission-critical storage environments, they choose, time and again, Fibre Channel.

Storage flows prefer Fibre Channel

So, is Fibre Channel dead (again) as we find ourselves in one of the most exciting technological transitions for the storage market in the last few decades?

You better believe it isn’t.

If you want to read more about why and how Fibre Channel continues to be the best and most trusted infrastructure to run the most demanding next-generation storage workloads, don't miss the new Networking Next-Gen Storage For Dummies® book by the great AJ Casamento.

If you missed the previous entries in this series, make sure to check them out here: