Part 2 of a 2-part series. You can find the first part of the series here.
In the first blog, we discussed three observations from the Flash Memory Summit: (1) flash is becoming the storage device technology of choice; (2) NVMe is the protocol for flash; and (3) networks are now becoming the critical bottleneck in storage systems architecture. This last point is the troubling one … without a modern network built for flash, the investments made in this technology will not deliver their desired benefits. The good news: there is a solution in the works. The bad news: IT leaders may not be aware of it and could be making the wrong infrastructure decisions.
What is NVMe over Fabrics and why must you be concerned today?
If you subscribe to the three observations above, then the future is not difficult to predict. Flash dominates the storage agenda, and NVMe is the protocol for flash. Flash storage arrays continue to expand in functionality and capacity, becoming integral to low-latency, shared storage applications. Feeding these data-hungry apps will require a new ultra-low-latency network called NVMe over Fabrics, where the protocol can run natively on a robust network transport. This performance boost is too big to ignore. Simply stated: NVMe over Fabrics will be a future technology requirement for a new breed of ultra-low-latency applications.
There are a few preferred fabric choices
A new research report from the Evaluator Group outlines the benefits of NVMe over Fabrics in general, and compares the primary fabric alternatives today – namely Fibre Channel (FC), Ethernet (RoCE or iWARP), and InfiniBand (IB). The report recommends NVMe over Fibre Channel (FC-NVMe) and NVMe over RoCE (RDMA over Converged Ethernet) as the best options for this transport. See the full report for details.
We learned (the hard way) from the industry’s failure with FCoE adoption that when it comes to storage traffic, you need an optimized storage network with deterministic response and high availability. Storage traffic is different from general corporate LAN traffic and must be treated differently. This is why FC and RoCE are the preferred transports. InfiniBand will also have a place, but only in niche HPC environments.
Taking an evolutionary path to NVMe performance
In the prior blog, I commented on how the “SSD guys got it right.” Leading-edge technologies are introduced into the data center as evolutions, not revolutions. The best practice is to leverage the existing infrastructure wherever possible and make technology advances one step at a time.
Today, the storage network of choice for critical applications in the data center is Fibre Channel, used across most industries, all over the world, for decades. And the new Gen 6 FC products on the market are NVMe-ready now. This means you can upgrade your flash infrastructure today with the highest-performing Gen 6 FC network to keep existing applications running at their best. Then, when you want to bring on ground-breaking low-latency applications in the future, you can do so seamlessly, without disrupting your existing flash infrastructure. Both protocols will co-exist on the same network, so IT leaders can “evolve” their applications to NVMe as they wish, on their own schedule. This is the optimal low-risk (evolutionary) migration path to new NVMe over Fabrics environments serving larger storage systems, and it is based on the proven Fibre Channel transport. This is an obvious solution for storage teams. For more background on this topic, see the recent Storage Switzerland video.
But there are also growing IP SAN deployments in data centers today, where IP-based flash storage arrays are being deployed to support business-critical applications. Although this storage traffic does not need the same functionality as mission-critical storage traffic, it still needs the essential services a SAN provides: high performance, deterministic response, low latency, high availability, and seamless expansion. In this situation, it makes sense for the Evaluator Group to identify RoCE as the Ethernet protocol of choice, since it is based on Converged Ethernet, designed to better support storage traffic. But RoCE is built on the same converged technology as FCoE, designed to minimize the impact of the “best-effort” Ethernet design. It will be interesting to see whether RoCE follows the same low-adoption path we experienced with FCoE.
Three Easy Steps to Prepare for the Future
You need to care about NVMe over Fabrics now because the data center infrastructure decisions you make today will set the foundation for your business success tomorrow. Making the wrong infrastructure decision now will be extremely expensive and disruptive to repair later. These three steps will allow a frictionless migration to the future of ultra-low-latency applications:
- Modernize your flash storage network today, based on Gen 6 FC with NVMe-ready technology. Even if “NVMe apps” are a few years away, prepare the flash network you need right now so you are not bottlenecked when it is time to move your business forward. If you are considering IP-based flash storage, make sure your network solution supports NVMe over RoCE.
- When upgrading your application servers for new ultra-low-latency apps, install Gen 6 / NVMe-ready HBAs. These new FC-NVMe interfaces are being demonstrated by leading vendors such as Cavium/QLogic and Avago/Emulex. The HBAs run both Gen 6 FC and NVMe traffic over the same interface card, again allowing a seamless, risk-mitigated migration to new NVMe apps.
- And then, when new “native NVMe” high-capacity storage arrays arrive, admins simply connect them to the modernized network, turn them on, and start enjoying the new ultra-performance levels. It is about that easy, just like the SSD migration mentioned earlier.
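To make the last step concrete, here is a hedged sketch of what attaching a fabric-connected NVMe array can look like from a Linux host, using the open-source nvme-cli tool with an NVMe over RoCE (RDMA) target. The IP address, port, and subsystem NQN below are placeholder values for illustration, not details from any specific product.

```shell
# Discover the NVMe subsystems a fabric target exposes (RoCE/RDMA transport).
# 192.0.2.10, port 4420, and the NQN below are placeholders.
nvme discover -t rdma -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem; its namespaces then appear to the host
# as ordinary NVMe block devices (e.g. /dev/nvme1n1).
nvme connect -t rdma -a 192.0.2.10 -s 4420 \
    -n nqn.2016-06.io.example:array01

# List all NVMe namespaces, local SSDs and fabric-attached alike.
nvme list
```

For a Fibre Channel fabric the same tool uses `-t fc` with WWNN/WWPN-style addressing supplied by the HBA driver rather than an IP address; either way, applications see a standard NVMe block device, which is what makes the migration seamless.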
Flash is here today and will dominate data centers over time. Be prepared for the future of flash with a modern storage network built to grow with the demands of the digital economy.
Take the next step toward understanding the future of NVMe over Fabrics by visiting Brocade.com or www.NVMexpress.org for more information.