The speed of flash-based storage is critical to accelerating application performance and throughput, and that need is driving rapid adoption of flash in the data center. New solid-state storage devices are orders of magnitude faster than traditional hard drives, with additional benefits in power consumption and physical footprint. But for optimal results, and to take full advantage of this investment, you need a network that is as fast as the storage it connects.
Typically, that means upgrading your network to Generation 6 Fibre Channel (FC). There’s no need to rip and replace. Simply upgrading to 32Gbps with Gen 6 FC can quadruple application performance and deliver 71% faster response times compared to a legacy 8Gbps network.1
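The "quadruple" figure holds up at the link level. A quick back-of-the-envelope check, using the line rates and encodings from the Fibre Channel specifications (8GFC runs at 8.5 GBaud with 8b/10b encoding; Gen 6 32GFC runs at 28.05 GBaud with the more efficient 64b/66b encoding):

```python
# Usable link throughput: 8GFC vs Gen 6 32GFC.
# Line rates and encodings are per the Fibre Channel physical-layer specs.

def usable_gbps(line_rate_gbaud: float, data_bits: int, total_bits: int) -> float:
    """Data rate left after subtracting line-encoding overhead."""
    return line_rate_gbaud * data_bits / total_bits

legacy_8gfc = usable_gbps(8.5, 8, 10)     # 8b/10b encoding -> 6.8 Gbps payload
gen6_32gfc = usable_gbps(28.05, 64, 66)   # 64b/66b encoding -> 27.2 Gbps payload

print(f"8GFC : {legacy_8gfc:.1f} Gbps")
print(f"32GFC: {gen6_32gfc:.1f} Gbps")
print(f"ratio: {gen6_32gfc / legacy_8gfc:.1f}x")  # 4.0x raw link bandwidth
```

The 71% response-time improvement in the benchmark comes on top of this raw bandwidth gain, from lower per-I/O latency under load.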
As Gartner2 notes in its report “The Future of Storage Protocols,” “Fibre Channel is a mature low-latency, high-bandwidth, high-throughput protocol due to its deterministic nonblocking design. This high-link efficiency makes FC well-suited for storage traffic.” So much so that FC “will remain as the data center storage protocol of choice for the next decade.”
Why? As Gartner reports, “Future protocols (such as 40GbE used for iSCSI), file-based protocols (such as NFS and SMB) and current block protocols (such as 16Gbps Fibre Channel) will be too slow for the next generation of solid-state storage and hybrid arrays.”
Flash needs a faster network, much faster.
At the recent Flash Memory Summit, numerous presentations highlighted the fact that the network has become the bottleneck in today’s storage infrastructure. And that is true even with current SSD technologies.
However, flash is evolving rapidly. New storage-class memory technologies (like Intel’s 3D XPoint) promise ultra-low latency and high density, offering up to 10,000 times the performance of traditional hard drives.
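The 10,000x figure is latency arithmetic. The numbers below are illustrative assumptions rather than measurements: roughly 10 ms average access time for a spinning hard drive (seek plus rotational delay) versus the roughly 1 microsecond media latency claimed for 3D XPoint:

```python
# Rough latency arithmetic behind the "10,000x" claim.
# Both figures are illustrative assumptions, not benchmark results.
hdd_access_s = 10e-3     # assumed HDD average access time (~10 ms)
xpoint_access_s = 1e-6   # assumed 3D XPoint media latency (~1 us)

ratio = hdd_access_s / xpoint_access_s
print(f"latency ratio: {ratio:,.0f}x")  # 10,000x
```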
To deliver the speed of these new flash devices, the industry has embraced a new storage protocol called NVMe (Non-Volatile Memory Express). NVMe replaces the aging SCSI software stack, designed for HDDs decades ago, which imposes a significant performance penalty on low-latency media.
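The penalty of the legacy stack is easiest to see in its queueing model. The figures below are the commonly cited architectural limits, using AHCI (the legacy SATA host interface) as the comparison point: a single queue of 32 commands, versus the NVMe specification's allowance of up to 65,535 I/O queues with up to 65,536 commands each. This is a sketch of the spec limits, not a benchmark:

```python
# Why NVMe scales where the legacy stack cannot: maximum outstanding I/O.
# Figures are the architectural limits from the AHCI and NVMe specs.
AHCI_QUEUES, AHCI_DEPTH = 1, 32            # single shared command queue
NVME_QUEUES, NVME_DEPTH = 65_535, 65_536   # per spec maximums

ahci_outstanding = AHCI_QUEUES * AHCI_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH

print(f"AHCI max outstanding commands: {ahci_outstanding:,}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
```

Just as important, NVMe lets each CPU core own its own submission/completion queue pair, removing the lock contention of a single shared queue.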
But with all of the performance and capacity gains at the device level, what about the network? How do we avoid it becoming the restricting element in today’s data center?
NVMe over FC is the future
With brilliant insight, the industry has adopted a new network standard for NVMe devices, called NVMe over Fabrics or NVMe-oF. This standard allows NVMe traffic to be transported over fabric technologies such as FC, Ethernet, or InfiniBand. Solutions for most fabric types are in development now, and trial versions are appearing regularly.
One very natural technology progression is NVMe over Fibre Channel. When data centers deploy new flash arrays today, Gen 5/6 FC is typically the preferred network. As these flash storage devices transition to NVMe, IT leaders can seamlessly and transparently deploy new NVMe-based, ultra-fast applications without upgrading their FC network. New Gen 6 Director-class switches are NVMe-ready today!
Flash is getting much faster, and the new NVMe protocol is a welcome enhancement. But many applications will still require shared storage and, when performance matters, NVMe over FC will be the logical choice. Building a modern storage network for flash will unlock the full capabilities of the all-flash data center and prepare IT leaders for a future of insanely fast and reliable applications.
Read “The Future of Storage Protocols” paper for Gartner’s view on the most common storage networking protocols and their role in digital transformation.
Find out more at: http://www.brocade.com/en/possibilities/technology/storage-fabrics-technology.html
1 Emulex/Broadcom TPC-H benchmark Testing: See http://www.brocade.com/en/backend-content/pdf-page.html?/content/dam/common/documents/content-types/industry-report/demartek-emulex-gen6-fibre-channel-hba-evaluation-ir.pdf
2 Gartner “The Future of Storage Protocols”, Valdis Filks, Stanley Zaffos, 29 June 2016.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.