New-generation Fibre Channel doubles performance to 64Gbps, with switches and HBAs now available. But what is the future of Fibre Channel with NVMe-over-fabrics hitting the datacentre?
Published: 10 Sep 2020 10:17
Broadcom has launched its first Fibre Channel Gen7 storage switches, which allow array-server connectivity at 64Gbps, double the performance of Gen6.
The new switches – the Brocade G720 and X7 – complement the Emulex LPe35000 host bus adapters (HBAs) that Broadcom launched in 2018, which were aimed at Gen7 Fibre Channel and mission-critical flash storage-based use cases.
Without the switches, the HBAs only allowed a direct connection between server and SAN array, which could become costly as the number of HBAs and the cabling built up. Lacking a switch also meant a lack of traffic optimisation between servers and SAN.
Cisco has also announced it will launch Gen7 Fibre Channel switches in blade format for its MDS 9700 chassis, but has not given a release date.
Industry watchers believe Fibre Channel Gen7 is a long way from gaining a firm foothold in the datacentre, however. And while Fibre Channel is based on SCSI, and flash arrays will benefit from 64Gbps connectivity, there is the added complication that the most efficient SSD connection is via NVMe.
Certainly, Brocade emphasises support for NVMe over Fibre Channel in the form of NVMe-over-Fibre-Channel. But it is also true that the RoCE-based NVMe-over-fabrics connectivity method on 100Gbps Ethernet is less costly.
Having said that, Fibre Channel networks suffer less latency than Ethernet. And, on paper, an FC-NVMe infrastructure at 64Gbps is better-performing and more stable than NVMe/RoCE at 100Gbps.
The difference in performance between Fibre Channel and Ethernet is further amplified with the market moving towards NVMe-over-TCP to the detriment of the RoCE implementation. NVMe-over-TCP costs even less because it runs over standard TCP/IP switches and network cards.
One possible scenario is that, over time, SAN array products will be divided between NVMe-over-Fibre Channel solutions for higher performance and those based on NVMe-over-TCP for lower cost. The intermediate NVMe-over-RoCE would then occupy a niche for use in internal NVMe SAN connections to extension shelves.
When it comes to specifications, the G720 occupies 1U of rack space, so can be installed nearer servers, while the X7 is 8U or 14U and is intended for the core of the network to connect the G720s, which have between 24 and 56 optical SFP+ ports.
The 14U version of the X7 can take up to eight vertically aligned blades with 48 SFP+ ports, to make for 384 64Gbps ports out to G720s. The X7 can also take two supplementary blades with 16 ICL ports (32 in total) to interconnect with another X7 chassis. Each ICL port corresponds to four 64Gbps ports.
The 8U version takes up to four horizontal blades with 48 SFP+ ports each, which can total 192 64Gbps ports. You can also add two interconnect blades with eight ICL ports each.
Three types of blade are possible. The FC64-48 has 48 ports of 64Gbps and is compatible with Fibre Channel equipment of 8, 10, 16 and 32Gbps. The FC32-X7-48 is at the more economical end of things, with 48 ports supporting a maximum of 32Gbps. The FC32-64 is an intermediate version with 64 32Gbps ports.
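The port counts above follow from simple multiplication, which the sketch below makes explicit. The figures are taken from this article; the function and constant names are illustrative, not Brocade terminology.

```python
# Back-of-the-envelope port and bandwidth totals for the X7 chassis
# configurations described above. Names here are illustrative only.

GBPS_PER_PORT = 64      # Gen7 Fibre Channel line rate per port
PORTS_PER_BLADE = 48    # FC64-48 blade: 48 x 64Gbps ports
ICL_EQUIV_PORTS = 4     # each ICL port corresponds to four 64Gbps ports

def chassis_totals(blades: int, icl_ports: int) -> dict:
    """Front-facing port count and aggregate throughput for one chassis."""
    front_ports = blades * PORTS_PER_BLADE
    return {
        "front_ports": front_ports,
        "front_gbps": front_ports * GBPS_PER_PORT,
        "icl_gbps": icl_ports * ICL_EQUIV_PORTS * GBPS_PER_PORT,
    }

# 14U X7: eight FC64-48 blades plus two ICL blades of 16 ports each
x7_14u = chassis_totals(blades=8, icl_ports=32)
# 8U X7: four FC64-48 blades plus two ICL blades of 8 ports each
x7_8u = chassis_totals(blades=4, icl_ports=16)

print(x7_14u["front_ports"])  # 384, matching the article's figure
print(x7_8u["front_ports"])   # 192, matching the article's figure
```

This also shows why the ICL blades matter: 32 ICL ports on the 14U chassis amount to 128 64Gbps-port equivalents of inter-chassis bandwidth, a third of the front-facing capacity.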
As well as the Brocade-badged products, versions of the switches have already been announced by Dell EMC and Hitachi Vantara, while Fujitsu, HPE, IBM, Lenovo, NetApp and Pure Storage should do the same in the coming weeks.