Fibre channel over Ethernet not the only way
Charles Ferland weighs up the alternatives for storage traffic in the datacentre environment
Ferland: FCoE is just one option for next-gen datacentre deployments
With fibre channel over Ethernet (FCoE), you do not need separate fibre channel and Ethernet networks, potentially slashing operating costs. But is it what your customers need?
Different networking technologies have been deployed to address the characteristics different applications require. For example, fibre channel offers low latency and guaranteed in-order frame delivery, which is exactly what the storage world requires. Ethernet connects servers because it is fast, easy to configure and cheap.
10 Gigabit Ethernet (10GbE) with datacentre bridging (DCB), also known as converged enhanced Ethernet (CEE), is a single switching technology that offers low latency, losslessness, low cost and low power. The IEEE has been developing standards that should help Ethernet become a reliable, lossless network suitable for transporting storage traffic such as FCoE.
FCoE requires a 10GbE DCB infrastructure if it is to work. FCoE adapters exchange capability information with the Ethernet switches to find out whether the network is DCB-capable, and this exchange will fail if the adapters are connected to a 'normal', non-DCB switch.
The adapters should provide a seamless transition, encapsulating the fibre channel traffic into Ethernet frames. In essence, FCoE is still fibre channel, just over a new wire.
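To make the encapsulation concrete, here is a minimal Python sketch of the idea: the fibre channel frame is left untouched and simply prefixed with an Ethernet header carrying the FCoE EtherType (0x8906). The MAC addresses and payload are placeholders, and the full FC-BB-5 header fields (version, SOF/EOF delimiters, padding) are omitted.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw fibre channel frame in an Ethernet frame.

    Illustrative only: the real FCoE encapsulation also carries a version
    field, SOF/EOF delimiters and padding, which are omitted here.
    """
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame  # the FC frame itself is unchanged

# Example call with placeholder addresses and payload.
frame = encapsulate_fc_frame(
    dst_mac=b"\x0e\xfc\x00\x00\x00\x01",   # example FCoE-mapped MAC address
    src_mac=b"\x00\x11\x22\x33\x44\x55",
    fc_frame=b"...raw fibre channel frame bytes...",
)
```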
Since there are few storage devices that support 10GbE natively and a huge fibre channel legacy remains, FCoE gateways will be required to encapsulate and decapsulate FCoE traffic between the fibre channel and Ethernet worlds.
All that really changes is the transport in the middle, which now runs over a reliable, lossless, low-latency 10GbE network instead of a separate and expensive fibre channel switching infrastructure.
The real advantage of FCoE is network convergence, not the protocol itself, so other technologies can deliver the same benefit.
Technologies such as iSCSI, CIFS and network file system (NFS) storage have matured and benefit from the speed, latency and continuity advantages of the new 10GbE DCB infrastructure. Unlike FCoE, these IP-based storage solutions can also work over 'normal' Ethernet.
They do not require DCB because they can cope with, for example, packet loss, packets arriving out of order and increased latency: the TCP/IP stack and the applications themselves have ways to handle this. Of course, it is not ideal and it does affect performance.
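As a toy illustration of why this works, the sketch below models what the TCP layer beneath iSCSI or NFS does: it delivers only contiguous, in-order data to the application and flags missing segments for retransmission, so the storage protocol above never sees the loss. The function names and values are invented for illustration.

```python
def reassemble(segments, total):
    """Toy model of the TCP-style reassembly beneath iSCSI or NFS.

    Segments can arrive out of order or get lost; the receiver delivers
    only the contiguous in-order data and asks for the rest again.
    Real TCP does this in the kernel, so the storage protocol never notices.
    """
    received = dict(segments)                 # seq -> payload
    delivered = b""
    next_seq = 0
    while next_seq in received:               # hand up contiguous data only
        delivered += received[next_seq]
        next_seq += 1
    missing = [s for s in range(total) if s not in received]
    return delivered, missing                 # missing -> retransmission requests

# Segments 0-3, with segment 1 dropped somewhere in the network.
data, to_retransmit = reassemble([(2, b"C"), (0, b"A"), (3, b"D")], total=4)
print(data, to_retransmit)   # b'A' [1] -- only in-order data reaches the application
```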
There are huge advantages to running these IP-based storage solutions over 10GbE DCB networks. If packets are not being dropped, they do not need to be retransmitted. DCB also provides a way to regulate flows that makes any IP storage solution run faster and more efficiently.
When the switch memory buffers become full, the switch tells the server to stop sending information for a moment while it transmits the data already in the buffers.
You might not notice if you are downloading email, but storage and database applications will. With priority flow control, you can choose which traffic is paused. Effectively, we can carve up the 10GbE pipe into eight lanes and assign applications to lanes, and the DCB switch can then pause transmission from the server on certain lanes only, leaving the others flowing.
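A minimal Python sketch of the idea, with invented class names and thresholds: each of the eight priority lanes has its own buffer, and a pause is issued only for the lane whose buffer fills, while traffic on the other lanes keeps moving.

```python
from collections import deque

NUM_LANES = 8          # priority flow control divides the link into eight classes
PAUSE_THRESHOLD = 100  # illustrative buffer depth, in frames

class PfcSwitchPort:
    """Toy model of priority flow control on one DCB switch port."""

    def __init__(self):
        self.buffers = [deque() for _ in range(NUM_LANES)]
        self.paused = [False] * NUM_LANES

    def receive(self, lane, frame):
        self.buffers[lane].append(frame)
        if len(self.buffers[lane]) >= PAUSE_THRESHOLD and not self.paused[lane]:
            self.paused[lane] = True           # pause this priority lane only
            print(f"PAUSE sent for lane {lane} only")

    def drain(self, lane, count):
        for _ in range(min(count, len(self.buffers[lane]))):
            self.buffers[lane].popleft()       # forward frames onwards
        if len(self.buffers[lane]) < PAUSE_THRESHOLD and self.paused[lane]:
            self.paused[lane] = False          # buffer has space again: resume

# Storage traffic on lane 3 fills its buffer; email on lane 0 is never paused.
port = PfcSwitchPort()
for _ in range(PAUSE_THRESHOLD):
    port.receive(3, "storage frame")
port.receive(0, "email frame")
print(port.paused)   # only lane 3 is True
```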
The same iSCSI solution will deliver better performance and reliability running over 10GbE DCB than over a normal, best-effort Ethernet infrastructure.
If FCoE is not your cup of tea, that same 10GbE DCB infrastructure can still run all sorts of mission-critical applications, offering the low latency, losslessness, continuity, low power and low cost that all IP-based storage solutions can take advantage of today.
When I look at the vendor choice, price and performance of 10GbE iSCSI or NFS solutions, I wonder why anyone would consider growing a fibre channel infrastructure and having to deal with gateways, along with everything else.
Charles Ferland is EMEA sales vice president at Blade Network Technologies