Fibre Channel – the foundation of data center storage connectivity?

Why Fibre Channel? Fibre Channel is still the most secure, reliable, cost-effective, and scalable protocol for connecting servers and storage, and the only protocol that is purpose-built for handling storage traffic.

With the amount of data constantly expanding, businesses either keep up or fall behind in the marketplace. Data is becoming a currency in itself, so businesses have to know how to manage and act upon it in order to stay in the game. That is why easy, efficient, and reliable access to data is key, and why the underlying infrastructure connecting data storage systems to their users matters more than ever.

In the modern data center, architects can choose between many different connectivity options, yet Fibre Channel has been, and will remain, the lifeblood of shared storage connectivity. That is because of Fibre Channel's reliability, security, cost effectiveness, and scalability, and because no other protocol is as well suited to handling storage traffic.

Even though Fibre Channel was created decades ago, it is, and will remain, the first choice for data center connectivity to shared storage. With Fibre Channel, a dedicated storage network is created, and SCSI storage commands are routed between servers and storage devices at line rates of up to 28.05 Gbps (32GFC) and with IOPS in excess of 1 million. Because Fibre Channel was designed for storage traffic from the start, it is well suited to delivering high-performance connectivity. HPE StoreFabric 16GFC and 32GFC adapters and switching infrastructure provide the kind of bandwidth, IOPS, and low latency required in the data center today and for years to come.
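To put the 28.05 Gbps line rate in perspective, here is a minimal back-of-the-envelope sketch (purely illustrative, not an HPE tool) of how line rate and line encoding translate into raw per-direction bandwidth for 16GFC and 32GFC; FC frame and protocol overhead bring the commonly quoted throughput figures down to roughly 1,600 MB/s and 3,200 MB/s respectively.

```python
# Rough conversion from FC line rate to raw bandwidth after line encoding.
# Line rates and encodings come from the FC speed roadmap; FC framing and
# protocol overhead reduce the commonly quoted throughput figures further,
# so treat these numbers as upper bounds.

LINKS = {
    # name: (line rate in Gbaud, line-encoding efficiency)
    "16GFC": (14.025, 64 / 66),    # 64b/66b encoding
    "32GFC": (28.050, 256 / 257),  # 256b/257b encoding
}

for name, (gbaud, efficiency) in LINKS.items():
    usable_gbps = gbaud * efficiency  # bits per second left after encoding
    print(f"{name}: ~{usable_gbps:.1f} Gbps, "
          f"~{usable_gbps * 1000 / 8:,.0f} MB/s per direction")
```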

Recent improvements in Fibre Channel technology keep it at the forefront of storage connectivity.

For example, HPE StoreFabric 16GFC and 32GFC infrastructure can already support NVMe storage traffic, even before NVMe-native storage arrays are mainstream. Other advanced capabilities include simplified deployment and orchestration, advanced diagnostics, and enhanced reliability features such as T10 PI, dual-port isolation, and more.
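As a loose illustration of the host-side visibility that goes along with such diagnostics, the sketch below (assuming a Linux server with FC HBAs; attribute availability varies by driver, and this is not an HPE tool) reads the kernel's standard fc_host sysfs attributes to report link speed and port state per adapter.

```python
# Minimal sketch: enumerate Fibre Channel HBAs on a Linux host and report
# link speed and port state via the kernel's fc_host sysfs class.
# Assumes a Linux system with FC HBAs; attributes may vary by driver.
from pathlib import Path

FC_HOST_ROOT = Path("/sys/class/fc_host")

def read_attr(host: Path, name: str) -> str:
    """Read a single sysfs attribute, returning 'n/a' if it is missing."""
    try:
        return (host / name).read_text().strip()
    except OSError:
        return "n/a"

def main() -> None:
    if not FC_HOST_ROOT.exists():
        print("No fc_host class found - no FC HBAs visible to this host.")
        return
    for host in sorted(FC_HOST_ROOT.iterdir()):
        print(f"{host.name}: "
              f"speed={read_attr(host, 'speed')}, "
              f"state={read_attr(host, 'port_state')}, "
              f"wwpn={read_attr(host, 'port_name')}")

if __name__ == "__main__":
    main()
```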

iSCSI is another popular option for storage connectivity. With iSCSI, storage commands travel over a standard TCP/IP network, which makes it a good fit for low-end to mid-range systems where performance and security are not the primary requirements. A common misconception about Fibre Channel is that, because it uses a dedicated storage network, it is more expensive than iSCSI, which can run on the same Ethernet network as all the regular network traffic.
Nonetheless, to deliver the performance that most customers need from their storage systems, iSCSI has to run on a segmented or dedicated Ethernet network, isolated from regular network traffic. In other words, either a completely dedicated Ethernet network or complex VLAN configurations and security policies are necessary. Another significant difference between iSCSI and FC shows up when DAC cables are used in iSCSI implementations. That approach only suits an SMB customer with a single storage array, because it limits cable reach to about 5 meters. Because of this limitation, DACs do not work well in larger-scale data centers.
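To show what that segmentation means in practice, here is a simple sketch (with made-up subnets and portal addresses, not a recommended design) that checks whether an iSCSI portal address sits on a dedicated storage subnet or shares the general-purpose LAN.

```python
# Minimal sketch: sanity-check that an iSCSI portal lives on a dedicated
# storage subnet rather than on the general-purpose LAN.
# The subnets and portal addresses below are hypothetical examples.
from ipaddress import ip_address, ip_network

STORAGE_SUBNET = ip_network("192.168.50.0/24")  # hypothetical dedicated iSCSI VLAN
GENERAL_LAN = ip_network("10.0.0.0/16")         # hypothetical corporate network

def check_portal(portal_ip: str) -> None:
    addr = ip_address(portal_ip)
    if addr in STORAGE_SUBNET:
        print(f"{portal_ip}: OK - on the dedicated storage subnet")
    elif addr in GENERAL_LAN:
        print(f"{portal_ip}: WARNING - portal shares the general-purpose LAN")
    else:
        print(f"{portal_ip}: unknown subnet - check the VLAN design")

for portal in ("192.168.50.10", "10.0.3.25"):
    check_portal(portal)
```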

Why will Fibre Channel remain a mainstay in the data center?

Most likely, the SCSI command set will be replaced by NVMe, or Non-Volatile Memory Express, commands.

NVMe is a streamlined command set designed for SSDs and storage-class memory, and it is much more efficient than SCSI. It uses a multi-queue architecture with up to 64K I/O queues, each supporting up to 64K commands; compared to SCSI, with its single queue and 64 commands, NVMe delivers much higher performance.
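The difference is easy to see with a quick back-of-the-envelope calculation using the queue limits above; real-world IOPS depend on devices, drivers, and workloads, so this only illustrates why the multi-queue model scales.

```python
# Back-of-the-envelope comparison of command-level parallelism using the
# queue limits quoted above. Actual IOPS depend on devices, drivers and
# workloads; this only shows why the multi-queue model has so much headroom.
scsi_queues, scsi_queue_depth = 1, 64
nvme_queues, nvme_queue_depth = 64 * 1024, 64 * 1024

scsi_in_flight = scsi_queues * scsi_queue_depth
nvme_in_flight = nvme_queues * nvme_queue_depth

print(f"SCSI: up to {scsi_in_flight:,} commands in flight")
print(f"NVMe: up to {nvme_in_flight:,} commands in flight")

# By Little's law, achievable IOPS is roughly commands in flight divided by
# per-command latency, so far more outstanding commands at similar latency
# means far more performance headroom.
```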

The latest HPE StoreFabric 16GFC and 32GFC infrastructure that supports SCSI commands can also run NVMe commands in the SAN, or "over the fabric" as it is called. To take full advantage of NVMe, customers using Ethernet will need to implement low-latency RDMA over Converged Ethernet, or RoCE. However, that solution requires a complex lossless Ethernet implementation using Data Center Bridging (DCB) and Priority Flow Control (PFC). The network complexity of NVMe over Ethernet may become an insurmountable barrier for many customers, especially when an FC SAN deployed today will work just fine with the NVMe storage of tomorrow.