400G in the data center: options for optical transceivers

As larger data centers confront their inevitable leap to 400G, network managers face a multitude of challenges and decisions. In this three-part blog, CommScope’s James Young provides our take on the technologies and trends behind the move to higher speeds. In part one, we look at the role of optical network modules.

The first measure of an organization’s success is its ability to adapt to changes in its environment. Call it survivability. If you can’t make the leap to the new status quo, your customers will leave you behind.

For cloud-scale data centers, their ability to adapt and survive is tested every year as increasing demands for bandwidth, capacity and lower latency fuel migration to faster network speeds. During the past several years, we’ve seen link speeds throughout the data center increase from 25G/100G to 100G/400G. Every leap to a higher speed is followed by a brief plateau before data center managers need to prepare for the next jump.

Currently, data centers are looking to make the jump to 400G. A key consideration is which optical technology is best. Here, we break down some of the considerations and options. 

[Chart 1. Source: NextPlatform, 2018]


400GE optical transceivers 

The optical market for 400G is being driven by cost and performance as OEMs try to dial in to the data centers’ sweet spot. In 2017, CFP8 became the first-generation 400GE module form factor to be used in core routers and DWDM transport client interfaces. The module dimensions are slightly smaller than CFP2, while the optics support either CDAUI-16 (16x25G NRZ) or CDAUI-8 (8x50G PAM4) electrical I/O. Lately, the focus has shifted to the second-generation 400GE form factor modules: QSFP-DD and OSFP. Developed for use with high port-density data center switches, these thumb-sized modules enable 12.8 Tbps in 1RU via 32 x 400GE ports and support CDAUI-8 (8x50G PAM4) electrical I/O only.
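The lane and port figures above follow from simple multiplication. As a quick sanity check, here is a short sketch of that arithmetic; `aggregate_gbps` is our own illustrative helper, not part of any standard, and the numbers come straight from the text:

```python
# Illustrative lane arithmetic for the 400GE electrical interfaces
# described above. "aggregate_gbps" is a hypothetical helper name.

def aggregate_gbps(lanes: int, gbps_per_lane: int) -> int:
    """Aggregate electrical I/O rate across all lanes of an interface."""
    return lanes * gbps_per_lane

# CDAUI-16: 16 lanes of 25G NRZ signaling
assert aggregate_gbps(16, 25) == 400

# CDAUI-8: 8 lanes of 50G PAM4 signaling
assert aggregate_gbps(8, 50) == 400

# 32 x 400GE ports on a 1RU switch faceplate
faceplate_tbps = 32 * aggregate_gbps(8, 50) / 1000
assert faceplate_tbps == 12.8
print(f"1RU faceplate capacity: {faceplate_tbps} Tbps")
```

The same per-lane framing explains why CDAUI-8 is preferred for dense switch ports: halving the lane count (by doubling the per-lane rate with PAM4) simplifies the electrical routing needed to pack 32 ports into a single rack unit.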

While the CFP8, QSFP-DD and OSFP are all hot-pluggable, that’s not the case with all 400GE transceiver modules. Some are mounted directly on the host printed circuit board. With very short PCB traces, these embedded transceivers enable low power dissipation and high port density. Despite the higher bandwidth density and higher rates per channel for embedded optics, the Ethernet industry continues to favor pluggable optics for 400GE; they are easier to maintain and offer pay-as-you-grow cost efficiency. 

Start with the end in mind

For anyone who’s been in the industry for any length of time, the jump to 400G is yet another waystation along the data center’s evolutionary path. There is already an MSA group working on 800G transceivers based on 8 x 100G lanes. CommScope, a member of the 800G MSA group, is working with other IEEE members on solutions that would support 100G-per-wavelength server connections over multimode fiber. These developments are targeted to enter the market in 2021, with 1.6T schemes to follow, perhaps in 2024.

While the details involved with migrating to higher and higher speeds are daunting, it helps to put the process in perspective. As data center services evolve, storage and server speeds must also increase. Being able to support those higher speeds requires the right transmission media. In choosing the optical modules that best serve the needs of your network, start with the end in mind. The more accurately you anticipate the services needed and the topology required to deliver those services, the better the network will support new and future applications.

In part two of this three-part blog, we discuss the impact of 400G on data center interconnects. Stay tuned.