Optical Interconnects

Optical interconnects use light (photons) rather than electrical signals (electrons) to transmit data between computing components — from chip-to-chip within a server to rack-to-rack within a datacenter. As AI training clusters demand ever-higher bandwidth, optical interconnects are becoming essential infrastructure, moving from the datacenter's network edge into its core and eventually onto the chips themselves.

The physics favors photons for data transport. Light travels through fiber optics with virtually zero signal degradation over datacenter distances, carries orders of magnitude more bandwidth per strand than copper cables, and consumes significantly less energy per bit. A single optical fiber can carry 100+ Tb/s using wavelength-division multiplexing (WDM), where different colors of light carry independent data streams simultaneously.
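To make the WDM claim concrete, aggregate fiber capacity is just channels × symbol rate × bits per symbol × polarizations. A minimal sketch, with illustrative parameter values chosen for the example (64 channels, 100 GBd, PAM4, dual polarization) rather than the specs of any particular system:

```python
# Illustrative WDM capacity estimate. All parameter values below are
# assumptions for the example, not specifications of a real product.
def fiber_capacity_bps(channels, baud_rate, bits_per_symbol, polarizations=1):
    """Aggregate throughput of one fiber:
    channels x symbol rate x bits/symbol x polarizations."""
    return channels * baud_rate * bits_per_symbol * polarizations

# 64 DWDM channels, 100 GBd PAM4 (2 bits/symbol), dual polarization.
capacity = fiber_capacity_bps(channels=64, baud_rate=100e9,
                              bits_per_symbol=2, polarizations=2)
print(f"{capacity / 1e12:.1f} Tb/s per fiber")  # 25.6 Tb/s
```

Pushing toward the 100+ Tb/s figures means scaling one or more of these factors: more wavelengths, faster symbol rates, or denser modulation formats.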

In AI datacenters, optical interconnects serve multiple roles:

- Datacenter fabric: Optical transceivers (currently 400G and 800G, with 1.6T arriving) connect top-of-rack switches to spine switches, forming the high-bandwidth network backbone.
- GPU cluster interconnect: InfiniBand and Ethernet fabrics for AI training increasingly use optical links for inter-rack connections, especially as clusters span multiple rows or buildings.
- Front-panel optics: NVIDIA's ConnectX adapters and switch ASICs use optical transceivers for external connectivity.

The next frontier is co-packaged optics (CPO), where optical components are integrated directly into or adjacent to switch ASICs and eventually GPU/accelerator packages. Moving the electrical-to-optical conversion from pluggable front-panel modules to the package itself drastically shortens the lossy electrical traces a signal must cross, eliminating the power-hungry retimers, DSPs, and long-reach serializer/deserializer (SerDes) circuits needed to drive them. Broadcom, Intel, and TSMC are all developing CPO technology for next-generation networking.
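The power argument for CPO reduces to energy per bit times aggregate bandwidth. A back-of-envelope sketch, where the pJ/bit figures and the 51.2 Tb/s switch bandwidth are illustrative assumptions for comparison, not vendor specifications:

```python
# Back-of-envelope interconnect power. The pJ/bit figures below are
# assumed illustrative values, not measurements of any real product.
def link_power_watts(bandwidth_bps, picojoules_per_bit):
    """Power = bandwidth (bits/s) x energy per bit (pJ -> J)."""
    return bandwidth_bps * picojoules_per_bit * 1e-12

bw = 51.2e12  # a 51.2 Tb/s switch ASIC (assumed example figure)
pluggable = link_power_watts(bw, 15)  # pluggable optics + long-reach SerDes
cpo = link_power_watts(bw, 5)         # co-packaged optics, short electrical reach
print(f"pluggable: {pluggable:.0f} W, CPO: {cpo:.0f} W")  # 768 W vs 256 W
```

Even under these rough assumptions, a few pJ/bit saved per link translates to hundreds of watts per switch at multi-Tb/s scale, which is why CPO is pursued despite its packaging complexity.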

Silicon photonics — fabricating optical components using standard semiconductor manufacturing processes — is the enabling technology for CPO. By building lasers, modulators, and detectors on silicon, optical interconnects can be mass-produced at semiconductor scale and costs. Intel, Cisco (Acacia), and Ayar Labs are leading silicon photonics development.

Looking further ahead, optical computing proposes using light not just for communication but for computation itself. Photonic matrix multiplication — using interference patterns to perform the dot products that dominate AI workloads — could theoretically achieve higher throughput at lower power than electronic computation. Startups like Lightmatter and Luminous Computing are pursuing this vision, though practical photonic AI processors remain in early development.
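The idea behind photonic matrix multiplication can be sketched schematically: inputs are encoded as optical amplitudes, weights as modulator transmission coefficients, and a photodetector accumulates the weighted contributions. A toy model of this mapping (a conceptual sketch, not a simulation of any real photonic device):

```python
# Toy model of a photonic dot product: each input is an optical amplitude,
# each weight a modulator's transmission coefficient, and a detector sums
# the weighted channels. Schematic only; real devices must also contend
# with phase, loss, noise, and calibration.
def photonic_dot(inputs, weights):
    # One "channel" per input; the detector integrates all contributions.
    return sum(x * w for x, w in zip(inputs, weights))

result = photonic_dot([1.0, 2.0, 3.0], [0.5, 0.25, 0.1])
print(result)
```

The appeal is that, in the optical domain, this multiply-accumulate happens passively as light propagates, rather than through clocked transistor switching.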

For AI infrastructure, the optical interconnect roadmap is critical. As training clusters scale from thousands to hundreds of thousands of accelerators, the bandwidth and energy requirements of the interconnect fabric grow superlinearly. Optical solutions are the only known path to sustaining this scaling without the interconnect becoming the dominant power consumer and performance bottleneck.
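The superlinear growth can be illustrated with a simplified fat-tree model: each of N endpoints traverses roughly log(N) switch tiers, so total link count grows on the order of N·log(N). The radix and tier arithmetic below are simplifying assumptions, not a model of any specific topology:

```python
import math

# Simplified full-bisection fat-tree scaling: total links ~ N * tiers,
# where tiers grows ~ log(N). Radix and the radix/2 uplink split are
# assumed illustrative parameters.
def fabric_links(n_accelerators, switch_radix=64):
    """Rough total link count for a fat-tree connecting n_accelerators."""
    tiers = math.ceil(math.log(n_accelerators) / math.log(switch_radix // 2))
    return n_accelerators * tiers

for n in (1_000, 100_000):
    print(n, fabric_links(n))  # 1000 -> 2000 links; 100000 -> 400000 links
```

In this sketch, scaling the cluster 100x multiplies the link count 200x: links per accelerator doubles as tiers are added, which is the superlinear effect the paragraph describes.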

Further Reading