
Sharada Yeluri’s Post

Technologist and Sr. Director of Engineering @ Juniper Networks


End of life? As high-performance GPUs refresh approximately every two years, memory vendors follow suit by doubling HBM bandwidth at a similar pace. While this rapid innovation benefits GPUs and accelerators, it leaves older HBM generations with unexpectedly short lifespans. Many networking switches, especially those with flexible packet processing, can remain relevant for decades. For example, 100G/400G links will be a popular choice for many SPs well into the future. Unfortunately, if these switches rely on HBM, they face a significant lifecycle mismatch: memory vendors eventually stop producing certain HBM parts long before the switch reaches its end of life (EOL). This can force costly re-spins or expensive last-time buys.

That raises the question... Do we still need deep buffers in data centers? 🤔

The assumption that switches must have deep buffers is fading in modern data centers. Shallow-buffer designs offer lower latency and jitter, while congestion notification methods like ECN help endpoints react quickly to prevent drops. Even modular systems inside data centers, used mainly for their higher radix for DSF, often tune ECN thresholds so that packets rarely enter their deep buffers.

Deep buffers remain important in WAN, core, and DCI networks outside data centers. They help absorb transient congestion in these networks, where the RTTs are tens of milliseconds. Even here, end-to-end congestion control mechanisms, lately enhanced by machine learning, adjust flows dynamically based on RTT and ECN feedback, reducing the need for buffering. Many high-end designs now rely heavily on oversubscribing external memory bandwidth, assuming that only a few ports are congested at any time.

When external memory is a must, how can we reduce the cost of re-spins by reducing or eliminating the reliance on HBMs? 🤔

Shim chiplets: A chiplet between the core die and HBM can isolate the core from HBM changes. When moving to next-gen HBM, only the chiplet needs a re-spin. Keeping the chiplet on a mature, lower-cost process node could reduce the re-spin cost. However, ensuring pin compatibility of the ASIC package with newer re-spins can be tricky, potentially causing HW board redesigns.

Alternate memories: GDDRx faces similar product lifecycle concerns. LPDDRx memories generally have much longer availability, but delivering the same bandwidth as a single HBM3e may require 5-6x the “beachfront” area for the PHYs and 7-8 LPDDR5x parts! This makes it hard to fit higher WAN bandwidth in the same die. Another approach is to move the core's memory controller onto a separate chip that interfaces with HBMs or commodity LPDDRs. While this adds extra latency, adding processing engines/pipelines could minimize the performance impact. However, all these options have higher cost and power than integrated HBM 🥹.

Would love to hear some thoughts on alternate memory solutions for networking silicon... #Memories #AI #networking #HBM #junipernetworks
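For readers who want to sanity-check the trade-offs above, here is a minimal back-of-envelope sketch in Python. It is not from the post: the 400G port / 50 ms RTT figures, the ~1.2 TB/s per-HBM3E-stack bandwidth, and the ~150 GB/s per wide (x128) LPDDR5X package are my own illustrative assumptions, not vendor specifications.

```python
import math

# Back-of-envelope sketch (illustrative assumptions only, not vendor specs):
# 1) bandwidth-delay-product sizing for a deep-buffer WAN port, and
# 2) how many commodity LPDDR5X packages it takes to match one HBM3E stack.

def buffer_gb_for_bdp(port_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay-product sizing: GB of buffer to absorb one RTT of traffic."""
    bits_in_flight = port_gbps * 1e9 * (rtt_ms / 1e3)   # bits accumulated over one RTT
    return bits_in_flight / 8 / 1e9                      # bits -> gigabytes

def parts_needed(target_gb_per_s: float, per_part_gb_per_s: float) -> int:
    """Commodity DRAM parts required to match a target aggregate bandwidth."""
    return math.ceil(target_gb_per_s / per_part_gb_per_s)

if __name__ == "__main__":
    # Deep-buffer example: a single congested 400G WAN-facing port, ~50 ms RTT.
    print(f"400G port, 50 ms RTT -> ~{buffer_gb_for_bdp(400, 50):.1f} GB of buffer")

    # Assumed bandwidths: ~1200 GB/s for one HBM3E stack,
    # ~150 GB/s for a wide (x128) LPDDR5X package at ~9.6 GT/s.
    print(f"One HBM3E stack in LPDDR5X packages -> ~{parts_needed(1200, 150)}")
```

With these assumptions, one congested 400G WAN port alone wants roughly 2.5 GB of buffer, which is why deep-buffer routers reach for off-chip memory, and matching a single HBM3E stack takes on the order of eight LPDDR5X packages, consistent with the 7-8 parts mentioned above.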
