Karthik Rajagopal - Logitech | LinkedIn
Cupertino, California, United States
442 followers · 418 connections
Explore more posts
- Sharada Yeluri: End of life? As high-performance GPUs refresh approximately every two years, memory vendors follow suit by doubling HBM bandwidth at a similar pace. While this rapid innovation benefits GPUs and accelerators, it leaves older HBM generations with unexpectedly short lifespans. Many networking switches, especially those with flexible packet processing, can remain relevant for decades; 100G/400G links, for example, will be a popular choice for many SPs well into the future. Unfortunately, if these switches rely on HBM, they face a significant lifecycle mismatch: memory vendors eventually stop producing certain HBM parts long before the switch reaches its end of life (EOL). This can force costly re-spins or expensive last-time buys. That raises the question...

  Do we still need deep buffers in data centers? 🤔 The assumption that switches must have deep buffers is fading in modern data centers. Shallow-buffer designs offer lower latency and jitter, while congestion notification methods like ECN help endpoints react quickly to prevent drops. Even modular systems inside data centers, used mainly for their higher radix for DSF, often tune ECN thresholds so that packets rarely enter their deep buffers.

  Deep buffers remain important in WAN, core, and DCI networks outside data centers, where they help absorb transient congestion over RTTs of tens of milliseconds. Even here, end-to-end congestion control mechanisms, lately enhanced by machine learning, adjust flows dynamically based on RTT and ECN feedback, reducing the need for buffering. Many high-end designs now rely heavily on oversubscribing external memory bandwidth, assuming that only a few ports are congested at any time.

  When external memory is a must, how can we reduce the cost of re-spins by reducing or eliminating the reliance on HBMs? 🤔

  Shim Chiplets: A chiplet between the core die and HBM can isolate the core from HBM changes. When moving to next-gen HBM, only the chiplet needs a re-spin, and keeping the chiplet on a mature, lower-cost process node could reduce the re-spin cost. However, ensuring pin compatibility of the ASIC package with newer re-spins can be tricky, potentially forcing HW board redesigns.

  Alternate Memories: GDDRx faces similar product-lifecycle concerns. LPDDRx memories generally have much longer availability, but delivering the same bandwidth as a single HBM3e may require 5-6x the "beachfront" area for the PHYs and 7-8 LPDDR5x parts (see the back-of-the-envelope sketch after these posts)! This makes it hard to fit higher WAN bandwidth in the same die. Another approach is to move the core's memory controller onto a separate chip that interfaces with HBMs or commodity LPDDRs. While this adds extra latency, adding additional processing engines/pipelines could minimize the performance impact.

  However, all these options have higher cost and power than integrated HBM 🥹. Would love to hear some thoughts on alternate memory solutions for networking silicon... #Memories #AI #networking #HBM #junipernetworks
- Charnwit Lesakul: Ref: Intel 64 and IA-32 Architectures. Dear folks, reference is made to the Intel 64 and IA-32 Architectures material; the chapter on the basic execution environment has been uploaded to my website (www.4nopparats.com/slides/intel-architectures-intel-64-and-ia-32-8). With this material (chapter 3), you should have a clear picture of:

  1) Memory models: flat model, segmented memory model, real-address mode model
  2) Modes of operation: in 32-bit, protected mode, real-address mode, and system management mode; in 64-bit, compatibility mode and 64-bit mode
  3) General-purpose registers
  4) Segment registers
  5) The EFLAGS/RFLAGS register
  6) FPU registers (floating-point data registers)
  7) etc.

  Furthermore, you should also come away with an idea of:
  - how Intel processors compute and store the offset or address for operands, instructions, and pointers (a small worked example follows these posts);
  - how many operand types the Intel processor supports and how they are addressed;
  - what each general-purpose register is used for;
  - how the segment register is selected in the different memory models;
  - how EFLAGS/RFLAGS are organized in 32- and 64-bit modes.

  Given the clear narration of each topic in the material I posted, you should not have any problem reading through it. However, if you have any enquiry, just drop me an email via charnwit.lesakul@yahoo.com. Thank you very much. Charnwit Lesakul
- Laban Eilers: Like saving money in AWS? Check out this AWS re:Invent talk: the story of Honeycomb's migration of their Lambda services to Graviton (ARM64) by the incomparable Liz Fong-Jones. Having been through a similar process of adding ARM64 to our Kubernetes platform, this very much tracks with my experience. Modern language and platform support for ARM64 is really good, and in practice the challenge isn't getting code to compile or work correctly; the actual work is mostly around build/CI changes and managing the risk of a cutover (including observability).

  While the Honeycomb team was able to use Go's support for cross-compilation, we have a polyglot fleet of microservices and needed a more general solution. We ended up creating a Docker remote build service (using BuildKit) that could build AMD64, ARM64, or multi-arch containers. This was pretty seamless, since our dev tools just add a few CLI flags to docker invocations and, voilà, we get multi-arch container images (a minimal sketch of that kind of wrapper follows these posts). For risk mitigation and observability we used a number of tools (including Honeycomb); it's a much more varied and nuanced story.

  Liz's experiences here tracked very much with what we went through: the cost (and energy!) savings were real, but they vary by workload. Liz's talk has inspired me to do a more involved write-up of this on my blog, hopefully coming soon. #PlatformEngineering #Kubernetes #graviton
- Ryan Sanders: This is a builder. Congratulations, Thomas S Hatch (Tom), and well deserved on all your success. While your title now says "Distinguished Engineer at Broadcom," we all know you fondly as the creator of Salt Project and a founder of SaltStack with Marc Chenn. I would have loved to see it stay on the outside and independent, as it was clear there was more to do and more to serve, but I am thrilled to hear how it has flourished under VMware and Broadcom.

  I loved this part of your post: "…all difficulties in life are given to us so that we can grow and improve, in this life and in the life to come. For each of us, change is one of those difficulties, no matter how amazing the outcome may be. As such, I strive to view every difficulty as an opportunity, and while I have not always been successful at this endeavor, I do feel that I have improved at it over time."

  For those who don't know Tom well, you ought to find him and ask him more about his story. The next part is inspiring: "…it is this belief that helped keep me strong when I had cancer, then later when I had a brain tumor, which required me to learn how to walk and talk again. One of my most core beliefs is this very thing. The greatest blessings in life are, quite literally, our greatest trials."

  Thank you for building and choosing to be an entrepreneur, Tom. #entrepreneur #entrepreneurship #siliconslopes Silicon Slopes https://lnkd.in/g7zy__Mv
- Kamran A.: Remember the days of juggling OS, network, database, and system engineers to keep your infrastructure running smoothly? Those were the days! 🤯 It used to be a complex symphony of coordination, with OS engineers fine-tuning CPU usage and RAM, network engineers ensuring ironclad connectivity, DBAs managing intricate schema updates, and system engineers and architects tackling a myriad of other challenges. It was a juggling act that required constant attention and expertise.

  Cloud computing has revolutionized IT operations, condensing these roles into one powerful position: the Cloud Architect. Cloud Architects are the modern-day IT superheroes, possessing a deep understanding of everything from infrastructure to applications. They design, implement, and manage cloud solutions, ensuring optimal performance, security, and cost-efficiency. From architecting scalable systems and optimizing resource utilization to implementing robust security measures and ensuring high availability, Cloud Architects are the linchpin of successful cloud adoption.

  If you're looking to streamline operations, reduce costs, and unlock the full potential of your business, a Cloud Architect is your secret weapon. Let's discuss how cloud computing can transform your organization. #cloudarchitect #digitaltransformation #cloudmigration #infrastructure #integriti
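For the HBM-versus-LPDDR point in Sharada Yeluri's post above, here is a minimal back-of-the-envelope sketch. All bandwidth figures are illustrative assumptions (one plausible HBM3e stack speed and one plausible wide LPDDR5x package), not vendor specifications; under these assumptions the part count lands in the 7-8 range the post mentions.

```python
# Back-of-the-envelope comparison of one HBM3e stack vs. LPDDR5x packages.
# All figures below are illustrative assumptions for this sketch, not vendor data.
import math

# Assumed HBM3e stack: 1024-bit interface at ~8 Gb/s per pin -> ~1 TB/s.
hbm3e_pins = 1024
hbm3e_gbps_per_pin = 8.0
hbm3e_bw_gbytes = hbm3e_pins * hbm3e_gbps_per_pin / 8        # ~1024 GB/s

# Assumed LPDDR5x package: x128 interface at 8.533 Gb/s per pin.
lpddr5x_pins = 128
lpddr5x_gbps_per_pin = 8.533
lpddr5x_bw_gbytes = lpddr5x_pins * lpddr5x_gbps_per_pin / 8  # ~136 GB/s

parts_needed = math.ceil(hbm3e_bw_gbytes / lpddr5x_bw_gbytes)
print(f"HBM3e stack:     ~{hbm3e_bw_gbytes:.0f} GB/s")
print(f"LPDDR5x package: ~{lpddr5x_bw_gbytes:.0f} GB/s")
print(f"LPDDR5x packages to match one stack: {parts_needed}")
```

The "beachfront" multiplier in the post follows the same logic: each LPDDR channel needs its own stretch of die-edge PHY, whereas an HBM stack concentrates its 1024-bit interface over a much smaller edge via the interposer.

As a companion to Charnwit Lesakul's reading list above, here is a minimal sketch (not taken from the referenced slides) of how an operand's effective address is formed from base + index × scale + displacement, and how it combines with a segment base in the segmented versus flat memory models. The register values are made up for illustration.

```python
# Minimal sketch of x86 operand addressing, e.g. MOV EAX, [EBX + ESI*4 + 0x10].
# Register values below are made up for illustration.

def effective_address(base: int, index: int, scale: int, disp: int) -> int:
    """Offset within a segment: base + index*scale + displacement."""
    assert scale in (1, 2, 4, 8)
    return (base + index * scale + disp) & 0xFFFFFFFF       # 32-bit wrap-around

def linear_address(segment_base: int, offset: int) -> int:
    """Segmented model: the segment base (from the descriptor selected by the
    segment register) is added to the offset. In the flat model the base is 0."""
    return (segment_base + offset) & 0xFFFFFFFF

ebx, esi = 0x0040_1000, 0x0000_0020
offset = effective_address(base=ebx, index=esi, scale=4, disp=0x10)
print(hex(offset))                     # 0x401090
print(hex(linear_address(0, offset)))  # flat model: linear address == offset
```

For the multi-arch build flow Laban Eilers describes above, here is a hypothetical sketch of the client side of such a helper, assuming it simply shells out to `docker buildx build` with a `--platform` flag; the function name, tag, and registry are placeholders, not the author's or Honeycomb's actual tooling.

```python
# Hypothetical client-side helper: shells out to `docker buildx build` with a
# --platform flag so BuildKit produces AMD64, ARM64, or multi-arch images.
import subprocess

def build_image(context: str, tag: str,
                platforms=("linux/amd64", "linux/arm64"),
                push: bool = False) -> None:
    """Build a container image for one or more platforms via docker buildx."""
    cmd = [
        "docker", "buildx", "build",
        "--platform", ",".join(platforms),   # e.g. "linux/amd64,linux/arm64"
        "--tag", tag,
    ]
    if push:
        cmd.append("--push")    # push the (multi-arch) manifest to a registry
    elif len(platforms) == 1:
        cmd.append("--load")    # single-arch images can be loaded into the local daemon
    cmd.append(context)
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Placeholder tag/registry: build a multi-arch image and push it.
    build_image(context=".", tag="registry.example.com/my-service:latest", push=True)
```

The remote part of such a service is not shown here; in practice the heavy lifting can be delegated to a remote BuildKit builder (e.g. one registered with `docker buildx create`), so developer machines and CI only issue the build request.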