Source link : https://tech365.info/accelerating-ethernet-native-ai-clusters-with-intel-gaudi-3-ai-accelerators-and-cisco-nexus-9000/
Modern enterprises face significant infrastructure challenges as large language models (LLMs) require processing and moving vast volumes of data for both training and inference. With even the most advanced processors limited by the capabilities of their supporting infrastructure, the need for robust, high-bandwidth networking has become critical. For organizations aiming to run high-performance AI workloads efficiently, a scalable, low-latency network backbone is essential to maximizing accelerator utilization and minimizing costly idle resources.
Cisco Nexus 9000 Series Switches for AI/ML workloads
Cisco Nexus 9000 Series Switches deliver the high-radix, low-latency switching fabric that AI/ML workloads demand. For Intel® Gaudi® 3 AI accelerator1 deployments, Cisco has validated specific Nexus 9000 switches and configurations to ensure optimal performance.
The Nexus 9364E-SG2 (Figure 1), for example, is Cisco's premier AI networking switch, powered by the Silicon One G200 ASIC. In a compact 2RU form factor, it delivers:
64 high-density ports of 800 GbE (or 128 x 400 GbE / 256 x 200 GbE / 512 x 100 GbE via breakouts)
51.2 Tbps aggregate bandwidth for non-blocking leaf-spine fabrics
256 MB shared on-die packet buffer, which is critical for absorbing the synchronized traffic bursts characteristic of collective operations in distributed training
512-way high-radix architecture that reduces the number of switching…
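The port and bandwidth figures above are internally consistent, which can be verified with simple arithmetic: every breakout mode divides the same 51.2 Tbps of aggregate capacity into more, slower ports. The following illustrative Python sketch (not Cisco tooling) checks the math:

```python
# Sanity-check the Nexus 9364E-SG2 figures quoted above.
# 64 physical ports x 800 Gbps each = 51.2 Tbps aggregate.
PHYSICAL_PORTS = 64
PORT_SPEED_GBPS = 800

aggregate_gbps = PHYSICAL_PORTS * PORT_SPEED_GBPS
print(f"Aggregate: {aggregate_gbps / 1000} Tbps")  # 51.2 Tbps

# Each breakout mode (port count, per-port speed in GbE)
# preserves the same aggregate capacity.
breakout_modes = [(64, 800), (128, 400), (256, 200), (512, 100)]
for ports, speed in breakout_modes:
    assert ports * speed == aggregate_gbps
    print(f"{ports} x {speed} GbE = {ports * speed / 1000} Tbps")
```

Note that 51.2 Tbps refers to total switching capacity; a non-blocking leaf-spine design relies on this capacity matching the sum of all port speeds, as the assertions confirm.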
—-
Author : tech365
Publish date : 2026-01-20 18:53:00
Copyright for syndicated content belongs to the linked Source.
—-