Cisco has announced the latest additions to its Silicon One family of processors, aimed at supporting large-scale artificial intelligence (AI) and machine learning (ML) infrastructure for enterprises and hyperscalers. The new processors are expected to bring networking enhancements that make them well suited to demanding AI/ML deployments and other highly distributed applications.
New additions to the Silicon One family
Cisco has added the 5nm, 51.2Tbps Silicon One G200 and the 25.6Tbps G202 to its growing portfolio of Silicon One processors. Both models can be configured for routing or switching from a single chipset, eliminating the need for a different silicon architecture for each network function. With the new additions, the Silicon One family has grown to 13 members, all designed to be programmable and flexible in an era that demands agility and adaptability. Cisco created the Silicon One portfolio to let customers choose the best device for their use case rather than locking them into predetermined devices.
Enhanced Features of the New Silicon One Processors
Several features set the new Silicon One processors apart from previous models. One of the most notable is the P4-programmable parallel packet processor, capable of performing more than 435 billion lookups per second. Each of the new devices also supports 512 Ethernet ports. This higher radix allows customers to build a 32K 400G GPU AI/ML cluster with 40% fewer switches than competing devices would require. That is a significant cost saving, which makes the new processors more attractive to hyperscalers and enterprise customers running large-scale AI/ML infrastructure.
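To build intuition for why port radix drives switch count, the sketch below sizes a non-blocking two-tier Clos (leaf-spine) fabric for a 32K GPU cluster at two different radices. The 512-port and 256-port values, and the topology itself, are illustrative assumptions rather than Cisco's published sizing math; the point is simply that a higher radix means fewer leaves and spines for the same number of endpoints.

```python
# Rough, illustrative two-tier Clos (leaf-spine) switch-count comparison.
# Radix values and topology are assumptions for illustration only; this is
# not Cisco's published sizing math for the G200.
import math

def two_tier_switch_count(gpus: int, radix: int) -> int:
    """Switch count for a non-blocking two-tier Clos fabric.

    Each leaf uses half its ports for GPUs and half for uplinks;
    each spine port terminates one leaf uplink.
    """
    leaves = math.ceil(gpus / (radix // 2))
    spines = math.ceil(gpus / radix)  # total uplinks divided by spine radix
    return leaves + spines

if __name__ == "__main__":
    gpus = 32 * 1024  # 32K GPUs, as in the article
    for radix in (512, 256):  # 256 is a hypothetical lower-radix comparison point
        print(f"radix {radix}: {two_tier_switch_count(gpus, radix)} switches")
    # radix 512: 192 switches
    # radix 256: 384 switches
```

In this simplified model the higher radix halves the switch count; the exact savings in practice depend on the reference topology and port speeds being compared.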
Ideal for demanding AI/ML deployments or highly distributed applications
The new Silicon One processors sit at the top of the Silicon One family and bring networking enhancements that make them well suited to demanding AI/ML deployments and other highly distributed applications. Many organizations need more powerful and efficient computing infrastructure to support their AI strategies: according to a recent IDC report, global spending on AI is forecast to reach $110 billion by 2024.
Growing Market for AI Networking
The AI networking market has been thriving for the past two years and is expected to keep growing. According to a recent blog from the 650 Group, the market, which includes Broadcom, Marvell, Arista, and Cisco, is expected to reach $10 billion by 2027, up from roughly $2 billion today. Participating in this growing market matters for Cisco: the company is now better positioned to capitalize on rising worldwide investment in AI and ML technologies.
Testing and availability
The Cisco Silicon One G200 and G202 are currently being tested by unnamed customers and are available for sampling. Cisco has also adopted a distinct go-to-market strategy for these devices, which it expects will help it gain market share from competitors.
Scheduled Fabric: a key feature of the new processors
Essentially, a Scheduled Fabric is a highly automated, programmable network fabric that exposes a rich set of APIs for seamless integration across multi-vendor environments. By combining silicon-level innovations with software-defined capabilities, Cisco says its Silicon One platform delivers unparalleled performance, flexibility, and scalability. The result is a shift that should boost productivity, efficiency, and innovation for hyperscalers and enterprise customers alike.
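For intuition about what end-to-end scheduling can buy on AI/ML traffic, the toy model below compares per-flow hashing with evenly sprayed, scheduled traffic across a set of parallel fabric links. It is a generic thought experiment under assumed parameters (8 links, 40 large flows), not a description of the Scheduled Fabric's internals.

```python
# Toy, conceptual model of why end-to-end scheduling keeps fabric links
# evenly loaded compared with per-flow hashing. Parameters are assumed for
# illustration; this does not model Cisco's implementation.
import random
from collections import Counter

LINKS = 8             # parallel fabric links between two tiers (assumed)
FLOWS = 40            # a few large "elephant" flows, typical of AI/ML traffic
CELLS_PER_FLOW = 100  # units of traffic per flow

random.seed(42)

# Per-flow hashing: every cell of a flow follows the same pseudo-randomly
# chosen link, so collisions can overload one link while others sit idle.
hashed = Counter()
for _ in range(FLOWS):
    link = random.randrange(LINKS)  # stand-in for an ECMP-style hash decision
    hashed[link] += CELLS_PER_FLOW

# Scheduled spraying: traffic is chopped into cells and distributed evenly
# across all links, so load stays nearly uniform.
sprayed = Counter()
for cell in range(FLOWS * CELLS_PER_FLOW):
    sprayed[cell % LINKS] += 1

print("per-flow hashing, busiest link :", max(hashed.values()))
print("scheduled spraying, busiest link:", max(sprayed.values()))
```

The busiest link under per-flow hashing typically carries well above the average load, while scheduled spraying keeps every link at the average; that even loading is the kind of behavior scheduled fabrics in general aim for with large, long-lived AI/ML flows.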
With the growing demand for AI/ML infrastructure, Cisco is well-positioned to capture market share and emerge as a dominant player in this space. The Silicon One G200 and G202 will be game-changers for hyperscalers and enterprises, providing them with the advanced features they need to build high-performance, flexible, and secure AI/ML infrastructures.