Can Broadcom (NASDAQ: AVGO) overtake Marvell and NVIDIA in AI networking silicon?

Broadcom's Tomahawk 6 switch and AI ASICs challenge NVIDIA and Marvell in hyperscale networking. See why AVGO is gaining investor momentum.


Broadcom Inc. (NASDAQ: AVGO) is intensifying competition in the artificial intelligence infrastructure market with its latest generation of networking silicon, challenging both Marvell Technology (NASDAQ: MRVL) and NVIDIA (NASDAQ: NVDA) in the race to power the data centers of the future. Broadcom recently revealed its Tomahawk 6 Ethernet switch chip, capable of 102.4 terabits per second of throughput and aimed squarely at large-scale AI clusters. The announcement comes on the back of a strong second quarter of fiscal year 2025, in which Broadcom reported $15.004 billion in total revenue and $4.4 billion in AI chip sales, a 46 percent increase year-over-year.

The networking silicon battle reflects broader structural changes in the hyperscale data center industry. As cloud providers scale infrastructure to support generative AI, large language models, and autonomous systems, the limitations of legacy interconnects like InfiniBand are becoming more apparent. This has opened a path for companies like Broadcom to offer open, Ethernet-based alternatives that promise lower cost, higher scalability, and better interoperability. At the same time, custom AI silicon is gaining preference over general-purpose compute, further positioning Broadcom as a supplier of choice across compute, interconnect, and memory I/O.

Representative image comparing Broadcom, Marvell, and NVIDIA as contenders in the AI networking silicon race for hyperscale data center dominance.

What is Broadcom’s latest product in the AI networking segment?

Broadcom Inc. introduced its sixth-generation Tomahawk switch chip in early June 2025. The chip delivers 102.4 Tbps of bandwidth and is manufactured using Taiwan Semiconductor Manufacturing Company’s 3-nanometer node. Tomahawk 6 adopts a chiplet-based architecture, allowing the integration of multiple dies into a single package. This modular design improves yield, reduces latency, and supports extremely large AI workloads across thousands of GPUs.
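For context, the headline bandwidth figure translates directly into port counts. The back-of-the-envelope arithmetic below is illustrative only; actual port configurations depend on how the chip's SerDes lanes are provisioned.

```python
# Back-of-the-envelope port math for a 102.4 Tbps switch ASIC.
# Illustrative only; real port configurations depend on SerDes provisioning.
TOTAL_BANDWIDTH_GBPS = 102_400  # 102.4 Tbps expressed in Gbps

for port_speed_gbps in (1_600, 800, 400, 200):
    ports = TOTAL_BANDWIDTH_GBPS // port_speed_gbps
    print(f"{port_speed_gbps} GbE ports: {ports}")

# 1600 GbE ports: 64
# 800 GbE ports: 128
# 400 GbE ports: 256
# 200 GbE ports: 512
```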

In design terms, the Tomahawk 6 provides the core interconnect fabric for AI clusters of up to 100,000 compute nodes. This development aligns with emerging trends among hyperscalers seeking to decouple networking hardware from proprietary software stacks. Broadcom’s decision to remain Ethernet-centric is being rewarded by major clients like Google, Meta, and ByteDance, all of whom are reportedly deploying Broadcom-based networking for their custom silicon AI deployments.
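To see why a single switch generation matters for cluster scale, consider a simplified two-tier leaf-spine fabric in which every leaf devotes half its ports to accelerators and half to spine uplinks. The sketch below is an illustrative sizing exercise under that assumption, not a published Broadcom reference design.

```python
# Rough two-tier leaf-spine sizing, assuming each leaf splits its ports evenly
# between downlinks (to accelerators) and uplinks (to spines).
def two_tier_endpoints(ports_per_switch: int) -> int:
    downlinks_per_leaf = ports_per_switch // 2  # half the radix faces accelerators
    max_leaves = ports_per_switch               # each spine port can feed one leaf
    return downlinks_per_leaf * max_leaves

# A 102.4 Tbps chip configured as 512 x 200 GbE ports:
print(two_tier_endpoints(512))  # 131072 endpoints, comfortably above 100,000 nodes
```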


These product developments coincide with Broadcom’s strong financial momentum in AI. In Q2 FY25, AI-specific semiconductor sales reached $4.4 billion, with expectations of $5.1 billion in Q3. This level of sustained growth is shifting investor perception, historically anchored in Broadcom’s wireless and broadband businesses, toward a more AI-driven thesis.
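As a quick sanity check on those figures, the reported 46 percent year-over-year growth and the $5.1 billion third-quarter guide imply the following (illustrative arithmetic on the disclosed numbers, not additional company data):

```python
# Implied growth math from the reported AI revenue figures.
q2_ai_revenue = 4.4    # USD billions, Q2 FY25 (reported)
yoy_growth = 0.46      # 46 percent year-over-year (reported)
q3_ai_guidance = 5.1   # USD billions (guided)

prior_year_quarter = q2_ai_revenue / (1 + yoy_growth)
sequential_growth = q3_ai_guidance / q2_ai_revenue - 1

print(f"Implied Q2 FY24 AI revenue: ~${prior_year_quarter:.1f}B")   # ~$3.0B
print(f"Guided Q2-to-Q3 sequential growth: {sequential_growth:.1%}") # ~15.9%
```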

How does Marvell Technology compare in the AI networking market?

Marvell Technology Inc. has also carved out a substantial footprint in AI infrastructure, albeit through a different approach. Marvell specializes in custom silicon and electro-optical interconnects. It has collaborated with Amazon Web Services on its Trainium 2 accelerators and with Microsoft on the MAIA XPU architecture. In fiscal year 2025, Marvell generated over $1.5 billion in AI revenue and expects that figure to exceed $2.5 billion in fiscal 2026.

Marvell’s value proposition lies in its advanced optical interconnects and tailored system-on-chip solutions, which are particularly attractive for clients building proprietary AI accelerators. However, its enterprise networking segment saw a 35.3 percent year-over-year decline, reflecting a pivot away from general-purpose infrastructure toward AI specialization. This transition creates volatility, particularly as Marvell depends heavily on large design wins to support scale.

Institutional sentiment toward Marvell remains positive but cautious. Analysts acknowledge the firm’s technical advantages but flag concerns about margin pressure and inconsistent volumes. Compared to Broadcom’s vertically integrated model and broader product portfolio, Marvell faces a narrower growth path and more pricing sensitivity.

How is NVIDIA positioned in AI networking and where are the cracks emerging?

NVIDIA Corporation is the dominant force in AI compute through its high-performance GPU accelerators. Its position in networking rests on InfiniBand interconnects, gained through the acquisition of Mellanox Technologies, announced in 2019 and completed in 2020. InfiniBand offers ultra-low latency and high throughput, making it ideal for tightly coupled supercomputing environments.


However, hyperscalers are increasingly favoring open, Ethernet-based architectures for AI workloads, citing cost and flexibility concerns. Broadcom’s Tomahawk series and Marvell’s electro-optics directly challenge InfiniBand by enabling Ethernet to close the performance gap while maintaining interoperability across hardware vendors.

As of mid-2025, NVIDIA continues to enjoy strong institutional support, driven by the success of its Blackwell architecture and record GPU shipments. Yet there is growing recognition among analysts that NVIDIA’s reliance on proprietary networking could become a bottleneck. Some data center operators have begun deploying hybrid models that pair NVIDIA accelerators with Broadcom networking, breaking away from vertically integrated designs.

What are institutional investors and analysts saying about Broadcom?

Institutional investors have begun to recognize Broadcom Inc. as a credible long-term play in AI infrastructure. The company’s ability to simultaneously grow revenue, preserve a 66–67 percent EBITDA margin, and convert over 40 percent of revenue into free cash flow distinguishes it from more capital-intensive semiconductor peers. In Q2 FY25, Broadcom generated $6.411 billion in free cash flow and returned $7 billion to shareholders through dividends and buybacks.
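Those conversion and payout claims can be cross-checked with simple arithmetic on the reported figures (an illustrative calculation, not company-published ratios):

```python
# Free-cash-flow conversion implied by the reported Q2 FY25 figures.
total_revenue = 15.004    # USD billions (reported)
free_cash_flow = 6.411    # USD billions (reported)
capital_returned = 7.0    # USD billions in dividends and buybacks (reported)

fcf_conversion = free_cash_flow / total_revenue
print(f"FCF conversion: {fcf_conversion:.1%}")  # ~42.7%, above the 40% cited
print(f"Capital returned vs FCF: {capital_returned / free_cash_flow:.2f}x")  # ~1.09x
```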

Portfolio managers at several large asset managers have started rotating into Broadcom as an AI exposure alternative to NVIDIA, citing valuation discipline and diversification. Trading volumes surged on June 5, 2025, after Q2 results, reflecting hedge fund and ETF activity centered on Broadcom’s guidance for $15.8 billion in Q3 revenue, driven by higher AI shipments and VMware cross-sell.

Sell-side analysts from at least four major investment banks have raised their price targets on Broadcom in the past week, noting the strength of its AI business and its alignment with cloud-scale architectural shifts. The Tomahawk 6 release, in particular, has triggered comparisons with Marvell’s electro-optics and NVIDIA’s closed stack, with some analysts suggesting Broadcom is best positioned to benefit from the coming Ethernet-dominated era.


What does the future look like for Broadcom in AI networking?

Broadcom Inc. appears well-positioned to gain market share in AI networking over the next 12–18 months. The shift among hyperscalers toward open standards, modular chiplets, and custom silicon creates a structural advantage for Broadcom’s product design philosophy. The company is expected to benefit not only from AI-specific networking but also from expansion in edge AI, enterprise virtualization, and AI-managed security—segments where it can leverage synergies with VMware and its infrastructure software stack.

With over $9.47 billion in cash and equivalents and modest capital expenditures of $144 million in Q2 FY25, Broadcom also has ample flexibility to pursue bolt-on acquisitions, should gaps emerge in optical, memory, or cloud-native orchestration capabilities. Analysts are closely watching whether Broadcom will expand its ASIC partnerships beyond existing cloud partners or pursue deeper vertical integration in AI-managed software-defined networking.

While Marvell and NVIDIA remain strong competitors, Broadcom’s multipronged strategy—combining ASICs, Ethernet switching, and AI-optimized interconnects—offers a broader and potentially more sustainable growth path. If AI infrastructure spending continues to rise as forecasted, Broadcom could see sustained revenue acceleration well into fiscal 2026.

