SAN FRANCISCO, July 15 (Reuters) - Broadcom's (AVGO) chip unit
unveiled on Tuesday a new networking processor that aims to
speed up artificial intelligence data crunching, which requires
stringing together hundreds of chips that work in concert.
The new chip is the latest piece of hardware that Broadcom
has brought to bear against rival AI giant Nvidia (NVDA).
Broadcom helps Alphabet's Google produce its AI chips, which
are perceived by developers and industry experts as one of the
few viable alternatives to Nvidia's powerful graphics
processing units (GPUs).
Dubbed the Tomahawk Ultra, Broadcom's chip acts as a traffic
controller for data whizzing between dozens or hundreds of
chips that sit relatively close together inside a data center,
such as within a single server rack.
The chip aims to compete with Nvidia's NVLink Switch chip,
which serves a similar purpose, but the Tomahawk Ultra can tie
together four times as many chips, Ram Velaga, a Broadcom
senior vice president, told Reuters in an interview. And
instead of a proprietary protocol to move the data, it uses a
version of Ethernet boosted for speed.
Both companies' chips help data center builders and others
tie as many chips as possible together within a few feet of each
other, a technique the industry calls "scale-up" computing. By
ensuring nearby chips can communicate with each other quickly,
software developers can summon the computing horsepower
necessary for AI.
Taiwan Semiconductor Manufacturing (TSM) will manufacture the
Ultra line of processors with its 5-nanometer process, Velaga
said. The processor is now shipping.
It took Broadcom's teams of engineers roughly three years to
develop the design, which was originally built for a segment of
the market known as high-performance computing. But as
generative AI boomed, Broadcom adapted the chip for use by AI
companies because it is suited to scaling up.