* Nvidia's (NVDA) Rubin AI chip to include GPU, CPU and networking chips
* Nvidia faces delay with current flagship Blackwell chip due to design flaw
* AI startups claim competitive chatbots need fewer Nvidia chips
By Stephen Nellis and Max A. Cherney
SAN JOSE, California, March 18 (Reuters) - Nvidia (NVDA) CEO
Jensen Huang is expected on Tuesday to reveal fresh details
about the company's newest artificial intelligence chip at its
annual software developer conference.
Nvidia stock has more than quadrupled in value over the past
three years as the company powered the rise of advanced AI
systems such as ChatGPT, Claude and many others.
Much of that success stemmed from the decade that the Santa
Clara, California-based company spent building software tools to
woo AI researchers and developers - but it was Nvidia's data
center chips, which sell for tens of thousands of dollars each,
that accounted for the bulk of its $130.5 billion in sales last
year.
Huang hinted last year that the new flagship offering would be
named Rubin and consist of a family of chips - including a
graphics processing unit, a central processing unit and
networking chips - all designed to work together in the huge
data centers that train AI systems. Analysts expect the chips to
go into production this year and roll out in high volumes
starting next year.
Nvidia is trying to establish a new pattern of introducing a
flagship chip every year, but has so far hit both internal and
external obstacles.
The company's current flagship chip, called Blackwell, is coming
to market slower than expected after a design flaw caused
manufacturing problems. The broader AI industry last year also
grappled with signs that the prior approach - feeding expanding
troves of data into ever-larger data centers full of Nvidia
chips - had started to show diminishing returns.
Nvidia shares tumbled this year when Chinese startup DeepSeek
said it could produce a competitive AI chatbot with far less
computing power - and thus fewer Nvidia chips - than earlier
models had required. Huang has fired back that newer AI models,
which spend more time thinking through their answers, will make
Nvidia's chips even more important, because they are the fastest
at generating "tokens," the fundamental units of AI programs.
"When ChatGPT first came out, the token generation rate only
had to be about as fast as you can read," Huang told Reuters
last month. "However, the token generation rate now is how fast
the AI can read itself, because it's thinking to itself. And the
AI can think to itself a lot faster than you and I can read and
because it has to generate so many future possibilities before
it presents the right answer to you."