* DeepSeek's R1 model attracted global attention in January
* Article in Nature reveals R1's compute training costs for the first time
* DeepSeek also addresses claims it distilled OpenAI's models in peer-reviewed article
(This Sept 18 story was updated on Sept 19 to add context on
distillation in paragraphs 14-20)
By Eduardo Baptista
BEIJING, Sept 18 (Reuters) - Chinese AI developer
DeepSeek said it spent $294,000 on training its R1 model, much
lower than figures reported for U.S. rivals, in a paper that is
likely to reignite debate over Beijing's place in the race to
develop artificial intelligence.
The rare update from the Hangzhou-based company - the first
estimate it has released of R1's training costs - appeared in a
peer-reviewed article in the academic journal Nature published
on Wednesday.
DeepSeek's release of what it said were lower-cost AI
systems in January prompted global investors to dump tech stocks
as they worried the new models could threaten the dominance of
AI leaders including Nvidia (NVDA).
Since then, the company and founder Liang Wenfeng have
largely disappeared from public view, apart from pushing out a
few new product updates.
The Nature article, which listed Liang as one of the
co-authors, said DeepSeek's reasoning-focused R1 model cost
$294,000 to train and used 512 Nvidia H800 chips. A previous
version of the article published in January did not contain this
information.
Training costs for the large-language models powering AI
chatbots refer to the expenses incurred from running a cluster
of powerful chips for weeks or months to process vast amounts of
text and code.
Sam Altman, CEO of U.S. AI giant OpenAI, said in 2023 that
the training of foundational models had cost "much more" than
$100 million - though his company has not given detailed figures
for any of its releases.
Some of DeepSeek's statements about its development costs
and the technology it used have been questioned by U.S.
companies and officials.
The H800 chips it mentioned were designed by Nvidia for the
Chinese market after the U.S. in October 2022 made it illegal
for the company to export its more powerful H100 and A100 AI
chips to China.
U.S. officials told Reuters in June that DeepSeek has access
to "large volumes" of H100 chips that were procured after U.S.
export controls were implemented. Nvidia told Reuters at the
time that DeepSeek has used lawfully acquired H800 chips, not
H100s.
In a supplementary information document accompanying the
Nature article, the company acknowledged for the first time it
does own A100 chips and said it had used them in preparatory
stages of development.
"Regarding our research on DeepSeek-R1, we utilized the A100
GPUs to prepare for the experiments with a smaller model," the
researchers wrote. After this initial phase, R1 was trained for
a total of 80 hours on a cluster of 512 H800 chips, they
added.
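As a rough cross-check, the figures reported in the paper ($294,000, 512 H800 GPUs, 80 hours) imply a per-GPU-hour rate as follows. This is simple arithmetic on the numbers above, not a figure DeepSeek itself published:

```python
# Implied compute pricing from DeepSeek's reported R1 figures.
num_gpus = 512          # H800 chips in the training cluster
hours = 80              # total training time reported
total_cost = 294_000    # training cost in U.S. dollars

gpu_hours = num_gpus * hours        # 40,960 GPU-hours
rate = total_cost / gpu_hours       # implied dollars per GPU-hour

print(f"{gpu_hours} GPU-hours at ${rate:.2f}/GPU-hour")
# 40960 GPU-hours at $7.18/GPU-hour
```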
Reuters has previously reported that one reason DeepSeek was
able to attract the brightest minds in China was because it was
one of the few domestic companies to operate an A100
supercomputing cluster.
MODEL DISTILLATION
DeepSeek also responded for the first time, though not
directly, to assertions from a top White House adviser and other
U.S. AI figures in January that it had deliberately "distilled"
OpenAI's models into its own.
DeepSeek has consistently defended distillation as
yielding better model performance while being far cheaper to
train and run, enabling broader access to AI-powered
technologies given the energy-intensive resource demands of
full-scale models.
The term refers to a technique whereby one AI system
learns from another AI system, allowing the newer model to reap
the benefits of the investments of time and computing power that
went into building the earlier model, but without the associated
costs.
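In its classic supervised form, the technique trains a smaller "student" model to match the output distribution of a larger "teacher" model. The minimal sketch below illustrates that idea with a temperature-softened softmax and a KL-divergence loss; it is a generic textbook illustration, not a description of DeepSeek's or OpenAI's actual pipelines:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                         # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output
    distribution and the student's. Training the student to
    minimise this makes it mimic the teacher's behaviour."""
    p = softmax(teacher_logits, temperature)   # teacher "soft targets"
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]
# A student that matches the teacher incurs zero loss;
# a mismatched student incurs a positive loss.
print(distillation_loss(teacher, [3.0, 1.0, 0.2]))  # 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]))  # positive
```

The student inherits the teacher's learned behaviour without repeating the teacher's full training run, which is why the technique is cheap relative to training from scratch.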
DeepSeek said in January that it had used Meta's open-source
Llama AI model for some distilled versions of its own models.
DeepSeek said in Nature that training data for its V3 model
relied on crawled web pages that contained a "significant number
of OpenAI-model-generated answers, which may lead the base model
to acquire knowledge from other powerful models indirectly".
It said this was incidental rather than intentional.
OpenAI did not immediately respond to a request for comment.