financetom
China's DeepSeek says its hit AI model cost just $294,000 to train
Sep 20, 2025 10:28 PM

* DeepSeek's R1 model attracted global attention in January

* Article in Nature reveals R1's compute training costs for the first time

* DeepSeek also addresses claims it distilled OpenAI's models in peer-reviewed article

(This Sept 18 story was updated on Sept 19 to add context on distillation in paragraphs 14-20)

By Eduardo Baptista

BEIJING, Sept 18 (Reuters) - Chinese AI developer DeepSeek said it spent $294,000 on training its R1 model, much lower than figures reported for U.S. rivals, in a paper that is likely to reignite debate over Beijing's place in the race to develop artificial intelligence.

The rare update from the Hangzhou-based company - the first estimate it has released of R1's training costs - appeared in a peer-reviewed article in the academic journal Nature published on Wednesday.

DeepSeek's release of what it said were lower-cost AI systems in January prompted global investors to dump tech stocks as they worried the new models could threaten the dominance of AI leaders including Nvidia ( NVDA ).

Since then, the company and founder Liang Wenfeng have largely disappeared from public view, apart from pushing out a few new product updates.

The Nature article, which listed Liang as one of the co-authors, said DeepSeek's reasoning-focused R1 model cost $294,000 to train and used 512 Nvidia H800 chips. A previous version of the article published in January did not contain this information.

Training costs for the large-language models powering AI chatbots refer to the expenses incurred from running a cluster of powerful chips for weeks or months to process vast amounts of text and code.

Sam Altman, CEO of U.S. AI giant OpenAI, said in 2023 that the training of foundational models had cost "much more" than $100 million - though his company has not given detailed figures for any of its releases.

Some of DeepSeek's statements about its development costs and the technology it used have been questioned by U.S. companies and officials.

The H800 chips it mentioned were designed by Nvidia ( NVDA ) for the Chinese market after the U.S. in October 2022 made it illegal for the company to export its more powerful H100 and A100 AI chips to China.

U.S. officials told Reuters in June that DeepSeek has access to "large volumes" of H100 chips that were procured after U.S. export controls were implemented. Nvidia ( NVDA ) told Reuters at the time that DeepSeek has used lawfully acquired H800 chips, not H100s.

In a supplementary information document accompanying the Nature article, the company acknowledged for the first time it does own A100 chips and said it had used them in preparatory stages of development.

"Regarding our research on DeepSeek-R1, we utilized the A100 GPUs to prepare for the experiments with a smaller model," the researchers wrote. After this initial phase, R1 was trained for a total of 80 hours on the 512-chip cluster of H800 chips, they added.
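Taken together, the reported figures imply a per-GPU-hour rate. A back-of-envelope check, using only the numbers stated in this article (the implied rate is our arithmetic, not a figure DeepSeek published):

```python
# Figures reported in the Nature article, per this story
chips = 512            # Nvidia H800 GPUs in the training cluster
hours = 80             # total R1 training time reported
total_cost = 294_000   # reported training cost in U.S. dollars

gpu_hours = chips * hours              # 512 * 80 = 40,960 GPU-hours
implied_rate = total_cost / gpu_hours  # implied cost per GPU-hour
print(f"{gpu_hours} GPU-hours, ~${implied_rate:.2f} per GPU-hour")
```

At roughly $7 per H800-hour, the figure covers only the final training run described in the paper, not the preparatory A100 experiments or any earlier development.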

Reuters has previously reported that one reason DeepSeek was able to attract the brightest minds in China was that it was one of the few domestic companies to operate an A100 supercomputing cluster.

MODEL DISTILLATION

DeepSeek also responded for the first time, though not directly, to assertions from a top White House adviser and other U.S. AI figures in January that it had deliberately "distilled" OpenAI's models into its own.

DeepSeek has consistently defended distillation as yielding better model performance at far lower training and running costs, enabling broader access to AI-powered technologies given the energy-intensive resource demands of full-scale models.

The term refers to a technique whereby one AI system learns from another AI system, allowing the newer model to reap the benefits of the investments of time and computing power that went into building the earlier model, but without the associated costs.
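At the core of distillation is a loss that pushes the student model's output distribution toward the teacher's. A minimal sketch of that soft-label loss follows - the logits are hypothetical and this illustrates the general technique only, not DeepSeek's or OpenAI's actual training setup:

```python
import math

def softmax(logits, temperature=1.0):
    # Softened probabilities: a higher temperature flattens the distribution,
    # exposing more of the teacher's relative preferences between tokens.
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution to the student's:
    # the quantity a student minimizes to imitate the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]   # hypothetical teacher logits over 3 tokens
student = [2.5, 1.2, 0.3]   # hypothetical student logits
loss = distillation_loss(teacher, student)   # > 0; shrinks as student matches teacher
```

The loss is zero only when the two distributions agree, so gradient descent on it transfers the teacher's learned preferences without re-running the teacher's original training.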

DeepSeek said in January that it had used Meta's open-source Llama AI model for some distilled versions of its own models.

DeepSeek said in Nature that training data for its V3 model relied on crawled web pages that contained a "significant number of OpenAI-model-generated answers, which may lead the base model to acquire knowledge from other powerful models indirectly".

But it said this was incidental rather than intentional.

OpenAI did not immediately respond to a request for comment.
