* Ministry publishes proposed AI rules
* Rules mark move to extend Russian control over AI sector
* Moscow wants Russian user data stored inside Russia -
RIA
* Rules say AI should respect 'traditional Russian
values'
By Andrew Osborn
March 20 (Reuters) - Foreign AI tools like Claude,
ChatGPT and Gemini could be banned or restricted inside Russia
if they fail to adhere to new rules that would give Moscow
sweeping powers to regulate the sector, according to government
proposals published online.
The proposals, published by Russia's Ministry for Digital
Development, would extend to the burgeoning AI sector Russia's
drive to establish a sovereign internet - protected from foreign
influence and respecting what it calls "traditional Russian
spiritual and moral values."
The ministry said in a statement that the new rules were
designed to "help protect citizens from covert manipulation and
discriminatory algorithms."
RESTRICTING CROSS-BORDER AI TECHNOLOGY
The initiative, which is likely to benefit home-grown AI
tools being developed by state lender Sberbank and technology
group Yandex, has been made public at a time when the Russian
state is tightening its control over the internet.
The regulations are expected to enter into force next year
after further review and government approval.
"The operation of cross-border artificial intelligence
technologies may be prohibited or restricted in cases specified
by the legislation of the Russian Federation," the rules state.
The state-run RIA news agency reported on Friday that
foreign AI tools would fall under the new rules because they
inevitably transferred the data of Russian citizens abroad.
"Cross-border artificial intelligence technologies refers to
all foreign AI models, including ChatGPT, Claude and Gemini,
where the use of such models results in user data, queries and
dialogues being transmitted to the developers of these models
outside Russia," RIA cited a specialised technology lawyer,
Kirill Dyakov, as saying.
All three models mentioned by Dyakov were developed by U.S.
companies: OpenAI, Anthropic and Alphabet's Google,
respectively.
Other foreign, open-source AI tools, such as China's Qwen or
DeepSeek, could however be safely adapted and rolled out in a
closed environment on the proprietary infrastructure of Russian
government organisations and companies, Dyakov said, since any
data processed would remain within that infrastructure.
RIA said AI models used by more than 500,000 people per day
would need to store Russian user information on Russian
territory for three years to be compliant under the new
regulatory regime. Western tech companies have in the past
refused to comply with such demands.