Dec 12 (Reuters) - Meta said on Thursday it was
releasing an artificial intelligence model called Meta Motivo,
which could control the movements of a human-like digital agent,
with the potential to enhance the Metaverse experience.
The company has been plowing tens of billions of dollars
into AI, augmented reality and other Metaverse technologies,
driving up its capital expense forecast for 2024 to a record
high of between $37 billion and $40 billion.
Meta has also been releasing many of its AI models for free
use by developers, believing that an open approach could benefit
its business by fostering the creation of better tools for its
services.
"We believe this research could pave the way for fully
embodied agents in the Metaverse, leading to more lifelike NPCs,
democratization of character animation, and new types of
immersive experiences," the company said in a statement.
Meta Motivo addresses body control problems commonly seen in
digital avatars, enabling them to perform movements in a more
realistic, human-like manner, the company said.
Meta said it was also introducing a new approach to
language modeling called the Large Concept Model (LCM),
which aims to "decouple reasoning from language representation".
"The LCM is a significant departure from a typical LLM.
Rather than predicting the next token, the LCM is trained to
predict the next concept or high-level idea, represented by a
full sentence in a multimodal and multilingual embedding space,"
the company said.
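For illustration, the Python/PyTorch sketch below shows what
training on next-concept prediction can look like in outline: a
small model regresses toward the embedding of the next sentence
instead of classifying over a token vocabulary. The class name,
dimensions and architecture are hypothetical stand-ins, not
Meta's published LCM implementation.

    # A toy sketch of next-concept prediction, NOT Meta's actual LCM code.
    # Assumes each sentence has already been encoded into a fixed-size
    # "concept" embedding (the LCM uses a multilingual, multimodal
    # sentence embedding space; random tensors stand in for it here).
    import torch
    import torch.nn as nn

    EMBED_DIM = 256  # size of the sentence-level concept vector (assumed)

    class NextConceptPredictor(nn.Module):
        """Predicts the embedding of the next sentence from prior ones."""
        def __init__(self, dim: int = EMBED_DIM):
            super().__init__()
            # A small Transformer runs over sentence embeddings, not tokens.
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(dim, dim)

        def forward(self, concepts: torch.Tensor) -> torch.Tensor:
            # concepts: (batch, num_sentences, dim)
            hidden = self.encoder(concepts)
            # The last sentence's hidden state predicts the next concept.
            return self.head(hidden[:, -1, :])

    # Training regresses toward the true next-sentence embedding,
    # rather than classifying over a token vocabulary as an LLM does.
    model = NextConceptPredictor()
    context = torch.randn(8, 5, EMBED_DIM)    # 8 documents, 5 sentences
    next_concept = torch.randn(8, EMBED_DIM)  # each document's 6th sentence
    loss = nn.functional.mse_loss(model(context), next_concept)
    loss.backward()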
Other AI tools released by Meta include Video Seal,
which embeds a hidden watermark in videos that is invisible
to the naked eye but traceable.
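As a rough illustration of the hidden-but-traceable idea, the
sketch below embeds a bit string into a frame's least-significant
bits. Video Seal itself uses a learned neural watermark built to
survive editing and compression, so this classic LSB scheme is
only an assumption-laden analogy, not Meta's method.

    # Toy illustration of an imperceptible-but-recoverable watermark using
    # classic least-significant-bit (LSB) embedding. This is NOT how Video
    # Seal works (it uses a learned neural watermark designed to survive
    # edits and compression); it only shows the basic hide-and-trace idea.
    import numpy as np

    def embed_bits(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Hide one bit per pixel in the lowest bit of an 8-bit frame."""
        flat = frame.flatten()  # flatten() copies, so the input is untouched
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(frame.shape)

    def extract_bits(frame: np.ndarray, n: int) -> np.ndarray:
        """Recover the first n hidden bits from the frame's LSBs."""
        return frame.flatten()[:n] & 1

    frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # fake frame
    message = np.random.randint(0, 2, size=128, dtype=np.uint8)  # 128-bit ID
    marked = embed_bits(frame, message)

    # Each pixel changes by at most 1 intensity level: invisible to the eye,
    # but the full message is recoverable, so the video stays traceable.
    assert np.abs(marked.astype(int) - frame.astype(int)).max() <= 1
    assert np.array_equal(extract_bits(marked, 128), message)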