NEW YORK, May 28 (Reuters) - Meta said on
Wednesday it had found "likely AI-generated" content used
deceptively on its Facebook and Instagram platforms, including
comments praising Israel's handling of the war in Gaza published
below posts from global news organizations and U.S. lawmakers.
The social media company, in a quarterly security report,
said the accounts posed as Jewish students, African Americans
and other concerned citizens, targeting audiences in the United
States and Canada. It attributed the campaign to Tel Aviv-based
political marketing firm STOIC.
STOIC did not immediately respond to a request for comment
on the allegations.
WHY IT'S IMPORTANT
While Meta has found basic profile photos generated by
artificial intelligence (AI) in influence operations since
2019, the report is the first in which the company has
disclosed the use of more sophisticated generative AI
technologies since they emerged in late 2022.
Researchers have fretted that generative AI, which can
quickly and cheaply produce human-like text, imagery and audio,
could lead to more effective disinformation campaigns and sway
elections.
In a press call, Meta security executives said they did not
think novel AI technologies had impeded their ability to disrupt
influence networks, which are coordinated attempts to push
messages.
Executives said they had not seen AI-generated imagery of
politicians realistic enough to be confused for authentic
photos.
KEY QUOTE
"There are several examples across these networks of how
they use likely generative AI tooling to create content. Perhaps
it gives them the ability to do that quicker or to do that with
more volume. But it hasn't really impacted our ability to detect
them," said Meta head of threat investigations Mike Dvilyanski.
BY THE NUMBERS
The report highlighted six covert influence operations that
Meta disrupted in the first quarter.
In addition to the STOIC network, Meta shut down an
Iran-based network focused on the Israel-Hamas conflict,
although it did not identify any use of generative AI in that
campaign.
CONTEXT
Meta and other tech giants have grappled with how to address
potential misuse of new AI technologies, especially in
elections.
Researchers have found examples of image generators from
companies including OpenAI and Microsoft (MSFT) producing
photos with voting-related disinformation, despite those
companies having policies against such content.
The companies have emphasized digital labeling systems to
mark AI-generated content at the time of its creation, although
the tools do not work on text and researchers have doubts about
their effectiveness.
WHAT'S NEXT
Meta faces key tests of its defenses with elections in the
European Union in early June and in the United States in
November.