March 20 (Reuters) - Meta Platforms (META) will ask
advertisers to disclose the use of AI or other digital
techniques to create or alter a political or social issue ad,
the Facebook owner said on Thursday, aiming to curb
misinformation ahead of Canada's federal election.
The disclosure mandate will apply if an ad contains a
photorealistic image, video or realistic-sounding audio that has
been digitally created or altered to depict a real person as
saying or doing something they did not actually say or do.
It also extends to ads that show a person who does not exist
or a realistic-looking event that did not happen, alter footage
of a real event, or depict an event that allegedly occurred but
is not a true image, video or audio recording of the event.
In November last year, Meta said it would extend its ban on
new political ads after the U.S. election, in response to
rampant misinformation during the previous presidential
election.
In 2023, Meta also barred political campaigns and advertisers
in other regulated industries from using its new generative AI
advertising products.
However, Meta scrapped its U.S. fact-checking programs
earlier this year, along with curbs on discussions of
contentious topics such as immigration and gender identity,
bowing to pressure from conservatives in the biggest overhaul
of its approach to managing political content.
The Instagram owner also said in December last year that
generative AI had limited impact across its apps in 2024, with
deceptive campaigns failing to build significant audiences on
Facebook and Instagram or to use AI effectively.
Meta has also added a feature that lets people disclose when
they share AI-generated images, video or audio, so the company
can label the content.
(Reporting by Rishi Kant in Bengaluru)