* Kenyan court allows lawsuit against Meta for Ethiopia violence posts
* Plaintiffs demand restitution fund and algorithm changes from Meta
* Meta faces multiple lawsuits in Kenya over content moderation issues
NAIROBI, April 4 (Reuters) - A Kenyan court has ruled
that Facebook's parent company Meta can be sued in the
East African country over its alleged role in promoting content
that led to ethnic violence in neighbouring Ethiopia, a
plaintiff in the case said.
The landmark case, which stems from alleged hate speech on
the platform during the 2020-2022 civil war in northern
Ethiopia's Tigray region, could have implications for how Meta
works with content moderators globally.
The company has argued that local courts lack the power to hear
cases against it in countries where it is not registered as a
company.
Kenya's High Court rejected that argument in its ruling on
Thursday, said the Katiba Institute, which is a plaintiff in the
case alongside two Ethiopian researchers.
"The court here has refused to shy away from determining an
important global matter, recognising that homegrown issues must
be addressed directly in our courts," said Nora Mbagathi, the
institute's executive director.
A Meta spokesperson did not immediately respond to a request
for comment.
The plaintiffs allege that Facebook's recommendation systems
amplified violent posts in Ethiopia during the Tigray war.
Plaintiff Abrham Meareg alleges that his father, Meareg
Amare, was killed in 2021 following threatening posts on
Facebook. Fisseha Tekle, an Amnesty International researcher,
says he faced online hate for human rights work in Ethiopia.
They are demanding that Meta create a restitution fund for
victims of hate and violence and alter Facebook's algorithm to
stop promoting hate speech.
Meta has previously said it invested heavily in content
moderation and removed hateful content from the platform.
The case is the third to be brought against Meta in Kenya,
where the company also faces lawsuits from content moderators
employed by a local contractor who say they faced poor working
conditions and were fired for trying to organise a union.
Meta has responded that it requires partners to provide
industry-leading conditions.
The company, which over the years invested billions of dollars
and hired thousands of content moderators globally to police
sensitive content, scrapped its U.S. fact-checking program in
January.
It also said at the time that it would stop proactively
scanning for hate speech and other types of rule-breaking,
reviewing such posts only in response to user reports.