Networks of paedophiles who commission and sell videos of child sex abuse are being supported by the recommendation algorithms of Meta-owned Instagram, according to media reports.
A Wall Street Journal article claims that the platform "helps connect and promote a vast network of accounts openly devoted to the commission and purchase of underage-sex content".
This was uncovered in a joint investigation by the Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst.
"The Meta unit's systems for fostering communities have guided users to child-sex content" while the social networking platform has claimed it is "improving internal controls".
Accounts found by the researchers are advertised using blatant and explicit hashtags like #pedowhore, #preteensex, and #pedobait.
When researchers set up a test account and viewed content shared by these networks, they were immediately recommended more accounts to follow.
"Following just a handful of these recommendations was enough to flood a test account with content that sexualises children," the report claimed.
According to Variety, the story cited the discovery of 405 sellers of what the Stanford Internet Observatory research team termed "self-generated" child-sex material (accounts allegedly managed by minors themselves) using hashtags connected to underage sex.
The report even mentioned that “at the right price, children are available for in-person ‘meet ups’”.
Meta acknowledged to the WSJ that it had failed to act on these reports and said it was "reviewing its internal processes".
The company also noted that over the past two years it had dismantled 27 paedophile networks, and that in January alone it had removed 490,000 accounts that breached its child safety policies.
Alex Stamos, head of Stanford's Internet Observatory and a former chief security officer at Meta, was quoted as saying that the fact that a team of three academics with limited access could find such a huge network should set off alarms at Meta.
"I hope the company reinvests in human investigators," Stamos was quoted as saying.
The Stanford investigators found "128 accounts offering to sell child-sex-abuse material on Twitter, less than a third of the number they found on Instagram". However, Twitter does not seem to recommend these accounts to the same degree as Instagram and also removes such content and accounts “far more quickly”.
David Thiel, the chief technologist at the Stanford Internet Observatory, was quoted as saying that one has to "put guardrails in place for something that growth-intensive to still be nominally safe, and Instagram hasn't".
On Wednesday, European Union industry chief Thierry Breton said that he will meet Meta Platforms Chief Executive Mark Zuckerberg on June 23 and demand that he act immediately to tackle online child pornography.
Meta will also have to demonstrate the measures it plans to take to comply with European Union online content rules known as the Digital Services Act (DSA) after August 25 or face heavy sanctions, Breton said.
Fines for DSA breaches can run as high as six percent of a company's global turnover, according to Reuters.
While the researchers did not flag Snapchat and TikTok in the same way, Snapchat's new My AI chatbot still appears to give inappropriate advice to young users even after they disclose their age.
In a test by the Center for Humane Technology, the chatbot advised a user identifying as a 13-year-old on how to make her first time with her 31-year-old boyfriend special, saying, "You could consider setting the mood with candles or music". The chatbot even helped the user fabricate a lie to cover up that she was going on a "romantic getaway" with her adult boyfriend.
It is crucial for companies like Meta and Snapchat not only to reevaluate their algorithms and content moderation practices but also to invest in human investigators and adopt more effective strategies to ensure the online safety and well-being of all users, especially vulnerable groups such as children and teenagers.
(With inputs from agencies)