Instagram will no longer recommend adult-run accounts that regularly post images of children to adults it deems “potentially suspicious”.
Earlier this year, the tech major removed nearly 135,000 Instagram accounts that were leaving sexualised comments on, or requesting sexual images from, adult-managed accounts featuring children under 13, it said in a blog post. A further 500,000 connected accounts across Instagram and Facebook were also taken down. In some cases, users were notified when an abusive account was removed and encouraged to block and report others.
The company says it has also shared intelligence on these users with other platforms through the Tech Coalition’s Lantern programme, acknowledging that predators often operate across multiple sites.
What to expect from the new features?
Teen users will now also see the month and year that the account they’re messaging joined Instagram, helping them spot potential creeps and scammers. A combined block-and-report feature in Instagram DMs will let teens end a bad chat and report it to Instagram in one click.
Location Notices were also introduced earlier this year. These alert users when they’re chatting with someone in another country and are designed to protect young users “from potential sextortion scammers who often misrepresent where they live”, according to Meta. Accounts run by adults, such as parents or talent managers, that primarily feature children will get added protection too. Meta says these profiles will now use the strictest message settings and will automatically filter out harmful comments.
Meta claims that in June alone, teens blocked one million accounts and reported another one million after seeing safety notices. Meanwhile, its nudity protection tool, which is switched on by default and which 99% of users have kept enabled, has helped reduce unwanted exposure to explicit content in DMs, with more than 40% of blurred images remaining unopened.
But not everyone is convinced the update goes far enough.
“Meta’s latest update feels more like a PR play than real progress,” said Ori Gold, CEO of Bench Media. “Muting notifications and limiting DMs is fine, but it's basic. If Meta was serious about protecting teens, accounts would be hidden from search by default and only visible to approved connections. That’s not radical, it’s just common sense.”
Gold also criticised Meta’s continued reliance on self-declared ages, despite the company having the tools to do more. “They’re still relying on self-declared ages, even though they’ve got AI tools that can detect when someone’s lying. Why not make that the standard now?”
While acknowledging the account removals were necessary, Gold questioned why so many made it onto the platform in the first place.
“Removing predator accounts is welcome, but the fact that so many even made it through says a lot. These updates look good in a media announcement, but they don’t get to the core of the issue. Until safety is built in from the ground up, changes like this are just window dressing.”
The latest updates follow a 2023 lawsuit that accused Facebook and Instagram of becoming a “marketplace for predators”, enabling users to search for, share and sell child sexual abuse material (CSAM). A Wall Street Journal investigation the same year found Instagram’s recommendation engine was actively promoting paedophile networks.
Internal reports obtained by The Guardian in 2024 found that Meta staff had flagged the platform’s failure to moderate sexual harassment against minors as early as 2021, calling it “a consistent and critical gap” in enforcement. The company was accused of deprioritising safety in favour of product growth.
Meanwhile, Meta continues to face heightened scrutiny over how its platforms harm the mental health of children. Australia has taken the strictest route in the world, banning under-16s from social media from December 10. Tech companies that don’t comply can be fined up to A$50 million ($32.5 million).