When Jonas (not his real name) posted a photo of himself in sports clothes to his own Instagram account, the last thing he expected was for the account to be suspended for suspected child exploitation. Jonas is in his 20s, and nothing about the photo was sexualized; evidently an AI image classifier used in automated content moderation had falsely flagged his upload as possible child sexual abuse material (CSAM). Jonas initially reacted with disbelief, but disbelief soon gave way to mounting fear about what being under investigation for child exploitation might mean for him.
Adriana (not her real name) was also shocked when she was banned from a popular adults-only platform after its AI moderation system incorrectly flagged her use of terms common within BDSM communities. Although the platform claims to use human review, no one assessed her case before the ban was enforced, a significant operational failure in how its moderation system is applied. She writes:
As a survivor of CSA, being banned for safe, sane, and consensual kink practices was deeply triggering. It feels hypocritical to punish adults for sexual expression while simultaneously failing to build stronger safeguards against child exploitation. Human review should be mandatory whenever content is flagged – both to protect children and prevent false positives.
Wrongful arrests and lost memories
Jonas and Adriana are both right to be concerned. When innocent content is reported as child exploitation, innocent lives can be ruined. In 2024, TikTok reported a grandmother to Australian police over a non-sexual massage video of her infant granddaughter that she had sent to the child's mother; media coverage at the time falsely labeled her a child abuser. In 2022, Google reported a father to police because he used Gmail to send medical photos of his toddler to the child's doctor. Both were eventually cleared of wrongdoing, but neither has received any redress for being falsely accused.
There are many similar stories. Thousands of users of Instagram, Facebook, and YouTube are facing platform bans and the prospect of police investigation. Some victims of these bans describe losing thousands of sentimental photos; others have had online businesses ruined. For many, the bone-chilling fear of being reported to authorities weighs more heavily than the loss of an account. In 2021, a young gay man who had engaged in consensual sexual role-play with another adult was arrested by police on child exploitation charges; unable to face the shame that the false charges brought upon him, he took his own life.
Internet platforms bear a heavy responsibility to keep their services free of real child sexual abuse, and some are not discharging this responsibility well, or are even contributing to the problem. But shortcomings in online child-safety reporting systems cut both ways. While it is unacceptable for real CSAM to remain online without being taken down, falsely accusing innocent people of serious crimes is not a trade-off that we should have to accept.
AI is worsening the problem
This problem isn't going away; it's getting worse. One reason is growing reliance on inaccurate AI classifiers, which have flagged even photos of family pets as child abuse, apparently without any human review. Platforms must report genuine CSAM, and the initial use of AI systems to help flag it in public uploads is justified. But an AI flag should never result in an immediate account ban without manual review by a human moderator.
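To make that point concrete, here is a minimal sketch of the kind of human-in-the-loop gate we are arguing for. It is illustrative only and does not describe any platform's actual pipeline: the function names, the review threshold, and the action labels are assumptions invented for this example. The property that matters is that the classifier's output can only queue an upload for human review; enforcement actions such as bans or reports to authorities happen only after a human moderator has confirmed the flag.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    NO_ACTION = auto()
    QUEUE_FOR_HUMAN_REVIEW = auto()
    BAN_AND_REPORT = auto()  # only ever reached after human confirmation


@dataclass
class ClassifierFlag:
    upload_id: str
    score: float  # classifier confidence that the upload is CSAM
    label: str    # e.g. "suspected_csam"


def route_flag(flag: ClassifierFlag, review_threshold: float = 0.5) -> Action:
    """Route an AI flag: the classifier alone never triggers enforcement."""
    if flag.score < review_threshold:
        return Action.NO_ACTION
    # Anything above the threshold goes to a trained human moderator,
    # not straight to a ban or a report to authorities.
    return Action.QUEUE_FOR_HUMAN_REVIEW


def enforce(flag: ClassifierFlag, human_confirmed: bool) -> Action:
    """Enforcement runs only after a human has actually reviewed the flag."""
    return Action.BAN_AND_REPORT if human_confirmed else Action.NO_ACTION
```

In this sketch, a false positive like Jonas's photo would sit in a review queue rather than triggering an instant suspension, and a reviewer who sees an adult in sports clothes simply dismisses the flag.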
Another factor contributing to the rise in innocent people being flagged for child abuse is that platforms are becoming increasingly risk-averse, under pressure from lawmakers and an electorate of concerned parents who aim to hold them responsible for online child sexual abuse. Existing and proposed laws are shifting ever more liability onto platforms for missing abusive content, with the predictable result that they are taking fewer chances, no matter the cost to users who are wrongly accused.
One option for addressing this problem is for victims of over-reporting to use the same tactic: hitting platforms in the hip pocket when they get it wrong. That is what William Lawshe did after Verizon wrongly reported him to authorities over what were plainly 18+ erotic images that even bore adult-site watermarks. Lawshe sued Verizon and its CSAM-scanning service provider, seeking compensation for the disgrace and health problems he suffered as a result of the careless and wrongful report. A final decision is yet to be handed down, but the court has already allowed key claims to proceed.
How platforms can do better
Nobody should have to resort to a lawsuit simply to keep their name and their online presence clear of false child exploitation allegations: prevention is, as usual, better than cure. A large part of that responsibility falls on platforms simply to do a better job: to draw their child exploitation policies narrowly and precisely, and to involve humans before taking actions such as account bans or referrals to law enforcement.
For platforms that consistently fail to live up to their responsibilities towards innocent users, sunlight might help. One of the first projects of the Center for Online Safety and Liberty (COSL) was the launch of the Harmful to Minors transparency archive, where false child exploitation takedowns are published for public critique.
Later in 2026, COSL will also be publishing a second edition of our Drawing the Line Watchlist. The first edition of the Watchlist evaluated ten countries around the world for how accurately their laws draw the line between personal expression and lived abuse. The second will extend this analysis to the policies and enforcement practices of Internet platforms, including how well they safeguard users against false positives, provide appeal mechanisms, and limit automated escalation.
Over the longer term, COSL is also working to foster and establish alternative platforms and tools that hold safety and liberty in better balance, allowing a diverse range of creative content and personal expression to flourish without sacrificing safety. These projects include our privacy-first offshore hosting service Liberato, our upcoming fan community Fan Refuge, and our open source content warning system, Dead Dove.
Conclusion
The fight against child sexual abuse online is too important to be undermined by blunt, unaccountable systems that punish the innocent. When platforms treat false positives as an acceptable cost of doing business, they shift the burden of their own errors onto ordinary users, who are left to face fear, stigma, and lasting harm alone.
Child safety and civil liberties are not opposing values, and we should reject any approach that claims otherwise. The solution is not less vigilance, but better vigilance: narrower rules, human judgment, transparency, and accountability. Until platforms adopt those principles, innocent people will continue to pay the price for mistakes they did not make.
