Fiction or Felony? The Blurring of Art and Abuse

When Danish AI artist “Barry Coty” was arrested in 2023, it took him by surprise. He had believed that the fantasy AI porn images he was creating were a harmless outlet that might reduce demand for real child abuse material. Instead, he found himself at the center of a global Interpol operation, facing criminal charges that would help reshape laws across multiple countries.

For prosecutors, the case was clear-cut: Coty had created and distributed sexual imagery depicting minors, regardless of whether those minors were real or AI-generated. His case became a catalyst for an upcoming 2025 law in Denmark banning AI-generated sexual content involving minors. It is part of a global wave of legislation that has swept through 39 U.S. states and the European Parliament, which approved a directive to criminalize AI systems used to generate such content. Globally, pressure is rising for countries to criminalize more virtual sex offenses.

This movement reflects a seemingly “common sense” principle: content depicting anyone under 18 sexually should be illegal, whether the subjects are real or virtual. But scratch beneath this consensus, and a more complex picture emerges—one that encompasses far more than AI-generated images. As researcher Aurélie Petit recently discussed with me, while AI deepfakes may grab the headlines, the end result of a zero-tolerance approach is that AI images are treated alongside fan fiction, art, memoirs, and more, principally from queer creators and women, all within the thought-terminating category of child sexual abuse material (CSAM).

This article explores the views of a growing number of experts, including lawyers and psychologists, who challenge this approach, arguing that it drives over-criminalization, stifles artistic expression, disproportionately harms marginalized communities like LGBTQ+ individuals, and even obstructs effective sex abuse prevention efforts. Through these insights, we examine whether the rush to criminalize AI-generated content (and more) oversimplifies a complex issue—and what’s at stake when nuance is ignored. 

From protecting victims to policing fiction

In response to a proposal in the 2026 New York budget to redefine felony sex offenses to include AI-generated content, the New York City Bar Association, a 23,000-member organization of prosecutors, defense attorneys, and judges, voiced strong opposition, alongside other groups such as New York Legal Aid. The City Bar argued:

The purpose of criminalization is not to punish the distasteful proclivities of adults who view it, but rather to disincentivize recorded abuse and thereby prevent future victimization. Part L’s proposed amendments move away from this rationale of protecting children, as no actual children are involved—simply computer-generated representations—and embrace punishing people for their sexual fantasies. This would be a sea change in the justification for laws against child pornography.

While this shift toward criminalizing fantasies may seem novel in the U.S., other countries have embraced it for decades. In 1993, Canadian lawmakers confidently declared that “it is wrong to have these fantasies and wrong to write them down. Period.” Within a year of passing a law to criminalize such content, police were raiding art galleries and arresting artists and authors for works deemed child pornography. In Australia, where a parody Simpsons cartoon was prosecuted as “child abuse material”, author Lauren Tesolin-Mastrosa faces similar charges in 2025 over a fictional erotic novel depicting adults. Authorities there not only apply the same laws to real and fictional crimes, but don’t even track the difference in official statistics.

Now, the U.S. is following Canada’s lead, with police also literally raiding art galleries over works mislabeled as child pornography. Legal and human rights experts are pushing back, arguing that the distinction is critical. In the Australian Simpsons case, Justice Adams emphasized:

At the outset it is necessary to appreciate, as I think, that there is a fundamental difference in kind between a depiction of an actual human being and the depiction of an imaginary person. … There was a tendency in the arguments before me to suggest that the distinction is merely one of degree. This is quite wrong. Such an approach would trivialize pornography that utilized real children and make far too culpable the possession of representations that did not.

Challenging the link between fictional content and harm

This blurring of lines between fictional and real abuse is fueling a broader wave of U.S. laws that use similar logic to target sexual content more generally. The same unproven assumptions that drive AI-specific legislation—that consuming certain content leads to harmful behavior—now appear in laws targeting pornography broadly. California’s AB 1831 asserts, without evidence, that “pornography may increase sexually aggressive thoughts and behaviors” and that AI-generated content “normalizes and validates the sexual exploitation of children.” Age-verification laws like Texas’s HB 1181 apply this reasoning to restrict access to content deemed “harmful to minors,” sweeping in manga and anime alongside actual pornography. A proposed federal Interstate Obscenity Definition Act would further expand criminalization of sexual content based on these same theoretical harms. The common thread: laws justified by the claim that consuming sexual content promotes abuse—but does this “slippery slope” argument hold up?

The New York City Bar Association’s submission opposing the redefinition of felony sex offenses to include AI-generated content cites numerous studies that challenge that assumption. A 2010 study found sex crimes, including child sex offenses, declined during periods of unregulated pornography access. A 2023 meta-study on fictional sexual materials (FSM), including depictions of minors, found no link to sexual aggression and suggested FSM use might reduce harmful impulses in high-risk individuals through a “cathartic effect,” proposing it as a harm reduction outlet.

A 2008 study examined claims made during the passage of an earlier U.S. law, the 2003 PROTECT Act, which equated penalties for virtual image-based crimes with those involving real children. It found no evidence that FSM increases acceptance of child sexual abuse, directly rebutting the Act’s rationale. In 2012, a Danish sexological clinic also advised against banning FSM, citing no clear harm. Yet as the political winds changed, Denmark reversed course with its 2025 law criminalizing AI-generated content, spurred by an Interpol operation targeting synthetic CSAM networks. And this brings us back to Barry Coty.

An agenda to raise penalties for fictional sex crimes

Danish AI artist Barry Coty (a pseudonym) was arrested in 2023 for creating and distributing AI-generated depictions of minors in sexual contexts via a paid subscription platform, as part of an Interpol sting operation called Operation Cumberland, which also netted a string of other arrests around the world. Coty’s case, one of the first targeting an AI porn creator, highlights the global push to equate fictional content with real abuse. He contacted me, unsolicited, to tell his story.

In January 2025, Coty pled guilty to the charges and originally received a sentence of one year and three months, partly suspended, and 200 hours of community service. But the prosecution appealed the verdict, hoping to establish a stricter precedent around synthetic material. Today (June 12, 2025), they succeeded, increasing Coty’s sentence to eighteen months of actual jail time. Coty plans to appeal that decision to Denmark’s Supreme Court. He explains:

My intentions for making and distributing these images have always been to reduce the amount of real CSAM material being sought out and shared on the internet, with a less demoralizing replacement to the individual consuming them, and of course to reduce the suffering being continuously done to children that are victims of real CSAM by their abusive and non-consensual images being spread around the internet.

Policymakers face genuine challenges here. The rapid emergence of AI technology has outpaced existing legal frameworks, creating uncertainty about how to protect children while preserving legitimate rights. The visceral public reaction to any content involving minors—even fictional—creates enormous political pressure to act decisively.

There also seems to be no question, going purely from descriptions of them, that most people would find the images Coty produced confronting and offensive. It is easy to see why authorities target these images as a starting point for broadening the criminalization of fictional content: it is unsettling to consider their existence online, even in obscure sex forums, and to acknowledge that they fulfill a sexual interest for some. For many, criminalizing these images serves as a convenient stand-in for criminalizing that interest itself.

The wrong tool for a misunderstood problem

But the law is a blunt tool, and the wrong one, for managing the existence of paraphilic sexual interests within the community, especially among those, like Coty, who have gone out of their way to find harmless artistic outlets for them. Criminalization advocates have suggested that the distribution of such images in niche online sex forums carries a “terrifying potential to flood the internet with a tsunami of abuse imagery”, or even that this could trigger the “conversion” of those who unwittingly view them into pedophiles.

But both of these are far-fetched suggestions. The reality is that almost all online platforms strictly prohibit AI-generated sexual content featuring characters resembling minors, and there are already effective tools to weed it out and to distinguish it from real abuse imagery. If such imagery were distributed on a mainstream platform, it would doubtless be reported to authorities, who have clearly affirmed that they already possess the legal authority to prosecute.

As to the possibility that unwitting viewers of these images could be transformed into pedophiles, this smacks even more strongly of fearmongering. It is widely accepted by experts that accessing a particular type of sexual content is a sign that a person already has an interest in it, not a catalyst for developing a new sexual interest, least of all in something they previously found disgusting. As Coty stated to me, “most people feel an innate repulsion towards such imagery”.

There are, and always will be, Internet users who reject child sexual abuse, while at the same time being drawn towards representations of underage sexuality. Their reasons for doing so, and the representations that they seek out, both exist on a spectrum. Within that spectrum are many legitimate artistic works, disproportionately created and consumed by sexual abuse survivors, LGBTQ+ people, young people, and women. Some may even escape misclassification as CSAM and enjoy critical acclaim. In a 2024 submission to the Australian government urging reforms to its classification system, I wrote:

Mainstream TV series such as Euphoria (depicting characters represented as children having sex) and Game of Thrones (representing characters engaged in incest) are routinely passed with MA 15+ or R 18+ ratings… But while mainstream Hollywood TV and movies can be classified quite leniently, it is no exaggeration to say that if a person enters the country with Japanese cartoons that depict exactly the same subjects as Euphoria or Game of Thrones, they stand a very real risk of being arrested.

It will never be possible, nor would it be desirable, to erase all such representations from the Internet and to criminalize those who seek them out. There will always be those who create or consume representations of minors that make us uncomfortable, or who do so for reasons we find uncomfortable. In cases in which the consumption of such content crosses the line into fueling problematic behaviors, the approach that professionals recommend is one of harm reduction and prevention, not criminalization. So the real solution may simply be for us to make peace with this, and allow those professionals to do their jobs.

Fictional material is not CSAM

The conflation of fictional and real sexual abuse material represents a calculated political strategy decades in the making. Advocacy organizations and their allies in government have systematically engineered linguistic shifts to expand the scope of what constitutes sexual crimes, often in ways that serve interests beyond actual child protection.

For example, the 2016 Luxembourg Terminology Guidelines began to establish a new international norm that the term “child sexual abuse material” should be used in preference to “child pornography” when referring to “material that depicts and/or that documents acts that are sexually abusive and/or exploitative to a child.” While this makes sense, the devil in the details, made more explicit in a 2025 Revision, was the inclusion of fictional content as well: a move completely at odds with the stated rationale, and one that undermines the term’s gravity. Even the original term “child pornography” more accurately describes sexually arousing images that don’t record abuse, as FBI Special Agent Kenneth Lanning observes:

The efforts to encourage use of this new term is a good example of well-intentioned people trying to solve a problem by emotionally exaggerating the problem… It is interesting to note some of those advocating for use of the term child-abuse images also advocate for criminalizing as child pornography visual images that do not even portray actual children. You cannot have it both ways.

Common justifications given for the criminalization of AI-generated images depicting minors are that they may be made in the image of real individuals, be generated using models that were trained on real CSAM, or be used in grooming children. Such cases involve specific abuses that should be prosecuted directly, not used to criminalize all fictional works. Barry Coty insists that none of those justifications apply in his case. No real CSAM was ever found in his possession, and he has no history of sex offending. While small volumes of illicit content have been inadvertently included in the training data of mainstream generative AI models, Coty insists he never used any unlawful content in editing his creations, and he described to me in some detail the technical process that he followed to achieve this. 

In such cases, whether AI tools are involved or not, equating fictional content to photos and videos recorded at actual crime scenes trivializes the real suffering of the victims of those crimes. When policymakers obscure this distinction with linguistic sleight-of-hand, it should be called out as dishonest, and their real political motivations exposed: inflating “child abuse material” statistics, providing moral cover for broader censorship campaigns, and creating new categories of criminals to justify expanded enforcement budgets and powers. Coty writes:

Why are the lawmakers so keen to include fictional abuse into real abuse statistics? I think there is some incentive to do so in order to argue why the state should have more oversight into private messages through online surveillance. In this way, fictive child pornography becomes a catalyst/scapegoat to finally get better tools to go after drug dealers and terrorists.

Calling this out is fraught, as honest discourse on this topic is often ruthlessly punished. Yet, one cannot claim moral authority while blurring actual abuse with offensive fiction for broader political ends.

The fight back begins with Fan Refuge

Censorship of fictional sexual materials is an issue that sits at the very nexus of the four priority areas of the Center for Online Safety and Liberty (COSL): promoting safer hosting, supporting fans, combating cyberbullying and abuse, and engaging in legal advocacy. At the very core of our mission—and even our name—is the firm belief that it is neither acceptable nor necessary to sacrifice online liberty for the sake of safety.

That includes upholding the liberty for creators and fans to express themselves without fear of prosecution over fictional content, while at the same time ensuring that nobody is exposed to potentially offensive content, even if it is fictional, without their consent. 

Here’s how we’re putting that into practice, starting right at home. First, this month we are launching a crowdfunding campaign for a new creator platform called Fan Refuge, which will serve as a testbed for some open source trust and safety tools that we’ve been developing. Fan Refuge won’t be an adult content platform, and it won’t allow AI-generated content at all. But it will prioritize empowering its users to curate their own experiences, rather than imposing site-wide censorship on arbitrary moral grounds.

Justice for Real Survivors

Second, we’re launching a major new advocacy project titled Justice for Real Survivors, directed at the problem that politicians and policymakers are intentionally blurring the lines between fictional and non-fictional sex crimes. The project’s aim is to begin to reshape laws, policies, social norms, and language to prioritize real sex crimes with real victims, and to clearly distinguish them from crimes under obscenity or censorship laws.

To kick off the Justice for Real Survivors project, we are convening a diverse advisory board who will develop a statement of principles setting out the harms of conflating sex crimes with fictional, artistic, and educational texts. These principles will be opened for broader sign-on, and will create a framework for other activities under the project, including research, coalition-building, policy advisory, public campaigns, and strategic litigation support. (The opinions expressed in this article are my own, not those of the advisory board.) 

Only minuscule funding is made available for sexual abuse prevention resources. So we’re grateful to have already secured the interest of a philanthropic donor in supporting the project’s first research output. This will be a major legal review providing a comparative analysis of the treatment of fictional sexual materials across ten countries, and assessing the compatibility of these legal regimes with human rights standards. We hope to begin this survey in the third quarter of 2025.

Conclusion

The rush to criminalize fictional content—from AI-generated images to erotic novels—promises safety but delivers censorship, punishing creators while diverting resources from real victims. Policymakers, swayed by unproven claims that niche fantasies will “flood” the mainstream or “convert” viewers into predators, employ linguistic shifts that equate offensive art with heinous crimes. Yet, as science shows, porn consumption reflects pre-existing interests, not new ones, and banning fictional outlets will only push consumers into darker corners and obstruct harm reduction.

The conflation of art and abuse isn’t just misguided—it’s harmful. With fewer than 4% of real sexual assaults leading to convictions and only 3.5% of CSAM reports investigated, survivors are sidelined as authorities pursue victimless prosecutions. Marginalized creators—often survivors themselves, along with LGBTQ+ artists, young people, and women—face censorship or prosecution for works that challenge norms, while honest discourse is stifled by stigmatizing attacks. The solution lies not in erasing uncomfortable content but in embracing nuance: prioritizing real victims, empowering creators, and trusting professionals to prevent abuse without sacrificing liberty.

Barry Coty’s case represents one end of the spectrum—his AI-generated content would be deeply offensive to most people, and few are likely to rush to his defense. But the legal principles established through his prosecution won’t stop with creators like him. The same frameworks now being used to criminalize his work will also expand to target fan fiction writers, manga artists, abuse survivors processing trauma through art, and LGBTQ+ creators exploring identity and sexuality. When we normalize prosecuting people for offensive but victimless content, we create precedents that reach far beyond the most unsympathetic cases.

Through Fan Refuge and Justice for Real Survivors, we’re forging a path forward—building platforms that respect user choice, reshaping laws to focus on actual harm, and amplifying survivors’ voices. But change demands courage. Will we confront the uncomfortable truth that fictional content isn’t abuse, or cling to moral panic at the cost of justice? The choice is ours, and the stakes are high.
