|
|
Newsletter #10—January 2026
|
|
|
|
|
Politics, fear, and the free Internet
Authoritarian politics does not begin with laws. It begins with fear. Fear is the raw material of extremism: fear of disorder, fear of outsiders, fear of moral decay, fear of losing control. Sex has long been one of the most effective levers for activating that fear — but it is only a lever. The underlying mechanism is always the same: manufacture anxiety, then offer repression as the cure.
|
The Trump political project has been unusually explicit about this. From immigration to policing to Internet governance, fear is not an accidental by-product; it is the strategy. Sexual panic, moral panic, national-security panic — these narratives are deliberately braided together to justify surveillance, censorship, and coercive state power.
|
Nowhere is this clearer than in the administration’s approach to the Internet. We are seeing serious efforts to require foreigners to hand over years of social-media history as a condition of travel or visa approval — a demand that treats online identity as a form of pre-crime risk profiling. At the same time, ICE and other federal agencies are actively expanding programs to monitor social-media activity across platforms like Instagram, TikTok, X, and Facebook, not merely for specific criminal threats, but for patterns of association, expression, and dissent.
|
This is not about isolated bad actors online. It is about normalising digital surveillance as a condition of belonging.
|
The justification is always framed in the language of fear: protecting children, preventing chaos, stopping “dangerous ideologies,” identifying “threats before they happen.” Sex frequently appears in this narrative — as obscenity, corruption, or danger — because it reliably provokes emotional reaction. But sex is not the point. Fear is the point. And once fear is established, the scope of control inevitably expands.
|
|
That expansion is not confined to screens. It is unfolding, violently, in physical space as well.
|
Today, across Minnesota, people are protesting a sweeping federal immigration enforcement operation that has flooded cities with ICE and Border Patrol agents. These protests follow multiple fatal shootings by federal officers, including the killing of an ICU nurse during an enforcement action in Minneapolis. The official response has been familiar: calls for “de-escalation,” assertions of lawful authority, and warnings about disorder — even as communities mourn the dead and demand accountability.
|
Here again, fear does the work. Fear of immigrants. Fear of protest. Fear of unrest. Each fear is invoked to justify militarised policing, surveillance, and the suspension of ordinary restraints on state power.
|
What connects immigration raids, online monitoring, and moral panic is not coincidence. It is a coherent political logic: the belief that safety requires visibility, and visibility requires surveillance — of bodies, of movement, of speech.
|
This is why Internet regulation cannot be treated as a neutral or technical policy debate. Content moderation rules, identity verification schemes, platform surveillance mandates, and cross-border data demands do not exist in isolation. They are shaped by — and in turn reinforce — the same political forces that expand policing powers, criminalise dissent, and erode due process.
|
|
When governments argue that the Internet must be controlled because it is “dangerous,” the critical question is not whether danger exists, but who defines it — and who pays the price.
|
History is clear on this point. The targets of fear-based governance are rarely those with power. They are immigrants, sexual minorities, political dissidents, journalists, activists, and ordinary people whose speech or presence makes authorities uncomfortable. Surveillance tools introduced “for safety” almost never stay narrowly confined. They metastasise.
|
The protests in Minnesota are not just about immigration enforcement tactics. They are about who is watched, who is believed, and whose lives are treated as expendable. The same questions apply online. Who gets flagged? Who gets silenced? Who is forced to justify their existence to an algorithm or an officer?
|
A free Internet is not merely about access or innovation. It is about resisting the politics of fear — the idea that liberty must always yield to control, that transparency must always flow upward, and that safety requires submission.
|
|
If we allow fear to govern our Internet, we will soon find that it governs far more than our screens. It will govern our borders, our streets, and our ability to speak without permission.
|
|
And by the time the silence feels normal, it will already be too late.
|
|
|
|
|
|
|
|
|
The Latest from Our Blog
|
|
|
False Positives, Real Harm: When Child Safety Systems Get It Wrong
|
|
|
|
When Jonas (not his real name) posted a photo of himself in sports clothes to his own Instagram account, the last thing he expected was that his account would be suspended for suspected child exploitation. Although Jonas is in his 20s, and his sports clothes do not sexualize him at all, evidently an AI image classifier used in automated …
|
|
|
|
|
|
|
|
Stop Trump Immigration Social Media Dragnet
|
|
|
|
The Trump Administration is currently trying to put rules in place requiring foreigners to hand over several years of social media history, email history, and personal information about family members in order to enter the country. This is being done in the name of national security and of protecting the citizens of the United States. However, foreigners are being denied …
|
|
|
|
|
|
|
|
Beyond the Filter: Australia's Under 16 Social Media Ban
|
|
|
|
This episode discusses Australia's groundbreaking social media minimum age law, which mandates that platforms block users under 16. The conversation explores the implications of this law on free expression, mental health, cyberbullying, and the responsibilities of both the government and parents. It also delves into the enforcement challenges and the potential impact on adult users' privacy, while considering alternative approaches …
|
|
|
|
|
|
|
|
|
|
|
Join COSL at RightsCon 2026
|
|
|
We are excited and privileged to share with you the news that COSL will be represented at RightsCon 2026, the world's leading summit on human rights in the digital age, which is being held this year in Lusaka, Zambia. We will be presenting a workshop, "Drawing the Line Between Personal Expression and Lived Abuse", to discuss the Drawing the Line Watchlist 2025. The workshop is described as follows:
|
|
|
Across many countries, fictional sexual content, such as books, illustrations, stories, images, animated works, and generative AI-produced content, is increasingly being treated under the same legal frameworks as real child sexual abuse material (CSAM). This legal conflation raises serious concerns about freedom of expression, proportionality in sentencing, and misallocation of law enforcement resources. It also distracts from survivor-centred justice and ignores the absence of a real victim in such content.
This workshop presents findings from a comprehensive legal review comparing how ten countries across six continents address fictional versus real abuse material, assessing compatibility with human rights standards. The research reveals troubling patterns: artists prosecuted for drawings, writers criminalized for fiction, survivors silenced when sharing their stories, and LGBTQ+ creators disproportionately targeted. Meanwhile, resources that should protect actual children are diverted to policing imagination.
The Center for Online Safety and Liberty's "Drawing the Line" program responds to this crisis through evidence-based advocacy that distinguishes between content depicting real abuse and fictional expression. Our comparative legal analysis demonstrates how current approaches fail both child protection and human rights objectives.
Participants will examine case studies from jurisdictions where fictional content prosecutions have occurred, review our policy recommendations for reform, and discuss the "Drawing the Line Principles" – a framework for advocates, policymakers, and platforms to maintain focus on real abuse while protecting legitimate expression. This session aims to build a coalition of organizations and individuals committed to evidence-based child protection policies that don't sacrifice fundamental rights or survivor voices in the process.
|
|
|
RightsCon takes place from May 5 to 8, 2026. Registration is open now, and there is no additional fee for attending our session. It will be streamed live for registered attendees who can't make it to Zambia, so register below to ensure your spot.
|
|
|
|
|
|
|
|
|
A Look Back to RightsCon 2019
Some of the same COSL team members and collaborators who are working on our 2026 RightsCon session were also collaborators at a previous session at RightsCon in Tunisia in 2019. That session was the second in a series of Multi-Stakeholder Dialogues on Internet Platforms, Sexual Content, and Child Protection, and it hosted the launch of a set of Best Practice Principles for Sexual Content Moderation and Child Protection.
|
Despite being seven years old, these Best Practice Principles hold up remarkably well today, and are perhaps more relevant than ever:
|
Sexual content should be restricted where it causes direct harm to a child. Indirect harms should not be the basis for blanket content restriction policies unless those harms are substantiated by evidence, and adequate measures are taken to avoid human rights infringements.
|
Companies should evaluate the human rights impacts of their restriction of sexual content, meaningfully consult with potentially affected groups and other stakeholders, and conduct appropriate follow-up action that mitigates or prevents these impacts.
|
Companies and others involved in maintaining sexual content policies, databases or blocklists should describe the criteria for assessing such content in detail, especially when those policies would prohibit content that is lawful in any of the countries where such policies are applied.
|
Users whose lawful sexual conduct infringes platform policies should not be referred to law enforcement, and their lawful content should not be added to shared industry hash databases, blocklists, or facial recognition databases.
|
The context in which lawful sexual content is posted, and whether there are reasonable grounds to believe that the persons depicted in it have consented to be depicted in that context, should be considered before making a decision to restrict or to promote it.
|
Content moderation decisions should be applied to users based on what they do, not who they are.
|
Content should not be added to a hash database or blocklist without human review. Automated content restriction should be limited to the case of confirmed illegal images as identified by a content hash.
|
Users should be notified when their content is added to a hash database or blocklist, or is subject to context-based restrictions, unless such notification would be prohibited by law.
|
Companies should give priority to content removal requests made by persons depicted in images that were taken of them as children, and provide users with the means of filtering out unwanted sexual content.
|
Participants also endorsed a consensus statement that directly addressed the need for policymakers to use an evidence-based, human rights-centred approach when addressing child sexual abuse (CSA):
|
|
|
|
We encourage policymakers to adopt a comprehensive approach to combating CSA that is guided by public health principles and human rights standards.
|
|
|
For more information on the Best Practice Principles for Sexual Content Moderation and Child Protection and the process that led to them, you can read the background paper for the Multi-stakeholder Dialogue at RightsCon 2019.
|
|
|
|
|
|
|
|
|
|
|
Join Us as an Activist
Once again, COSL has an opening for an Activist to join our team and help drive strategic, high-impact advocacy initiatives. In this role, you’ll work closely with COSL’s management team to identify timely opportunities, design persuasive campaigns, and bring them to life through social media, open letters, policy submissions, petitions, and direct engagement. You’ll meet virtually with policymakers, influencers, and coalition partners, build relationships across borders, and help ensure COSL’s voice is heard in fast-moving debates around online safety, censorship, and digital rights.
|
This is a part-time, fully remote volunteer role (approximately 5–10 hours per week) ideal for someone who is deeply motivated by digital-rights issues and eager to turn ideas into action. You’ll gain hands-on experience in advocacy strategy, lobbying, and campaign execution, while connecting with a global network of civil-liberties and technology-policy professionals. If you’re passionate about defending online freedom — and want to help shape the conversations that matter — we warmly invite you to apply.
|
|
|
|
|
|
|
|
|
|
Support our work!
Pledging monthly support for our work is the best way you can help us, because it gives us the stability to plan ahead. You can pledge your support at three levels.
|
|
|
|
|
|
|
|
|
|
Keep In Touch With Us
|
|
|
|
|
|
|
|
|