To Censor or Not to Censor?
Navigating the fine line between free speech and algorithmic control in the digital age.
Hello,
The algorithm has spoken: your comment is too offensive to be seen. But who is the algorithm working for? The line between freedom of expression and the commercial or political agendas of social media platforms is blurring fast. Digital censorship has quietly taken root, and Vicky Randle’s Substack Note made me reflect on what censorship means today—and on our relationship with it. How fearful have we become of expressing our thoughts on important issues that we willingly accept the algorithm’s strikes against us and preemptively self-censor to avoid its wrath?
Algorithms
Social media algorithms now shape not just what we see, but what we say. These algorithms don’t just fuel our desires—they curtail our freedom of expression to align with the political and commercial interests of the platforms we rely on. The days of the Internet as a digital frontier—where access to information was democratised—are gone. Today, money and power rule, and our online lives are fully commodified. That commodification forces us to tread cautiously, fearing that the algorithm will punish us—not just by hiding our posts, but by limiting our reach and influence, or by silencing us altogether.
Everyday Life
This tightening grip of algorithmic control extends beyond what we post; it is even shaping how we think in our everyday lives. A couple of months back, I watched a video about a pro-Palestinian protest at UCLA, made by Francis Foster of the TRIGGERnometry podcast. It was a balanced, boots-on-the-ground documentary that gave a voice to all sides. One student’s comment stuck with me: he mentioned being ‘afraid to speak his mind out of fear of getting low grades.’ How could this even be a concern in a modern, progressive, democratic society?
After commenting on the video, I noticed a reply by emma24ism that caught my attention. She shared how, during a university discussion on gender ideology, she had supported a classmate’s concerns about the safety of women in forensic settings, and she suspected her views had negatively affected her grade on a subsequent assignment. To share this in the YouTube comments, she deliberately misspelt words like “trayn,” “ghender,” and “viola int” to bypass potential algorithmic censorship. That she had to alter her language just to recount her experience in a comment is a small but telling example of how algorithms train us to self-edit, ensuring that even our everyday conversations online are tailored to avoid punishment. It’s madness that we can’t openly discuss controversial topics for fear of offending someone—or some algorithm.
Academia
Seeking answers as to why students feel reluctant to express controversial opinions led me to the Heterodox Academy, an organisation that aims to ensure universities are places where intellectual curiosity thrives. Their report, The Universal Problem of Campus Expression (2019–2022), reveals that about 50% of students hesitate to share their views on sensitive topics like race, gender, politics, and religion. At some schools, up to 90% of students are reluctant, and even at the least restrictive campuses, 29% hold back.
Some critics, like John K. Wilson, argue that self-report surveys, such as the Campus Expression Survey, may not fully capture the complexities of self-censorship, since students may misinterpret questions or answer inaccurately. Others, like Zhou and Zhou, counter that while surveys aren’t perfect, random errors balance out across large samples, providing valuable insight into trends in campus self-censorship.
Despite these criticisms, the data highlights a real problem: students increasingly fear expressing their true views on campus.
Impact on Society
This culture of self-censorship extends beyond academia. I’m watching more and more YouTube videos in which the host and/or guest deliberately edits out certain words or substitutes code words to avoid triggering the algorithm. I get that Facebook, YouTube, Google, etc., are businesses that need to please advertisers. Content that doesn’t align with their business models is hidden, demonetised, or buried far below the fold. Political pressure likely plays a role too, as in the case of Jordan Peterson’s interview with Tommy Robinson, an outspoken critic of Islam in the UK. That video garnered millions of views before its ad revenue and rankings were quietly pulled.
We all crave authenticity, but how often do we stifle our voices, fearing the loss of followers or subscribers? Self-censorship feels like a survival mechanism, but in reality, it means surrendering our voices to a system that dictates what is acceptable. The more we self-censor, the more power we give to those who control the narrative.
What Next?
So, where does that leave us? Have you ever held back your views or refrained from writing something because of how you thought your audience or platform might react? I know I have. By doing so, we’re not just protecting ourselves—we’re doing the work of those who seek to control the narrative. Is that the world you want to live and write in today?
Speak soon,
Matt