The science of how (and when) we decide to speak out—or self-censor

Ars Technica
Published Dec 30, 2025

Freedom of speech is a foundational principle of healthy democracies and hence a primary target for aspiring authoritarians, who typically try to squash dissent. There is a point where the threat from authorities is sufficiently severe that a population will self-censor rather than risk punishment. Social media has complicated matters, blurring traditional boundaries between public and private speech, while new technologies such as facial recognition and moderation algorithms give authoritarians powerful new tools.

Researchers explored the nuanced dynamics of how people balance the desire to speak out against the fear of punishment in a paper published in the Proceedings of the National Academy of Sciences.

The authors had previously worked together on a model of political polarization, a project that wrapped up right around the time the social media space was experiencing significant changes in the ways different platforms were handling moderation. Some adopted a decidedly hands-off approach with little to no moderation. Weibo, on the other hand, began releasing the IP addresses of people who posted objectionable commentary, essentially making them targets.

“We were seeing a lot of experimentation in the social media space, so this study started as a question,” co-author Joshua Daymude of Arizona State University told Ars. “Why are these companies doing such dramatically different things, if ostensibly they’re all social media companies and they all want to be profitable and have similar goals? Why are some going one way and others going another?”

Daymude and his co-authors also noticed similar dynamics at the nation-state level in terms of surveillance, monitoring, and moderation. “Russia, for the longest time, was very legalistic: ‘Let’s enumerate every bad thing we can think of so that if you do anything even remotely close, we can get you on one of these statutes that we’ve invented,’” said Daymude. “China was the opposite. They refused to tell you where the red line was. They just said, ‘Behave yourself or else.’ There’s a famous essay that calls this ‘The Anaconda in the Chandelier’: this scary thing that might fall on you at any moment so you behave yourself.”

The US has adopted more of a middle-ground approach, essentially letting private companies decide what they wanted to do. Daymude and his co-authors wanted to investigate these markedly different approaches, so they developed a computational agent-based simulation that models how individuals weigh the desire to express dissent against the fear of punishment. The model also incorporates an authority that adjusts its surveillance and punishment policies to minimize dissent at the lowest possible cost of enforcement.
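
To make that setup concrete, here is a minimal sketch of this kind of agent-based model in Python. Every name, parameter value, and functional form below is an illustrative assumption of ours, not the paper’s actual equations: each agent simply picks the expression level that best trades its desired level of dissent against the expected punishment for crossing the authority’s tolerance line.

```python
import random

# A toy stand-in for the kind of model described above; the functional
# forms and numbers are our assumptions, not the authors'.

def utility(expressed, desired, boldness, tolerance, severity, surveillance):
    """Payoff for expressing a given level of dissent: speaking out pays,
    but anything over the authority's tolerance risks punishment,
    discounted here by how bold the agent is."""
    benefit = expressed                # capped at `desired` by best_response
    punishable = expressed > tolerance
    expected_cost = surveillance * severity * punishable / boldness
    return benefit - expected_cost

def best_response(desired, boldness, tolerance, severity, surveillance):
    """The expression level in [0, desired] with the highest payoff."""
    options = [desired * i / 100 for i in range(101)]
    return max(options, key=lambda a: utility(a, desired, boldness,
                                              tolerance, severity, surveillance))

def fraction_self_censoring(population, tolerance, severity, surveillance=0.5):
    """Share of agents expressing less dissent than they actually desire."""
    acts = [best_response(d, b, tolerance, severity, surveillance)
            for d, b in population]
    return sum(a < d for (d, _), a in zip(population, acts)) / len(population)

random.seed(1)
# Each agent is a (desired dissent in [0, 1], boldness) pair.
population = [(random.random(), random.uniform(0.5, 2.0)) for _ in range(1000)]
print(f"{fraction_self_censoring(population, tolerance=0.3, severity=2.0):.0%} self-censor")
```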

“It’s not some kind of learning theory thing,” said Daymude. “And it’s not rooted in empirical statistics. We didn’t go out and ask 1000 people, ‘What would you do if faced with this situation? Would you dissent or self-censor?’ and then build that data into the model. Our model allows us to embed some assumptions about how we think people behave broadly, but then lets us explore parameters. What happens if you’re more or less bold? What happens if punishments are more or less severe? An authority is more or less tolerant? And we can make predictions based on our fundamental assumptions about what’s going to happen.”
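
In that spirit, the sketch above can be swept over its parameters. The ranges here are again arbitrary stand-ins, chosen only to show the qualitative effect; the snippet reuses the functions defined above.

```python
# Reuses random, best_response, and fraction_self_censoring from the sketch
# above. Sweep punishment severity for a timid population vs. a bold one.
for label, (low, high) in [("timid", (0.25, 0.75)), ("bold", (1.5, 2.5))]:
    population = [(random.random(), random.uniform(low, high)) for _ in range(1000)]
    shares = [fraction_self_censoring(population, tolerance=0.3, severity=s)
              for s in (0.5, 1.0, 2.0, 4.0)]
    print(label, [f"{x:.0%}" for x in shares])
# With these assumptions the timid population hits its self-censorship
# ceiling by severity 1.0, while the bold population's share climbs far
# more gradually.
```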

According to their model, the most extreme case is an authoritarian government that adopts a draconian punishment strategy, which effectively represses all dissent in the general population. “Everyone’s best strategic choice is just to say nothing at this point,” said Daymude. “So why doesn’t every authoritarian government on the planet just do this?” That led them to look more closely at the dynamics. “Maybe authoritarians start out somewhat moderate,” he said. “Maybe the only way they’re allowed to get to that extreme endpoint is through small changes over time.”

Daymude points to China’s Hundred Flowers Campaign in the 1950s as an illustrative case. Here, Chairman Mao Zedong initially encouraged open critiques of his government before abruptly cracking down aggressively when dissent got out of hand. The model showed that in such a case, dissenters’ self-censorship gradually increased, culminating in near-total compliance over time.

But there’s a catch. “The opposite of the Hundred Flowers is if the population is sufficiently bold, this strategy doesn’t work,” said Daymude. “The authoritarian can’t find the pathway to become fully draconian. People just stubbornly keep dissenting. So every time it tries to ramp up severity, it’s on the hook for it every time because people are still out there, they’re still dissenting. They’re saying, ‘Catch us if you dare.’”
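
That dynamic can be mimicked by giving the toy authority a simple ratchet rule: raise severity a notch only while enforcement stays cheap. The escalation rule and enforcement-cost formula below are our own assumptions, intended only to illustrate the mechanism Daymude describes.

```python
def dissenting_share(boldness, severity, tolerance=0.3, surveillance=0.5):
    """Share of a uniform [0, 1] spread of desired-dissent levels that keeps
    speaking out: an agent dissents iff desired - expected punishment > tolerance."""
    expected_cost = surveillance * severity / boldness
    return max(0.0, 1.0 - (tolerance + expected_cost))

def escalate(boldness, steps=50, step=0.1, budget=0.2):
    """The authority ratchets severity upward only when punishing everyone
    still dissenting would fit within its enforcement budget."""
    severity = 0.5
    for _ in range(steps):
        proposed = severity + step
        if dissenting_share(boldness, proposed) * proposed <= budget:
            severity = proposed          # cheap enough: ratchet up
    return severity, dissenting_share(boldness, severity)

for boldness in (0.8, 2.5):              # a timid population vs. a bold one
    severity, share = escalate(boldness)
    print(f"boldness={boldness}: severity={severity:.1f}, dissenting={share:.0%}")
# With these numbers the timid population lets severity ratchet all the way up
# until dissent vanishes, while the bold population keeps dissent so widespread
# that even the first escalation step is too expensive for the authority.
```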

The takeaway: “Be bold,” said Daymude. “It is the thing that slows down authoritarian creep. Even if you can’t hold out forever, you buy a lot more time than you would expect.”

That said, sometimes a bit of self-censorship can be a net positive. “I think the time and situation in which this paper has been published and our major governmental examples will evoke a primarily political interpretation of what we’re talking about here,” said Daymude. “But we tried to be clear that this doesn’t have to be some adversarial oppressive regime versus freedom-loving people. Self-censorship is not always a bad thing. This is a very general mathematical model that could be applicable to lots of different situations, including discouraging undesirable behavior.”

Daymude draws an analogy to traffic laws, notably speed limits. Their model looked at two different forms of punishment: uniform and proportional. “Uniform is anything over the line gets whacked the same,” said Daymude. “It doesn’t matter if you were a little bad or very bad, the punishment is identical for everyone. With the proportional approach, the punishment fits the crime. You sped 10 miles an hour over the limit, that’s a small fine. You sped 100 miles an hour over, this is reckless endangerment.”

What he and his co-authors found intriguing is that which dissenters self-censor most strongly flips between those two punishment scenarios. “For uniform punishment, it’s the moderate folks who only wanted to dissent a little bit who self-censor because it’s just not worth it to stick their neck out,” said Daymude. “Very extreme dissenters stick their neck out and say, ‘It doesn’t matter. You can punish me. This is still worth it.’ In the proportional regime, this flips. It’s the moderates who do what they want. And no one expresses dissent over a certain amount. Yeah, we all speed a little bit, but we have this norm: we’re all going to speed a moderate amount over the limit, and then we’re going to stop. It’s not safe, it’s not acceptable, to go beyond this.”
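
Here is that flip in sketch form, with invented functional forms: uniform punishment as a flat fee for any violation, and, as one simple stand-in for “the punishment fits the crime,” a proportional cost that grows with the square of the violation, so that barely crossing the line is cheap while going far past it is not. The parameters are tuned only to make the contrast visible.

```python
# Illustrative parameters, not the paper's.
TOLERANCE, SURVEILLANCE, BOLDNESS = 0.3, 0.5, 1.0

def cost_uniform(a, severity=1.2):
    # Flat fee: every violation is punished identically, however small.
    return severity if a > TOLERANCE else 0.0

def cost_proportional(a, severity=4.0):
    # Punishment grows with the size of the violation.
    return severity * max(0.0, a - TOLERANCE) ** 2

def best_response(desired, cost):
    """The expressed dissent in [0, desired] with the highest payoff."""
    options = [desired * i / 100 for i in range(101)]
    return max(options, key=lambda a: a - SURVEILLANCE * cost(a) / BOLDNESS)

for desired in (0.4, 1.0):               # a moderate and an extreme dissenter
    uniform = best_response(desired, cost_uniform)
    proportional = best_response(desired, cost_proportional)
    print(f"desired={desired:.1f}: uniform -> {uniform:.2f}, "
          f"proportional -> {proportional:.2f}")
# Under these assumptions, uniform punishment pushes the moderate back to the
# line (0.30) while the extremist pays the fee and speaks fully (1.00);
# proportional punishment leaves the moderate alone (0.40) but caps the
# extremist well below their desired level (at 0.55).
```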

Daymude is aware that there are limitations to this agent-based approach, but insists it can still yield useful insights. “With a mechanistic model like this, you can really tie outcomes to explanations,” said Daymude. “In the artificial world of my model, when tolerance moves like this, the population changes like this, and I can tell you it is because of that change and not because of the hundreds of other things that might’ve been going on in someone else’s head.”

The next step would be to design an empirical study that could test their working hypothesis. “I am not under any fiction that everything in this paper is absolutely true in the real world,” said Daymude. “But it makes it very clear what matters and what doesn’t, and what the phases of behavior are. There’s compliance, then self-censorship, and defiance, and it happens in this way. These phases can disappear if boldness is not sufficient. So I see this not in competition with, but complementary to, the other kinds of research in this area.”
