In early December, Joshua Aaron, the developer behind the ICEBlock app — designed to let people alert others about the presence of Immigration and Customs Enforcement (ICE) agents — filed a federal lawsuit alleging his First Amendment rights were violated. The Department of Justice had urged Apple to remove Aaron’s app from its App Store, which the suit called unconstitutional. And Apple had complied — in the process, setting its own precedent for suppressing anti-ICE speech.
The year 2025 has marked perhaps the biggest leap backward for American free speech in generations. The Trump administration’s war on immigrants and civil liberties has led it to attempt to deport organizers and researchers over political speech, weaponize the Federal Communications Commission to crack down on disfavored broadcast shows, and file multiple frivolous lawsuits against journalists who covered Trump, many of which ended in settlements that look a lot like shakedowns.
Immigration restrictions, heavy-handed regulation, civil lawsuits, bad-faith prosecutions — these are all longtime tools to shut down speech and criticism. But the administration has also moved to control private speech gatekeepers. With the formalization of the deal to sell TikTok to a consortium including the Ellison-helmed Oracle coming in just under the wire, we are ending 2025 with every major social media platform fully or partially controlled by Trump-friendly US billionaires, the same year that, for the first time, most people in the country reported getting their news from social media. The consolidation of social media control and its broad influence give the administration a very powerful, newer tool, one that ironically began as an effort to preserve and protect online discourse: content moderation.
The Trump administration had suggested, without evidence, that ICEBlock put agents at risk. Aaron’s is the first such lawsuit to follow a spree of big tech companies blocking his and similar tools, including Eyes Up, an app designed to archive and catalog footage of past ICE operations. In all of these takedowns, platforms like Apple and Google cited supposed violations of content policies; notably, Red Dot and DeICER were removed on the grounds that ICE agents constitute a vulnerable group.
“I talked to a couple of sort of longtime trust and safety people who did this kind of work inside platforms for years, and they were like, ‘we can’t speak to Apple’s policy, but I’ve never seen a policy like that, where cops are a protected class,’” said Daphne Keller, a onetime associate general counsel at Google who is now director of platform regulation at the Stanford Program in Law, Science & Technology. “My read on the situation is that they really needed to make this concession to the government for whatever reason — because of whatever pressure they were under or whatever benefit they thought they would get from making the concession — and they did it, and then they had to find an excuse.”
Platform content moderation is a notion only about as old as the relatively new social media and technology platforms themselves, but it has generally been understood as a balance between free expression and the need to protect vulnerable groups or populations. The inversion of this concept — using moderation to restrict speech in order to protect the state as it acts against vulnerable populations — is a disconcerting and relatively new phenomenon here in the United States, though one that has already become a modus operandi elsewhere.
One Carnegie Endowment paper published last year, focused on India and Thailand, detailed how governments in those countries had used the language and infrastructure of platforms’ content moderation and community standards systems to restrain criticism and push a message. India under Narendra Modi, for example, had imposed “national security” restrictions that were mostly levied against civil society, using a multipronged approach of legal, economic, and political pressure.
Sangeeta Mahapatra, a research fellow at the German Institute for Global and Area Studies and a coauthor of the paper, stressed that while researchers are loath to extrapolate findings too much to new contexts with their own complexities, it was clear the US government was walking the same path. “We have seen this game played so many times that by now there is a kind of predictability,” she said. “The wolves are right at the door. You realize how this is an everyday phenomenon. It’s not something that is episodic, these kinds of intrusions into your life and the starring role that a platform plays, not just as an enabler, but as a proactive enabler.”
Mahapatra stressed that, while a lot of the public framing — and indeed administration officials’ own gloating — was around the Justice Department or Homeland Security having forced or required the companies to take these apps down, the pressure was, at the time the decisions were made, purely rhetorical, and these companies have on occasion forcefully pushed back on perceived government strong-arming. A decade ago, Apple famously went on the legal and rhetorical offensive to block demands by the FBI to create software to override iPhone security as the agency attempted to unlock a phone belonging to the San Bernardino shooter.
Now, though, there’s what she called a “co-production of digital authoritarianism” in which the government doesn’t really have to do that much to expect some level of compliance. “When you see Apple taking down apps proactively, it’s not something that has started with Trump, it’s a pattern that we have been observing for quite some time now. We have seen it in South Asia especially, India especially, a very lucrative market.”
Keller noted that “there is a narrative from the Republican side right now about how they are free speech warriors who are really mad about how the Biden administration was censoring speech online.” Yet, “politicians on both sides have always tried to get platforms to take down content, and it’s always been to serve their interests or their policy preferences.” In that environment, it can be genuinely difficult to sort out what is business-as-usual pressure and what crosses a First Amendment line.
But, as she pointed out, Trump et al. have not exactly been subtle; less than two weeks before taking office again, Trump said Meta CEO Mark Zuckerberg’s Trump-friendly changes to his platforms’ content policies were “probably” a result of the incoming president’s threat to jail Zuckerberg. As the ICEBlock lawsuit lays out, not only did the administration lean on Apple to take down the app, but high-level officials including Attorney General Pam Bondi, immigration coordinator Tom Homan, and Homeland Security Secretary Kristi Noem went on to gloat about how they directly triggered the removal.
The ability of regular people to be alerted to ICE sightings and then film and distribute the results has been important not only in a broad narrative sense, but for concrete, practical applications like forming the basis of judicial interventions. In mid-October, US District Judge Sara Ellis ordered Customs and Border Protection agents under the command of Trump henchman and Border Patrol Commander Greg Bovino to follow use-of-force guidelines and wear body cameras after TV and bystander footage showed agents violently clashing with protesters. These body-worn cameras were then the basis for the judge’s finding that Bovino and his agents were lying to her in their descriptions of their operations (including the finding that agents apparently used ChatGPT to write at least one use-of-force report).
With an administration that has proven itself ready, willing, and able to lie over and over to the public, the media, Congress, and the courts, the accounts and records produced and compiled by the community, reporters, and researchers seem to be the only reliable corpus of evidence about what the federal agencies are actually doing on the ground — the warrantless arrests, the excessive force, the profiling. Having platforms willing to outright block avenues for people to know about, observe, and archive the footage of these operations poses a concrete risk to the public’s ability to know what’s going on at all.
Mahapatra said she’d been working with local partners including journalists and civic organizations on “record-keeping, all the receipts, so that the digital trace, evidence, is not lost, and there is some accountability mechanism… if you don’t document, the narrative capture becomes more unclear, more enduring and long-term.”
This federal push to tank these tools can also be understood through the lens of the administration working to shut down a competitor in the narrative-building game. It’s no secret that under Stephen Miller and Kristi Noem, the Department of Homeland Security fancies itself not only a law enforcement and security clearinghouse but very much a propaganda organ for the administration’s anti-immigration political project. DHS has been sending out its own photographers to help produce slick, movie trailer-like footage of its operations, runs trollish recruitment ads that emphasize the dog-whistle “western values” preoccupations of its leaders, and has shelled out over $200 million for ad campaigns, including to a firm tied to Noem herself.
Keller referenced a now-infamous nighttime Chicago raid where heavily armed agents, some in helicopters, laid siege to an apartment building in an operation that the administration used as fodder for a heavily produced video. “An idea is this is a media war, of who can get the most compelling footage for their side,” she said. “That’s what ICE was doing in that moment, and it’s what they’re trying to prevent the activists from doing by getting the apps down, to the extent that the apps are really about pulling people together and getting video and documenting what is going on.”
Social media moderation and the bad-faith use of terms of service as weapons in the speech wars are a little more abstract than having political organizers detained, however ham-fisted the administration has ultimately been about it. But moderation is arguably a much more wide-ranging and effective way of influencing and controlling what speech is available. Now that Trump and his team have had a taste, and seen how apparently easy it is to get the companies to play ball, why wouldn’t they keep reaching?
The platforms might have been acting out of expediency, but now that they’ve opened Pandora’s box, it’s hard to tell what the administration might push for. If ICE personnel are now a protected class under Apple’s rules, does that mean that the company could enforce hate speech standards against those criticizing agents? If not, why not? “I would expect they didn’t really think through the implications of, are they really going to interpret policy that way in the future,” said Keller.
Source: The Verge