How an information void about Maduro's capture was filled by deepfakes
Published
Jan 7, 2026

Shortly after US President Donald Trump announced a “large-scale strike” on Venezuela on Saturday, European social media was awash with misleading, AI-generated images of Nicolás Maduro’s capture and videos of Venezuelans celebrating around the world.

Across TikTok, Instagram and X, AI-generated or altered images, old footage repurposed as new footage and out-of-context videos proliferated.

Many racked up millions of views across platforms and were shared by public figures, including Trump himself, X owner Elon Musk, Flávio Bolsonaro, son of the former Brazilian president, and the official account of Portugal's right-wing Chega party.

Some experts suggested this was one of the first incidents in which AI images of a major public political figure were created in real time as breaking news unfolded.

But for others, what was unique about this wave of false information was not its scale, but the fact that so many people — including public figures — fell for it.

On 3 January, US special forces captured the former leader of Venezuela and his wife in a lightning operation. Maduro faces federal drug trafficking charges, to which he has pleaded not guilty.

Soon after his capture, multiple images of Maduro appeared across social media channels. Euronews' fact-checking team, The Cube, found examples of the image below shared in Spanish, Italian, French and Polish.

One picture of Maduro disembarking an aircraft was shared by the official account of Portugal's far-right Chega party, as well as the party's founder André Ventura and other party members. It was also presented by several online media outlets as a real photo.

As the image spread rapidly online, fact-checkers noted that when the photo was run through Gemini's SynthID verification tool, it contained digital watermarks indicating that "most or all of (the image) was generated or edited with Google AI."

According to Detesia, a German startup that specialises in deepfake detection technology, its AI models found “substantial evidence” that the image was AI-generated.

Detesia said the original photo sparked several similar ones, with a more modest social media reach, that also contained visible SynthID watermark artifacts. Later versions showed obvious signs of AI-generation, including images of soldiers with three hands and pictures depicting Maduro covered in blood.

The image, which is very likely AI-generated, garnered millions of views across social media platforms, including one Spanish X post that had 2.6 million views alone.

According to Tal Hagin, an information warfare analyst and media literacy lecturer, rapid advances in AI technology are making deepfakes ever harder to identify.

“We are no longer at the stage where it's six months away, we are already there: unable to identify what's AI and what's not.”

In the immediate aftermath of Maduro's capture, the public had few details and, crucially, no images. "When you have this vacuum of information, it needs to be filled somehow," said Hagin.

"Individuals started uploading AI-generated images of Maduro in the custody of US special forces in order to fill that gap," he concluded.

Another image, with more than 4.6 million views, purports to show Maduro sitting in a military cargo plane in white pyjamas.

Newsguard, a US-based platform that monitors information reliability, reported that the image shows clear signs of AI generation, including a double row of passenger windows.

It also contradicts the evidence: Maduro was transported out of Venezuela by helicopter to a US Navy ship, neither of which has the double row of windows seen in the picture.

Shortly after false images of Maduro’s arrest circulated, social media platforms were flooded with footage of protesters celebrating his capture.

Some of these, including one shared by Musk showing Venezuelans crying with joy, amassed more than 5.6 million views. Signs of AI generation include unnatural human movements, skin tones and abnormal licence plates on cars.

Dispatches from Venezuela indicate the public mood is complex, with part of the population expressing joy and hope at Maduro's capture alongside fear and uncertainty about what a transition of power may look like. Others have condemned the US intervention in their country.

Multiple protest videos have also circulated with misleading captions. One, posted on X on 4 January with the caption "this is Caracas today. Huge crowds in support of Maduro", amassed more than 1 million views.

In reality, this video is from a march which Maduro and his youth supporters took part in at the Miraflores Palace in Caracas in November 2025.

Another video widely shared on French-language X shows a man on a balcony holding a phone up to a crowd of people with the caption, "I've rarely seen a people as happy as the Venezuelans to finally be rid of Maduro thanks to American intervention." Fireworks can be seen in the background.

Hagin says there are several indicators that cast doubt on the video's authenticity, including that the fireworks appear to be coming from within the crowd and are not producing an appropriate level of smoke. The image displayed on the phone also does not match the crowd below.

The danger of this volume of AI-generated footage, according to Hagin, is that it breeds blanket doubt in people's minds: "if I've seen five different examples of Maduro in custody in different outfits, it must all be fake."

“There are videos that are 100% real, with absolutely no reason to doubt them, and people still say they’re AI because they don’t want them to be true,” Hagin said.

“It takes time to verify information, ensure that you’re correct in what you’re saying, and while you’re fiddling around trying to verify this video, people are churning out more and more misinformation onto a platform.”

In addition to misleading visuals, false claims that US forces struck former Venezuelan President Hugo Chávez's mausoleum have proliferated online in multiple languages, and were even shared by Colombian President Gustavo Petro.

One image purports to show the mausoleum bombed by the US military during its capture operation.

However, as noted by Hagin, the photo on the right is an AI-manipulated image based on a real photo of the mausoleum from 2013, with the destruction artificially added.

The Hugo Chávez Foundation itself posted a video on Instagram on Monday showing the building intact, with a phone in the frame displaying Sunday's date.
