Low-quality AI-generated videos now make up a significant share of what users see on YouTube, raising concerns about platform quality, monetisation, and misinformation, according to new research by Kapwing.
The study estimates that 21-33% of YouTube’s content feed may consist of what it describes as “AI slop” or “brainrot” videos. These are cheaply produced, repetitive videos generated using AI tools, designed mainly to attract views rather than provide meaningful content.
Kapwing’s analysis also suggests that some of these channels may be earning millions of dollars annually, despite minimal human effort or creative input.
Kapwing examined the top 100 trending YouTube channels in each country, identifying those that primarily publish AI-generated content. The company then used Social Blade data to assess views, subscribers, and estimated earnings. To understand what new users see, researchers also created a fresh YouTube account and reviewed the first 500 Shorts shown in the feed.
The research shows that AI slop does not need large numbers of YouTube channels to dominate attention. In Spain, for example, only eight AI slop channels appeared among the top 100 trending channels, yet they attracted more subscribers than the AI slop channels in countries where far more such channels were trending.
In South Korea, a single channel, Three Minutes Wisdom, accounted for nearly a quarter of the country’s total AI slop views, with 2.02 billion views. Kapwing estimates that this channel alone could earn over $4 million per year from ads.
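The revenue figure above is a back-of-the-envelope calculation: total views divided into thousands, multiplied by an ad RPM (revenue per 1,000 views). The sketch below illustrates the arithmetic; the $2 RPM is an assumption chosen for illustration, not a figure from the Kapwing report.

```python
# Hypothetical sketch of a views-to-earnings estimate of the kind the
# study implies. The RPM value is an illustrative assumption, not a
# number taken from Kapwing's methodology.
def estimate_annual_ad_revenue(total_views: float, rpm_usd: float) -> float:
    """Estimate ad earnings: views divided into thousands, times RPM."""
    return total_views / 1_000 * rpm_usd

# 2.02 billion views at an assumed $2 RPM comes to roughly $4 million,
# consistent with the "over $4 million per year" estimate cited above.
revenue = estimate_annual_ad_revenue(2.02e9, rpm_usd=2.0)
print(f"${revenue:,.0f}")  # → $4,040,000
```

Actual RPMs vary widely by country, content category, and format (Shorts typically pay far less per view than long-form video), so any such estimate carries a wide margin of error.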
Many of these videos follow simple, repetitive formats, such as animals in dramatic situations, quiz-style religious content, or fictional characters in endless variations of the same scenario, making them cheap and fast to produce at scale.
Kapwing’s findings align with how YouTube Shorts are designed and distributed. Shorts rely heavily on algorithmic recommendations rather than subscriber-based feeds, increasing exposure for high-volume publishers regardless of prior audience size.
Short-form video platforms reward frequent uploads, quick watch times, and continuous scrolling, which makes the format particularly well-suited to AI-generated content that creators can produce and publish rapidly with minimal variation.
India is home to the most-viewed AI slop channel identified in the study. Bandar Apna Dost uploads hundreds of near-identical videos featuring a digitally generated monkey placed in emotional or dramatic situations. The channel also maintains a presence on Instagram and Facebook, where it reposts similar content.
Kapwing’s revenue estimates indicate that high-volume AI-generated content from India can compete directly with professionally produced videos in terms of reach and earnings.
YouTube enforces its monetisation rules under the YouTube Partner Program, and says it has expanded those rules and its content detection systems to apply consistently across all markets where the program is available, regardless of a creator's location.
The study does not suggest that India produces more AI-generated content by design but highlights how global recommendation systems and ad models reward scale over production method.
YouTube currently allows the use of generative AI tools in video creation and does not prohibit monetisation of AI-generated videos by default. The platform requires creators to disclose altered or synthetic media only in limited cases, such as when content could mislead viewers about real-world events or individuals.
YouTube has not publicly defined a threshold at which AI-generated or repetitive content becomes spam or ineligible for monetisation. It continues to recommend and monetise videos that comply with its community guidelines and advertiser-friendly content rules.
YouTube CEO Neal Mohan has defended generative AI in video creation, saying the level of AI involvement does not determine content quality. In an interview with Wired, he said: “The genius is going to lie in whether you did it in a way that was profoundly original or creative. Just because the content is 75% AI-generated doesn’t make it any better or worse than a video that’s 5% AI-generated. What’s important is that it was done by a human being.”
Advertiser concerns highlighted by the study relate primarily to ad placement and adjacency, rather than direct policy violations. Industry discussions increasingly distinguish between harmful content and low-quality or repetitive content, with the latter raising questions about brand perception rather than safety.
While YouTube has not announced advertiser-specific restrictions on AI-generated videos, advertisers can already use existing brand safety tools to limit placements by content category, format, or channel.
Kapwing’s test of a new YouTube account showed how quickly AI slop appears in the Shorts feed within the first 500 videos shown. The study notes that it remains unclear whether algorithms deliberately promote such content or whether its visibility simply reflects the overwhelming volume of uploads.
External reporting supports the trend. An analysis cited by The Guardian found that nearly one in 10 of the world’s fastest-growing YouTube channels publish only AI-generated content.
The Kapwing study does not assess whether AI-generated content directly displaces human creators in recommendations, but documents that a small number of AI-driven channels can account for a disproportionate share of views and impressions across multiple markets.
Researchers and media analysts warn that the spread of AI slop goes beyond entertainment quality. Repetitive exposure to fabricated or misleading visuals can reinforce false beliefs, a phenomenon known as the “illusory truth effect.” AI tools also lower the cost of producing politically or ideologically motivated content at scale.
Artist and researcher Eryk Salvaggio has argued that an excess of information eventually turns into noise and increases public dependence on algorithms to decide what people see and trust.
The Kapwing report concludes that while AI-generated video is not inherently harmful, the unchecked growth of low-quality content may reshape online media in ways that prioritise volume and engagement over credibility, creativity, and trust.
Data referenced in the study is accurate as of October 2025.
Primary Source: MEDIANAMA