People getting peeved with generative artificial intelligence (GenAI) did not go unnoticed by dictionaries in 2025. A couple of them named ‘slop,’ or ‘AI slop,’ their word of the year, defined as low-quality content produced in large quantities by AI.
“Slop oozes into everything,” the Merriam-Webster dictionary wrote. The year also marked a possible AI flop, as talk of the AI bubble bursting grew louder.
Still, the doom and gloom hasn’t stopped tech companies from upping their game with new AI model releases. Google’s new Gemini 3 model shook things up, reportedly spurring OpenAI to sound a ‘code red’ to urgently improve its GPT-5 model.
But since chatbots are only as good as the data that they are trained on, and AI leaders warn we have reached “peak data,” 2026 might see another type of AI emerge amid the fatigue.
Here is what we could see for AI in the year ahead.
The peak data warning for 2026 has nothing to do with data scarcity. There is plenty of unused data, experts say, but it is difficult to access for many reasons, such as the software it is stored in, regulations, or authors’ rights.
This is where world models could step in. World models learn by watching video or ingesting simulation data and other spatial inputs, then build their own representations of scenes and objects.
They do, of course, require a lot of data to train on, but they have different use cases to chatbots.
To put it more simply, instead of predicting the next word, as large language models (LLMs) do, world models predict what will happen next in the world, modelling how things move over time.
They can also be thought of as “digital twins”: digital replicas of a place, built from real-time data, in which simulations can play out to predict what may happen there.
This means that world models could produce AI systems that understand gravity and cause-and-effect without those rules ever being explicitly programmed in.
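To make that distinction concrete, here is a deliberately simplified Python sketch of the idea. It is not any company’s actual model: the ball-drop dynamics below are hard-coded with textbook gravity purely for illustration, whereas a real world model would learn that behaviour from video or simulation data.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    """Toy scene state: height and vertical velocity of a dropped ball."""
    height_m: float
    velocity_ms: float

def next_state(state: WorldState, dt: float = 0.1) -> WorldState:
    """A world model predicts the next *state* of a scene, not the next word.
    The dynamics here are hard-coded gravity purely for illustration; a real
    world model would infer this behaviour from video or simulation data
    rather than being programmed with it."""
    g = 9.81  # m/s^2 -- stands in for dynamics a trained model would learn
    velocity = state.velocity_ms - g * dt
    height = max(0.0, state.height_m + velocity * dt)
    return WorldState(height, velocity)

# Roll the toy world forward: the model "plays out" what happens next,
# the way a digital twin runs simulations over real-time data.
state = WorldState(height_m=10.0, velocity_ms=0.0)
for _ in range(5):
    state = next_state(state)
    print(f"height={state.height_m:5.2f} m  velocity={state.velocity_ms:6.2f} m/s")
```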
As people get fed up with AI slop and the limitations of LLMs, world models could generate real buzz in 2026, as they are key to creating AI for everything from robotics to video games.
The CEO of Boston Dynamics told Euronews Next in November that AI had helped its robots develop, including its famous robot dog.
“There's still a huge amount of work to be done, but none of this could have been possible without AI. And so it's really an exciting time,” Robert Playter said.
Companies are already jumping on the world model hype, with Google and Meta announcing their own versions for robotics and to make their video models more realistic.
Big names are also entering the space, including one of the so-called godfathers of AI, Yann LeCun, who announced in 2025 that he would leave Meta to launch his own world model start-up. Fei-Fei Li’s company, World Labs, also announced its first release, Marble, in 2025.
Chinese tech companies such as Tencent are developing their own world models, as well.
In Europe, we could see the opposite happen – rather than AI going big on the world stage, it may get smaller.
Small language models, rather than the LLMs favoured by US tech companies, have started making waves in Europe.
Despite the name, small language models are impressive. They are lightweight versions of LLMs designed to run on smartphones or low-powered computers. Though they use fewer parameters and are less energy-hungry, they retain strong capabilities for text generation, summarisation, question answering, and translation.
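As an illustration of how lightweight such models can be, the sketch below runs a small open model locally using the Hugging Face transformers library. The distilgpt2 checkpoint (roughly 82 million parameters) is only a stand-in here, not one of the European models discussed above.

```python
# A minimal sketch of running a small language model locally.
# distilgpt2 is a small public checkpoint used purely as an example.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Small language models are useful because",
    max_new_tokens=40,   # keep generation short and cheap
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.8,
)
print(result[0]["generated_text"])
```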
Smaller language models may also be more attractive on an economic front amid talk of the AI bubble bursting.
US AI companies in particular are raising enormous amounts of money and commanding unprecedented valuations. Much of that money is going towards the buildout of gigantic data centres, with heavy investment from companies such as OpenAI, Elon Musk’s xAI, Meta, and Google.
“Next year, doubts about the financial sustainability and socioeconomic benefits of today’s large-scale AI boom will continue to grow, even if the bubble doesn’t completely burst,” said Max von Thun, the director of Europe and transatlantic partnerships at the Open Markets Institute.
He told Euronews Next that governments will become “increasingly wary” about relying on American AI and cloud infrastructure, due to the US government’s “clear intentions to weaponise technological dependencies for political gain”.
This could in turn mean that Europe will “accelerate efforts to build local capabilities, while exploring AI approaches better suited to its strengths, such as smaller and more sustainable models trained on high-quality industrial and public data,” he said.
Beyond the AI hype, 2025 was marked by worrisome talk of AI psychosis, where users form delusions or obsessive attachments to AI chatbots. A lawsuit against OpenAI in August alleged that ChatGPT was a ‘suicide coach’ that had encouraged a 16-year-old to take his own life.
OpenAI later denied the allegations and said in response that the teenager should not have been using the technology without parental consent and should not have bypassed ChatGPT’s protective measures.
The case raises serious questions about the effects of chatbots on vulnerable users and the ethical responsibilities of tech companies. 2026 could bring even more harm to users as these models become more powerful, according to one expert.
Engineers who design these systems likely didn’t programme the technology to harm vulnerable children or adults, said Max Tegmark, an MIT professor researching AI and physics and president of the Future of Life Institute (FLI). They may not even have known it was happening, he told Euronews Next.
This is concerning, especially as machines become more powerful with more sophisticated software and code, he said. He expects a “more powerful AI” in 2026, with AI agents improving and becoming less passive and more like a “biological system”.
AI agents are designed to take autonomous actions to assist humans and do not require a human to tell them what to do, as they gather data based on user preferences.
Currently, they are not that advanced – but the promise is that they could be one day.
For example, currently, an AI agent could plan and suggest a holiday, but a human would still have to click to purchase the flight.
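A minimal, entirely hypothetical Python sketch of that holiday example might look like the following: the agent plans and acts on stored preferences, but the final purchase still waits for a human click. Every function here is an invented stub, not a real travel or agent API.

```python
def search_flights(destination: str) -> dict:
    """Stub tool call; a real agent would query an airline or travel API."""
    return {"route": f"LON -> {destination}", "price_eur": 120}

def plan_trip(preferences: dict) -> list[dict]:
    """The 'autonomous' part: turn stored preferences into concrete steps."""
    return [search_flights(preferences["destination"])]

def request_human_approval(offer: dict) -> bool:
    """Today's limit: the final click stays with the user."""
    answer = input(f"Book {offer['route']} for €{offer['price_eur']}? [y/N] ")
    return answer.strip().lower() == "y"

preferences = {"destination": "Lisbon"}  # gathered from past user behaviour
for offer in plan_trip(preferences):
    if request_human_approval(offer):
        print("Booking confirmed (stub).")
    else:
        print("Left for the human to decide.")
```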
2026 could also see clashes between humans over AI, as Tegmark said there is a growing pushback in the US against unleashing AI without regulation.
AI regulation is being wound back in the United States, where Tegmark is based. President Donald Trump signed an executive order in November aimed at blocking states from crafting their own AI rules, a move likely to have significant consequences for the technology’s development in 2026.
Trump justified the decision by claiming that a patchwork of onerous state rules risks stifling the industry as it battles Chinese competitors for the lead in technology.
In October, thousands of public figures, including AI and technology leaders, called for AI firms to slow down their pursuit of superintelligence, broadly defined as AI that surpasses human cognitive capabilities.
The petition, organised by FLI, garnered signatures from across the political spectrum, including Trump’s former chief strategist Steve Bannon, Susan Rice, who served as US National Security Advisor under Barack Obama, religious leaders, and prominent computer scientists.
Tegmark said the petition shows that “people are turning against AI in the US.” He said one of the reasons for the backlash is that superintelligence would “make every single worker unable to make a living, as all the jobs are taken by robots.”
But he warned that the fatigue and anti-AI sentiment might dampen the progress the technology promises for industries such as health care.
“If there is no regulation, the good AI is missed out on, because we might just get a big tech-lash,” he said.
“In the coming year, I expect much broader social movement across the entire political spectrum, pushing against corporate welfare to have safety standards for AI, and there will be a massive lobbying push against it.”
“It’ll be quite the clash,” Tegmark said.