
The Ars Technica AI coding agent test: Minesweeper edition

By Ars Technica

Image source: Ars Technica

Why it matters

Overall: 7/10. The lack of chording is a big omission, but the strong presentation and Power Mode options give this effort a passable final score.

Key takeaways

  • Of the four coding agents we tested, Gemini CLI gave Benj the most trouble; while interactive troubleshooting with the agent may have fixed the issue, as a “one-shot” test, the model completely failed.
  • Benj actually bent the rules and gave Gemini a second chance, specifying that the game should use HTML5, but the result simply did not work.
  • Gemini 3 coding models are available for other subscription plans that were not tested here.


Implementation, presentation, etc.

Gemini CLI did give us a few grey boxes you can click, but the playfields are missing. While interactive troubleshooting with the agent may have fixed the issue, as a “one-shot” test, the model completely failed.

Of the four coding agents we tested, Gemini CLI gave Benj the most trouble. After developing a plan, it was very, very slow at generating any usable code (about an hour per attempt). The model seemed to get hung up attempting to manually create WAV file sound effects, and it insisted on pulling in external React libraries and a few other overcomplicated dependencies. The result simply did not work.

Benj actually bent the rules and gave Gemini a second chance, specifying that the game should use HTML5. When the model started writing code again, it also got hung up trying to make sound effects. Benj suggested using the Web Audio API (which the other AI coding agents seemed to be able to use), but the result didn’t work, which you can see at the link above.
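For the curious, the Web Audio route really is simpler than hand-assembling WAV bytes. Here is a minimal sketch of a synthesized sound effect of the sort a Minesweeper clone might use; the function name, frequencies, and envelope values are our own illustration, not anything from Benj's test.

```typescript
// Minimal sketch of a synthesized sound effect via the Web Audio API,
// the route Benj suggested instead of hand-assembling WAV files.
// The tone shape and timing values here are illustrative assumptions.
const audioCtx = new AudioContext(); // browsers require a user gesture before audio starts

function playBlip(frequency = 440, durationSec = 0.1): void {
  const osc = audioCtx.createOscillator(); // tone generator
  const gain = audioCtx.createGain();      // volume envelope

  osc.type = "square";
  osc.frequency.value = frequency;

  // Start quiet-ish, then fade out fast so the blip doesn't end in a click.
  gain.gain.setValueAtTime(0.2, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + durationSec);

  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + durationSec);
}

// e.g., playBlip(880, 0.05) for a tile reveal, playBlip(220, 0.3) for a mine
```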

Unlike the other models tested, Gemini CLI apparently uses a hybrid system of three different LLMs for different tasks (Gemini 2.5 Flash Lite, 2.5 Flash, and 2.5 Pro were available at the Google account tier Benj paid for). When you’ve completed your coding session and quit the CLI interface, it gives you a readout of which model did what.

In this case, it didn’t matter because the results didn’t work. But it’s worth noting that Gemini 3 coding models are available for other subscription plans that were not tested here. For that reason, this portion of the test could be considered “incomplete” for Gemini CLI.
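As a rough illustration of what that three-model hybrid implies (and emphatically not Gemini CLI's actual dispatch logic, which Google hasn't published in this form), task-based routing can be as simple as a lookup keyed on task type. The task categories and model assignments below are assumptions:

```typescript
// Toy sketch of task-based model routing -- our own illustration of the
// general pattern, NOT Gemini CLI's actual implementation.
type GeminiModel =
  | "gemini-2.5-flash-lite"
  | "gemini-2.5-flash"
  | "gemini-2.5-pro";

type TaskKind = "housekeeping" | "codegen" | "planning";

function routeTask(kind: TaskKind): GeminiModel {
  switch (kind) {
    case "housekeeping": // cheap, high-volume chores (summaries, file listings)
      return "gemini-2.5-flash-lite";
    case "codegen":      // mid-tier code generation
      return "gemini-2.5-flash";
    case "planning":     // the hardest reasoning steps
      return "gemini-2.5-pro";
  }
}

// A session tally like the end-of-session readout could then be a simple count:
const usage = new Map<GeminiModel, number>();
function record(kind: TaskKind): void {
  const model = routeTask(kind);
  usage.set(model, (usage.get(model) ?? 0) + 1);
}
```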

OpenAI Codex wins this one on points, in no small part because it was the only model to include chording as a gameplay option. But Claude Code also distinguished itself with strong presentational flourishes and quick generation time. Mistral Vibe was a significant step down, and Gemini CLI, based on Gemini 2.5, was a complete failure on our one-shot test.
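For the unfamiliar, chording is the classic Minesweeper shortcut: clicking a revealed number whose neighboring flags already match its count reveals all of its remaining unflagged neighbors at once. Here's a minimal sketch of that rule; the board representation, types, and names are our own illustration, not Codex's generated code.

```typescript
// Sketch of the Minesweeper chording rule -- an illustrative example,
// not the code Codex produced. Board layout and names are assumptions.
interface Cell {
  mine: boolean;
  revealed: boolean;
  flagged: boolean;
  adjacentMines: number; // the number shown on a revealed cell
}

type Board = Cell[][];

function chord(
  board: Board,
  row: number,
  col: number,
  reveal: (r: number, c: number) => void, // the game's normal reveal routine
): void {
  const cell = board[row][col];
  if (!cell.revealed || cell.adjacentMines === 0) return; // numbered, revealed cells only

  // Collect in-bounds neighbors.
  const neighbors: Array<[number, number]> = [];
  for (let dr = -1; dr <= 1; dr++) {
    for (let dc = -1; dc <= 1; dc++) {
      if (dr === 0 && dc === 0) continue;
      const r = row + dr;
      const c = col + dc;
      if (r >= 0 && r < board.length && c >= 0 && c < board[r].length) {
        neighbors.push([r, c]);
      }
    }
  }

  // The chord only fires when the flag count matches the cell's number.
  const flags = neighbors.filter(([r, c]) => board[r][c].flagged).length;
  if (flags !== cell.adjacentMines) return;

  // Reveal every unflagged, unrevealed neighbor. If a flag was misplaced,
  // this reveals a mine -- the risk that makes chording a skill shortcut.
  for (const [r, c] of neighbors) {
    const n = board[r][c];
    if (!n.flagged && !n.revealed) reveal(r, c);
  }
}
```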

While experienced coders can definitely get better results via an interactive, back-and-forth code editing conversation with an agent, these results show how capable some of these models can be, even with a very short prompt on a relatively straightforward task. Still, we feel that our overall experience with coding agents on other projects (more on that in a future article) generally reinforces the idea that they currently function best as interactive tools that augment human skill rather than replace it.
