Teachers often take pride in having “been there, done that.” Years of experience, they believe, give them the instinct to spot shortcuts students might take. But when it comes to Artificial Intelligence (AI), an evolving technology, this confidence is increasingly replaced by presumption and suspicion. “Is this AI-generated?” becomes the first question asked about any submission.
As a result, what a student has actually written often becomes secondary. The moment an assignment is uploaded, it is first passed through an AI detection tool. If the software raises a flag, the work is presumed to be AI-generated. The evaluation no longer begins with questions such as “What is the student arguing?” or “How well has the concept been understood?” It begins, and often ends, with a far narrower inquiry: how much AI was used?
This shift has quietly but decisively changed the benchmark of academic assessment. Understanding, originality, and reasoning have been replaced by a single expectation: Do not use AI. Marks are deducted mechanically, in direct proportion to an algorithmic report. If a detection tool claims that 45 per cent of a submission is likely AI-generated, 4.5 marks out of 10 are docked, leaving little room for human judgment.
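To make the mechanics concrete, here is a minimal sketch of the proportional deduction rule described above, assuming the penalty scales linearly with the detector's percentage; the function name and the 10-mark scale are illustrative, not drawn from any institution's actual policy:

```python
def award_marks(ai_likelihood_percent: float, max_marks: float = 10.0) -> float:
    """Deduct marks in direct proportion to an AI-detection score.

    A report of 45 per cent "likely AI-generated" deducts 4.5 of 10 marks,
    leaving 5.5, regardless of what the student actually argued.
    (Hypothetical rule for illustration only.)
    """
    deduction = max_marks * (ai_likelihood_percent / 100.0)
    return max_marks - deduction


print(award_marks(45.0))  # 5.5
```

The point of the sketch is what is missing from it: nothing in the rule asks what the student understood, argued, or got wrong.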
Even more troubling is the complete absence of procedural fairness in this detection system. A student who fails an assignment or even an examination due to an AI detection report has no meaningful forum to appeal against the algorithmic report, which itself displays the disclaimer that “false positives are a possibility” and “the assessment should not be used as sole basis for adverse action against a student.” In conventional examinations, students are entitled to apply for re-evaluation, trusting that another human examiner might notice what the first overlooked. With AI detection, however, the report is treated as final and unquestionable. The algorithm speaks, and the matter ends.
The unfairness becomes even starker when assessment components are viewed together. A student may perform exceptionally well in a written examination, demonstrating clarity of thought and conceptual understanding, but still secure low overall grades because an assignment was flagged by an AI detection tool. There is no normalisation, no balancing of performance, and no academic discretion exercised. The entire system rests on an unquestioning faith in tools whose reliability is, at best, contested even by their own developers. While institutional AI policies emphasise responsible AI use by students, they remain largely silent on the responsibility of faculty to critically evaluate and contextualise AI detection reports.
This over-dependence has also begun to distort the very purpose of education. A simple online search for “How to avoid AI detection” reveals countless, often effective strategies. Students are advised to paraphrase sentences, strip descriptive words, identify and remove tell-tale AI patterns, restructure paragraphs, vary sentence rhythm so the writing looks “inconsistent,” or even deliberately introduce grammatical errors. None of this improves understanding. It merely improves the chances of slipping past an algorithm.
Students focus on how to bypass detection, while teachers focus on how to catch it, slowly turning the classroom into a space of mutual suspicion. Regardless of who “wins” this contest, meaningful learning is sacrificed. The goal is no longer to think better or write better, but simply to avoid being flagged.
Instead of policing assignments with flawed tools, educational institutions need to rethink how assessment is designed in the age of AI. Not all forms of evaluation are equally vulnerable to misuse. Face-to-face vivas, classroom discussions, presentations, and seminars allow students to demonstrate understanding in real time, leaving little room for AI substitution. Similarly, greater weightage should be given to written examinations that test analytical and critical thinking rather than to formulaic report-writing that AI can generate in minutes. The future of learning depends on how thoughtfully we adapt to technology, not how effectively we police technology.
The writer is with National Law University, Delhi