LLM: Opportunities and challenges
Large Language Models (LLMs) such as GPT-class systems have entered undergraduate education with remarkable speed, provoking enthusiasm, anxiety, and uncertainty. Their influence extends beyond simple automation; they are reshaping how students learn, how teachers teach, and how institutions conceptualise educational goals.
LLMs are genuinely remarkable. Not only can they handle routine tasks such as coding, summarisation, translation, format conversion and rephrasing, but their capacity to represent, generalise, and manipulate concepts across vast and complex multimodal domains would have seemed inconceivable even a decade ago. The semantic representations learned by these models appear, to a significant extent, to transcend individual languages, enabling the same underlying concepts to be expressed in new linguistic styles or entirely different modalities, ranging from prose and poetry to images and other forms of creative artefact. Their ability to support increasingly sophisticated forms of reasoning is also improving at a rapid pace, making them capable of critical analysis in complex domains, including mathematics. Notably, Terence Tao, one of the world's leading mathematicians, recently reported making progress on some long-standing open problems with the assistance of LLMs and related computational reasoning tools. In addition, multiple LLMs achieved gold-medal level performance at the 2025 International Mathematical Olympiad. It should be noted, however, that these successes involve expert mediation and do not translate automatically to novice learning.
Students gain
For students, the most compelling advantage of LLMs is the unprecedented access to personalised, on-demand tutoring. LLMs offer instant scaffolding, explanations at different levels, step-by-step derivations, a sounding board for critical analysis, and "ask again with alternate phrasing" possibilities that human tutors cannot always match. This not only benefits students who enter university with uneven preparation, limited access to support, or hesitation in asking basic questions in public settings, but also lowers the barrier to exploration: a student can query a concept repeatedly, request varied examples, or ask for analogies without fear of judgement. As Terence Tao and many other users of LLMs have found, LLMs can also aid in advanced problem-solving and critical thinking.
Teachers’ perspective
From the perspective of teachers, LLMs offer substantial gains in productivity. They help generate draft lecture notes, alternate examples, practice problems, and visual diagrams; they can translate complex arguments into simpler forms or create graded levels of difficulty for diverse learners. Teachers can quickly design interactive exercises, scenario-based questions, or automated feedback tools that can provide detailed, individualised feedback that would be hard to produce otherwise. These affordances can make ambitious pedagogy feasible at scale. Additionally, LLMs can help draft responses to students, produce clarifications, or explain multiple solution strategies, not only reducing routine workload, but adding diversity and quality to the delivery of education.
The caveats
However, there are caveats. LLMs are instrumentally powerful, but their pedagogic value is conditional, context-dependent, and fragile. Learning, not only in writing-intensive disciplines in the humanities and the social sciences, but also in mathematics, computer science, engineering, and the sciences, requires productive struggle. LLMs offer premature closure, and students can obtain correct-looking answers long before they have wrestled with the underlying concepts. This flattening of the cognitive struggle risks shallow comprehension, weak transfer, and overconfidence without competence. The pedagogic value of LLMs does not lie in the correctness of their outputs, but in how their use shapes the learner's cognitive processes. Tools that optimise performance without strengthening understanding may, paradoxically, impede learning. Indiscriminate use of LLMs thus poses a definite threat to learning.
Moreover, LLMs are trained for coherence, not for correctness. The reasoning tools to verify correctness are still a work in progress. As such, LLMs produce patterns rather than explanations, coherence rather than ground truth, and fluency rather than understanding. A non-discerning user may not be able to disentangle this epistemic tension, may increasingly outsource judgement rather than just labour, and may be vulnerable to absorbing confident but flawed outputs.
Assessment issues
Academic integrity also becomes a central tension. Indeed, take-home assignments, which once had such great pedagogic value, are beginning to lose their significance. Moreover, it appears unsatisfactory and regressive to fall back on in-class examinations as the only way to assess student learning. Assessment validity has become a serious problem, and effective strategies for addressing it are not immediately obvious. Institutions need to worry not just about cheating, but about graduates who appear competent yet lack core skills and understanding.
The central question, therefore, is not whether LLMs should be used in undergraduate education, but under what norms, constraints, and pedagogic designs their use genuinely enhances learning rather than merely accelerating output. The challenge is not the technology itself but the pedagogic imagination and institutional will needed to integrate it responsibly. LLMs will evolve and are here to stay. Universities need to cultivate thoughtful norms, redesign assessments, support faculty in integrating LLMs into pedagogy, and train students in discerning, reflective use. Only then can LLMs enrich rather than erode the intellectual purposes of higher education.
The writer is with the Department of Computer Science, and Centre for Digitalisation, AI, and Society, Ashoka University
Published on December 27, 2025