Shortcut or Reward? How Generative AI Exposes the Incentives Behind Learning
A reflection on what generative AI reveals about higher education’s deepest assumptions - and why the real challenge isn’t the technology itself, but the systems that shape how we teach, assess, and define meaningful work.
9/7/2025 · 5 min read
Over the past 15 to 18 months, I’ve listened and watched with intrigue as my colleagues across higher education have responded - sometimes anxiously, sometimes dismissively - to the growing presence of large language models. The conversations often begin with concern about student learning or academic integrity, but they quickly spiral into broader claims: that AI will kill critical thinking, flatten originality, or make humans stop thinking altogether.
But I find that most of these arguments lay the blame in the wrong place.
It is my perspective that the issue isn’t that the machines are becoming too powerful. It’s that the institutions surrounding them have rarely been consistent about what they value in the first place.
When people use large language models to draft reports, prepare research summaries, or expedite writing they otherwise wouldn’t have time to complete, that’s not a failure of technology. I find that it is, in fact, a reflection of the ecosystem we’ve built - and the things we’ve chosen to reward. If surface-level completion is what’s being measured, then tools that accelerate the surface are simply doing what the system asks of them. The concern isn’t really about the tool...it’s about what the tool reveals.
What that tool reveals is an underlying discomfort with ease. There’s a persistent tendency to treat friction as a stand-in for seriousness. What I’ve noticed is that any tool that reduces friction is met with suspicion, as though difficulty itself is proof of depth. In practice, this ends up reinforcing legacy systems that often demand time and effort without clarity about what those things are supposed to yield.
That’s not to say difficulty has no place. The kind of intellectual work I care about is often difficult - but not for its own sake. It requires curiosity, synthesis, and sustained attention. It asks people to see connections they hadn’t seen before. But I find that much of the difficulty embedded into academic and professional knowledge systems is arbitrary and extractive. It rewards endurance over insight. It equates hours spent with seriousness of thought. And it treats any tool that alters the rhythm of production as a threat - even if that change creates space for better thinking.
At the same time, I’m not suggesting we treat every emerging tool as neutral or benign. I don’t think uncritical adoption is the answer. But I also don’t think pretending the tools aren’t here (or treating their use as inherently illegitimate) is sustainable. Responsible use begins with recognition, not avoidance.
In my own work, especially in mentorship, large language models have been most valuable not for producing output, but for clarifying thought. When I’m working through a problem or trying to explain something more clearly, I sometimes use an LLM to test whether I’m actually being clear. It doesn’t provide answers. It reflects the structure of my thinking back to me, and that, at times, is enough to shift how I approach the next part of the task. In fact, this very piece was developed through a process of iterative prompting, reflection, and refinement using a large language model. The tool didn’t write it - it helped shape the questions I needed to ask myself in order to write it more clearly.
None of this means we should ignore the real concerns surrounding generative systems. But I find that the more productive conversations are happening when people start by asking how to use them well, rather than whether they should exist at all. That’s a different kind of responsibility - one rooted in use, not avoidance.
I’ve seen the same thing among trainees and collaborators, particularly those grappling with complex framing questions in research or conceptual development. Some use LLMs to simulate critique, refine definitions, or pressure-test the precision of a sentence. That reflects a calibrated use of the tool - integrated into a broader process of sense-making, not replacing it. It’s not a shortcut. It’s scaffolding.
That stands in contrast to what I’ve observed as the dominant institutional response. The trend has been toward detection and restriction - AI policies, surveillance tools, honor codes, and declarations of non-use. I came across data noting that more than 60 percent of colleges and universities identify AI detection and prevention as their top implementation priority, while fewer than 25 percent have made significant efforts to revise curriculum or assessment structures. I also came across data from a survey of nearly 800 professionals in the educational technology sector, which found that only 39 percent of their institutions had acceptable-use policies in place, and just 9 percent reported that their organizations’ cybersecurity frameworks were prepared for AI-specific risks. I find that this reflects a reactive posture, one grounded in compliance, not pedagogy.
What’s striking is that this defensiveness seems uniquely persistent in higher education. When I talk to colleagues and friends outside the academy - in industry, in policy - the tone is different. They’re not asking whether these tools should exist; they’re asking how best to use them. In those spaces, the presence of this technology is already assumed, and the conversation is about calibration and responsibility. Which makes the academic community’s posture - a fixation on detection and avoidance - feel not only reactive, but oddly out of step with the broader world.
Proceeding with caution is warranted. But caution isn’t the same as resistance. And I worry that the institutional instinct to regulate usage is often a substitute for doing the harder work of rethinking what intellectual effort looks like...or how we might assess it differently in light of the tools now in the room.
Meanwhile, usage of generative AI tools is already widespread. I learned that in the Digital Education Council’s 2024 Global AI Student Survey - which included 3,839 respondents across 16 countries - 86 percent of students reported using AI in their studies, with 54 percent using it weekly or more. Use cases ranged from information search to drafting, grammar checking, and summarizing. Other surveys echo these findings. A longitudinal study at a selective U.S. institution also documented widespread LLM use in academic work within two years of ChatGPT’s release.
And yet, many of the assignments, tasks, and workflows now being “protected” - summaries, outlines, polished drafts - were already poor proxies for depth. If a language model can complete a task with minimal input and return something indistinguishable from what would be accepted, then the issue is the task...not the tool.
There are other ways forward. I recently came across a study in which students were encouraged to use ChatGPT openly during writing - specifically for brainstorming and revision. The participants produced more thoughtful writing and reported a stronger sense of connection to their ideas.
That represents the opportunity in front of the higher education complex at this moment - to design systems in which what we reward aligns closely with what we claim to value. That might mean building evaluations around conceptual development and iterative reasoning. It might mean assessing how people use tools, not just whether they used them. And it might mean recognizing that cognitive labor today includes knowing when and how to work alongside generative systems - with clarity, with transparency, and with care.
We don’t get there by pretending the technology doesn’t exist. And we certainly don’t get there by reinforcing the idea that struggle, in itself, is a proxy for depth. We get there by being more honest about what knowledge work should look like in a shifting world, and whether our systems are designed to support that.
Whether or not the machines are the problem, they are certainly revealing ours. And that, I find, is the part we’re least prepared to confront.
My sources and related readings:
EDUCAUSE. 2024 EDUCAUSE Horizon Report: Teaching and Learning Edition. EDUCAUSE; 2024.
Stewart B, Kimmons R. Higher education, artificial intelligence, and the future of learning: Landscape analysis and institutional responses. Int J Educ Technol High Educ. 2023;20(49):1-20.
HolonIQ. Global Education Outlook 2024 Survey Report. HolonIQ Research; 2024.
Digital Education Council. Global AI in Higher Education Student Survey. Digital Education Council; 2024.
Sok S, Heng K. Opportunities, challenges, and strategies for using ChatGPT in higher education: A literature review. J Digit Educ Technol. 2024;4(1):ep2401.
EDUCAUSE. Student Survey on AI and Higher Education. EDUCAUSE; 2024.
Kim J, Yu S, Detrick R, et al. Exploring students’ perspectives on Generative AI-assisted academic writing. Educ Inf Technol. 2025;30:1265-1300. https://doi.org/10.1007/s10639-024-12878-7
Khampusaen T. The impact of ChatGPT on academic writing skills and knowledge: Evidence from EFL argumentative essays. LEARN Journal. 2025;18(1):963-988.
© Avinash Chandran, 2025.