When the machine answers, who actually learns?
Artificial Intelligence is no longer a distant promise — it has firmly planted itself in the daily routine of virtually every college student. That massive presence, however, is keeping professors around the world up at night. What should function as a support tool for learning has, in practice, become a real threat to students’ ability to think for themselves. A report published by The Guardian gathered testimonies from more than a dozen faculty members, most from the humanities, describing a landscape of desperation and improvisation inside American universities. The accounts show that the problem goes far beyond simple digital cheating — we are witnessing a structural transformation that calls into question the very purpose of attending a university when any answer can be manufactured in seconds by a machine.
Lea Pao, a literature professor at Stanford University, has been experimenting with ways to reconnect her students to offline learning. She asks them to memorize poems, participate in recitation events, and observe works of art in person, in the real world. The goal is to restore the embodied experience of learning and steer students away from the temptation of outsourcing intellectual work to AI. As Pao herself acknowledges, no assignment is completely AI-proof. Rather than trying to police the use of technology, she bets on creating experiences meaningful enough that students see value in the real process of learning.
But it does not always work. One telling case illustrates the scale of this challenge. Pao assigned an activity that seemed AI-proof: she asked her students to visit a local museum, observe a painting for ten minutes, and write a few paragraphs describing the experience. Something deeply personal, in-person, and subjective. Yet one student tried to visit the museum on a Monday, when it was closed, and instead of going back another day, turned to Artificial Intelligence. The result was technically polished and grammatically flawless but completely empty of meaning — too perfect without saying anything, in the professor’s words. That episode has become something of a symbol of what many educators are facing: the silent replacement of genuine experience with a convincing simulation.
A crisis that goes far beyond cheating
The problem, according to the professors interviewed, is not just academic dishonesty itself. It is what that dishonesty reveals about students’ relationship with the learning process. When a student does not even try to complete the assignment and turns to AI as a first instinct, something deeper is at play. There is a disconnect between intellectual effort and the expected outcome, as if the diploma were the only goal and the path to get there could be outsourced to an algorithm.
College degrees in the United States often cost hundreds of thousands of dollars and result in decades of debt. In recent years, public trust in American higher education has plummeted. With the possibility of AI increasingly replacing independent thought, one question becomes even more pressing: what exactly is a college education for?
Most faculty members interviewed by the Guardian expressed the view that dependence on Artificial Intelligence is fundamentally at odds with the development of human intelligence they are tasked with nurturing. They described desperate attempts to stop students from using AI as a substitute for thinking, at a time when the technology threatens to reshape not just education but everything from financial markets to social relationships and even armed conflicts.
The language professors used is revealing. One said the situation is driving everyone crazy. Another wrote in an email that generative AI is the bane of their existence. And a third was even more blunt: they wished they could push ChatGPT, Claude, and Microsoft Copilot off a cliff. Dora Zhang, a literature professor at the University of California, Berkeley, said she now discusses AI with her students not through the lens of cheating or academic honesty but in frankly existential terms: what is this doing to us as a species? 🤯
Is critical thinking really under threat?
Recent studies point to potentially catastrophic effects of AI on students’ cognitive skills and critical thinking. Michael Clune, a literature professor and novelist at Ohio State University, said many students have already become unable to read, analyze, and synthesize information — skills that are fundamental in any field. In a recent essay, he warned that universities embracing the technology without discernment are setting themselves up for a kind of self-lobotomy.
Ohio State University itself, where Clune teaches, began requiring all freshmen to take a generative AI course and branded itself the first AI-fluent university, promising to embed the technology across every program. Clune said nobody knows exactly what that means in practice. In his case, as a literature professor, these tools seem to work against the educational goals he has for his students.
When a student uses a generative tool to answer a complex question, they receive a ready-made response that is organized and seemingly complete. The problem is that the entire cognitive process that should happen between the question and the answer simply vanishes. The research, the doubt, the clash of ideas, the reworking of arguments — all of it is wiped out in a single click. And it is precisely those intermediate steps that build the ability to analyze, question, and form well-grounded opinions. Without that journey, a student might turn in a flawless paper but walks away from the process having learned almost nothing.
The professors interviewed for the report say they are noticing concrete changes in the classroom. Written assignments display a strange uniformity, with structures and vocabulary that do not match the level those same students demonstrate in in-person assessments. Some faculty describe the feeling of grading papers written by a single entity, so striking is the resemblance among submitted work. That involuntary standardization is one of the clearest signs that AI is shaping not only what students write but how they think — or stop thinking.
Cognitive skills work like muscles. If they are not exercised, they atrophy. Students who spend four or five years of college delegating the hardest part of intellectual work to a machine graduate with diplomas but without the mental muscle needed to solve problems AI still cannot handle 🤔
The threat to the humanities and the future of work
This is the heart of what many humanities professors fear: that a technology which can be a cutting-edge tool in other fields may spell the end of their own. Alex Karp, co-founder and CEO of Palantir, fueled those anxieties by recently declaring that AI will destroy jobs in the humanities. On the other hand, Daniela Amodei, president and co-founder of Anthropic — herself a literature major — said exactly the opposite: that studying the humanities will be more important than ever.
Interestingly, several tech and finance companies have stated that they are actively looking to hire humanities graduates for their creativity and critical thinking skills. Enrollment data from some universities suggest that the humanities, in decline for decades, may be starting to experience a resurgence in the AI era, with early signs pointing to a reversal of the long-running shift away from English and related majors toward STEM programs.
But some professors add an important caveat: the humanities may survive, yet only as a privilege for the few. When he predicted the end of humanities jobs, Karp insisted there would be more than enough positions for people with technical training. Several faculty members voiced concern that AI will deepen a growing divide in American higher education. A small number of elite students will have access to a traditional liberal arts education, largely free of technology, while everyone else will receive what Zhang described as a degraded, soulless form of vocational training administered by AI instructors.
Matt Seybold, a professor at Elmira College in New York who has written critically about what he calls technofeudalism, said he fully expects to see a bifurcation in education. On one side, those who can afford a human-led education. On the other, those who will be trained by machines.
What professors are trying to do
Facing this scenario, professors are not sitting idle. The report describes a range of strategies, from completely redesigning courses to adopting exclusively in-person and oral assessments. Many faculty have turned to oral exams, handwritten notebooks, and class participation as grading criteria. Some require students to submit transparency statements describing their work process.
Others got even more creative. There are reports of professors who embedded random words like broccoli and Dua Lipa in the middle of assignment prompts to confuse language models — and catch students who did not even read the instructions before pasting everything into ChatGPT. The frustration of having to sift through AI-generated work is constant. Danica Savonick, an English professor at SUNY Cortland, summed up the feeling shared by many: it creates hours of extra work and makes me feel like a cop.
Karl Steel, an English professor at Brooklyn College, took a more balanced approach. He allows students to use AI to research and prepare their presentations, acknowledging that the technology has made the content richer and more interesting. However, when it is time to present, they must speak from minimal notes, in front of a photo of a text they annotated by hand. Written assignments are only given after the class has discussed the topic together. He acknowledges that a student could record the conversation, feed a chatbot the transcript, and produce a paper that way, but he believes that would be more effort than most students would be willing to put in.
There is also a growing movement toward collective organizing. Last year, the American Association of University Professors (AAUP), which represents 55,000 faculty members nationwide, published a report warning that universities were adopting the technology uncritically and with little transparency. Some faculty unions have started including AI protections in their contracts to establish oversight mechanisms, give professors greater decision-making power, and protect their intellectual property from feeding machines that may soon replace them.
Initiatives like the Against AI website offer resources and solidarity for educators who feel alone trying to reinvent the wheel while their administrators and deans promote AI relentlessly. The site provides a list of assignment ideas to mitigate AI use, including oral exams, requirements for photographic documentation of notes, and analog journals.
Universities embrace AI — and professors are left on their own
While many professors try to contain the damage, university administrations are heading in the opposite direction. More than a dozen universities have partnered with OpenAI in a 50-million-dollar initiative the company says will accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI. The California State University System joined forces with several of the world’s largest tech companies to build an AI-powered higher education system. Multiple universities have introduced undergraduate and graduate degrees in AI.
The plans are ambitious, but they offer little practical guidance on what professors should do with students who cannot read more than a few paragraphs at a time or who turn in essays generated in seconds by a machine. Recent surveys indicate that up to 92% of students have already used AI in their academic work, and the numbers keep climbing fast, even as a growing number of them express concerns about the technology’s accuracy and about the academic integrity of using it.
AI dependence among professors themselves is also increasing, which raises the dystopian possibility that the college experience may soon be reduced to AI systems grading AI-generated work — a conversation between two robots, as a New York Magazine report put it.
Megan McNamara teaches sociology at the University of California, Santa Cruz, and created a guide for professors across disciplines to deal with AI-related academic fraud. She notes that cultural differences between the humanities and STEM fields, or between qualitative and quantitative social sciences, tend to shape how faculty respond to student AI use. When she suspects someone has used AI, she has a conversation with the student, treating the episode as an opportunity for growth and for strengthening the student-professor relationship.
Is there light at the end of the tunnel?
Despite the challenging landscape, some signs of hope are emerging. Several professors said they are beginning to notice a growing discomfort among students themselves with the technology and its dominance over their lives.
Clune shared that his students are increasingly curious about his flip phone, which he adopted after realizing the smartphone was destroying his attention span. Zhang, at Berkeley, said she believes Gen Z is realizing it is the guinea pig of a massive social experiment. Seybold, at Elmira College, pointed to a growing sense among students that something is being stolen from them.
Seybold noted that many students who reject AI are driven by environmental concerns and distrust of companies they see as partly responsible for weakening democracies and making the world more violent. At the University of Michigan, this has translated into concrete activism. The institution announced plans to invest 850 million dollars in an AI data center in partnership with Los Alamos National Laboratory, even as it cuts funding for arts and humanities research, and the plan has drawn campus protests.
Eric Hayot, a comparative literature professor at Penn State University, said he tries to convince his students that tech companies are trying to make them dependent on their products. In his view, these companies give away tools for free partly because they hope to hook an entire generation of students. That topic, Hayot said, is now part of every course he teaches.
We can choose to be human
As resistance grows, so does the emphasis on those intrinsically human qualities that set people apart from machines — exactly the qualities a humanistic education aims to cultivate.
Clune put it plainly: there is a kind of defeatism, this idea that there is no stopping the technology and that resistance is futile, that everything will be crushed in its path. That needs to change. We can decide that we want to be human.
That idea is also at the core of Pao’s approach at Stanford. She compares her work to planting seeds and hoping they grow. You plant and you wait, Pao said of efforts that sometimes feel like tilting at windmills. You hope that in the long run you are helping your students become happy human beings who can take a walk, experience things, and describe the world on their own.
The challenge is enormous and there is no easy answer. Higher education institutions need to rethink their assessment models, their grading criteria, and even what it means to prepare a professional in the 21st century. If education continues measuring success solely by deliverables — written papers, exams, dissertations — AI will always find a shortcut. The way forward seems to lie in valuing the process, the intellectual journey, the questions asked along the way.
At the end of the day, what makes education irreplaceable is not the information it delivers but the transformation it sparks in those who are willing to walk the path for real. And no machine, no matter how advanced, can walk that path for someone else ✊
