Students, professors, and the responsible use of artificial intelligence in higher education
Students, professors, and artificial intelligence tools are, in practice, reshaping what it means to learn and teach in higher education. Instead of just books, handwritten notes, and long hours in the library, today’s scene blends notebooks, slides, PDFs, and an open browser tab with an AI chatbot ready to answer questions in seconds.
This is the reality professors and students at colleges and universities across the United States are navigating right now. The technology is no longer a novelty: more than three years after the arrival of systems like ChatGPT, generative AI has become part of everyday academic life, even in humanities fields traditionally tied to writing and argumentation, such as English, philosophy, cultural studies, and history.
A survey conducted by Inside Higher Ed in partnership with Generation Lab paints a clear picture of just how widespread the technology is: about 85% of the undergraduate students surveyed said they were already using AI for coursework, mainly for brainstorming ideas, drafting papers, organizing study sessions, and preparing for exams. Roughly 19% admitted to using AI to write entire essays, a figure that raises red flags for many educators.
The most interesting finding is that, even with all this heavy usage, most students don’t see AI as a magic solution. More than half of those who use these tools reported mixed feelings: on one hand, the tools help them learn; on the other, they make them think less deeply about subjects, almost as if the hardest part of reasoning gets handed off to the algorithm.
It’s in this tension between genuine support and dangerous shortcut that universities, students, and professors are creating, in practice, their own rules for the responsible use of artificial intelligence in higher education.
When AI feels like too easy a shortcut
For English professor Dan Cryer, who teaches at Johnson County Community College near Kansas City, using AI to write a college essay is like bringing a forklift to the gym. He sums up the metaphor like this: if the only goal were to move weights from one side to the other, the machine would get the job done. But the whole point of going to the gym is to build muscle. In academic writing, the logic is the same: what matters isn’t just having a finished text — it’s the process of thinking, researching, organizing ideas, and building arguments.
After spending a sabbatical devoted to studying generative AI, Cryer arrived at a pretty clear stance: in his view, professors should use these tools as little as possible in the classroom, because one of the core functions of these systems is precisely to reduce cognitive effort — and that effort is an essential part of a college education.
He also describes a direct impact on faculty workload. With the rise of AI, it’s harder to tell whether a text was actually written by the person whose name is on it. AI detection tools aren’t foolproof, can lead to unfair accusations, and don’t solve the problem on their own. At the same time, many colleges provide institutional access to these systems, which makes usage even more widespread. The result: professors spend extra time trying to evaluate authenticity and authorship on top of the actual content.
For students, the pressure has increased too. Now, on top of turning in assignments on time and following formatting guidelines, they need to figure out — often on their own — the line that separates responsible use from clearly inappropriate use of AI. Cryer considers this unfair: it’s not enough to just hand students access to the tools; there needs to be a clear conversation about what makes sense from a teaching standpoint.
He’s been emphasizing to his classes that the main goal of college isn’t to flood the world with more essays, but to use writing as a way to train the mind. In his words, society doesn’t need more college papers; it needs people who can build solid arguments, tell reliable sources from unreliable ones, and articulate ideas clearly. When students outsource everything to AI, they may be robbing themselves of exactly that intellectual growth, even if it earns them a good grade in the short term.
When AI becomes a learning partner
On the other end of the spectrum, some professors see generative AI as an ally for deepening learning, as long as it’s used thoughtfully. In Charlotte, North Carolina, Leslie Clement, a professor of English, Spanish, and African studies at Johnson C. Smith University, integrates AI openly and deliberately into her teaching.
She doesn’t just allow her students to use artificial intelligence — she encourages responsible use. The focus is very specific: using the models to organize ideas, build draft outlines, compare different information sources, and test interpretations of complex topics. The final writing, though, needs to come from the student’s own thinking.
Clement also helped create a course called African Diaspora and AI, which explores the impact of AI on people of the African diaspora around the world. The course covers, for example, cobalt mining in the Democratic Republic of Congo, a critical element in the supply chain for batteries and equipment tied to AI. The class combines social critique, ethics, and technology, highlighting the risks while also pointing to potential future opportunities these systems could offer Black communities, and acknowledging the contributions of Black researchers and scientists in the field.
One of the key themes of the course is Afrofuturism, exploring how students can use AI to reimagine their futures and tell new stories about themselves and their communities. Technology enters the picture both as a practical tool and as an object of critical analysis, helping students develop a less naive and more informed view of the role algorithms play in society.
For Clement, the central goal is to shape people with critical, ethical, and inclusive thinking — and that includes knowing how to ask tough questions of the technology itself. She wants her students to understand that it’s not enough to use AI for good; they also need to learn how to interrogate the models’ responses, identifying possible biases, factual errors, or gaps in perspective.
AI as a pocket tutor: studying with the help of chatbots
In Durham, North Carolina, 19-year-old student Anjali Tatini is pursuing a double major in global health and neuroscience at Duke University. For her, AI has become a kind of on-demand study partner. In more challenging courses like biology, she turned to Gemini, Google’s chatbot, whenever a concept seemed confusing.
The approach was pretty straightforward: she’d type in the concept that wasn’t clicking and ask for an explanation in plain language. When the answer came back too technical, she’d adjust the request — asking it to simplify, give examples, or relate the idea to everyday situations. That way, she created a question-and-answer loop that worked almost like a quick private tutoring session, adapted to her level of understanding in that moment.
In chemistry, Tatini used AI to generate extra exercises and practice questions to prepare for exams. In marketing, the tool helped during the brainstorming phase for campaign ideas and projects. In statistics, it supported her in creating code snippets for data analysis, which she would then review and adjust herself.
The biggest draw for her is the flexibility. With a packed schedule of classes, internships, extracurriculars, and work, there isn’t always time to make it to every professor’s office hours. Having a chatbot that can answer questions at any hour, even with its limitations, became a real game-changer.
But there’s a clear boundary in how Tatini uses the technology: she doesn’t hand over authorship of her writing to AI. The tool can help organize topics, suggest paragraph structures, or flag inconsistencies, but the final draft is always hers. She values the feeling of looking at a paper and recognizing her own voice in it — if the entire thing were AI-generated, she says she wouldn’t be able to feel proud of the result, because it wouldn’t sound like something genuinely hers.
Authorship, identity, and the value of writing by hand
Not far away, in Chapel Hill, 21-year-old student Hannah Elder attends the University of North Carolina and is preparing for a career in law. Her coursework includes subjects like public policy and philosophy, where dense reading and argumentative writing are central.
She does turn to AI for specific tasks, like checking grammar, making sure a paper aligns with what the professor asked for in the rubric, or identifying potential weak spots in an argument. However, she draws a clear line: she doesn’t use the tool to generate ideas or draft sections of essays. For Elder, developing her own thinking and learning to articulate it clearly is one of the most valuable parts of the college experience.
One symbolic detail of her routine stands out: Elder still prefers to take her notes on paper, using a notebook and pen. In her view, the way a person writes, from sentence construction to word choice to their mistakes and triumphs, works as a kind of fingerprint in the world. With the massive use of AI, she feels that personal mark risks being diluted into increasingly standardized texts, generated by systems trained on data from many people at once.
Despite this careful stance, Elder doesn’t advocate for a total ban on AI in college. She has a pragmatic view: these tools are already part of academic and professional life, and they’re not going away. What she considers essential is that professors explicitly teach students how to use AI beneficially, both to support their studying and to avoid misuse.
In her view, when professors incorporate AI responsibly into their courses — for example, showing examples of good and bad use, discussing ethical boundaries, and asking students to explain how they used the technology — the tool stops being seen as some kind of secret cheat code and simply becomes part of the digital reality everyone needs to learn to navigate.
Between the risk of outsourcing thought and the chance to innovate in education
The stories of Cryer, Clement, Tatini, and Elder reveal different facets of the same challenge in higher education: defining the rules of the game for the use of artificial intelligence without falling into extremes. On one side, there’s the very real fear of watching students trade the effort of thinking for answers generated in seconds. On the other, there’s the opportunity to use these tools to explain difficult material more effectively, expand access to educational support, and spark richer discussions in the classroom.
On many campuses, the solutions are still being worked out. Some institutions officially make generative AI available to students and faculty, while simultaneously updating academic integrity policies to require transparency about AI use in graded assignments. Others are creating entire courses to discuss the social, economic, and ethical impacts of the technology, connecting the topic across different fields of study.
In the middle of all this, one point is starting to gain consensus: responsible use of AI doesn’t simply mean restricting or greenlighting tools. It means teaching, in a practical way, what these technologies do well, where they tend to fail, why they can reproduce biases present in the data they were trained on, and how they can serve as support without stealing a student’s chance to develop their own ability to think, argue, and write.
The future of the university hinges on this delicate balance. If AI is treated only as a villain, use will likely continue anyway, just driven underground, beyond the reach of critical debate. If it’s embraced without standards, there’s a real risk of turning academic education into a mere formality, with papers that look sophisticated in form but are hollow in substance. Between those two extremes, professors and students are, in practice, writing a new chapter on how to learn in a world where algorithms are also part of the classroom.
