
The collective effort to build AI rules in universities

Students and professors at universities across the United States are working together to create guidelines for how artificial intelligence can and should be used in higher education classrooms. More than three years after ChatGPT launched, the question still has no definitive answer, and the absence of clear institutional policies is pushing the academic community toward a bottom-up approach, built by the people who live the day-to-day reality of teaching and learning.

To put the landscape in perspective, roughly 85% of American college students already use AI tools for academic work, according to a July survey by Inside Higher Ed in partnership with Generation Lab. The number is high, but what really stands out is the ambivalence that comes with it: more than half of these students admit to mixed feelings about the tools. They acknowledge that artificial intelligence makes many everyday academic tasks easier, like brainstorming ideas, structuring papers, and studying for exams, but they also sense that over-reliance can weaken the depth of their own thinking. On top of that, approximately 19% of students reported using AI to write entire essays, a data point that raises a serious red flag about the line between assistance and the replacement of intellectual effort.

English professor Dan Cryer, of Johnson County Community College near Kansas City, Kansas, captures the dilemma with an analogy that has become well known among educators: using generative AI to write an essay is like bringing a forklift to the gym. If the goal were simply to move weights from one place to another, the machine would handle it efficiently. But the real purpose of going to the gym is to build muscle, and that requires effort, repetition, and above all, personal involvement. In the same way, the goal of a writing course is not just to produce a finished text but to exercise the student's critical, argumentative, and creative abilities.

Cryer also points out that the arrival of generative AI has brought a new kind of workload for professors like him: trying to determine whether a student’s submitted work is genuinely their own. According to the professor, this problem is made even trickier by the fact that his community college, like many other higher education institutions in the United States, provides students with access to AI tools. And it is not just faculty carrying this extra burden. Students also have to deal with an unprecedented challenge: finding the line that separates responsible use from irresponsible use of these technologies.

The role of professors in setting boundaries

After dedicating a sabbatical to studying generative artificial intelligence, Professor Cryer reached a personal conclusion: educators should use AI tools as little as possible in their teaching practices. In his view, one of the primary purposes of these tools is precisely to keep people from thinking deeply, which runs counter to what education should promote.

Cryer says he now spends more time convincing his students about the value of putting in the effort to become better writers. He explains to his classes that the purpose of academic training lies in the process, not the final product, because society does not need more college essays. What society needs, according to the professor, is for students to go through the process of writing research papers so they can become better thinkers, capable of building coherent arguments and telling a reliable source from a questionable one. If students rely on AI to do that work for them, Cryer says, they may end up being deprived of the very education they enrolled to receive.

While educators like Cryer urge caution and warn about the risks of thoughtless adoption, other professors see artificial intelligence as a powerful ally when responsible use is placed at the center of the conversation. This disagreement among faculty, far from being a problem, is turning out to be productive.


The perspective of those who see AI as a learning partner

In Charlotte, North Carolina, Professor Leslie Clement reached a conclusion quite different from Cryer’s. For her, generative AI can work as a powerful collaborator capable of enriching student learning. Clement teaches English, Spanish, and African studies at Johnson C. Smith University, a historically Black university, and allows her students to use AI to create outlines for papers, get feedback on ideas, and compare different sources of information.

Clement’s approach goes beyond simply using technology as a tool. She co-created a course called African Diaspora and AI, which examines how artificial intelligence impacts people of African descent around the world. The course covers topics like the dangerous mining of cobalt in the Democratic Republic of the Congo, an essential component in AI technologies, while also discussing potential future benefits of these tools and the contributions of Black researchers and scientists to the field.

The course also explores the concept of Afrofuturism, encouraging students to use AI tools to reimagine their own futures. Clement says her goal has always been to foster critical, ethical, and inclusive thinking, and that she wants her students to apply these skills when using AI tools. She wants students not only to use the technology for good but also to question and critically examine it.

This decentralized approach, where each professor sets the boundaries for their own course, has been gaining traction because it acknowledges something important: there is no single policy that works for every field of study. A computer science course may benefit enormously from AI-based coding assistants, while a philosophy or literature class may require stricter restrictions to preserve the exercise of original argumentation. The result is a mosaic of guidelines that, while not uniform, reflects the real needs of each learning context.

Students as leaders in responsible use

One of the most interesting aspects of this movement is that students are not just following rules created by others. At several universities, students themselves are defining, in practice, how they want to interact with artificial intelligence during their education. And the accounts show that many are making remarkably thoughtful choices.

When AI works as an on-demand tutor

Anjali Tatini, a 19-year-old double-majoring in global health and neuroscience at Duke University in Durham, North Carolina, found her own ways to use AI productively. Last semester, when she was struggling with some concepts in a biology class, she turned to Gemini, Google’s AI chatbot, to ask for explanations.

Tatini says she would ask the chatbot to explain a concept, and if the response came back at too technical a level, she would simply ask it to simplify, which turned out to be genuinely helpful. In other courses, like chemistry, she used AI to generate practice problems before exams. In a marketing class, the tool helped with brainstorming. In statistics, it assisted with generating lines of code for data analysis.

For Tatini, having a kind of on-demand tutor is valuable because she cannot always reach her professors in person. Between assignments, other classes, and extracurricular activities, there is not always time to make it to every office hour. Having something that responds on her own schedule, much as a person would, makes a real difference in a packed undergraduate calendar.

But Tatini draws a clear line: she never lets AI write for her. She uses the tools to help structure and organize ideas, but the final writing is entirely her own. When she puts something out into the world, she explains, she wants to be able to say proudly that it is hers. She would never use AI to write something, because it simply would not sound like her.

Writing as a fingerprint

Not far away, in Chapel Hill, Hannah Elder, a 21-year-old junior at the University of North Carolina, shares a similar philosophy. Elder is a pre-law student pursuing a combination of public policy and philosophy. She says she uses generative AI to review her papers and check whether they align with course grading rubrics.

However, Elder says she would never use the tool to write or generate ideas on her behalf. For her, learning to formulate her own ideas and beliefs and communicate them through writing has been one of the most valuable parts of her entire college experience. Her concern is that if students start depending on AI to do that for them, they will not learn to think for themselves.

Elder still takes her notes on paper because she deeply believes that what you write and what you produce is like a fingerprint to the world. And she feels that, in some ways, that uniqueness is being lost as text-generation tools advance.

Despite these convictions, Elder does not believe the solution is to ban AI entirely. She recognizes that the technology will inevitably be part of the college experience and advocates for educators to integrate AI instruction into their curricula. The idea is that students learn to see the line between beneficial and harmful use. If professors incorporate these tools responsibly into their teaching, Elder says, AI will start to be seen less as a dishonest shortcut and more as a reality that can be navigated in a smart and productive way.


The case of a student who quit using AI

Not all students who experimented with AI tools kept using them. Aysa Tarana, a recent graduate of the University of Minnesota Twin Cities, was a freshman when ChatGPT launched. She says she started using the chatbot for small tasks, like suggestions for research topics. But over time, Tarana decided to stop using the tool because she felt she was outsourcing her own thinking, and it gave her a strange, uncomfortable feeling.

Tarana’s experience illustrates something many educators have observed: the use of AI in academic settings is not a one-way street. Some students try it out, realize the tool does not fit the kind of learning they are looking for, and opt for more traditional paths. This diversity of experiences reinforces the importance of not treating all students as a monolithic group and allowing each one to find their own balance.

Transparency and open dialogue as pillars

Another key aspect of this movement is transparency. Several professors are adopting the practice of having open conversations with their classes in the first weeks of the semester about what they consider acceptable or not when it comes to artificial intelligence. This frank conversation helps reduce anxiety among students, who often do not know whether they are crossing some ethical line by using a chatbot to clarify a concept or to review the grammar of a paper. When expectations are clear from the start, the environment becomes more collaborative and less punitive, which supports genuine learning.

This active participation from students is also producing a positive side effect: artificial intelligence literacy. By discussing limits, possibilities, and risks, students end up gaining a better understanding of how these tools work, what their limitations are, and why they sometimes produce inaccurate or biased information. This kind of knowledge is valuable because it transforms the student from a passive user into someone who can critically evaluate the outputs generated by AI. And this skill will only become more important in the job market and in daily life, regardless of their field.

A movement that could inspire universities worldwide

The landscape taking shape in American higher education could serve as an important reference for universities in other countries facing similar dilemmas. The clearest lesson so far is that waiting for regulation to come exclusively from the top, whether from the government or from educational networks themselves, can take too long. In the meantime, the collective building of rules between professors and students is proving to be a viable, practical, and surprisingly effective path to ensuring that artificial intelligence is incorporated into the educational process in an ethical and productive way.

Striking a balance between harnessing the potential of technology and preserving the essence of learning is not easy. But the fact that the academic community is seeking that sweet spot openly and collaboratively, with voices as diverse as those of Cryer, Clement, Tatini, Elder, and Tarana, is itself a meaningful step forward. The debate is far from over, but at least it is happening in the right places, with the right people at the table.
