Updating Your AI Policy

Wed, Jan 29, 2025 at 4:30 AM

Many of us began last semester by developing course policies for when students can and cannot use AI on assignments, activities, and exams. I suggested that everyone needs to talk to their students about AI - especially if your policy is not to authorize any AI assistance - so that we can help students develop critical literacy skills and an understanding of appropriate use.

Since last semester, there have been some updates to best practices regarding class AI use and course policies, as well as some alarming new research that shows students just don’t understand what we mean when we say “your own work” or use terms like “academic honesty” or “academic integrity.” As Bowen and Watson write in Teaching with AI, “What we call cheating, businesses see as innovation” (5). Why wouldn’t we reuse material or automate tasks if it saves time and money? We often take for granted that students understand why asking them to do something the long way is useful for them.

Many of us have talked a lot in the last two years about helping students develop enough AI literacy to recognize that a large language model isn’t the same thing as a search engine. Since last semester, though, Google’s AI Overviews in search results, previously on a slow rollout, have become ubiquitous. In December, OpenAI demonstrated its new real-time search capabilities within ChatGPT. Telling students who aren’t versed in the nuances of these technologies that using generative AI and searching the web aren’t the same thing is bound to confuse them.

This semester, I recommend a few amendments to the policies many of us developed last semester:

Start at ground zero when talking about academic honesty

Instead of just a punitive statement that academic misconduct is bad and will result in a referral to the Dean of Students for a hearing, spend a few sentences in your syllabus - and more time in class, preferably before the first assignment is due - explaining what academic honesty is. What does plagiarism mean? Why do professors consider plagiarism bad, when copying what has already worked in the past is usually best practice outside of school? You might ask students to answer these questions themselves before you offer your own answers. These questions might seem so basic that we never thought to discuss them with students, but talking through these ideas helps students make sense of our courses, assignments, and practices that may not align with the customs of their current or future jobs.

My new paragraph on academic honesty explains that 

“your own work” means the assignment “was written by you alone, without AI assistance unless expressly authorized by me and without consulting any websites other than our course Moodle page and readings unless expressly indicated in the assignment.” 

I also explain when students are allowed to collaborate and brainstorm with each other and when they’re not. OU Libraries offers an Academic Integrity in Research & Writing Microcourse to build literacy around academic honesty.

Name specific AI tools and uses in your AI policy

One of the new best practices is to name which tools are and are not authorized, and under which circumstances. My updated AI policy says, “You’re allowed to use Grammarly and spellcheck/grammar check functions in Google Docs and Microsoft Word to proofread your spelling and grammar after your work is written.” I also describe when accepting rephrasing suggestions from tools like Grammarly crosses into murky territory (with encouragement for students to use the Writing Center instead and a reminder that they already pay for it!). I also warn students,

“There are also numerous custom GPTs that you can use – or develop on your own – to help ChatGPT write more closely to your own voice. These tools are incredibly time-saving for things like writing blanket emails or completing tedious work. They are not useful for you to demonstrate your success with our course learning outcomes.” 

Explain how AI use does or doesn’t connect to course learning outcomes

I try to remind students of the course learning outcomes frequently, so they understand where AI might interfere with their ability to meet these outcomes and where it might help them. For instance, students in one of my courses will give presentations at the end of the semester. I explain that use of generative AI might be helpful for these presentations, not in content but in form: 

“For any multimedia project in this course, you are encouraged to go nuts with image- and video-generators. You can create videos, images, comics, video games, and audio to supplement the research and writing you have performed by yourself. Have fun! This isn’t unauthorized aid because ‘making beautiful presentations’ isn’t a course learning outcome.”

If you’re not teaching a course focused on generative AI, you might also explain to students that because AI literacy isn’t a course learning outcome, you don’t have time to teach them appropriate and inappropriate uses or good AI skills. This can help explain why you’ve instituted a strong anti-AI policy.

Talk about the double whammy of detection tools and humanization tools

Students who have used generative AI to complete assignments in the past have most likely also used a humanization tool to skirt detection. Although we’ll never be able to persuade every student not to cheat, we can explain why cheating shortchanges them. In my updated policy, I acknowledge that faculty sometimes rely on detection tools that give false positives, but I note that humanization tools are often just as flawed. I remind students that detection and humanization tools are often owned by the very same companies, which make exaggerated promises to both faculty and students because what they really care about is making money, and I warn them not to pay for humanization - or to use a free humanization tool, which works even worse. By acknowledging the existence of these tools, I hope to cultivate an atmosphere in which students see our relationship as less adversarial. We’re a team, navigating this new moment together.

Discuss how new technologies require evolving pedagogies

Students today likely know that generative AI for the public is new, but they might not understand how new it is or what that newness means for teaching and learning. In my updated policy, I give a brief background:

“Generative artificial intelligence only became available to the public in November 2022. That’s not a lot of time for faculty to understand its many capabilities and limitations, redesign their assignments, and learn how to teach students about and with it. It’s also not a lot of time for students to develop AI literacy.” 

The end of my AI policy similarly reminds students that we are just at the start of the AI moment, which means academia is still figuring out how to navigate these tools. I remind my students that they often use generative AI for tasks it performs terribly (like creating a research bibliography - it sucks at that) and that faculty are often terrified of ANY use, when some uses can actually be helpful. I remind students that I’m happy to talk through appropriate or inappropriate uses if they have questions. By framing teaching as an evolving practice, I hope to help them understand why policies might fluctuate from course to course and why faculty can’t reinvent themselves, their courses, and their assignments overnight.

Conclusion

We might fear that students know more about generative AI than we do. They may have found detailed videos on social media explaining how to use generative AI to complete their homework without getting caught. But I also recently overheard a few students discussing how terrified they are of generative AI’s rapid spread. They were worried about automation replacing their jobs, and, like us, worried that they and their classmates aren’t going to actually learn anything in college if they rely too much on AI. 

We won’t be able to stop students from using generative AI in ways we don’t authorize, and we won’t be able to catch every use for sanctions. The best practice we can undertake is to talk to students. Hear their concerns and be inspired by their innovative uses. Work together to develop policies for class use or prohibition that are as meaningful to the students as they are to you.

References

Bowen, José Antonio, and C. Edward Watson. Teaching with AI. Baltimore: Johns Hopkins University Press, 2024.

Kies, Bridget. “Have You Talked to Your Students about AI?” CETL Teaching Tip, Sept. 5, 2024.

Venkatachary, Srinivasan. “AI Overviews in Search Are Coming to More Places Around the World.” Google Blog, Oct. 28, 2024.

OpenAI. “Search - 12 Days of OpenAI: Day 8.” YouTube, Dec. 16, 2024.


Save and adapt a Google Doc version of this teaching tip.


About the Author

Bridget Kies is Associate Professor of Film Studies and Production at OU and CETL’s faculty fellow for AI and teaching. She is the co-author of the article “From Attributions to Algorithm: Teaching AI and Copyright in Media Studies” in the journal Teaching Media and is currently writing a book for Routledge on teaching about AI in film and media studies.

Others may share and adapt under Creative Commons License CC BY-NC.

View all CETL Weekly Teaching Tips. Follow these and more on Facebook and LinkedIn.
