Mindful Management of AI During Finals

by Dana Dawson

As we near the end of the semester, it's important to carefully consider your plan of action should you suspect students have used generative AI in a manner that you explicitly prohibited. In past blog posts, we strongly encouraged faculty members who suspect unacceptable AI use to begin by meeting with the student and starting with a conversation. However, in the case of final exams and projects, you may feel you don't have time for that course of action. In this post, we offer some suggestions for how to prepare for and address AI use during the finals period.

Ensure Guidelines Are Clear

Review your final exam and final project instructions to determine whether you have clearly outlined where the use of generative AI is and is not allowed. Build guidelines into assignments as well as the syllabus so that students have them readily available. Have a conversation with your classes to ensure they understand the limits of acceptable generative AI use, and state the steps you will take if you suspect a student has used generative AI (more on that below).

Test Your Final Exams and Final Projects Using Generative AI

Run final exam questions or final project prompts through tools such as ChatGPT and Claude.AI and prompt the tools to take the exam or complete the project. Note that in ChatGPT, you can simply copy and paste the entire exam or project prompt and rubric into the tool and ask it to generate a response. Claude.AI allows you to upload a PDF and enter a prompt. If you find that the tools can successfully complete your exams or assignments, reconsider the questions and prompts. Can you link questions or project prompts to in-class work that will draw on students' past experiences? Can you add reflective or metacognitive questions that are difficult to replicate using generative AI? See this EDvice Exchange blog post for assessment ideas that are less prone to AI use.
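If you would rather test a batch of questions at once than paste them in one at a time, the same check can be scripted. What follows is a minimal sketch, not a recommended workflow: it assumes you have an OpenAI account, the openai Python package installed, an API key stored in the OPENAI_API_KEY environment variable, and access to a chat model. The model name and sample questions are placeholders you would replace with your own.

```python
# Minimal sketch: send each exam question to a generative AI model and print
# its answers, so you can judge how easily the model completes your assessment.
# Assumptions: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder questions -- replace with your own exam items or project prompts.
questions = [
    "Define a false positive and a false negative, and give an example of each.",
    "Apply the framework we developed in class to the week 10 case study.",
]

for i, question in enumerate(questions, start=1):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: use whichever model you have access to
        messages=[
            {"role": "user", "content": f"Answer this exam question:\n\n{question}"}
        ],
    )
    print(f"--- Question {i} ---")
    print(response.choices[0].message.content)
    print()
```

If the model's answers would earn a passing grade, that is a signal the question may need the kinds of revisions described above.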

Be Wary of AI Detectors

It has been well established that AI detectors are not reliably able to differentiate between human- and AI-written text. Assessments we conducted of Turnitin's AI detector and four other applications available for free online show that these detectors are prone to false positives (flagging human-written text as generated by AI) and false negatives (flagging AI-written text as written by humans). AI detectors should never be used as the sole basis for a judgment on whether a student has used AI; companies such as Turnitin acknowledge this, saying in their own explanatory materials that detector predictions should be taken with a grain of salt and that the instructor must ultimately make their own interpretation. Notably, Turnitin also indicates that a score of 20% or less AI-generated should not be considered valid. As you assess AI detector reports, keep in mind that there are currently no completely reliable detectors of generative AI use in writing available to instructors.

Step on the Brakes

Confronting possible cheating is always stress-inducing. We see a block of text or a pattern of answers that seems unlikely to have been produced by a student, and the stress response kicks in. This is not the optimal time to make a decision. Take a breath and step away. Consider factors that might be influencing your assessment of the student's work or your willingness to accept the results of an AI detector. Talk to a colleague or a CAT consultant, and carefully weigh all factors before deciding on your course of action.

You Can Still Have a Conversation with Students

If you strongly suspect a student of using generative AI in a manner you have stated is not acceptable, ask the student to meet, by Zoom if they are already off campus. If they are not able to meet prior to the end of the grading period, issue an Incomplete for the course and do not grade the final exam or project until you have met with the student. 

Have a Back-Up Plan

If you speak with the student and they do not admit to using generative AI, have an actionable plan for how to proceed. Consider how you might replicate the element you suspect they used AI to complete. Can you conduct an oral exam? Can they write, in person, an essay or a reflective statement on their process of solving the exam question or completing the project? To talk over your plan for considering possible AI use in these final weeks of the semester, don't hesitate to reach out to schedule a consultation with a CAT specialist.

Err on the Side of Caution

The suspicion that a student may be taking shortcuts can be upsetting, and we are all struggling to manage course design and delivery in the age of AI, but the risk of falsely accusing a student should be taken very seriously. A false accusation can derail a student's entire educational trajectory, and not only because of the possible impact on their GPA; more importantly, it can shake their trust in their faculty members, their experience with higher education, and their motivation to continue, particularly where their sense of belonging is tenuous. Turnitin has acknowledged that its detector is more likely to generate a false positive for English language learners or developing writers, because some of the writing patterns more common among these populations are the same patterns AI detectors look for when identifying AI-generated text. We must exercise the utmost caution in accusing any student and be sure to give them the benefit of the doubt when engaging in these conversations.

Plan for Next Semester

Finally, once finals are over and your grades are in, make an appointment with a CAT specialist to explore how to revise assignments that are particularly vulnerable to AI use. We can often avoid these problems in the future by revising our current assessments into ones that work better in the age of AI.