Mindful Management of AI During Finals

by Dana Dawson

As we near the end of the semester, it’s important to consider carefully your plan of action should you suspect students have used generative AI in a manner you explicitly prohibited. In past blog posts, we strongly encouraged faculty members who suspect unacceptable use of AI to start with a conversation with the student. However, in the case of final exams and projects, you may feel you don’t have time for that course of action. In this post, we offer some suggestions for how to prepare for and address AI use during the finals period.

Ensure Guidelines Are Clear

Review your final exam and final project instructions to determine whether you have clearly outlined where the use of generative AI is and is not allowed. Build guidelines into assignments as well as the syllabus to ensure students have them readily available. Have a conversation with your classes to ensure they understand the limits of acceptable generative AI use, and state the steps you will take if you suspect students have used generative AI (more on that below).

Test Your Final Exams and Final Projects Using Generative AI

Run final exam questions or final project prompts through tools such as ChatGPT and Claude.AI and prompt the tools to take the exam or complete the project. In ChatGPT, you can simply copy and paste the entire exam or project prompt and rubric into the tool and ask it to generate a response; Claude.AI also allows you to upload a PDF and enter a prompt. If you find that the tools can successfully complete your exams or assignments, reconsider the questions and prompts. Can you link questions or project prompts to in-class work that will draw on students’ past experiences? Can you add reflective or metacognitive questions that are difficult to replicate using generative AI? See this EDvice Exchange blog post for assessment ideas that are less prone to AI use.
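
If you would like to spot-check many questions at once, the same test can be scripted. Below is a minimal sketch in Python using the OpenAI API; the model name and sample questions are our own illustrative assumptions, and the copy-and-paste approach described above works just as well:

  # Minimal sketch: batch-test exam questions against a chat model.
  # Assumes the openai v1.x package and an OPENAI_API_KEY environment
  # variable; the model name and sample questions are placeholders.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  exam_questions = [
      "Explain the difference between mitosis and meiosis.",
      "A 2 kg mass accelerates at 3 m/s^2. What net force acts on it?",
  ]

  for question in exam_questions:
      response = client.chat.completions.create(
          model="gpt-4o",  # substitute any available chat model
          messages=[{"role": "user", "content": question}],
      )
      print("Q:", question)
      print("A:", response.choices[0].message.content)

If the printed answers would earn a passing grade, that is a strong signal the question needs rethinking.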

Be Wary of AI Detectors

It is well established that AI detectors cannot reliably differentiate between human- and AI-written text. Assessments we conducted of Turnitin’s AI detector and four other applications available for free online show that these detectors are prone to false positives (identifying human-written text as generated by AI) and false negatives (identifying AI-written text as generated by humans). AI detectors should never be used as the sole basis for a judgment on whether a student has used AI; companies such as Turnitin acknowledge this, saying in their own explanatory materials that detector predictions should be taken with a grain of salt and that the instructor must ultimately make their own interpretation. Notably, Turnitin also indicates that a score of 20% or less AI-created should not be considered valid. As you assess AI detector reports, keep in mind that there are currently no completely reliable detectors of generative AI use in writing available to instructors.
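
To make those two error types concrete, the sketch below scores a hypothetical detector against essays whose true origin is known in advance. The verdicts are invented for illustration and are not drawn from our assessments:

  # Illustrative only: invented verdicts from a hypothetical detector,
  # paired with each essay's known origin.
  # Each record is (flagged_as_ai, actually_ai).
  results = [
      (True, False),   # human essay flagged as AI -> false positive
      (False, True),   # AI essay passed as human  -> false negative
      (True, True),    # AI essay correctly flagged
      (False, False),  # human essay correctly passed
  ]

  human_total = sum(1 for _, ai in results if not ai)
  ai_total = sum(1 for _, ai in results if ai)
  false_positives = sum(1 for flag, ai in results if flag and not ai)
  false_negatives = sum(1 for flag, ai in results if not flag and ai)

  print(f"False positive rate: {false_positives / human_total:.0%}")
  print(f"False negative rate: {false_negatives / ai_total:.0%}")

Even a detector with a low false positive rate will wrongly flag some honest students in a large class, which is why a detector score alone can never settle the question.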

Step on the Brakes

Confronting possible cheating is always stress-inducing. We see a block of text or a pattern of answers that seems unlikely to have been generated by a student, and the stress response kicks in. This is not the optimal time to make a decision. Take a breath and step away. Consider factors that might be influencing your assessment of the student’s work or your willingness to accept the results of an AI detector. Talk to a colleague or a CAT consultant and carefully weigh all factors before deciding on a course of action.

You Can Still Have a Conversation with Students

If you strongly suspect a student of using generative AI in a manner you have stated is not acceptable, ask the student to meet, by Zoom if they are already off campus. If they are not able to meet prior to the end of the grading period, issue an Incomplete for the course and do not grade the final exam or project until you have met with the student. 

Have a Back-Up Plan

If you speak with the student and they do not admit to using generative AI, have an actionable plan for how to proceed. Consider how you might replicate the element you suspect they used AI to complete. Can you conduct an oral exam? Can they write an essay or a reflective statement, in person, on their process of solving the exam question or completing the project? To talk over your plan for addressing possible AI use in these final weeks of the semester, don’t hesitate to reach out to schedule a consultation with a CAT specialist.

Err On the Side of Caution

The suspicion that a student may be taking shortcuts can be upsetting, and we are all struggling to manage course design and delivery in the age of AI, but the risk of falsely accusing a student should be taken very seriously. A false accusation can derail a student’s entire educational trajectory, and not only because of the possible impact on their GPA; more importantly, it can shake their trust in their faculty members, their experience with higher education, and their motivation to continue, particularly where their sense of belonging is tenuous. Turnitin has acknowledged that its detector is more likely to generate a false positive in the case of English language learners or developing writers, as some of the writing patterns more common among these populations are the same patterns AI detectors look for in identifying AI-generated text. We must exercise the utmost caution in accusing any student and be sure to give them the benefit of the doubt when engaging in these conversations.

Plan for Next Semester

Finally, once finals are over and your grades are in, make an appointment with a CAT specialist to explore how to revise assignments that are particularly vulnerable to AI use. We can often avoid these problems in the future by revising our current assessments into ones that work better in the age of AI.

Faculty Adventures in the AI Learning Frontier: One Professor’s Take

by Jeff Rients, Ph.D.

Title card: Faculty Adventurers in the AI Learning Frontier

This week we’re happy to share a video featuring Michael L. Schirmer, who teaches the course Integrative Business Practices in the Fox School of Business. Michael shares his experiences with generative AI both with his students and as part of his personal scholarly practice.

Thank you so much for your insights, Michael!

If you’d like more guidance on exploring how to use AI tools in your class or assistance running your assignments through GenAI to better assess the value of using it, please visit our Faculty Guide to A.I. or book an appointment for a one-on-one consultation.

Faculty Adventures in the AI Learning Frontier: Assignments and Activities that Address Ethical Considerations of Generative AI Use

by Benjamin Brock, Ph.D., and Dana Dawson, Ph.D.

Title card: Faculty Adventurers in the AI Learning Frontier

In response to our fall 2023 survey on the use of generative AI (GenAI) in the classroom, we received a number of assignments and activities faculty members have designed to tackle the ethical issues raised by GenAI. Ethical concerns related to GenAI include such considerations as the implications for privacy when these tools are used, the possibility of over-reliance on GenAI for analytics and decision making, and exposure to inaccurate or biased information (Brown & Klein, 2020; Masters, 2023; Memarian & Doleck, 2023). The following activities and assignments equip students with the capacity to critically evaluate when and how it is appropriate to use GenAI tools and to protect themselves against possible risks of AI use.

Sherri Hope Culver, Media Studies and Production faculty member and Director of the Center for Media and Information Literacy (CMIL) at Temple University, asks students in her GenEd course, Media in a Hyper-Mediated World, to complete a reflection on the implications of AI use. She first asks them to listen to an episode of the podcast Hard Fork centered on data privacy and image manipulation and to read the Wired article "The Call to Halt ‘Dangerous’ AI Research Ignores a Simple Truth" (Luccioni, 2023). Students are then instructed to write a 300-word reflection referencing the assigned material that addresses both concerns they have about the use of AI and ways in which it could make their lives or society better. Professor Culver provides the following prompts to guide students’ thinking:

  • What does critical thinking mean in a tech-centric, AI world?    
  • How might AI affect your free will?    
  • How might AI affect your concerns about privacy or surveillance?    
  • How should we prepare ourselves for an increasingly AI world?    
  • How might AI influence the notion of a public good?   
  • How might AI influence K-12 education?    
  • How might AI influence family life?    
  • What worries you about AI?    
  • What excites you about AI?    
  • What is our responsibility as media creators when we use AI?    
  • It has been said that AI will make life more "fast, free and frictionless." Should everything be "fast, free and frictionless"? Should that be the aim?
  • Is AI the end of truth?

In a dynamic, interactive, reflection-oriented honors course exploring the four pillars of Temple’s Honors Program (inclusive community, intellectual curiosity, integrity in leadership, and social courage), Dr. Amanda Neuber, Director of the Honors Program, uses AI as the discussion anchor for the unit on "integrity in leadership." Through multiple media modalities, students delve into the ethical and unethical uses of AI in academia. Students are asked to read "How to Use ChatGPT and Still Be a Good Person" and watch a related video exploring the meaning of integrity. Students then discuss whether or not AI can be used with integrity, how academic culture might frame one’s decision to use AI, and the "peaks and pitfalls" of AI use. Beyond the many important conversations focused on AI itself, the technology serves as a reference point for what it means to lead with integrity and how to promote that quality in teams and organizations.

In another interactive, thought-based classroom initiative, mechanical engineer Dr. Philip Dames is bringing ethics and AI to Temple’s College of Engineering. Dr. Dames has reimagined the "trolley problem," a classic philosophical exercise in which one is faced with an ethical dilemma, for the modern era: students in his class consider how AI should make decisions, using autonomous cars as the basis for deliberation. They are prompted to think about how a vehicle should be programmed to respond to different scenarios using examples from MIT Media Lab’s Moral Machine website. Students then reflect on their scenario-based activities and experiences and engage in prompt-guided written reflection. Prompts include questions such as:

  • How does the ownership model of autonomous vehicles affect how they should behave? For example, does it make a difference if a vehicle is owned by a single private citizen vs. publicly owned by the city and hired by individuals? 
  • What surprised you about the aggregated responses from different people shown to you at the end of the exercise? 
  • Are there other factors that you feel are important but were not considered in Moral Machine?

In this way, students not only explore elements to consider when designing autonomous vehicles but, through critical thinking and hands-on engagement, make concrete what was once only abstract.

If you’d like more guidance on exploring how to use AI tools in your class, please visit our Faculty Guide to A.I. and/or book an appointment for a one-on-one consultation.

Brown, M., & Klein, C. (2020). Whose data? Which rights? Whose power? A policy discourse analysis of student privacy policy documents. The Journal of Higher Education, 91(7), 1149–1178. https://doi.org/10.1080/00221546.2020.1770045  

Masters, K. (2023). Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Medical Teacher, 45(6), 574–584. https://doi.org/10.1080/0142159X.2023.2186203  

Memarian, B., & Doleck, T. (2023). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, 100152. https://doi.org/10.1016/j.caeai.2023.100152

Faculty Adventures in the AI Learning Frontier: Teaching with Generative AI in Health Sciences Education 

by Jonah Chambers, M.A., and Cliff Rouder, Ed.D.

Title card: Faculty Adventurers in the AI Learning Frontier

As part of our fall 2023 survey on generative AI (GenAI) in the classroom, we heard back from a wide variety of Temple faculty who teach a broad range of courses. In this installment, we’re going to take a look at how three health science instructors are incorporating GenAI tools like ChatGPT into their teaching.

Scott Burns, Professor of Instruction in the Department of Health and Rehabilitation Sciences, had his graduate physical therapy students prompt ChatGPT to create a generic plan of care for a specific health condition and then provide a detailed explanation of how the exercises it prescribes may or may not properly address that condition. In addition to having students demonstrate their knowledge of what constitutes a good plan of care by evaluating and critiquing the AI-generated plan, Dr. Burns explains that the goal of the activity is to highlight that while generative AI may be useful for broad recommendations, it "currently lacks the ability to provide decision-making and rationale backed by anatomy, neuroscience, motor control/learning, and physiology."

Before he launched the assignment, Dr. Burns surveyed his class about their experiences with and perceptions of GenAI. He also wanted to gauge the level of anxiety surrounding it, given the concern in health-related fields that AI could replace the human provider. Students reported that they appreciated the opportunity to interact with AI, since experience levels varied and some had never used it before. Dr. Burns plans to administer a more formal survey at the end of the semester to see if student perceptions of AI have shifted.

Alissa Smethers, Assistant Professor in the Department of Social and Behavioral Sciences, had her nutrition students prompt ChatGPT to create a 1-day, 2,000 kcal dietary pattern for a popular diet of their choice (Keto, Paleo, Atkins, etc.) and then submit the outputs to an established dietary analysis program and answer the following questions:

  • Does the plan provide 2,000 kcal? If not, how far off is it?
  • Do the macronutrient composition and food choices reflect the popular diet you chose? If not, what foods would you add or remove?

Her students were surprised at how far off ChatGPT was at times; in some cases, the generated plans differed by over 800 kcal from the target according to the dietary analysis program. The goal was not only to ensure that students learn correct information but also that they develop the critical thinking and research skills crucial to their work as nutrition professionals. In the future, she is considering having students evaluate how well ChatGPT can tailor dietary patterns based on culture, income level, or other more personalized factors, as well as reflect on the limitations of using a generative AI tool to create dietary patterns versus working with a nutrition professional such as a Registered Dietitian.
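
The comparison her students perform is easy to demonstrate with a worked example. In the Python sketch below, the food items and kcal values are invented stand-ins for the numbers a real dietary analysis program would return:

  # Illustrative only: invented kcal values standing in for the
  # output of a real dietary analysis program.
  TARGET_KCAL = 2000

  analyzed_plan = {
      "breakfast: eggs with avocado toast": 450,
      "lunch: grilled chicken salad": 520,
      "dinner: salmon with cauliflower rice": 610,
      "snacks: mixed nuts and cheese": 380,
  }

  total = sum(analyzed_plan.values())
  print(f"Analyzed total: {total} kcal")                          # 1960 kcal
  print(f"Deviation from target: {total - TARGET_KCAL:+d} kcal")  # -40 kcal

Here the plan misses the target by only 40 kcal; her students sometimes saw gaps twenty times larger.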

Leah Schumacher, Assistant Professor in the Department of Social and Behavioral Sciences, invited Health Science and Human Behavior students to roleplay as someone who either wants to avoid or already has a chronic disease and has turned to ChatGPT to provide answers or advice. She first asked students to pick one of the diseases they covered in her class and then pose questions about it to ChatGPT such as “Why did I have a stroke?” or “How do I avoid getting cancer?” She then had students prepare a submission for the assignment that included: 

  1. The full prompt they submitted to ChatGPT
  2. The full response ChatGPT provided
  3. A short, 5- to 7-sentence reflection comparing the ChatGPT response to what they had learned in class through textbook readings, lectures, videos, etc. Specifically, she asked students to reflect upon the extent to which ChatGPT’s response hit upon aspects of the biopsychosocial model they studied in class, whether it touched upon major risk factors they covered, and whether ChatGPT presented any information that was new to them.

Dr. Schumacher was careful to have students clearly distinguish between text generated by ChatGPT and their own written work in their submission. Not only did this assignment have students apply their understanding of the biopsychosocial model to a diverse set of cases, it also gave them the opportunity to reflect upon (and illuminate problematic aspects of) how people may use ChatGPT in their everyday lives.

Each of these professors has illuminated one of the most powerful ways of using GenAI in teaching: instead of taking its outputs at face value, they have their students question, evaluate, analyze, and verify them using a variety of methods. Not only does this give students an opportunity to apply their knowledge (a proven way to promote deep learning), but it also helps them sharpen their critical thinking skills surrounding the use of GenAI. These skills will likely prove helpful not only now but also in their future professional lives.

In the next installment, we’ll be looking at ethics in AI. In the meantime, if you’d like more guidance on exploring how to use AI tools in your class or assistance running your assignments through GenAI to better assess the value of using it, please visit our Faculty Guide to A.I., attend a workshop on using generative AI for teaching and learning, or book an appointment for a one-on-one consultation.

Faculty Adventures in the AI Learning Frontier: AI and (First Year) Writing

by Jeff Rients

Title card: Faculty Adventurers in the AI Learning Frontier

As part of our fall 2023 survey on AI in the classroom, we heard back from a wide variety of Temple faculty who teach a broad range of courses. In this installment, we’re going to take a look at what three First Year Writing instructors are doing with AI tools like ChatGPT.

First year writing instructor Jacob Ginsburg incorporated "AI and education" as a theme in his course. His students read Ted Chiang’s "ChatGPT Is a Blurry JPEG of the Web," Matteo Wong’s "AI Doomerism Is a Decoy," and several academic articles about the role of AI in education. In class, each student writes a paragraph about what it means to them to be a member of their generation. As homework, they then give ChatGPT four tasks:

  1. Respond to the same prompt the students wrote about in class (i.e., what it means to be a member of their generation).

  2. Make an argument FOR the use of AI in education.

  3. Make an argument AGAINST the use of AI in education.

  4. Complete a "silly" or "fun" task that each student devises on their own.

Afterwards, everyone discusses their prompts and results in class.

Professor Amy Friedman challenges her students to write an essay in which they summarize several disparate, current articles on generative AI in education and learning. She has used articles such as Valerie Pisano’s "Label AI-Generated Content," Allison R. Chen’s "Research training in an AI world," and Naomi S. Baron’s "How ChatGPT Robs Students of Motivation to Write and Think for Themselves." Her goal is for each student to formulate and articulate their own opinion about the role of generative AI in their own learning and education. Beforehand, students explore ChatGPT in class, including asking it to write in response to previous essay prompts. The class then collectively assesses the results and compares them to their own writing.

Meanwhile, at Temple’s Japan campus, Ryan Rashotte has developed two activities for his first year writing students. In the first, students writing essays about a film ask ChatGPT to write a paragraph on how a specified element of the film supports a theme they are exploring. In response, students write about the strengths and weaknesses of ChatGPT’s argument. In the second assignment, students working in groups explore which art form they think is superior: television or film. As part of this investigation, they query ChatGPT for reasons in support of their choice. Students identify new or interesting arguments and assess their strengths and weaknesses. They are asked to consider how well the ChatGPT output would work if it were incorporated into their essay.


In the next installment, we’ll be looking at the way AI tools are being used in a variety of health sciences learning environments. In the meantime, if you’d like more guidance on exploring how to use AI tools in your class, please visit our Faculty Guide to A.I. and/or book an appointment for a one-on-one consultation.

Faculty Adventures in the AI Learning Frontier: Introduction

by Dana Dawson, Ph.D.

Title card: Faculty Adventurers in the AI Learning Frontier

Associate Director’s log, stardate 4616.2. Temple University. While minding our own business over here, someone invented generative AI and made it readily accessible to the people of Earth, which includes our students. We find ourselves in a strange and unfamiliar landscape; computer probability projections are useless due to insufficient data¹.  Some faculty members have boldly entered this new terrain.

_______

We decided to call this blog series Faculty Adventures in the AI Learning Frontier not only because Google’s new AI tool Gemini suggested the title but because of its resonance with Star Trek. The original television series and its many reboots and spin-offs depict intrepid adventurers exploring the deep reaches of space, the “final frontier,” as we were reminded at the beginning of each episode. With the rapid evolution of generative AI, we find ourselves standing on the threshold of a strange new world. Teaching in the age of AI is, indeed, a new frontier; instructors are boldly going where no instructor has gone before. With great ingenuity, and at times trepidation, faculty members throughout Temple have begun to explore what it means to integrate AI into their class planning and delivery. 

Over the course of this series, we will showcase examples of activities and assessments that Temple faculty members used in their classes in Fall 2023. In our March 11th post, we will feature assignments used in First Year Writing courses. March 25th will focus on implementation of generative AI assignments and activities in the health sciences, and our final post on April 15th will include examples of how instructors across different disciplines have addressed ethics in the use of generative AI. Some of the featured activities and assessments have been designed to help students practice implementing generative AI in their fields, while others encourage students to critically interrogate the impact of AI.

Whether we like it or not, generative AI is now part of our world, and our students’. It is our responsibility not just to familiarize ourselves with these tools but to help our students make decisions about their use and, should they choose to use them, to do so effectively and ethically. We hope this series will give you some concrete and applicable examples of how to address generative AI in your courses.

_______

¹ "The Gamesters of Triskelion," original airdate: January 5, 1968.

My trip down the AI rabbit hole

In her end-of-semester wrap-up, our fearless leader reflects on her intellectual adventures springing from the emergence of A.I. as a factor in teaching and learning.

For further help on the role of A.I. in your classroom, visit our Faculty Guide to A.I. or book an appointment for a one-on-one consultation.

Stephanie Laggini Fiore, Ph.D., is Associate Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.

Testing Assignments in ChatGPT

Jeff Rients and Jennifer Zaylea

As our understanding of artificial intelligence and its uses in higher education grows, we continue to expand our Faculty Guide to A.I. Previously, the Center for the Advancement of Teaching produced a video introduction to prompting in ChatGPT. Our newest A.I. resource is a short demonstration of testing an assignment using the tool.

This video provides only a brief introduction to how to think about testing the tasks you give your students. You may need to develop additional prompts and/or use the Regenerate Response option repeatedly in order to thoroughly understand how ChatGPT can respond to each of your assignments.
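
For instructors comfortable with a little scripting, the effect of clicking Regenerate Response several times can be approximated by requesting multiple completions for the same prompt. Here is a minimal sketch using the OpenAI Python API; the model name and assignment prompt are illustrative assumptions, and the video’s chat-based workflow needs no code at all:

  # Sketch: sample several independent responses to one assignment
  # prompt, akin to regenerating a response in the chat interface.
  # Assumes the openai v1.x package and an OPENAI_API_KEY environment
  # variable; the model and prompt are placeholders.
  from openai import OpenAI

  client = OpenAI()

  assignment = (
      "Write a 200-word reflection on how supply and demand "
      "affect housing prices."
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # substitute any available chat model
      messages=[{"role": "user", "content": assignment}],
      n=3,             # three independent completions in one request
  )

  for i, choice in enumerate(response.choices, start=1):
      print(f"--- Attempt {i} ---")
      print(choice.message.content)

Reviewing several attempts side by side gives a better sense of the range of work a student could submit than any single response does.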

Need additional help wrestling with the challenges posed and opportunities offered by A.I.? The CAT is here to help! Send an email to cat@temple.edu or book an appointment for a one-on-one consultation.

Survival Guide to AI and Teaching, pt. 10: Talking to Your Students About AI and Learning

Stephanie Laggini Fiore

While we have dealt with many aspects of AI and teaching in this blog series, we want to end the series with the most important aspect—talking to your students about AI and learning. One of the realities of the present moment is that we are all in the midst of a disruptive change, one that neither we nor our students fully understand how to navigate. Therefore, whether or not we decide to allow the use of AI in our classes, it is vitally important to discuss these tools with our students in productive ways. 

At the CAT, we have seen plenty of draconian language on syllabi over the years ("Don’t even think about cheating; you will be caught!!"), but the old adage about catching more flies with honey than with vinegar holds true here as well. Establishing trust in the learning environment, having clarifying conversations about AI and the choices you have made for the course, engaging students in thinking critically about the use of these tools and what they mean for society and for learning, and welcoming students’ thoughts will be far more effective than setting up an adversarial dynamic. We recommend dedicating time to discussing generative AI during the first week of the semester and then re-engaging students briefly before each written assignment. You should, of course, take some time to go over your AI syllabus statement, explaining your reasons for the decisions you have made, but it is important to go beyond that conversation to allow space for students to reflect on what it means to use these tools for learning.

Here are some thoughts on how to speak to your students about AI:

  • Consider using an anonymous poll that asks about the extent to which your students have used these tools. This will provide a window into how familiar your students are with generative AI.
  • Begin the conversation by asking students what they know about generative AI. You may be surprised by what they do (or don’t) know. Continue with a clarifying conversation on how generative AI tools work, including their benefits and pitfalls. It will be most effective if you can show examples of those benefits and pitfalls; for instance, a hallucination (an inaccuracy) or biased content these tools might reproduce.
  • Engage students in thinking about how your assignments help them achieve the goals of your course. We often recommend using Bloom’s Taxonomy for this exercise. If, for example, you have a goal that reaches the level of evaluation on the taxonomy, how will the assignments (if completed by the student) aid in their attainment of that goal?
  • Think about how to connect your students to the value of what they are learning. Often students see our courses (especially our required courses) simply as hoops to jump through on the way to a degree. Can you articulate for your students the reason why what they are learning will benefit them? What relevance will it have for their professions, personal growth, future academic work, or communities? Helping students to find meaning in what they are learning will be key to managing AI use.
  • Include a discussion about AI and academic integrity. Why is academic integrity important? How can we think about the use of generative AI in ethical terms? Use case studies to have them ponder whether particular uses are ethical; for instance, ask how they would feel if you offloaded all student feedback to an AI. Would that be an ethical use of the tool, or would it be a breach of your responsibility as an instructor?
  • Ask students to discuss important philosophical questions that will get them thinking about the nature of learning, thought, and voice, such as:
    • Why do we write? What kinds of thinking happen when we write? Query students about how they use writing outside of class: do they keep a journal, write their opinions on social media, text friends when something important happens? Why might they turn to writing to express their thoughts?
    • What does it mean to cede our thinking and our voice to non-sentient machines? Do we want to live in a world where none of our passions and ideas are expressed in the way that we want to express them, and where originality of thought is replaced by a process of scraping a dataset for answers? 

Talking to a Student When You Suspect Cheating

You’ve followed our advice above and talked to your students about AI from day one of the semester, clarifying permissible use in your course. Still, you suspect that a student in your class has used AI in ways that you have not allowed. The first step is always to talk to the student. Here are some tips for tackling this discussion: 

  • Don’t take it personally! Cheating can often feel like a personal attack and a betrayal of all the work you’ve put into your teaching. Remember that a student’s decision to use AI to take shortcuts is probably about them, not about you. 
  • Check your biases. Is your suspicion of your student’s work well-founded? Would you have the same concerns if the work had been handed in by other students? 
  • Beware of falsely accusing students outright. As was established in a previous post, our ability to accurately identify the use of generative AI tools is, at present, quite weak.
  • Ask the student to meet with you. Simply say something like “I have some concerns about your assignment. Please come to see me.” 
  • When you meet with the student, try not to be confrontational (remember that you may not be certain they used AI in an unauthorized manner). Instead, start by asking them questions that give them a moment to tell the story of their writing process, such as: How were you feeling about the assignment? What do you think was challenging about it? Why don’t you tell me what your process was for getting it done? If there is research involved, you can ask what sources they used. If they were writing on something they were supposed to read or visit (an art exhibit, for instance), ask pointed questions that get at whether they actually engaged in that activity.
  • Then state your concerns: I’m concerned because the writing in this assignment doesn’t seem to match the writing in your other assignments, and the AI detector tool flagged it as AI-written. Go over any inconsistencies, odd language, repetition, or hallucinated citations with the student.
  • Use developmental language. Remember that your student may have used generative AI without realizing it is considered cheating, or there may have been factors that made them feel that they needed to cheat. A conversation with your student can be a learning opportunity for them. 
  • Discuss with colleagues in your department what a reasonable penalty might be for unauthorized use of generative AI. Consider also when it might be necessary to contact the Office of Student Conduct and Community Standards. (Remember, however, that speaking with your student is always the first step before taking further action.) If your conclusion is that the student cheated, you’ll have to decide whether to allow them to complete the assignment again on their own (perhaps with a penalty) or to give no opportunity to right the ship. Consider that we are in a developmental stage with these tools, and it might be good to allow the do-over if the student owns up to it.
  • Self-reflect. Given that students often take shortcuts for reasons related to course structure, review our blog post on academic integrity and AI, and consider whether your course is designed to reflect those best practices.

In a world in which AI is here to stay, it is essential that we support students’ ethical and productive interaction with these tools. No matter the discipline, we need to take on the responsibility of developing our students to adapt to this new reality with full awareness of the implications of AI use for learning, for work, and for society. 

We know that this is all new and it is not easy—the CAT is here to help. To book an appointment with a CAT educational developer or educational technology specialist, go to catbooking.temple.edu or email cat@temple.edu.

A Survival Guide to AI and Teaching, pt. 9: AI and Equity in the Classroom

Dana Dawson, Ph.D.

In previous posts in this series, we noted how generative AI can perpetuate biases and exacerbate the digital divide. Here, we will explore in more depth the potential for these tools to widen the institutional performance gaps that impact learning in higher education, but also the potential for generative AI to create a more equitable learning environment. We conclude with suggestions for what you can do to minimize possible negative impacts of generative AI for students in your courses. 

Rapid improvements in the capabilities of generative AI have a tendency to provoke doom spiraling, and there are indeed some very real concerns we will have to grapple with in coming years. While generative AI at times produces helpful summaries of content or concepts, it is prone to error. Students with tenuous confidence in higher education or in their ability to succeed in their studies are less likely to engage deeply in their coursework (Biggs, 2012; Carver & Scheier, 1998) and may rely excessively or uncritically on AI tools. Over-reliance on generative AI to reduce effort, rather than as a mechanism for jumpstarting or supporting conceptual work, robs students of opportunities to practice and develop the very creativity, critical thinking, and analysis skills that are likely to become increasingly valued as AI grows more widely available. In addition, where we neglect to carefully vet content created by AI, we run the risk of repeating erroneous information or perpetuating disinformation.

The prospect of bias and stereotypes impacting students’ experience in higher education arises not only from the content generative AI produces (Bender et al., 2021; Ferrara, 2023) but also from the challenge of determining whether a student has appropriately used the tools. AI detectors cannot reliably differentiate human- from AI-generated content. Faculty must be aware that judgments of whether students relied excessively on AI may be influenced by assumptions that have more to do with factors such as race, gender, or spoken language fluency than with student performance.

Finally, faculty who wish to encourage students to experiment with and integrate the use of AI tools must be aware that inequitable access to broadband internet and digital tools, along with varying levels of preparation to use the tools effectively, may differentially impact students. Variable access to broadband prior to or during their postsecondary studies raises digital equity concerns. Some students will come to our classes well-equipped to engineer prompts and vet generated content while others will be encountering these technologies for the first time. That high-quality AI applications are often behind paywalls compounds these issues.

On the other hand, some scholars and policy-makers have pointed to ways these tools can be productively used to support student learning and success. AI tools such as ChatGPT can be used to fill in knowledge gaps, whether related to a field of study or to being a college student more generally, that are particularly salient for first-generation students or those whose previous educational experiences insufficiently addressed certain skills or topics. GPT-3’s responses to prompts such as "What are the best ways to study?" and "How do I succeed in college?" generate strategies that are useful and can be expanded upon with additional prompts. Warschauer et al. point out that for second language learners, the ability to quickly generate error-free email messages or to get feedback on one’s writing reduces the extra burden of studying disciplinary content in a second language. Students can prompt generative AI tools to explain concepts using relatable analogies and examples. For students with disabilities, generative AI can serve as an assistive technology, for example by improving ease of communication for those who must economize words, assisting with prioritizing tasks, helping practice social interactions, or modeling types of communication.

RECOMMENDATIONS

1. Reduce the potential for bias to impact your assessment of unauthorized student use of generative AI tools by determining the following before the start of the coming semester:

  • Which assessments have the most potential for unauthorized use?
  • Is there an alternative mechanism for assessing student learning for those assessments most prone to unauthorized use?
  • What are my guidelines for appropriate use of generative AI tools in this class?
  • Can I reliably detect inappropriate use?
  • Is my determination of inappropriate use subject to bias? 
  • What will my next steps be if I suspect inappropriate use?

If you’re not sure whether to allow use of generative AI tools, review our decision tree tool.

2. Clearly communicate your classroom policies on use of generative AI and talk with (not to) your students about those policies, ensuring they understand acceptable limits of use.

3. If you are encouraging the use of generative AI tools as learning tools, consider questions of access by:

  • Assessing the extent to which your students know how to use and have access to the tools; and
  • Showing students how to use the tools in ways that will benefit their education (for example, using follow-up prompts to focus initial queries). Temple University Libraries has created an AI Chatbots and Tools guide to help our students learn to judiciously use these tools.

4. Educate students on how generative AI tools may be biased, can perpetuate stereotypes and can be used to increase dissemination of mis- and dis-information.

5. Help students find their own voice and value a diversity of voices in writing and other content that has the potential to be generated by AI tools.

6. Consider a Scholarship of Teaching and Learning (SoTL) project.

In the next (and final) installment of our series, we’ll focus on how to talk to your students about generative AI. In the meantime, if you’d like to discuss AI or any other topic related to your teaching, please book an appointment for a one-on-one consultation with a member of the CAT staff.

Works Referenced

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).

Biggs, J. (2012). What the student does: Teaching for enhanced learning. Higher Education Research & Development, 31(1), 39-55.

Carver, C. S., & Scheier, M. (1998). On the self-regulation of behavior. Cambridge, UK: Cambridge University Press.

Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv:2304.03738.

Dana Dawson serves as Associate Director of Teaching and Learning at Temple University’s Center for the Advancement of Teaching.