Faculty Adventures in the AI Learning Frontier: Assignments and Activities that Address Ethical Considerations of Generative AI Use

by Benjamin Brock, Ph.D., and Dana Dawson, Ph.D.


In response to our fall 2023 survey on the use of generative AI (GenAI) in the classroom, we received a number of assignments and activities that faculty members have designed to tackle the ethical issues GenAI raises. These concerns include the privacy implications of using such tools, the risk of over-reliance on GenAI for analytics and decision-making, and exposure to inaccurate or biased information (Brown & Klein, 2020; Masters, 2023; Memarian & Doleck, 2023). The following activities and assignments equip students to evaluate critically when and how it is appropriate to use GenAI tools and to protect themselves against the possible risks of AI use.

Sherri Hope Culver, Media Studies and Production faculty member and Director of the Center for Media and Information Literacy (CMIL) at Temple University, asks students in her GenEd course, Media in a Hyper-Mediated World, to complete a reflection on the implications of AI use. She first asks them to listen to an episode of the podcast Hard Fork centered on data privacy and image manipulation and to read the Wired article “The Call to Halt ‘Dangerous’ AI Research Ignores a Simple Truth” (Luccioni, 2023). Students are then instructed to write a 300-word reflection, referencing the assigned material, that addresses both the concerns they have about AI use and the ways it could make their lives or society better. Professor Culver provides the following prompts to guide students’ thinking:

  • What does critical thinking mean in a tech-centric, AI world?    
  • How might AI affect your free will?    
  • How might AI affect your concerns about privacy or surveillance?    
  • How should we prepare ourselves for an increasingly AI-driven world?
  • How might AI influence the notion of a public good?   
  • How might AI influence K-12 education?    
  • How might AI influence family life?    
  • What worries you about AI?    
  • What excites you about AI?    
  • What is our responsibility as media creators when we use AI?    
  • It has been said that AI will make life more “fast, free and frictionless.” Should everything be “fast, free and frictionless”? Should that be the aim?
  • Is AI the end of truth?

In a dynamic, interactive, reflection-oriented honors course exploring the four pillars of Temple’s Honors Program (inclusive community, intellectual curiosity, integrity in leadership, and social courage), Dr. Amanda Neuber, Director of the Honors Program, uses AI as the discussion anchor for the unit on “integrity in leadership.” Through multiple media modalities, students delve into the ethical and unethical uses of AI in academia. Students read “How to Use ChatGPT and Still Be a Good Person” and watch a related video exploring the meaning of integrity. They then discuss whether AI can be used with integrity, how academic culture might frame one’s decision to use AI, and the “peaks and pitfalls” of AI use. Beyond the many important conversations about AI itself, the technology serves as a reference point for what it means to lead with integrity and how to promote that quality in teams and organizations.

In another interactive, thought-based classroom initiative, mechanical engineer Dr. Philip Dames is bringing ethics and AI to Temple’s College of Engineering. Reimagining for the modern era the classic “trolley problem” thought experiment, in which one is faced with an ethical dilemma, Dr. Dames asks students to consider how AI should make decisions, using autonomous cars as the basis for deliberation. Students are prompted to think about how a vehicle should be programmed to respond to different scenarios, drawing on examples from MIT Media Lab’s Moral Machine website. They then complete a prompt-guided written reflection on these scenario-based activities. Prompts include questions such as:

  • How does the ownership model of autonomous vehicles affect how they should behave? For example, does it make a difference if a vehicle is owned by a single private citizen vs. publicly owned by the city and hired by individuals? 
  • What surprised you about the aggregated responses from different people shown to you at the end of the exercise? 
  • Are there other factors that you feel are important but were not considered in Moral Machine?

In this way, students not only explore elements to consider when designing autonomous vehicles but also, through critical thinking and hands-on engagement, make concrete what was once only abstract.

If you’d like more guidance on how to use AI tools in your class, please visit our Faculty Guide to AI or book an appointment for a one-on-one consultation.

Brown, M., & Klein, C. (2020). Whose data? Which rights? Whose power? A policy discourse analysis of student privacy policy documents. The Journal of Higher Education, 91(7), 1149–1178. https://doi.org/10.1080/00221546.2020.1770045  

Masters, K. (2023). Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Medical Teacher, 45(6), 574–584. https://doi.org/10.1080/0142159X.2023.2186203  

Memarian, B., & Doleck, T. (2023). Fairness, accountability, transparency, and ethics (FATE) in artificial intelligence (AI) and higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, Article 100152. https://doi.org/10.1016/j.caeai.2023.100152
