A Survival Guide to AI and Teaching pt.7: Inoculating Our Students (and Ourselves!) Against Mis- and Disinformation in the Age of AI

Dana Dawson

In a previous blog post in this series, we suggested making generative AI a subject of critical analysis in your courses. Here, we will focus on the importance of teaching our students to critically engage with content generated by AI tools and with the implications of generative AI use for our information environment. This topic lies at the intersection of digital literacy, information literacy and the newly emerging field of AI literacy (Ng et al.; Wuyckens, Landry and Fastrez). Our students will need to develop the digital literacy required to solve problems in a technology-rich environment characterized by the regular use of AI tools, as well as the information literacy skills to navigate a complex information ecosystem. Though generative AI tools are digital tools that generate information, we tend to interact with them as if they were social beings (Wang, Rau and Yuan, 1325-1326), and the manner in which they generate information requires special attention to issues of authorship, the impact of data-set bias and the potential automation of disinformation dissemination.

As the efficacy and availability of generative AI tools advance, both we and our students will face a variety of information-related challenges. Generative AI can be used to automate the production of online misinformation and propaganda, significantly increasing the amount of mis- and disinformation we are exposed to online. Flooding our information environment with disinformation not only increases exposure to bad information but also distracts from accurate information and increases skepticism toward content generated by credible scholarly and journalistic sources. Even where users do not intend to propagate misinformation, Ferrara and others have pointed out that bias creeps into text and images produced by generative AI through the source material used for training data, the design of a model’s algorithm, data labeling processes, product design decisions and policy decisions (Ferrara, 2). These limitations can also result in content that seems accurate but is entirely made up, a phenomenon known as AI hallucination.

Our task as educators is to prepare our students to navigate an information environment characterized by the use of generative AI: inoculating them against disinformation, helping them develop the skill and habit of verifying information, and building a conception of the components of a healthy information environment.

Tools for Inoculation

Inoculating ourselves and our students against mis- and disinformation functions much the same as inoculating ourselves against viruses: controlled exposure builds resistance. By “pre-bunking” the kinds of erroneous content students may themselves create using generative AI tools or may encounter online, we can reduce the likelihood that they will be misled in later encounters.

  • Ask students to use ChatGPT to outline one side of a contemporary debate and then the other. Have them experiment with prompting the tool to write in the voice of various public figures or to modify the message for different audiences. Analyze what the tool changes with each prompt. Look for similar messages in social and news media.
  • Use the resources of the Algorithmic Justice League to explore how algorithms reproduce race- and gender-based biases.
  • If you assign discussion board entries to your students, secretly select one student each week to use ChatGPT or another generative AI tool to write their response. Ask students to discuss who they believe used AI that week and why.
  • Have students experiment with Typecast or other AI voice generators to create messages in the voice of public figures that are aligned or misaligned with that individual’s stance on contemporary issues.
  • Have students investigate instances in which tools such as Adobe Express were used to create misleading images that circulated online (for example, fake viral images of explosions at the Pentagon and the White House). The News Literacy Project keeps a list here. Analyze who circulated the images and why. How were they discovered to be fake? Ask students to experiment with the image-generation and editing tools used in the instances they discover, or with free alternatives.

Tools for Verifying Information

Zeynep Tufekci argues that the proliferation of generative AI tools will create a demand for advanced skills including “the ability to discern truth from the glut of plausible-sounding but profoundly incorrect answers.” Help your students hone their analytical skills, understand the emotional aspects of information consumption and develop a habit of questioning and verifying.

  • Increase students’ self-awareness of their own information consumption habits and their methods for verifying the information they are exposed to. Ask students to keep a journal for a week of their social media consumption: what they shared, liked, up- or down-voted, reposted, and so on. What kind of content do they tend to engage with? What feelings motivated them to share or interact with content, and how did they feel afterward? If shared content included information or took a stance on a topic, did they verify it before sending? What do they notice about their information consumption after observing their habits for a week, and what might they consider changing?
  • Introduce students to the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims, quotes and media to the original context). Note that some students may already know of this popular approach to assessing information online, so be sure to first ask if anyone can describe the method for others. Discuss how this approach may need to be modified in the age of AI. Challenge your students to design a modified method that accounts for the difficulty of finding a source and tracing claims where generative AI tools are involved.
  • Given the difficulty or even impossibility of differentiating AI-generated content from human-generated content and tracing AI-generated content to its source, help students focus on analyzing the content itself. Teach students lateral reading strategies and have them investigate claims in articles posted online using these strategies.
  • Develop your students’ habit of asking questions by utilizing tools such as the Question Formulation Technique (registration is free) and the Ultimate Cheatsheet for Critical Thinking.

Tools for Shared Understanding

One of the most insidious consequences of AI-generated disinformation is the way in which it can undermine our confidence in the reality of anything we see or hear. While it’s important that we prepare students to confront disinformation and to be aware of how generative AI will impact their information environment, we must also reinforce the importance of trust and shared understanding for the functioning of a healthy democracy.

  • Help students recognize and overcome simplistic, dualistic thinking. Developing an awareness of the criteria and procedures different disciplines use to verify claims will give students a framework for establishing their own ways of doing so. One approach might be to analyze the basis upon which generative AI tools such as ChatGPT make claims.
  • If confronted by a clear instance of mis- or disinformation in the context of a classroom or course-related interaction (for example, a student asserts a blatantly false conspiracy theory in a discussion board post), correct the inaccuracy as soon as possible. Point to established evidence for your claim. Help students see the difference between topics upon which we can engage in fruitful debate and topics where there is broad agreement, and to identify bad-faith approaches to argumentation.
  • Ask students to create a healthy media diet for themselves. Where might they find verifiable information on topics of interest? What constitutes a good source of information on that topic?
  • Promote empathy for others. We are more likely to believe inaccurate information about others if we are already predisposed to think of those individuals or groups negatively.
  • Encourage students to see themselves as actors within their information environment. Have them reflect on all of the sources of information they access and contribute to, including those within your class. Ask them to consider how they are using generative AI tools to inject content into that environment and what the implications of their decisions, and the similar decisions of others, may be for that information environment overall.

In the next installment of our series, we’ll dive a little deeper into the issue of bias and equity as it relates to AI. In the meantime, if you’d like to discuss digital literacy, artificial intelligence, or any other topic related to your teaching, please book an appointment for a one-on-one consultation with a member of the CAT staff.

References

Carolus, A., Augustin, Y., Markus, A., & Wienrich, C. (2023). Digital interaction literacy model – Conceptualizing competencies for literate interactions with voice-based AI systems. Computers and Education: Artificial Intelligence, 4, 100114.

Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., … & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13-29.

Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738.

Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations. arXiv preprint arXiv:2301.04246.

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.

Organization for Economic Co-operation and Development. (2013).

Wang, B., Rau, P.-L. P., & Yuan, T. (2022). Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 42(9), 1324–1337.

Wuyckens, G., Landry, N., & Fastrez, P. (2022). Untangling media literacy, information literacy, and digital literacy: A systematic meta-review of core concepts in media education. Journal of Media Literacy Education, 14(1), 168–182. https://doi.org/10.23860/JMLE-2022-14-1-12

Dana Dawson serves as Associate Director of Teaching and Learning at Temple University’s Center for the Advancement of Teaching.
