2024 STEM Educators’ Lecture Recap

By Cliff Rouder, Ph.D.

The CAT’s STEM Educators’ Lecture, held on April 10, 2024, featured guest speakers Dr. Tara Nkrumah and Cornelio “Coky” Aguilera. Dr. Nkrumah is an Assistant Professor in the Department of Teacher Preparation, Mary Lou Fulton Teachers College at Arizona State University. Her research focuses on equitable teaching practices for anti-oppressive discourse in education and science, technology, engineering, and mathematics (STEM). Coky Aguilera trained as an Acting Specialist at UW-Madison, works professionally with Tampa-area theater companies, and, along with Dr. Nkrumah and colleagues, has brought Theatre of the Oppressed to universities to engage academic audiences in critical investigations of inequities. Check out this YouTube video to learn more about the historical roots of Theatre of the Oppressed.

We were delighted to have their colleagues Dr. Vonzell Agosto, Dr. Deirdre Cobb-Roberts, and doctoral candidate Maria Migueliz Valcarlos join them as they engaged Temple STEM and theater faculty in an interactive session titled Unmasking the “Isms” in STEM Education to Promote Equitable Teaching and Learning. The speakers began by introducing a framework for the session: Iris Marion Young’s Five Faces of Oppression. They used this framework to help us think about how “isms” such as racism, ableism, or genderism can manifest through the five faces of oppression, which are:

  • Exploitation
  • Marginalization
  • Powerlessness
  • Cultural Imperialism
  • Violence

For a more in-depth look at this framework, see Young’s “Five Faces of Oppression” in Geographic Thought: A Praxis Perspective.

After participants worked through definitions of these facets of oppression and shared examples of how they can manifest in our disciplines, departments, and classrooms, the speakers engaged them in a series of theater-based exercises that used mimicry and the creation of human tableaus to explore and address the physical and emotional aspects of oppression.

For more on Dr. Nkrumah’s research, check out these recent publications:

  • Nkrumah, T. (2023). The Inequities Embedded in Measures of Engagement in Science Education for African American Learners from a Culturally Relevant Science Pedagogy Lens. Education Sciences, 13(7), 739.
  • Nkrumah, T., & Scott, K. A. (2022). Mentoring in STEM higher education: a synthesis of the literature to (re)present the excluded women of color. International Journal of STEM Education, 9(1), 1-23.
  • Nkrumah, T., & Mutegi, J. (2022). Exploring racial equity in the science education journal review process. Science Education, 1-15. https://doi.org/10.1002/sce.21719

As always, our CAT staff is ready to help you! To explore how to incorporate this work into your STEM courses or how to design and implement classroom-based research in this area, book a consultation appointment or email a CAT staff member directly.

A Survival Guide to AI and Teaching, pt. 10: Talking to Your Students About AI and Learning

Stephanie Laggini Fiore

While we have dealt with many aspects of AI and teaching in this blog series, we want to end the series with the most important aspect—talking to your students about AI and learning. One of the realities of the present moment is that we are all in the midst of a disruptive change, one that neither we nor our students fully understand how to navigate. Therefore, whether or not we decide to allow the use of AI in our classes, it is vitally important to discuss these tools with our students in productive ways. 

At the CAT, we have seen plenty of draconian language on syllabi over the years (“Don’t even think about cheating; you will be caught!!”), but the old adage about catching more flies with honey than with vinegar holds true here as well. Establishing trust in the learning environment, having clarifying conversations about AI and the choices you have made for the course, engaging students in thinking critically about the use of these tools and what they mean for society and for learning, and welcoming students’ thoughts will be far more effective than setting up an adversarial dynamic. We recommend dedicating time to discussing generative AI during the first week of the semester and then re-engaging students briefly before each written assignment. You should, of course, take some time to go over your AI syllabus statement, explaining your reasons for the decisions you have made, but it is important to go beyond that conversation to allow space for students to reflect on what it means to use these tools for learning.

Here are some thoughts on how to speak to your students about AI:

  • Consider using an anonymous poll that asks the extent to which your students have used these tools. This will provide a window into how familiar your students are with generative AI.
  • Begin the conversation by asking students what they know about generative AI. You may be surprised by what they do (or don’t) know. Continue with a clarifying conversation on how generative AI tools work, including their benefits and pitfalls. It will be most effective if you can show examples of those benefits and pitfalls—for instance, a hallucination (fabricated or inaccurate content) or biased content that these tools might reproduce.
  • Engage students in thinking about how your assignments help them achieve the goals of your course. We often recommend using Bloom’s Taxonomy for this exercise. If, for example, you have a goal that reaches the level of evaluation on the taxonomy, how will the assignments (if completed by the student) help them attain that goal?
  • Think about how to connect your students to the value of what they are learning. Often students see our courses (especially our required courses) simply as hoops to jump through on the way to a degree. Can you articulate for your students the reason why what they are learning will benefit them? What relevance will it have for their professions, personal growth, future academic work, or communities? Helping students to find meaning in what they are learning will be key to managing AI use.
  • Include a discussion about AI and academic integrity. Why is academic integrity important? How can we think about the use of generative AI in ethical terms? Use case studies to have them ponder whether particular uses are ethical; for instance, ask how they would feel if you offloaded all student feedback to an AI. Would that be an ethical use of the tool or would it be a breach of your responsibility as an instructor?
  • Ask students to discuss important philosophical questions that will get them thinking about the nature of learning, thought, and voice, such as:
    • Why do we write? What kinds of thinking happen when we write? Query students about how they use writing outside of class: do they keep a journal, write their opinions on social media, text friends when something important happens? Why might they turn to writing to express their thoughts?
    • What does it mean to cede our thinking and our voice to non-sentient machines? Do we want to live in a world where none of our passions and ideas are expressed in the way that we want to express them, and where originality of thought is replaced by a process of scraping a dataset for answers? 

Talking to a student when you suspect cheating

You’ve followed our advice above and talked to your students about AI from day one of the semester, clarifying permissible use in your course. Still, you suspect that a student in your class has used AI in ways that you have not allowed. The first step is always to talk to the student. Here are some tips for tackling this discussion: 

  • Don’t take it personally! Cheating can often feel like a personal attack and a betrayal of all the work you’ve put into your teaching. Remember that a student’s decision to use AI to take shortcuts is probably about them, not about you. 
  • Check your biases. Is your suspicion of your student’s work well-founded? Would you have the same concerns if the work had been handed in by other students? 
  • Beware of falsely accusing students outright. As was established in a previous post, our ability to accurately identify the use of generative AI tools at present is quite weak.
  • Ask the student to meet with you. Simply say something like “I have some concerns about your assignment. Please come to see me.” 
  • When you meet with the student, try not to be confrontational (remember that you may not be certain they used AI in an unauthorized manner). Instead, start by asking them questions that will give them a moment to tell the story of their writing process, such as: How were you feeling about the assignment? What do you think was challenging about it? Why don’t you tell me what your process was for getting it done? If there is research involved, you can ask what research they used. If they were writing on something they were supposed to read or visit (an art exhibit, for instance), ask pointed questions that get at whether they actually engaged in that activity.
  • Then state your concerns: “I’m concerned because the writing in this assignment doesn’t seem to match the writing in your other assignments, and the AI detector tool flagged it as AI-written.” Walk through any inconsistencies, odd language, repetition, or hallucinated citations with the student.
  • Use developmental language. Remember that your student may have used generative AI without realizing it is considered cheating, or there may have been factors that made them feel that they needed to cheat. A conversation with your student can be a learning opportunity for them. 
  • Discuss with colleagues in your department what a reasonable penalty might be for unauthorized use of generative AI. Consider also when it might be necessary to contact the Office of Student Conduct and Community Standards. (Remember, however, that speaking with your student is always the first step before taking further action.) If your conclusion is that the student cheated, you’ll have to decide whether to allow them to complete the assignment again on their own (perhaps with a penalty) or to offer no opportunity to make amends. Consider that we are in a developmental stage with these tools, and it might be wise to allow the do-over if the student owns up to it.
  • Self-reflect. Given that students often take shortcuts for reasons related to the course structure, review our blog post on academic integrity and AI in order to take steps to promote academic integrity and consider whether your course is designed to reflect these best practices.

In a world in which AI is here to stay, it is essential that we support students’ ethical and productive interaction with these tools. No matter the discipline, we need to take on the responsibility of developing our students to adapt to this new reality with full awareness of the implications of AI use for learning, for work, and for society. 

We know that this is all new and it is not easy—the CAT is here to help. To book an appointment with a CAT educational developer or educational technology specialist, go to catbooking.temple.edu or email cat@temple.edu.

A Survival Guide to AI and Teaching, pt. 9: AI and Equity in the Classroom

Dana Dawson, Ph.D.

In previous posts in this series, we noted how generative AI can perpetuate biases and exacerbate the digital divide. Here, we will explore in more depth the potential for these tools to widen the institutional performance gaps that impact learning in higher education, but also the potential for generative AI to create a more equitable learning environment. We conclude with suggestions for what you can do to minimize possible negative impacts of generative AI for students in your courses. 

Rapid improvements in the capabilities of generative AI have a tendency to provoke doom spiraling, and there are indeed some very real concerns we will have to grapple with in coming years. While generative AI at times produces helpful summaries of content or concepts, it is prone to error. Students with tenuous confidence in higher education, or in their capability to succeed in their studies, are less likely to engage deeply in their coursework (Biggs, 2012; Carver & Scheier, 1998) and may rely excessively or uncritically on AI tools. Over-reliance on generative AI to reduce effort, rather than as a mechanism for jumpstarting or supporting conceptual work, robs students of opportunities to practice and develop the very creative thinking, critical thinking, and analysis skills that are likely to become increasingly valued as AI becomes more widely available. In addition, where we neglect to carefully vet content created by AI, we run the risk of repeating erroneous information or perpetuating disinformation.

The prospect of bias and stereotypes impacting students’ experience in higher education arises not only from the content generative AI produces (Bender et al., 2021; Ferrara, 2023), but from the challenge of determining whether a student has appropriately used the tools. AI detectors cannot reliably differentiate human- from AI-generated content. Faculty must be aware that judgments of whether students relied excessively on AI may be influenced by assumptions that have more to do with factors such as race, gender, or spoken language fluency than with student performance.

Finally, faculty who wish to encourage students to experiment with and integrate the use of AI tools must be aware that inequitable access to broadband internet and digital tools, before and during students’ postsecondary studies, along with varying levels of preparation to use the tools effectively, raises digital equity concerns. Some students will come to our classes well-equipped to engineer prompts and vet generated content while others will be encountering these technologies for the first time. That high-quality AI applications are often behind paywalls compounds these issues.

On the other hand, some scholars and policy-makers have pointed to ways that these tools can be used productively to support student learning and success. AI tools such as ChatGPT can be used to fill in knowledge gaps, related to a field of study or to being a college student more generally, that are particularly salient for first-generation students or those whose previous educational experiences insufficiently addressed certain skills or topics. GPT-3’s responses to prompts such as “What are the best ways to study?” and “How do I succeed in college?” generate useful strategies that can be expanded upon with additional prompts. Warschauer et al. point out that for second-language learners, the ability to quickly generate error-free email messages or to get feedback on one’s writing reduces the extra burden of studying disciplinary content in a second language. Students can prompt generative AI tools to explain concepts using relatable analogies and examples. For students with disabilities, generative AI can serve as an assistive technology, for example by improving ease of communication for those who must economize words, assisting with prioritizing tasks, helping practice social interactions, or modeling types of communication.

RECOMMENDATIONS

1. Reduce the potential for bias to impact your assessment of unauthorized student use of generative AI tools by determining the following before the start of the coming semester:

  • Which assessments have the most potential for unauthorized use?
  • Is there an alternative mechanism for assessing student learning for those assessments most prone to unauthorized use?
  • What are my guidelines for appropriate use of generative AI tools in this class?
  • Can I reliably detect inappropriate use?
  • Is my determination of inappropriate use subject to bias? 
  • What will my next steps be if I suspect inappropriate use?

If you’re not sure whether to allow use of generative AI tools, review our decision tree tool.

2. Clearly communicate your classroom policies on use of generative AI and talk with (not to) your students about those policies, ensuring they understand acceptable limits of use.

3. If you are encouraging the use of generative AI tools as learning tools, consider questions of access by:

  • Assessing the extent to which your students know how to use and have access to the tools; and
  • Showing students how to use the tools in ways that will benefit their education (for example, using follow-up prompts to focus initial queries). Temple University Libraries has created an AI Chatbots and Tools guide to help our students learn to judiciously use these tools.

4. Educate students on how generative AI tools may be biased, can perpetuate stereotypes and can be used to increase dissemination of mis- and dis-information.

5. Help students find their own voice and learn to value a diversity of voices in writing and other content that AI tools might otherwise generate.

6. Consider a SoTL (Scholarship of Teaching and Learning) project to study the impact of generative AI on learning in your own courses.

In the next (and final) installment of our series, we’ll focus on how to talk to your students about generative AI. In the meantime, if you’d like to discuss AI or any other topic related to your teaching, please book an appointment for a one-on-one consultation with a member of the CAT staff.

Works Referenced

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Biggs, J. (2012). What the student does: Teaching for enhanced learning. Higher Education Research & Development, 31(1), 39-55.

Carver, C. S., & Scheier, M. (1998). On the self-regulation of behavior. Cambridge, UK: Cambridge University Press.

Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738.

Dana Dawson serves as Associate Director of Teaching and Learning at Temple University’s Center for the Advancement of Teaching.

A Survival Guide to AI and Teaching, pt. 8: Academic Integrity and AI: Is Detection the Answer?

Stephanie Laggini Fiore, Associate Vice Provost

Even if you’ve done your due diligence in clarifying acceptable use of AI in your course, you may still suspect that students are using these tools in unauthorized ways. While unauthorized AI use is not considered plagiarism, it is still cheating and a violation of the university’s standards on academic honesty, as it both uses “sources beyond those authorized by the instructor in writing papers, preparing reports, solving problems, or carrying out other assignments” and engages “in any behavior specifically prohibited by a faculty member in the course syllabus, assignment, or class discussion.” The sticky question is, therefore, “How can I be sure that students have indeed inappropriately used these tools to complete their work?” We may be tempted to lean on detection methods as a solution, but is that the answer to this conundrum?

Can Humans Detect AI Work Unaided?

In playing with AI tools, you may have noticed some quirks in the output they provide (based on your prompts): they can be repetitive, go off on tangents unrelated to the topic at hand, or simply produce generic or illogical text. Generative AI can also “hallucinate” citations or quote text that simply doesn’t exist. These “AI tells” can sometimes tip us off to unauthorized AI use by our students. But how good are we at accurately identifying these tells? Our colleagues at the University of Pennsylvania conducted an investigation into human ability to detect AI text. They found that participants in their study were significantly better than random chance at detecting AI output but that there was large variability in ability among the participants. The good news is that their findings suggest that detection is a skill that can be developed with training over time (Dugan et al., 2023). At this point, however, few of us have had the targeted training referenced by the authors, nor have we been able to dedicate the time necessary to improve. Barring glaring hallucinations or illogical content, most of us are simply not yet familiar enough with the features of AI text to be confident that our hunches are accurate. Try the test the researchers used; you may find, like me, that identifying AI text can be pretty darn challenging. And, of course, these tools will continue to evolve and improve, so our ability to detect non-human content may dwindle as generative AI advances.

Can AI Detectors Do the Job?

Don’t we all wish that AI detectors (such as Turnitin, GPTZero, Copyleaks, or Sapling) were the answer to all of our generative AI concerns? Sadly, the simple and definitive answer to whether AI detectors can reliably detect AI-generated writing is “not at this time.” The reality is that these detector tools are flawed, delivering both false positives and false negatives. In addition, unlike plagiarism detection tools, there is no way to verify that the detector’s conclusions are correct, as the results do not link to source material in the same way. The CAT and the Student Success Center are conducting an investigation into error rates in a variety of AI detectors; early indications are concerning. In the meantime, others have pointed to the unreliability of the tools in both formal and informal investigations (here’s another), and in explanations of why these tools fail. Companies creating AI detectors themselves include disclaimers such as Turnitin’s statement that it “does not make a determination of misconduct…rather, we provide data for educators to make an informed decision.” They then go on to advise us to apply our “professional judgment” to these situations. That professional judgment, though, can itself be flawed.

Some faculty have been advised to run student work through multiple detectors, but the potential for (both positive and negative) bias may come into play as we make decisions about which detector to believe when they return different results (which, from our experience, they most likely will). “My wonderful student couldn’t possibly have used AI, so I believe the detector that says it’s human-written.” Or: “I don’t doubt for a minute that this student cheated, so I believe the detector that says it is AI-written.” Importantly, these detector tools can’t tell us if students have used AI in the ways we have outlined in our syllabi as permissible. Let’s say I am allowing students to use AI for idea generation or for writing an outline, but not for writing full drafts of papers. The detector cannot tell me whether students have used AI in permissible ways. Finally, there are already hacks out there with advice on how to beat the detectors; for example, videos that demonstrate how to run AI-generated content through a rephraser in order to fool AI detectors. All this adds up to inconsistent and unreliable results, whereby catching those who have engaged in academically dishonest behavior is hit or miss and does not provide incontrovertible proof of misconduct. Most importantly, we have to consider the very real and potentially damaging effects of wrongfully accusing students of cheating when they have not.*

What’s a Harried Faculty Member To Do?

If detectors aren’t reliable and our own skills at detecting AI writing are not mature, what’s the answer? While we will all be adjusting to this new reality for a while, we can keep some fundamental principles in mind to nudge our students towards transparency and academic honesty, the first of which is to give up on a surveillance mentality as it simply won’t be effective (and you don’t want to police students anyway, right?). Instead, think developmentally and pedagogically by taking these steps:

1. Shift from a reactive to a proactive stance. Test your assessments in a generative AI tool to see how vulnerable they are to AI use. Then make some intentional decisions about whether to change assessments or create new ones. In the long run, of course, it is all about our assessments. We may have used these same types of assessments for decades, but they simply may not work in the way we want them to in the age of AI. Review blog posts #4, #5, and #6 to think about changes you may make to your assessments, or, if you missed our Using P.I. to Manage A.I. series, see our suggestions there. Remember you can also make an appointment with a CAT developer to help you think this through.

2. Put a statement about AI in your syllabus clarifying acceptable use of AI! I can’t repeat this enough. Our colleagues at The Office of Student Conduct and Community Standards have expressed to us that it is essential to have clear guidelines clarifying what is and isn’t acceptable use of AI in our courses.

3. Engage your students in a discussion about generative AI and academic integrity, including why you have set the standards you have in your course. Remind them periodically about the ethics of generative AI use. (Look for an upcoming blog post for guidance on how to speak with your students about AI.)

4. Design courses that reduce the factors that induce students to cheat. James Lang, in his excellent book Cheating Lessons: Learning From Academic Dishonesty, reminds us that the literature on cheating points to an emphasis on performance, high stakes riding on the outcome, an extrinsic motivation for success, and a low expectation of success as factors that promote academic dishonesty. The good news is that we also know from the literature on learning that evidence-based teaching practices such as formative assessments, scaffolded assignments, ample opportunity for practice and feedback, development of a positive learning environment, and helping students to find relevance and value in what they are learning will both deter cheating, by reducing these factors, and improve learning. Need help in reducing the temptation to cheat? Make an appointment with a CAT developer.

5. Plan thoughtfully for how you will manage situations where you suspect unauthorized use of generative AI, starting with a conversation with the student. (We’ll include advice on how to speak to students in the aforementioned future blog post.)

There is no doubt that generative AI is a disruptor in the educational space. Our response to that disruption matters for learning and for our relationship with students. Let’s work together thoughtfully towards a productive and forward-looking response. The answer is not detection—it is development.

*Note: If I haven’t convinced you to avoid these flawed detectors in accusing students of cheating, I agree with Sarah Eaton that it is essential to transparently state in your syllabus that you will be using detectors. Do not resort to deceptive practices in an effort to “catch” students. In addition, never use detectors as the sole source of evidence as, of course, the results may not be reliable.

Stephanie Laggini Fiore serves as Associate Vice Provost at Temple University’s Center for the Advancement of Teaching.

A Survival Guide to AI and Teaching, pt. 7: Inoculating Our Students (and Ourselves!) Against Mis- and Disinformation in the Age of AI

Dana Dawson

In a previous blog post in this series, we suggested making generative AI a subject of critical analysis in your courses. Here, we will focus on the importance of teaching our students to critically engage with content generated by AI tools and with the implications of generative AI use for our information environment. This topic lies at the intersection of digital literacy, information literacy, and the newly emerging field of AI literacy (Ng et al.; Wuyckens, Landry and Fastrez). Our students will need both the digital literacy required to solve problems in a technology-rich environment characterized by the regular use of AI tools and the information literacy skills to navigate a complex information ecosystem. Though generative AI tools are digital tools that generate information, we have a tendency to interact with them as if they are social beings (Wang, Rau and Yuan, 1325-1326), and the manner in which they generate information requires special attention to issues of authorship, the impact of dataset bias, and the potential automation of disinformation dissemination.

As the efficacy and availability of generative AI tools advance, both we and our students will face a variety of information-related challenges. Generative AI can be used to automate the generation of online misinformation and propaganda, significantly increasing the amount of mis- and disinformation we are exposed to online. Flooding our information environment with disinformation not only increases exposure to bad information but distracts from accurate information and increases skepticism toward content generated by credible scholarly and journalistic sources. Even where users do not intend to propagate misinformation, Ferrara and others have pointed out that bias creeps into text and images produced by generative AI through the source material used for training data, the design of a model’s algorithm, data labeling processes, product design decisions, and policy decisions (Ferrara, 2). These limitations can result in the creation of content that seems accurate but is entirely made up, a phenomenon known as AI hallucination.

Our task as educators is to prepare our students to navigate an information environment characterized by the use of generative AI by inoculating them against disinformation, helping them develop the skill and habit of verifying information, and building a conception of the components of a healthy information environment.

Tools for Inoculation

Inoculating ourselves and our students against mis- and dis-information functions much the same as inoculating ourselves against viruses through controlled exposure. By “pre-bunking” erroneous content students may themselves create using generative AI tools or may encounter online, we can help reduce the potential for them to be misled in later encounters.

  • Ask students to use ChatGPT to outline one side of a contemporary debate and then to outline the other side of the debate. Have them experiment with prompting the tool to write in the voice of various public figures or to modify the message for different audiences. Analyze what the tool changes with each prompt. Look for similar messages in social and news media.
  • Use the resources of the Algorithmic Justice League to explore how algorithms reproduce race- and gender-based biases.
  • If you assign discussion board entries to your students, secretly select one student each week to use ChatGPT or another generative AI tool to write their response. Ask students to discuss who they believe used AI that week and why.
  • Have students experiment with Typecast or other AI voice generators to create messages in the voice of public figures that are aligned or misaligned with that individual’s stance on contemporary issues.
  • Have students investigate instances of the use of tools such as Adobe Express to create misleading images that circulated online (for example, fake viral images of explosions at the Pentagon and the White House). The News Literacy Project keeps a list here. Analyze who circulated the images and why. How were they discovered to be fake? Ask students to experiment with the image generating and editing tools used in the instances they discover, or with free alternatives.

Tools for Verifying Information

Zeynep Tufekci argues that the proliferation of generative AI tools will create a demand for advanced skills including “the ability to discern truth from the glut of plausible-sounding but profoundly incorrect answers.” Help your students hone their analytical skills, understand the emotional aspects of information consumption and develop a habit of questioning and verifying.

  • Increase students’ self-awareness of their own information consumption habits and of their methods for verifying the information they are exposed to. Ask students to keep a week-long journal of their social media activity: what they shared, liked, up- or down-voted, reposted, etc. What kind of content do they tend to engage with? What feelings motivated them to share or interact with content, and how did they feel afterward? If shared content included information or took a stance on a topic, did they verify it before sending? What do they notice about their information consumption after observing their habits for a week, and what might they consider changing?
  • Introduce students to the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims, quotes and media to the original context). Note that some students may already know of this popular approach to addressing information online, so be sure to first ask if anyone can describe the method for others. Discuss how this approach may need to be modified in the age of AI. Challenge your students to design a modified method that accounts for the difficulty of finding a source and tracing claims where generative AI tools are involved.
  • Given the difficulty, or even impossibility, of differentiating AI-generated content from human-generated content and tracing AI-generated content to its source, help students focus on analyzing the content itself. Teach students lateral reading strategies and have them investigate claims in articles posted online using these strategies.
  • Develop your students’ habit of asking questions by utilizing tools such as the Question Formulation Technique (registration is free) and the Ultimate Cheatsheet for Critical Thinking.

Tools for Shared Understanding

One of the most insidious consequences of AI-generated disinformation is the way in which it can undermine our confidence in the reality of anything we see or hear. While it’s important that we prepare students to confront disinformation and to be aware of how generative AI will impact their information environment, we must also reinforce the importance of trust and shared understanding for the functioning of a healthy democracy.

  • Help students recognize and overcome simplistic and dualistic thinking. Developing an awareness of the criteria and procedures used by different disciplines to verify claims will provide a framework for students to establish their own ways of verifying claims. One approach might be to analyze the basis upon which generative AI tools such as ChatGPT make claims.
  • If confronted by a clear instance of mis- or disinformation in the context of a classroom or course-related interaction (for example, a student asserts the truth of a conspiracy theory that is blatantly false in a discussion board post), correct the inaccuracy as soon as possible. Point to established evidence for your claim. Help students see the difference between topics upon which we can engage in fruitful debate and topics where there is broad agreement, and to identify bad-faith approaches to argumentation.
  • Ask students to create a healthy media diet for themselves. Where might they find verifiable information on topics of interest? What constitutes a good source of information on that topic?
  • Promote empathy for others. We are more likely to believe inaccurate information about others if we are already predisposed to think of those individuals or groups negatively.
  • Encourage students to see themselves as actors within their information environment. Have them reflect on all of the sources of information they access and contribute to, including those within your class. Ask them to consider how they are using generative AI tools to inject content into that environment and what the implications of their decisions, and of similar decisions by others, may be for that information environment overall.

In the next installment of our series, we’ll dive a little deeper into the issue of bias and equity as it relates to AI. In the meantime, if you’d like to discuss digital literacy, artificial intelligence, or any other topic related to your teaching, please book an appointment for a one-on-one consultation with a member of the CAT staff.

References

Carolus, A., Augustin, Y., Markus, A., & Wienrich, C. (2023). Digital interaction literacy model – Conceptualizing competencies for literate interactions with voice-based AI systems. Computers and Education: Artificial Intelligence, 4, 100114.

Ecker, U. K., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., … & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13-29.

Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv preprint arXiv:2304.03738.

Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative language models and automated influence operations: Emerging threats and potential mitigations. arXiv preprint arXiv:2301.04246.

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.

Organization for Economic Co-operation and Development. (2013).

Wang, B., Rau, P.-L. P., & Yuan, T. (2022). Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behaviour & Information Technology, 42(9), 1324-1337.

Wuyckens, G., Landry, N., & Fastrez, P. (2022). Untangling media literacy, information literacy, and digital literacy: A systematic meta-review of core concepts in media education. Journal of Media Literacy Education, 14(1), 168-182. https://doi.org/10.23860/JMLE-2022-14-1-12

Dana Dawson serves as Associate Director of Teaching and Learning at Temple University’s Center for the Advancement of Teaching.

A Survival Guide to AI and Teaching, pt. 6: Creatively Working Around AI

Jennifer Zaylea and Jeff Rients

In the previous two blog posts of this series, we addressed ways in which you may decide to incorporate AI into your classroom. In this post, we offer suggestions for how you might creatively work outside the limits of AI by developing assignments and assessments that tools like ChatGPT cannot easily simulate.

Creating learning activities that prioritize human interaction (whether online or in-person), personal experience, and local knowledge can reduce the chance that your students will come to rely solely on generative AI tools. This method has the benefit of prioritizing the humanity of the learning experience, because it focuses student learning on the personal or community aspects of what we are teaching. It can also be more meaningful for students, motivating them to higher levels of effort.

Below are some possible assignments that are more difficult to simulate with generative AI tools. Remember, if you need assistance fleshing any of these ideas out or designing your own creative assignments, you can always schedule a one-on-one consultation with one of our staffers!

  • Focus on human interactions in the form of attending events, arranging interviews, and conducting ethnographies of practitioners in your field. Ask for photos to be included as part of the student submission. Note: always include expectations for these kinds of activities up front in your syllabus, and keep in mind that it may be more difficult for some students to attend events and conduct in-person interviews because of work, transportation costs, etc. It would be wise to have an alternative method for completing the assignment ready in case of student hardship.
  • Design assignments that ask students to meaningfully incorporate breaking news, local events, or niche references: forms of data or knowledge on which AI tools have not been extensively trained. Analysis of an event as it is still unfolding not only makes it harder for AI tools to respond but also gives students a much-needed sense of the real connection between what they are learning and the world.
  • Require “process papers” with follow-up on incomplete/incorrect steps to be revisited for iterative projects. For example, require that the second draft of a paper (or the second paper of the semester) must be accompanied by a memo outlining how earlier feedback was incorporated into the new submission.
  • Ask students to record their making process for studio-oriented projects. This could take the form of a series of still images or videos, accompanied with a written reflection on the overall creative process.
  • Strive to account for non-Western points of view outside the reach of most current AI datasets. Your students may need to use Google Translate to get the gist of some non-English websites, perhaps comparing them to similar English-language sources. Or they could interview people with non-Western perspectives.
  • Use presentations, oral exams, classroom debates, and in-class writing activities to explore knowledge without immediate recourse to AI tools. Look for ways to connect these in-class activities to assignments outside of class, such as asking students to write a paper analyzing and evaluating a classroom debate.
  • Consider using collaborative tools that allow for tracking of specific student contributions, such as Google Docs, Dropbox Paper, or Microsoft Word. A look at the edit history of the Google Doc for a collaborative project would allow you to see who was writing and who was revising the group paper.
  • Utilize alternative assessments beyond high-stakes essays, such as group concept maps, gamification-based activities, or student podcasts.
  • Assign multi-media composition work currently beyond AI capabilities, such as creating a zine that incorporates both words and images to document learning or designing a board game or card game that illustrates key course concepts in action.
  • Ask students who are doing coding revisions, such as coding for graphic design, animation, computer programming, or websites, to screen-record their corrections while explaining why they made changes. The assignment will encourage students to clearly articulate what their actions are accomplishing while reinforcing the skills they have learned.
  • Have students write reflectively on how their past personal experiences intersect with their current learning.

Familiarity with a few of these approaches can also be helpful in the event that your otherwise AI-intensive course is disrupted by website crashes, fee structure changes, government regulation, or potential bans. A mix-and-match approach may best fit your learning environment, regardless. You may want one unit to focus on using AI tools and another that eschews them completely.

Just as with other teaching decisions (think absence or late-work policies), your policy on the use of AI to complete work in your course will be a decision you have to make. As we’ve mentioned in earlier posts in this series, it is important to include a statement on your syllabus, which you should discuss during your first class, that explains your policies, so that students clearly understand the expectations for your course and so that the Office of Student Conduct has a standard by which to make decisions about any cases of academic misconduct. The CAT has developed sample syllabus statements that provide language you can adopt for your own use.

In the next installment of our series, we’ll look at how to foster students’ digital literacy skills in the age of generative AI.

Jennifer Zaylea and Jeff Rients both work at Temple’s Center for the Advancement of Teaching, where Jennifer serves as the Digital Media Specialist and Jeff as Associate Director of Teaching & Learning Innovation.

A Survival Guide to AI and Teaching, pt. 5: A Critical Eye on AI

Jonah Chambers & Jeff Rients

In the previous installment of this series, we outlined some ideas for how to put artificial intelligence to work as a tool in the classroom. This time, instead of adopting AI as a new educational technology, we propose that you use AI as an object of inquiry. A wide variety of controversial topics intersect with the rise of generative AI, providing a timely touchstone for the development of student critical thinking and media literacy skills. Your students could productively engage with issues surrounding AI such as data privacy, dataset bias, environmental impact, intellectual property, and labor. Here are just a few examples:

  • Collaboratively review the privacy policy and terms of use before using an AI tool. Will the students’ input become the property of the company owning the tool? What sort of privacy protections are in place?
  • Contact multiple AI providers and ask them about their protocols for avoiding dataset bias issues like the now-infamous white Obama problem. Which responses suggest good corporate stewardship? How can their claims be tested?
  • Compare and contrast the environmental impact of AI tools to the impact of search engine usage, cryptomining, the overall impact of the internet, etc.
  • Debate the intellectual property concerns of large language AI tools versus other forms of intellectual appropriation.
  • Attempt to map out which professions will be most impacted by AI in the short term. Which jobs will go away and which will be completely transformed?

Importantly, these sorts of investigations do not require you or your students to use the tools, making this approach an ideal choice for those who are uncertain about their willingness to join the AI revolution. However, some students may opt to interrogate the AI itself as part of their investigation. In that case, students might find it interesting to frame their prompts as interview questions, tasking the AI with justifying its own existence!

However you want to handle AI in your course, it will be important to include a statement on your syllabus that explains your policies so that students understand the expectations for your course. The CAT has developed sample syllabus statements that provide language you can adopt for your own use.

Remember, although we are providing three different paths forward in the use of AI in your classroom, you don’t have to choose just one! A mix-and-match approach may best fit your learning environment. Your course might benefit, for example, from a unit critiquing AI followed by a unit that makes use of AI. Or vice versa.

In the next installment, we’ll look at creative assessments and assignments that not only discourage illicit student usage of AI but help your students learn as well!

Jonah Chambers is Senior Educational Technology Specialist at Temple University’s Center for the Advancement of Teaching. Jeff Rients serves as the Center’s Associate Director of Teaching and Learning Innovation.

A Survival Guide to AI and Teaching, pt. 4: Make AI Your Friend

Jonah Chambers & Jeff Rients

This week we’ll take a look at purposefully integrating the use of these AI tools into your course activities and assessments, particularly if the use of the tools can support student attainment of your learning goals. As with all educational technology, we generally recommend only those tools that support your previously identified goals. And even when a tool does support your learning goals, consider the cognitive load of using the tool itself. Is the effort required of students to learn the tool worth the benefit gained? On the other hand, resisting the inevitable changes in our modern technological society is a well-trodden path to inconsequentiality. If you can, why not take advantage of these fabulous new AI tools to build more effective learning experiences for your students?

Here are some ways to make productive use of AI in your courses:

  • Have students compete in creating the best prompt to elicit the most complete, useful, or interesting output to a course-related question or topic. Formulating a useful prompt requires clear articulation of the student’s own understanding, and comparing results allows students to practice their analytical skills.
  • Ask students, on their own or in groups, to edit AI-generated texts. Editing could include fact-checking, critiquing style, expanding upon the text, and adding references. Use collaborative documents (such as MS Word or Google Docs) to track changes to the original output. This process helps students develop critical thinking skills and digital literacy.
  • Have students write annotated bibliographies based on guidelines you provide them and then compare those with AI-generated annotated bibliographies, noting where the AI-generated version produces errors or comes up short. This could work for other types of writing as well.
  • Ask students to create a business plan, encouraging them to be as ambitious as possible and use the AI to provide ideas and feedback from a particular perspective.
  • Task students with intentionally incorporating generative AI tools into the writing process, such as to brainstorm paper topics, generate outlines, and proofread text they have written. You may want to check out these resources: “Need an AI essay writer? Here’s how ChatGPT (and other chatbots) can help” and “How to… use ChatGPT to boost your writing.”
  • Share AI-generated responses with students to establish a new baseline of acceptable work, then expect better from them! Assign follow-up work that pushes student thinking further up Bloom’s Taxonomy.
  • Allow students to submit an AI-generated first draft of a paper early in the semester, then focus their efforts on the revision process. Students submit both the AI draft and their revised version in a collaborative document (such as MS Word or Google Docs) so that you can track changes and see the version history.
  • Per this paper, have students provide you with the list of prompts they input and the outputs they gathered in the process of writing a paper.

We’ve gathered some more specific activity and assignment ideas into this handy guide. Although any of these approaches will take some work to implement, this could be an opportunity for you to be an AI pioneer in your field, experimenting with possibilities these tools afford. You might even want to conduct a study to measure the effect of these tools on learning! Our resources on the Scholarship of Teaching and Learning can help you get started.

Keep in mind that you’ll have to decide to what extent and for what purposes students will be allowed to use AI, and then communicate those parameters clearly to students. That way students will understand clearly the expectations for your course. Key to any acceptable use of AI is transparency; you should insist that students identify any work that is AI generated. The CAT has developed sample syllabus statements that provide language for acceptable and unacceptable use of AI that can provide clarity to students and to the Office of Student Conduct for use in cases of academic misconduct.

Remember, although we are providing three different paths forward in the use of AI in your classroom, you don’t have to choose just one! A mix-and-match approach may best fit your learning environment.

In the next installment, we’ll look at how your students can learn valuable skills by integrating critical examination of generative AI tools themselves into your course activities and assessments.

Jonah Chambers is Senior Educational Technology Specialist at Temple’s Center for the Advancement of Teaching. Jeff Rients serves as the Center’s Associate Director of Teaching and Learning Innovation.

A Survival Guide to AI and Teaching, pt. 3: “Should I Allow My Students to Use Generative AI Tools?” Decision Tree

Dana Dawson, Ph.D.

Generative AI tools like ChatGPT are already being used by students and are likely to become ubiquitous in the workplace of the future, so ignoring them is not the ideal solution. We need to be actively thinking not only about how students are achieving the learning goals of our courses in light of their possible use of these tools, but also about whether our courses prepare them for future careers in which they may be using AI tools regularly.

That being said, as with any other technology, we and our students need to approach the use of these tools critically. With regard to your teaching responsibilities, we see three approaches to the use of AI:

  • Integrate the use of generative AI into your course activities and assessments;
  • Integrate critical examination of generative AI tools themselves into your course activities and assessments; and
  • Work around AI by designing assignments that are AI-resistant.

If you’re not sure where to begin in deciding which of these approaches to take, we offer you this decision tree as a helpful tool. Determining how you will address generative AI in your classes will require you to reflect on your level of familiarity with the tools, take steps to become more familiar with them if you haven’t already, examine the learning goals for your course and determine whether AI can be useful in helping students to reach those goals, and consider your readiness to carefully vet content students create with the help of, or in response to, generative AI. Finally, whatever your decision, you’ll need to speak with your students about the use of AI in your course.
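
If it helps to see that reflection spelled out step by step, here is a minimal, hypothetical sketch in Python of the decision logic described above. The function name and its questions paraphrase this post’s prose; they are illustrative only, not the CAT’s actual decision tree graphic.

```python
# A hypothetical sketch of the decision flow described in this post.
# The questions paraphrase the post's prose, not the official graphic.

def choose_ai_approach(familiar_with_tools: bool,
                       ai_supports_goals: bool,
                       ready_to_vet_output: bool) -> str:
    """Suggest one of the three approaches discussed in this series."""
    if not familiar_with_tools:
        return "Spend time experimenting with the tools before deciding."
    if ai_supports_goals and ready_to_vet_output:
        return "Integrate AI use and/or critical examination of AI."
    return "Work around AI with AI-resistant assignments."

# Whatever branch you land on, the final step is the same: state your
# policy on the syllabus and discuss it with your students.
print(choose_ai_approach(familiar_with_tools=True,
                         ai_supports_goals=False,
                         ready_to_vet_output=True))
# -> Work around AI with AI-resistant assignments.
```

Running the function with different inputs for different activities mirrors the advice below to revisit the decision tree throughout the semester rather than only once.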

In upcoming blog posts in this series, we will address strategies for the three approaches listed above as well as how to speak with your students about your approach. Note that you don’t have to pick just one of these approaches. We urge instructors to consider a mixed approach to incorporating AI in the classroom. In some cases, the use or analysis of content created by generative AI may help your students achieve a particular learning outcome and in other cases, it may be counterproductive. Revisit this decision tree not only at the outset of your class planning, but as you consider activities and assessments throughout the semester. Then articulate your decision to your students clearly on your syllabus. We have made syllabus guidance available to help you craft a coherent syllabus statement regarding use of AI in your courses.

Follow this blog series for continuing guidance on how to think about AI in your classes, and remember that you can make an appointment with a CAT developer to discuss your decision.

Dana Dawson is Associate Director of Teaching and Learning at Temple’s Center for the Advancement of Teaching. Decision Tree graphic by the Center’s Graphic and Design Specialist, Emily Barber.

A Survival Guide to AI and Teaching, pt. 2: Generative AI and Learning: Benefits and Pitfalls

Emtinan Alqurashi, Ed.D., Jennifer Zaylea, MFA

Generative AI has revolutionized the way we interact with technology, opening new doors to faster and more efficient organization and allowing more time for the creative and conceptual aspects of learning. However, there are considerable challenges that need to be addressed, especially where they could impact how learning works in our classrooms. In our first blog of the AI series, we discussed what generative AI is and invited you to explore one of these tools (i.e., ChatGPT). In this second blog in the series, we explore some of the benefits and possible pitfalls of using generative AI in the student learning process.

Some benefits of incorporating generative AI into the student learning process:  

  • Access to large quantities of information. Generative AI is trained on a vast amount of information from a variety of sources. With this access to an enormous amount of information, it can respond to a wide range of language-related tasks. Used productively, this greater access to information can help students gain multiple perspectives on a topic and spark inspiration that leads to greater creativity and ideation.
  • Speed of responding or processing. Generative AI provides quick responses to queries, prompts, and requests. This is particularly helpful when we need real-time responses or quick turnarounds for questions or requests. Rather than spending excessive time searching for information, students can gather information efficiently and spend their time on comprehension and analysis. Ultimately, this helps them advance to more complex thinking and learning.
  • Automation of routine tasks. Because generative AI can produce human-like text, it can be used as a starting point for any type of content, from drafting emails and articles to creating social media posts or course outlines. It can also be used as an editor to develop unstructured transcripts or notes into well-structured text. Students can save time and effort in the organizing and editing processes, allowing them to focus on more creative and strategic aspects of their work.
  • Conversational manner of speech/writing. The human-like conversational ability makes AI an enjoyable discussion partner (but also raises serious concerns; see the pitfalls below). In fact, generative AI can refine its outputs in response to your prompts in an ongoing discussion, helping you and your students iteratively refine your thinking. For this reason, it can be very valuable to learn simple prompt engineering, the process of refining input instructions to achieve desired results; see the sketch after this list. This can enhance the conversational capabilities of generative AI.
  • Assistive technology. Generative AI has the promise to improve inclusion for people with communication impairments, low literacy levels, and those who speak English as a second language. It can also improve ease of communication and comprehension in a variety of ways.   
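
To make prompt engineering concrete, here is a minimal sketch of iterative refinement in an ongoing conversation. It assumes the OpenAI Python SDK and an API key in your environment; the model name and prompts are illustrative only, and other chatbots follow the same general pattern.

```python
# A minimal sketch of iterative prompt refinement, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable. The model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the running conversation and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    return response.choices[0].message.content

# A vague first prompt tends to yield a generic answer.
history = [{"role": "user", "content": "Explain photosynthesis."}]
first_reply = ask(history)

# Refining the prompt (audience, length, format) steers the next output;
# keeping earlier turns in `history` is what makes the exchange a
# conversation rather than a series of one-off queries.
history += [
    {"role": "assistant", "content": first_reply},
    {"role": "user", "content": ("Rewrite that for first-year non-majors "
                                 "in under 150 words, using one everyday "
                                 "analogy.")},
]
print(ask(history))
```

The refinement step, not the first prompt, is where most of the steering happens, which is why the constraints (audience, length, format) are worth teaching explicitly.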

Possible pitfalls when incorporating generative AI into the student learning process: 

  • Engenders a false sense of trust. Generative AI’s conversational manner can be concerning as it can create a false sense of trust in the content it delivers. We must be aware that generative AI may disseminate inaccurate, biased, or incorrect information, and we must caution individuals against treating it as a source of truth.  
  • Can generate inaccurate information. Generative AI has the ability to produce convincing responses to prompts and can offer properly formatted citations. However, it may not always provide accurate or reliable information, as it can replicate outdated information, misinformation, and conspiracy theories that exist in its data set. In all cases, the responses formulated by generative AI require careful vetting to ensure their accuracy. Therefore, it is important for students to take responsibility for verifying the accuracy and reliability of any information they obtain. This ensures students are not only learning correct information but also developing critical thinking and research skills that will serve them well in their academic and professional lives.
  • Can generate biased information. Just as generative AI can replicate inaccurate information, it can also replicate biased information. For example, if the training data used to train a generative AI contains biased or skewed information, the AI may inadvertently reproduce that bias in its results. Again, careful vetting of information is key to productive use of these tools. 
  • Human authorship. Generative AI can produce responses that closely resemble pre-existing text, making it difficult for the user to distinguish between AI-generated and human-authored text. Additionally, it is unclear which sources generative AI is drawing on. This obfuscation of human authorship can make it challenging to attribute sources accurately. In fact, if you ask some generative AI tools to add references with citations and links, they can invent false sources, a phenomenon called hallucinating (although other tools are already becoming more sophisticated and can pull citations from real sources).
  • Simulated emotional responses. Generative AI can, in written form, simulate emotional intelligence, empathy, morality, compassion, and integrity. Devoid of nuance, AI simulates an emotional output by scouring the troves of text and training it has been provided to offer what seems most likely to be an accurate emotional response. It will be important for users of these tools to remain cognizant that a simulated emotional response is not the same as true interpersonal human connection and understanding.
  • Lacks context. When it comes to specific courses, class discussions, or more recent events, generative AI may not be able to establish connections between writing or arguments as they relate to the context of a course. 
  • Proliferation of harmful responses. With prompt engineering, users can circumvent the safeguards put in place to prevent harmful, offensive, or inappropriate information from proliferating in responses. Additionally, there are concerns that misinformation (inaccurate) and disinformation (deliberately false) will be shared as accurate and reliable information.
  • Can potentially widen the education gap. The digital divide that can already exist between students with means and those without may be widened as more powerful AI tools are placed behind paywalls. In addition, gaps in digital literacy skills may exacerbate the widening of the education gap.  

While generative AI brings numerous benefits to students, the potential pitfalls must be considered. It is crucial for students to verify information, develop critical thinking skills, and exercise caution when relying on generative AI. These aspects are integral to developing digital literacy skills, which will be explored later in the series.  

In next week’s post, we will discuss how to decide about the use of AI in your class. If you have questions about generative AI and learning, book a consultation with a CAT specialist.

Emtinan Alqurashi, Ed.D., serves as Assistant Director of Online and Digital Learning at Temple University’s Center for the Advancement of Teaching. 

Jennifer Zaylea, MFA, serves as the Digital Media Specialist at Temple University’s Center for the Advancement of Teaching.