Using P.I. To Manage A.I.: Series Wrap-Up

Dana Dawson, Ph.D.

Over the course of this series, we have presented what we’re calling pedagogical intelligence as an approach to addressing the impact of technologies such as ChatGPT in our classrooms. It is more important than ever that we apply time- (and research-) tested practices shown to engage students in authentic learning. The posts in this series detail the practices we have recommended.

At the foundation of these suggestions is the belief that the majority of our students want to learn and will dedicate themselves to assigned activities if they see the value of course content, believe they are capable of succeeding, and see learning as a process of incremental improvement requiring practice and recalibration on the basis of feedback. 

We are not yet at the point where we can reliably detect the use of text-generating technology in writing, and we may never get to that point. At the same time, the capacity of large language models to replicate human intelligence will continue to improve. We can’t beat it, but we also don’t want to join it! Our task as educators is to show our students the value of finding and using their voice and the importance of their insight and creativity as living, feeling humans.

To this end, our final recommendation for applying pedagogical intelligence in an age of emerging artificial intelligence (AI) is to talk with your students about the implications of text-generation and other AI tools for their learning and for their post-graduation lives. Talk to your students about how these tools work – their potential and possible benefits but also their limitations and the risks they pose. Ask your students to stake out their own approach to the use of these tools. What is gained by learning to use them? What is lost by relying on them? Be transparent about your expectations for student use of AI tools. To help, the CAT has created a range of syllabus guidelines to assist you in articulating your own stance toward student use of AI tools in your classes.

If you’d like assistance in incorporating more iterative work in your class, you can make an appointment with a CAT staff member to talk through your plan. You can also arrange to have someone from the CAT come to your in-person or virtual class to give you feedback.

If you are intentionally using ChatGPT to teach in your classrooms this semester, please email us at cat@temple.edu and tell us about it. Consider that you can engage in the Scholarship of Teaching and Learning (SoTL) by designing a classroom study to evaluate the impact of ChatGPT on student learning. If you want to learn how to design a study related to your use of ChatGPT in the classroom, contact Benjamin Brock at bbrock@temple.edu for assistance. 

Finally, continue the conversation on the Faculty Commons! We’ve added a new discussion category for talking about the impact of Chat-GPT and other similar tools in the classroom.

Dana Dawson serves as Associate Director of Teaching & Learning at Temple’s Center for the Advancement of Teaching.

Using P.I. To Manage A.I. pt. 6: Tech Tools to Make Invisible Learning Visible Through Formative Assessments

Jennifer Zaylea, Emtinan Alqurashi

How do we know our students are learning, and how do students themselves gain insight into their own learning progress? One way is to use formative assessments throughout the course. Formative assessment helps faculty understand what students are learning and helps students see both their progress and their gaps. It prioritizes the learning process, providing immediate feedback to both instructors and students during a learning activity. This information is then used to modify subsequent learning activities to promote understanding of new content or to revisit prior content knowledge. In pt. 3 of this series, we introduced learning assessment techniques that provide ways to implement formative assessments. Consider also that incorporating technology into formative assessments can provide valuable insights into students’ progress and comprehension, making it easier to identify where students are excelling and where they need additional support. Beyond giving students (and instructors) an opportunity to assess learning, formative assessments can also help build student confidence by developing the skills and knowledge necessary to meet the goals of the course [1].

Digital assessment tools can immediately show where things are working well in a class, where a student requires more support for their learning, or where faculty might consider changing their own pedagogical approach. But what does a technology tool for formative assessment look like? Here are just a few technology tools and assessment ideas that work well with small groups of students, followed by those that work well with the whole class.

FORMATIVE ASSESSMENT FOR SMALL GROUPS 

Google Docs: Google Docs can be used to facilitate small-group activities, both in person and online, that involve collaborative writing, peer editing, research, and group projects. In-person students can work together on the same document, while online students can meet in Zoom breakout rooms to discuss. As the student groups work, the instructor can also participate by monitoring each group’s progress in its Google Doc and assessing where each group is making connections to the content and where additional support might be necessary for better understanding. Google Docs allows you to focus on the process of collaborative work instead of just the product, thereby short-circuiting the usefulness of generative AI tools.

One way to create a formative assessment is using Google Docs and the jigsaw technique. In this approach, students are initially divided into small groups (round 1) and each group is given a specific section of a larger text or article to read and summarize. Once each group has completed their task, groups are reshuffled (round 2) so that each group now has one representative from each article who is considered the expert in their topic. These experts then teach their respective sections to the new group, sharing their insights and understanding to help build a collective understanding of the material. This technique encourages students to work collaboratively, helps to foster deeper learning, and provides opportunities for instructors to recalibrate the next activity. 

Google Slides: Just like Google Docs, Google Slides can be an effective tool to show how students are implementing concepts from your course content. Students can use it to create collaborative presentations where they share their knowledge with one another and experience different perspectives on the course material. They can see others’ changes as they make them, and every change is automatically saved. In-person students can work on the same slide deck, while online students can work together in a Zoom breakout room to collaborate and edit the presentation. Google Slides activities can promote and encourage student collaboration, creativity, critical thinking, and problem-solving skills while also providing invaluable assessment opportunities to the instructor (particularly if the instructor drops into Google Slides as a participant, where they can offer praise or course correction in real time).

FORMATIVE ASSESSMENT FOR THE WHOLE CLASS 

VoiceThread: VoiceThread is a collaboration tool that reinforces students’ interaction with one another, provides peer and instructor feedback about where there might be knowledge gaps, and, when used effectively with the right questions, shines a light on students’ own life experiences as they tie to the course content. It allows students to respond using the media tool of their choice. The most effective VoiceThread activities tie course content to students’ lived experiences and make space for every student’s voice, both of which are incentives not to rely on ChatGPT and similar AI technologies.

Padlet: Padlet is a highly engaging online tool that allows instructors to create a virtual canvas for collaborative activities such as brainstorming, discussion, sharing resources, creating multimedia content, exit tickets, and interactive icebreakers. Instructors can design the Padlet to allow students to respond anonymously and upvote other students’ contributions. By allowing the students to respond anonymously, it provides opportunities for more honest responses that will allow instructors to better gauge content understanding across the classroom.  

Poll Everywhere: Poll Everywhere is an online polling tool with which instructors can create a variety of interactive activities for formative assessment in the classroom, including live polls, surveys, quizzes, word clouds, and open-ended questions. Students can respond anonymously to questions using their own devices, and instructors can view the results in real time. This enables instructors to quickly assess students’ understanding of the material and revisit the content to clarify muddy areas or reinforce accurate understanding of concepts. It also provides opportunities for students to reply honestly, interact when they might otherwise be too shy, gauge where their responses sit within the group, and develop community.

Custom Google Maps: Custom Google Maps allows you and your students to create maps with markers, routes, and layers. It can be a desirable alternative to ChatGPT because it supports a more interactive and immersive learning experience in which students explore, document, and present information in a spatially meaningful way. Students can collaborate on the same custom map for a class project or research assignment. For example, history and geography students can create maps highlighting important locations, events, or landmarks related to their topic and/or their own lived experiences. This tool is especially useful when the assessment addresses the relationship between a learned experience and a personal experience; the information can provide insight into culturally and socially relevant ties to, and departures from, course content.

As instructors, it is important to remember that assessments serve a broader purpose than just assigning grades. In fact, many formative assessments work better when offered as low- or no-stakes activities. When we intentionally incorporate technology into assessments, not just because a tool is convenient but because it serves the learning objectives, we help students become familiar with the technology while creating formative assessments that provide more immediate feedback for both faculty and students. And remember, when introducing assignments that allow the use of innovative technologies, you might also want to provide resources and guidance on how to use those tools ethically and responsibly. This will better prepare students for a world where evolving and innovative technology is a constant reality.

If you’d like to learn more about using and/or adopting one of these tools or exploring other tools, feel free to schedule a consultation with the CAT.  

Footnotes: 

[1] Angelo, Thomas & Cross, Patricia. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd ed., Jossey-Bass, 1993. 

Follow our companion Using PI to Manage AI CAT Tips Video Series. 

Emtinan Alqurashi is Assistant Director of Online and Digital Learning and Jennifer Zaylea is Digital Media Specialist at Temple’s Center for the Advancement of Teaching.

Using P.I. To Manage A.I. pt. 5: Summative Assessments Can Promote Reflection and Learning, Too!

Cliff Rouder, Ed.D., Pedagogy and Design Specialist

I have such a vivid memory of a student, upon my returning his exam, quickly looking at the grade, crumpling the paper into a wad, and making a 3-point shot into the wastebasket–right in front of me! Why did I spend so much time providing meaningful feedback on the exam? Even more exasperating than seeing my written feedback ignored was seeing similar errors on his next exam! UGH! It never dawned on me that I could do more than say to my students, “Take some time to review my comments so that you can see where you can improve, and don’t hesitate to come to office hours if you have any questions about the comments.”

Unbeknownst to me at the time, there was a term and theory in the cognitive and developmental psychology literature called metacognition. Proposed by John Flavell, then a professor at Stanford University, metacognition is often referred to as “thinking about one’s own thinking,” and includes a process of self-monitoring, self-evaluation, and self-regulation that leads to more intentional learning practices. While metacognitive strategies can be used in a number of ways to support our students’ learning, we can use our usually higher-stakes summative assessments (exams, projects, etc.) to help students “think about their thinking.” More specifically, we can help students explicitly reflect on elements of their exam preparation in order to make a plan to improve on the next high-stakes assessment.

Enter the metacognitive activity of exam wrappers. Dr. Marsha Lovett and her colleagues at Carnegie Mellon University are credited with creating the exam wrapper technique. Wrappers were developed in reaction to their findings that many successful high school students were arriving at college with study habits that are ineffective for higher order learning. Exam wrappers ask students to answer a series of questions that require them to reflect on how they prepared for the exam, whether the results were what they expected, and how they might prepare differently for the next exam.

Saundra McGuire, author of Teach Students How to Learn: Strategies You Can Incorporate Into Any Course to Improve Student Metacognition, Study Skills, and Motivation, states that there is a metacognitive equity gap. Students from under-resourced public schools often have not yet learned how to reflect on their learning and thus continue to employ study strategies that are less effective than they could be. While all students can benefit from an exam wrapper, the activity may be especially valuable for these students.

The Protocol

Faculty prepare a series of metacognitive questions that help students think about their preparation and performance on a summative assessment. Carnegie Mellon’s Eberly Center has example exam wrappers for different disciplines. What’s nice is that you can tailor these questions to the unique nature of your summative assessment: typical questions probe the specifics of students’ study and exam-preparation strategies, ask students to assess the types of errors they made on the exam, and ask them to develop a concrete plan to improve.

Exams are returned to students wrapped in a piece of paper with the metacognitive questions. Students are then asked to reflect on the questions and develop a plan for the next exam. They submit a copy of the plan to you for homework or extra credit–whichever way you think works best. You review the plan and seek clarification if responses are not specific enough. You can then remind them of their plan and perhaps check in with the class occasionally to make sure they are following the plan as they prepare for the next exam. If you give your exams online or if you’re a fan of reducing paper, the questions can be posed in Canvas.

Lisa Kurz at Indiana University’s Center for Innovative Teaching and Learning points out that exam wrappers can also be given to students before they take an exam. While the name is a little bit of a misnomer for this use, she states, “The exam wrapper provided before the exam might ask students to create exam questions at different levels of Bloom’s taxonomy, predict what major topics will appear on the exam, how much of the exam will be devoted to each topic, and what kinds of questions will be asked. Students can then use this ‘exam blueprint’ as they study.” This works well as a solo, pairs, or small group activity.

If you’d like to learn more about using or adopting one of these exam wrappers or creating your own, feel free to schedule a consultation with the CAT. So, wrap your exam and help your students become metacognitive pros!

Follow our companion Using PI to Manage AI CAT Tips Video Series.

Cliff Rouder serves as Pedagogy and Design Specialist at Temple’s Center for the Advancement of Teaching.

Using P.I. To Manage A.I. pt.4: Iterative Work to Strengthen Student Engagement

Jeff Rients

We sometimes imagine education as a linear, cumulative process. As we acquire new knowledge and master new skills, we grow toward our future selves. Our education system reinforces this idea as we move from 1st grade to 2nd grade, sophomore to junior, or undergrad to graduate student in a linear progression. However, new learning always exists in the context of our prior learning. What we learned before is not something passive and fixed that we build upon; instead, our new learning is in conversation with old learning. The new selves that learning produces don’t exist separate from our old selves, but because of them.

We can create better learning opportunities for our students, building upon the relationships between past and present learning, if we structure our courses not as linear movement through topics or units, but as a series of feedback loops where prior learning is explicitly invoked and connected to present learning. This iterative work takes the form of embedding every new learning opportunity into a context.

Questions to Connect

For example, some instructors start a course in a sequence with an overview of the key topics of the prior course or some sort of review activity. An iterative approach would then take that review and ask students to explicitly connect it to future learning, with a prompt such as “Looking over the key takeaways of the prior semester, which of those topics do you see coming into play in each of the units outlined on the syllabus?” This can also be done at the individual unit or project level, with prompts such as “What skills or knowledge do you already possess that will be crucial to your success at this task?” Explicitly asking these questions creates new connections between old learning and new, strengthening both in the students’ minds.

Draft and Revision

An iterative approach also provides more opportunities for students to receive feedback. The classic example of this is the semester-long paper, where students hand in pieces (thesis statement, bibliography, conclusion, etc.) and/or drafts prior to the final submission. Each of the earlier submissions receives peer and/or instructor feedback, giving the students the information that they need to improve their approach for the next submission. In this way small mistakes can be pruned away and misunderstandings of the task addressed before the final submission.

Recurring Examples

The use of recurring examples brings earlier content into later contexts. For instance, a case study introduced early in the semester can be redeployed, either with changes that challenge the students’ new knowledge or as an opportunity to reflect on what they have learned since the case originally appeared. Simple prompts like “How would you handle that old case differently, knowing what you know now?” help make students’ new learning visible. Students can also be asked to critique earlier, intentionally simpler examples to reveal the complexities those cases glossed over. You could even task students with writing a more complete version of the case, one that includes the nuances they have learned since the case was first taught.

Revisiting Prior Work

Many times our students hand things in to us on a “fire-and-forget” model: the ordeal of the task is over, and they would just as soon not think about it again. Painful as it might be, we need to help them keep thinking about that work while showing them that their skills and knowledge are growing. For example, you could ask students to complete a short writing activity describing how they would do an earlier project differently, knowing what they know now. This can even be formalized into a system where students are given the opportunity to revise earlier submissions for a higher grade, provided that they incorporate one or more new skills or concepts introduced after the original assignment.

Iterative work benefits instructors as well as students. If our classes feature more tasks with built-in continuity, it will be easier for us to see our students in a more comprehensive way. Their growth as thinkers and their individual professional or scholarly identities will become clearer to us. That puts us in a better position to offer feedback that makes sense in the context of the individual learner. It also allows us to see when a student’s thinking or writerly voice suddenly changes. Maybe the student has stepped up to the challenge of the course in a new way. Or maybe the shift is due to some sort of distress and the student needs help (possibly including a referral to the CARE Team). Or perhaps the change in voice is an indicator that the student is getting unacknowledged assistance. Which of these is true, and whether any unacknowledged assistance rises to the level of cheating, requires careful consideration and investigation. But without the record that iterative work provides, these issues are much harder to identify.

Note that iterative work is not offered here only as an “anti-cheating” technique. Easier detection of changes in student writing is just one benefit. The real gain of this approach is that your students will be less likely to see your syllabus as a series of randomly ordered individual tasks and more likely to see the grand trajectory of their learning. We all too often assume that students can see all the linkages we have made between various content and activities, but those linkages are often readily visible only to experts such as ourselves. Iterative work helps students see how the moving parts of the course fit together and how the various things they have learned form a coherent education.

If you’d like assistance in incorporating more iterative work in your class, you can make an appointment with a CAT staff member to talk through your plan. You can also arrange to have someone from the CAT come to your in-person or virtual class to give you feedback.

Follow our Using PI to Manage AI blog series at EdVice Exchange.

Follow our companion Using PI to Manage AI CAT Tips Video Series.

Jeff Rients serves as an Assistant Director at Temple’s Center for the Advancement of Teaching.

Using P.I. To Manage A.I. pt. 3: Learning Assessment Techniques That Help Build Students’ Self-efficacy

Cliff Rouder, Ed.D.

“I think I can, I think I can,” goes a well-known line from the classic children’s story The Little Engine That Could. In this blog post, we’re going to answer the question, “How can we, as instructors, help our students to ‘think they can’?” In other words, how can we help develop students’ self-efficacy?

Let’s start by exploring the concept of self-efficacy. Self-efficacy refers to the level of a person’s confidence in their ability to successfully perform a behavior. Well-known psychologist and researcher Albert Bandura was the first to demonstrate that self-efficacy has an effect on what individuals choose to do, the amount of effort they put into doing it, and the way they feel as they are doing it. Subsequent theorists have also demonstrated that self-efficacy contributes to motivation and is a predictor of intentions to change behavior. Still other researchers have demonstrated the ability of self-efficacy to predict outcomes of student success, such as cumulative GPA, grades, persistence, and stress/strain/coping. 

Clearly, academic self-efficacy matters on a variety of important fronts. Our next task is to examine our role in developing students’ self-efficacy. Here we can get a bit of guidance from Bandura, who posited that there are four sources of our self-efficacy development:

  1. Performance Accomplishments: Previous successes raise mastery expectations, while repeated failures lower them. Here, we can give students opportunities for practice that provide a moderate (neither too easy nor too hard) challenge.
  2. Vicarious Experience: Also known as modeling, this source of self-efficacy comes from observing others perform activities successfully. Here, we can create opportunities for peers (and you) to model productive ways of approaching course requirements.
  3. Social Persuasion: People can be led, through suggestion, into believing that they can cope successfully with specific tasks. Here’s where providing frequent, targeted, and motivating feedback can help. See the CAT’s recent EdVice Exchange blog post on providing feedback that nourishes the mind and heart.
  4. Affective Arousal: Some feelings, e.g., stress and anxiety, can lower self-efficacy. Here’s where acknowledging and addressing these feelings can help. 

Below are just some of the many learning assessment techniques that can target and develop students’ sources of self-efficacy. (For a comprehensive collection of learning assessment techniques, check out Barkley and Major’s recent book, available online through Charles Library.)

Think-Pair-Share

Pose a challenging question to students that they must consider alone and then discuss with a neighbor before settling on a final answer. This is a great way to have students model their thinking for their peers. A think-pair-share can take as little as three minutes (quick-response) or longer (extended response), depending on the question/task.

Classroom Polling

Ask students questions throughout your lecture using our newly licensed polling software, Poll Everywhere. Using this method, the learner and instructor can both check understanding anonymously. An alternative low-tech approach is to give students a set of numbered index cards that correspond to the answer choices and ask them all to hold up their number at the same time.

Group Formative Quizzes

Have students complete a quiz individually and then work with a group to compare and discuss answers before submitting a group answer. An added possibility is to have the group use “scratch-off” cards called Immediate Feedback Assessment Technique (IF-AT) sheets. This allows students to assess their understanding as well as practice articulating and explaining concepts to classmates.

One-Minute Paper

Either during class or at the end of class, ask students to produce a written response to a question. This technique can be used to collect feedback on understanding by asking students to identify what they thought the key points of a lecture were, to name the most confusing point, or to voice a question. It can also be used to “take the temperature” of the class and address affective concerns such as stress and anxiety related to an assignment or assessment.

Student-Generated Test Questions

These allow you to assess what students consider to be the most important content, what they understand as useful test questions, and how well they understand the material. They allow students to practice organizing, synthesizing and analyzing large amounts of information in order to prepare for summative assessments.

Rubrics

Sharing the rubrics you have for course assignments and projects, and, most importantly, giving students practice using those rubrics to confirm they understand your expectations, can help lower anxiety about unclear expectations.

You may be asking yourself, “Do I have the self-efficacy to try a new learning assessment technique?!” If so, great! Try it out. Just remember that ultimately your assessments should be chosen based on whether they are helping your students meet the course goals, so use them intentionally. Also, be clear about (and make transparent to your students) the purpose and value of engaging in the activity/assessment.

If your self-efficacy is low, then look to your peers who can model how to successfully implement these learning assessment techniques, help persuade you to try one, and/or talk with them about the apprehensions you might have and how they addressed them. Then, be sure to reflect on the experience and see what you might improve the next time you try it. Getting feedback from your students can also help you assess what might need to be tweaked the next time you implement the technique.

I also invite you to make an appointment with a CAT staff member to talk through your plan. You can also arrange to have someone from the CAT come to your in-person or virtual class to give you feedback. So, let’s help our students go from “I don’t think I can” to “I know I can!”

Follow our Using PI to Manage AI blog series at EdVice Exchange.

Follow our companion Using PI to Manage AI CAT Tips Video Series.

Cliff Rouder serves as Pedagogy and Design Specialist at Temple’s Center for the Advancement of Teaching.

Using P.I. To Manage A.I. pt. 2: Designing Meaningful Assessments by Applying the UDL Framework

Dana Dawson, Ph.D.

Take a moment to think about the last time you worked really hard on something, a time when you focused intently and devoted care and attention to completing a task. It might have been a project related to a hobby, something you did to support a family member, or a task required for your job. Consider what prompted you to spend time and effort on the task. Was it because you enjoy the process? Because the outcome was important?

As we think about how to teach in the face of ready access to content-generating tools such as ChatGPT, it is important that we reflect on the “why” of the learning activities and assessments we are designing for our classes. Our students are influenced by the same types of motivations that drive us as instructors to make time for certain activities and deprioritize others. Our attention is a valuable resource… and a finite one! We are all constantly making decisions about how to allocate our time and attention. So, if we want our students to invest their time and attention in our coursework and to use AI tools to benefit learning rather than as time-saving shortcuts, we must design activities and assessments that are meaningful and that warrant dedication of effort. Universal Design for Learning provides a valuable framework for identifying strategies to recruit our students’ interest, encourage sustained effort, and promote self-regulation.

Recruiting Interest

Here at the CAT, we often encourage faculty to design authentic learning tasks. An authentic learning task is an activity or assessment that connects with our students’ lived reality. Are we designing activities and assessments that encourage the kinds of thinking and problem-solving our students do in their lives outside the classroom or that they might anticipate doing in a future field of work or study? By creating a safe space for students to take risks and test new concepts or skills, we provide a context within which students can experience learning and personal growth.

The first step toward authentic learning is to reflect on whether your assessments are designed to assess the learning goals for the course. We often default to assessments such as essays or exams to gauge student understanding, but there may be better mechanisms for making student learning visible. Assessments should measure student progress toward the learning goal as directly as possible. What can students do to show they have mastered new skills? How can they document their learning? Creating multiple assessment options, multiple paths to success, will help students feel a sense of agency in their learning and discourage turning to cheating.

Sustaining Effort & Persistence

Deep learning requires engagement over time rather than in short, disconnected bursts. Our students are more likely to sustain their efforts if we clearly explain the purpose of the activities and assessments we assign, which of course requires clarifying for ourselves what the purpose is in the first place. Take a moment to reflect on the course content, activities and assessments you assigned during the last week of classes you taught. Was there a clear reason for each activity? Did you explain to students the purpose behind what you assigned to them?

We also know that our students are more likely to persist in learning-related tasks if they feel a sense of belonging in our classes. Give students opportunities to work with one another to solve problems and to share the choices they've made relative to course activities. A warm demeanor and supportive stance can help here, too. Students are more likely to persist without taking shortcuts if they see the instructor as a partner in their learning rather than as the course antagonist.

Self-Regulation

While our students will be more motivated to dedicate time and attention to course-related tasks if they understand the purpose, we do not have to be the sole generator of that purpose. Give students opportunities to connect course content with their own goals. What do they care about, and how might your course content relate? Studies have shown that simply encouraging students to reflect on their values in class promotes motivation and engagement. Provide tools such as rubrics that allow students to monitor their own learning and improvement over time, and ensure feedback is targeted and communicates how students can build on where they are.

It can also be helpful to explicitly discuss with students the kinds of skills and practices they need to cultivate to succeed in your course. Instructors tend to assume that students were taught how to study or take notes earlier in their education. But for some students, your course may be the first in which a systematic approach to studying is needed, or it may require different study skills than any course they have taken before. We need to be able to articulate to our students what being a good student looks like in our course.

For more information on Universal Design for Learning, visit the CAST website and for help implementing the above ideas, don’t hesitate to reach out to the CAT for a one-on-one consultation.

Follow our Using PI to Manage AI blog series at EdVice Exchange

Follow our companion Using PI to Manage AI CAT Tips Video Series

Dr. Dana Dawson serves as Associate Director of Temple University’s Center for the Advancement of Teaching.

Using P.I. To Manage A.I. pt. 1: Introduction

Stephanie Laggini Fiore, Ph.D.

We are all teaching in a new reality created by powerful text-generation tools like ChatGPT that allow us and our students to compose text on demand. Lori Salem, Assistant Vice Provost and Director of the Student Success Center, and I wrote an initial post about this last semester. As instructors, we will all need to think hard about how to manage and harness the power of these tools. I use the words "manage" and "harness" intentionally here, as we cannot pretend that we can entirely ban these tools, nor can we rely on anti-AI detectors (which, I can assure you, will not be foolproof). In addition, we have a responsibility as educators to guide our students in the ethical and effective use of AI tools that will be available to them beyond the university in their workplaces and in their daily lives. There is no body of research (yet) that can guide us in using AI for teaching, so we are all feeling our way along by reading, debating, and experimenting with some best ways forward.

While the teams at the CAT and the Student Success Center are working towards developing a set of guiding principles for managing AI in our classrooms, the way to start right now is by considering how PI can help us manage AI. What is this magical PI, you say? Does it have something to do with Tom Selleck (if you're my age, you get that joke)? Is it a fancy new counter-AI robot that will solve all of our problems? No, my dear colleagues, it is simply an invitation to examine the fundamentals, the Pedagogical Intelligence that should be the first stop on the road to a set of principles for thinking about teaching in the presence of Artificial Intelligence. In the CAT's new spring series, Using PI to Manage AI, we will be exploring these pedagogical fundamentals, both on our blog and in our CAT Tips series on social media, as a way to start this conversation. The topics we will explore on our EdVice Exchange blog are all evidence-based ways of designing student assessments of learning that encourage academic honesty, motivation, and a desire to learn. We will follow each blog post with a CAT Tips video on social media outlining a few concrete ways to implement these assessment strategies in your classes. If you have not recently done a deep dive into how well your assessments evaluate learning (and further it by engaging students in meaningful learning tasks), now is the time!

We will start the series by exploring how to design assessments that are meaningful for students, allowing them to connect to what we are teaching in ways that help them see the value of engaging in the work. The following blog post in the series will discuss how to use learning assessments to build student self-efficacy in ways that help them do the work well and feel confident in what they are learning. Then we will unpack iterative work that provides feedback and allows for revision along the way. We will subsequently examine summative assessments and strategies for supporting students to think reflectively about how they prepare for these usually higher-stakes assessments. Finally, we will complete the series by introducing some educational technology tools that can assist us in implementing better assessment protocols.

It will be important to approach this new challenge as an opportunity. It will necessarily push all of us to think deeply about how we are teaching and how we are assessing learning, and in so doing, lead us to more effective practices. We may surprise ourselves by discovering that AI itself can be useful in exciting new ways for learning. In the meantime, know that we at the CAT are on this journey with you, and will be working to support you as you support our students’ learning.  

Note: If you are intentionally using ChatGPT to teach in your classrooms this semester, please email us at cat@temple.edu and tell us about it. Consider also that you can engage in the Scholarship of Teaching and Learning (SoTL) by designing a classroom study to evaluate the impact of ChatGPT on student learning. If you want to learn how to design a study related to your use of ChatGPT in the classroom, contact Benjamin Brock at bbrock@temple.edu for assistance. 

Stephanie Laggini Fiore, Ph.D., is Associate Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.

“Students are using AI to write their papers, because of course they are.”

Lori Salem and Stephanie Fiore

So says the title of a recent article in Vice that has been making the rounds at Temple. The article describes a new tool called OpenAI Playground that generates text on demand. Playground uses GPT-3, a newly developed machine-learning language model, to compose the text. GPT-3 is also the power behind ShortlyAI, another text-generation tool offering a somewhat different set of features. The sentences generated by both programs are surprisingly good – they flow, and they have a clear and simple prose style. A student could theoretically type their essay prompt into Playground or Shortly, and the program would generate the essay for them. And because the sentences produced by GPT-3 are entirely original, the resulting text would not be flagged by a plagiarism detector like Turnitin.

So, is this the end of writing instruction as we know it?  We think not.  But these new programs do have implications for teaching, and that’s our focus in this post.    

We tested both tools to get a sense of what they can do and what it is like to use them.  Both tools make it easy to produce short (paragraph-long) texts that clearly and coherently state a few relevant facts.  It’s possible to imagine a student using them to produce short “blog-post”-type essays, which is exactly what the students in the Vice article say they do. At least for now, neither program would make it easy to produce a longer text, nor to produce a text that was argument-driven, rather than factual. 

But more importantly, these programs don’t—and can’t—help with the real work of writing.  They can create sentences out of sentences that have already been written, but they can’t help writers find the words to express the ideas that they themselves want to express.  If the purpose of writing was simply to fill a page with words, then the AI tools would suffice.  But if the writer wants to communicate something, and therefore cares what ideas and arguments are being expressed, then AI writing tools are not helpful.  

Don’t take our word for this.  In the sidebar, we provide information about how to access and use Playground and Shortly.  Try them and see if you can get them to write something that you can genuinely use.

If you find, as we did, that AI writing tools are not useful when the writer cares about the content of the writing, then we’re halfway to solving the problem of students using AI tools to plagiarize.

The Plagiarism Arms Race

Just because AI-generated texts are undetectable right now doesn't mean that will always be the case. Someone somewhere is probably already working on a tool that will detect texts written by GPT-3, because of course they are. Students figure out ways to cheat, companies invent tools to catch them, and then they sell their inventions to us. This is just the latest iteration of that cycle.

To that point, have you seen the YouTube videos instructing students on how to beat Proctorio at its own game? The same Proctorio for which we pay a hefty annual subscription fee?

There has to be a better way, right?

A better way, part I: Encourage Academic Honesty by Creating Better Assignments

This new AI tool is a "threat" to academia only insofar as we ask students to complete purposeless writing assignments that rely on lower-level thinking skills, requiring little more than the reiteration of factual information. The real answer to cheating systems that become more sophisticated is to create better assessments and to create conditions in our classrooms that encourage academic honesty.

There is some very good research on what works to encourage academic honesty. This is a longer discussion than we can take up here, but in essence, we should think about the factors that lead to cheating behaviors and work to reduce them. These include 1) an emphasis on performance (rather than learning); 2) high stakes riding on the outcome; 3) an extrinsic motivation for success; and 4) a low expectation of success. There are very intentional steps that we as instructors can take to reduce these factors, including adjusting our assessment protocols to rely less heavily on high-stakes, one-and-done writing assignments, centering writing assignments on issues students care about, and scaffolding writing assignments to allow for feedback and revision.

We also need to look at the kinds of assessments we are using in our courses. The more we move towards authentic assessments and grounded assessments (those designed to be unique to the course you are teaching in the moment, often including time, place, personal, or interdisciplinary elements that make them not easily replicable), the better off we are. There is a lot of work to be done here, as we often rely on the kinds of assessments we had as students, very few of which were either authentic or grounded. It is much harder to cheat on these kinds of assessments.

Finally, findings from some interesting research on academic honesty suggest that communicating with students about academic honesty works better than you might think: reminding them of their ethical core and focusing on what academic honesty looks like and why it is expected. This is especially effective when timed close to an assessment.

Try it for yourself!
OpenAI Playground

How to try it: Use the link above to open the website and make a free account. From the home screen, click on the "Playground" tab (top right). Then enter an "instruction" in the main text box. The instruction might be something like "Describe [topic you are writing about]" or "Explain [something you are trying to explain]." Click "Submit," and your results will appear. If you don't get what you were looking for, you can keep refining and resubmitting your instructions.

ShortlyAI

How to use it: Use the link above to open the website and make a free account. Enter a title and a sentence or two, and set the output length to "a lot." Then click the "Write for me" button. If you like the way the text is going, you can type another sentence or two and click "Write for me" again. Or you can refine your original title and first sentence and start over.

Please share your results! Copy the text(s) that you "write" and email them to Lori.salem@temple.edu along with any comments you care to offer about the texts or your experience producing them.

A better way, part II:  Adapt instruction to reflect new writing practices

Once upon a time, writing instruction centered around penmanship and spelling.  Those days are gone because developments in the technology of writing (from pens, to typewriters, to word-processors) drove changes in writerly practice, which eventually led to changes in writing instruction. 

Automated text generators are just the latest technological innovation, and they have already changed the practice of writing in journalism, online marketing, and email. And why not? There is great value in making certain kinds of writing more efficient.

Our approach to writing instruction will need to adapt to this new reality. It's not hard to imagine a future in which universities teach students how to use AI tools to generate text for some situations, even as they disallow the use of AI tools for others.

Lori Salem serves as Assistant Vice Provost and Director of the Temple University Student Success Center. Stephanie Fiore is Associate Vice Provost and Senior Director of Temple's Center for the Advancement of Teaching.