Generative AI tools like ChatGPT are already being used by students and are likely to become ubiquitous in the workplace of the future, so ignoring them is not a viable option. We need to think actively about how students are achieving the learning goals of our courses in light of their possible use of these tools, and also about whether our courses prepare them for future careers in which they may use AI tools regularly.
That being said, as with any other technology, we and our students need to approach the use of these tools critically. With regard to your teaching responsibilities, we see three approaches to the use of AI:
Integrate the use of generative AI into your course activities and assessments;
Integrate critical examination of generative AI tools themselves into your course activities and assessments; and
Work around AI by designing assignments that are AI-resistant.
If you’re not sure where to begin in deciding which of these approaches to take, we offer this decision tree as a helpful tool. Determining how you will address generative AI in your classes will require you to reflect on your level of familiarity with the tools and take steps to become more familiar with them if you haven’t already. You will also need to examine the learning goals for your course, determine whether AI can be useful in helping students reach those goals, and consider your readiness to carefully vet content students create with the help of generative AI. Finally, whatever your decision, you’ll need to speak with your students about the use of AI in your course.
In upcoming blog posts in this series, we will address strategies for the three approaches listed above as well as how to speak with your students about your approach. Note that you don’t have to pick just one of these approaches. We urge instructors to consider a mixed approach to incorporating AI in the classroom. In some cases, the use or analysis of content created by generative AI may help your students achieve a particular learning outcome and in other cases, it may be counterproductive. Revisit this decision tree not only at the outset of your class planning, but as you consider activities and assessments throughout the semester. Then articulate your decision to your students clearly on your syllabus. We have made syllabus guidance available to help you craft a coherent syllabus statement regarding use of AI in your courses.
Follow this blog series for continuing guidance on how to think about AI in your classes, and remember that you can make an appointment with a CAT developer to discuss your decision.
Dana Dawson is Associate Director of Teaching and Learning at Temple’s Center for the Advancement of Learning. Decision Tree graphic by the Center’s Graphic and Design Specialist, Emily Barber.
Generative AI has revolutionized the way we interact with technology, opening new doors to faster and more efficient organization and allowing more time for the creative and conceptual aspects of learning. However, these tools also pose considerable challenges, especially where they could impact how learning works in our classrooms. In our first blog in the AI series, we discussed what generative AI is and invited you to explore one of these tools, ChatGPT. In this second blog in the series, we explore some of the benefits and possible pitfalls of using generative AI in the student learning process.
Some benefits of incorporating generative AI into the student learning process:
Access to large quantities of information. Generative AI is trained on a vast amount of information from a variety of sources. With this access to an enormous amount of information, it can respond to a wide range of language-related tasks. Used productively, this greater access to information can help students gain multiple perspectives on a topic and spark inspiration that leads to greater creativity and ideation.
Speed of responding or processing. Generative AI provides quick responses to queries, prompts, and requests. This is particularly helpful when we need real-time responses or quick turnarounds for questions or requests. Rather than spending excessive time searching for information, students can gather information efficiently, and spend time focusing on comprehension and analysis of information. Ultimately, this helps them advance to more complex thinking and learning.
Automation of routine tasks. Because generative AI can produce human-like text, it can be used as a starting point for any type of content, from drafting emails and articles to creating social media posts or course outlines. It can also be used as an editor, developing an unstructured transcript or notes into well-structured text. Students can save time and effort in the organizing and editing processes, allowing them to focus on the more creative and strategic aspects of their work.
Conversational manner of speech/writing. The human-like conversational ability makes AI an enjoyable discussion partner (but also raises serious concerns – see pitfalls below). In fact, generative AI can refine its outputs in response to your prompts in an ongoing discussion, helping you and your students to iteratively refine your thinking. For this reason, it can be very valuable to learn simple prompt engineering, the process of refining input instructions to achieve desired results. This can enhance the conversational capabilities of generative AI.
Assistive technology. Generative AI has the promise to improve inclusion for people with communication impairments, low literacy levels, and those who speak English as a second language. It can also improve ease of communication and comprehension in a variety of ways.
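The prompt engineering mentioned above can be made concrete with a short sketch. The idea is simply to layer a role, context, and constraints onto a bare instruction; the helper below is hypothetical (it is not part of any AI vendor's tooling), but it illustrates the kind of structure that tends to produce better responses.

```python
def build_prompt(task, role=None, context=None, constraints=None):
    """Assemble a structured prompt from optional refinement pieces."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(task)
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

# A vague first attempt...
draft1 = build_prompt("Explain photosynthesis.")

# ...iteratively refined with a role, audience context, and limits.
draft2 = build_prompt(
    "Explain photosynthesis.",
    role="a patient biology tutor",
    context="the reader is a first-year undergraduate",
    constraints=["keep it under 150 words",
                 "end with one check-in question"],
)
```

In practice, the refinement loop continues: you read the tool's response to a prompt like `draft2`, notice what is missing or off-target, and fold that observation into the next version of the prompt.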
Possible pitfalls when incorporating generative AI into the student learning process:
Engenders a false sense of trust. Generative AI’s conversational manner can be concerning as it can create a false sense of trust in the content it delivers. We must be aware that generative AI may disseminate inaccurate, biased, or incorrect information, and we must caution individuals against treating it as a source of truth.
Can generate inaccurate information. Generative AI can produce convincing responses to prompts and can offer properly formatted citations. However, it may not always provide accurate or reliable information, as it can replicate outdated information, misinformation, and conspiracy theories that exist in its training data. In all cases, responses formulated by generative AI require careful vetting to ensure their accuracy. It is therefore important for students to take responsibility for verifying the accuracy and reliability of any information they obtain. This ensures students are not only learning correct information but also developing critical thinking and research skills that will serve them well in their academic and professional lives.
Can generate biased information. Just as generative AI can replicate inaccurate information, it can also replicate biased information. For example, if the training data used to train a generative AI contains biased or skewed information, the AI may inadvertently reproduce that bias in its results. Again, careful vetting of information is key to productive use of these tools.
Human authorship. Generative AI can produce responses that closely resemble pre-existing text, making it difficult for users to distinguish between AI-generated and human-authored text. Additionally, it is often unclear which sources generative AI is drawing on. This obfuscation of human authorship can make it challenging to attribute sources accurately. In fact, if you ask some generative AI tools to add references with citations and links, they can invent false sources, a phenomenon called hallucination (although other tools are becoming more sophisticated and can pull citations from real sources).
Simulated emotional responses. Generative AI can, in written form, simulate emotional intelligence, empathy, morality, compassion, and integrity. Lacking genuine feeling, it produces emotional-sounding output by drawing on the troves of text on which it was trained, offering what seems most likely to be an appropriate emotional response. Users of these tools should remain cognizant that a simulated emotional response is not the same as true interpersonal human connection and understanding.
Lacks context. When it comes to specific courses, class discussions, or more recent events, generative AI may not be able to establish connections between writing or arguments as they relate to the context of a course.
Proliferation of harmful responses. With prompt engineering, users can circumvent the safeguards put in place to prevent harmful, offensive, or inappropriate information from being proliferated in responses. Additionally, there are concerns that misinformation (inaccurate) and disinformation (deliberately false) will be shared as accurate and reliable information.
Can potentially widen the education gap. The digital divide that can already exist between students with means and those without may be widened as more powerful AI tools are placed behind paywalls. In addition, gaps in digital literacy skills may exacerbate the widening of the education gap.
While generative AI brings numerous benefits to students, the potential pitfalls must be considered. It is crucial for students to verify information, develop critical thinking skills, and exercise caution when relying on generative AI. These aspects are integral to developing digital literacy skills, which will be explored later in the series.
In next week’s post, we will discuss how to decide about the use of AI in your class. If you have questions about generative AI and learning, book a consultation with a CAT specialist.
Emtinan Alqurashi, Ed.D., serves as Assistant Director of Online and Digital Learning at Temple University’s Center for the Advancement of Teaching.
Jennifer Zaylea, MFA, serves as the Digital Media Specialist at Temple University’s Center for the Advancement of Teaching.
Welcome to our summer 2023 weekly blog series A Survival Guide to AI and Teaching! In today’s rapidly evolving world of technology, one innovation that is altering higher education is generative AI (such as ChatGPT, Dall-E, Bard, or Bing; visit SourceForge for their 2023 Best of listing). The impact of advanced generative AI on higher education assignments and assessments is complex. Our goal in this series is to help you understand and proactively address student use of generative AI at Temple. Over the coming weeks, we will provide an overview of how generative AI works, discuss some benefits and pitfalls of using these tools, help you decide whether you should integrate generative AI into your instruction, and offer supportive suggestions whatever your decision may be. We will address the topics of digital literacy, academic integrity, and how to talk to your students about ChatGPT and similar tools. Join us as we delve into the influence of AI on the future of higher education and the ethical considerations it engenders for students and faculty. In this first blog post in the series, we will explore the question of what generative AI is and how it works.
What is generative AI?
Generative AI (such as ChatGPT, Bard, Bing, Dall-E, etc.) is a type of artificial intelligence that has the capability to create surprisingly coherent text and images. These tools are built on large language models that can write and converse with users by drawing on an enormous corpus of text from a variety of sources—including books, web texts, Wikipedia, articles, internet forums, and more—on which they have been trained. These models are growing progressively larger: for instance, GPT-4, the successor to GPT-3, is reported to be substantially larger than GPT-3’s 175 billion parameters, though OpenAI has not disclosed its exact size. Generative AI does not have cognition; that is, it can’t think. Instead, much like the autocomplete function in other applications you use every day, it works by finding and replicating the most common patterns in the language in its training data. Note, however, that AI technology is constantly evolving and improving, and we cannot be sure what its future capabilities will be. For an expanded explanation of how these technologies work, see the article “A curious person’s guide to artificial intelligence.”
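The autocomplete analogy can be made concrete with a toy model. Real large language models use neural networks over billions of parameters, but the core idea of predicting the likeliest next word from observed patterns can be sketched (in drastically simplified form) with simple word-pair counts:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words most often follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word most commonly seen after `word` in the training text."""
    counts = following.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# A tiny "training corpus" standing in for the web-scale text real models use.
corpus = ("the cat sat on the mat the cat chased the dog "
          "the cat sat on the rug")
model = train_bigrams(corpus)
```

Calling `predict_next(model, "the")` returns `"cat"` simply because "cat" follows "the" most often in the training text: the model replicates the dominant pattern in its data without any understanding of cats. The same limitation scales up; it is why generative AI can reproduce whatever biases or errors its training data contains.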
How does AI work?
Generative AI tools have been developed to provide a detailed response, in a conversational way, to a given prompt in the form of a question, request, or instruction. Rather than providing the user a list of information sources generated by a prompt, as search engines like Google do, they structure the response in a more human-like question-and-answer format.
We recommend you spend time with a generative AI tool to understand what it produces. Not only will this reduce any anxiety you may be feeling about this unfamiliar technology, but it may help you discover unrealized uses for generative AI in your courses and your discipline. Using prompts from your own classes will give you a good idea of what students can generate using the tool. We’ll lead you through the steps for interacting with one of the AI tools, ChatGPT, which is available by visiting OpenAI’s website.
Let’s try it out!
Step 1: Visit OpenAI and create a free account by clicking the TRY CHATGPT button and then the SIGN UP button.
Step 2: Become familiar with the interface. Navigation tool descriptions are provided below the image.
Step 3: Use the text entry box (area 8, “Send a message…”) at the bottom of the screen to type a prompt. This can be a question or a specific request. Press the paper airplane icon to submit. We have provided a video example, Prompting In ChatGPT, showing how to do this.
Step 4: ChatGPT will “type” out the response in real time. When it’s done, you can click the REGENERATE RESPONSE button if you want to see different wording of the output, and/or provide feedback on the accuracy of the response by clicking the THUMBS UP or THUMBS DOWN buttons. This will help teach the AI to work better with you.
* Note: Each prompt either starts or continues a conversation. You can enter follow-up prompts or change the subject entirely; ChatGPT remembers the earlier exchanges within the same conversation.
Step 5: If you don’t think the response was good enough, you can simply ask it to try again and provide new parameters or details to guide its response.
Step 6: You can also let ChatGPT know when it’s incorrect about something.
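For the curious, the conversation memory behind the follow-up prompts in the steps above has a simple mechanism: chat tools typically resend the whole transcript so far with each new prompt, so the model always sees the full context. The sketch below models only that bookkeeping; the function names are illustrative and are not OpenAI's actual client code.

```python
def start_conversation(system_note=None):
    """Begin a new conversation transcript: a list of role/content turns."""
    history = []
    if system_note:
        history.append({"role": "system", "content": system_note})
    return history

def send(history, user_prompt, reply):
    """Record a user prompt and the model's reply in the running transcript.
    (A real client would transmit `history` to the model and receive the
    reply back; here `reply` stands in for that call.)"""
    history.append({"role": "user", "content": user_prompt})
    history.append({"role": "assistant", "content": reply})
    return history

chat = start_conversation("You are a helpful teaching assistant.")
send(chat, "Summarize the causes of WWI.", "(model's summary...)")
send(chat, "Now shorten that to two sentences.", "(shorter summary...)")
# The second prompt works only because the earlier turns travel with it.
```

This is also why starting a new chat "forgets" everything: a fresh transcript carries none of the earlier turns.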
Once you have explored ChatGPT or other generative AI tools, you will have a stronger understanding of how the tools work and more confidence when addressing them in class, determining how you might use them in assessments, or even using them to perform faculty administrative tasks.
In next week’s post, we will delve into the potential benefits and pitfalls of generative AI for students and educators. If you have questions about generative AI and its application in your classes, feel free to book a consultation with a CAT specialist.
Emtinan Alqurashi, Ed.D., serves as Assistant Director of Online and Digital Learning at Temple University’s Center for the Advancement of Teaching.
Jennifer Zaylea, MFA, serves as the Digital Media Specialist at Temple University’s Center for the Advancement of Teaching.
In recent years the term ungrading has been circulating in various educational circles. But what does it mean? Is there an opposite to grading and, if so, how can ungrading be used in the classroom? Does ungrading mean students don’t receive a final grade? In this blog post we’ll break down what ungrading is and offer some suggestions as to how you might practice it in your classroom.
The Problem with Grades
Grades are a quantitative, objective proxy for the qualitative, subjective student experience of learning. In theory, grades tell us whether or not students are learning. In theory, the prospect of getting better grades helps to motivate students to do the coursework prerequisite to learning. In theory, grades provide the administration a snapshot of how things are going in our classroom. But how successful are grades at doing that?
The short answer to that question seems to be that the efficacy of grading can vary wildly. Ken Bain, in his seminal work What the Best College Teachers Do, reports how students in a physics class could achieve A’s in the course without fundamentally shifting how they thought about mass, energy, and motion. Any instructor who has taught the second or third course in a sequence knows the frustration of needing to reteach concepts from previous semesters despite students getting A’s or B’s on their finals just weeks before.
Furthermore, Alfie Kohn, in his work Punished by Rewards, documents the extensive research demonstrating that attaching an extrinsic reward to a classroom assignment actually undermines intrinsic student motivation. It seems that if we tell students that a task is worth ten points in the imaginary economy of our course, then they will never value the task itself, only the ten points. That puts us in the precarious position of sending mixed messages about the intrinsic value of learning itself. If, as seems to be the case, our grades can never be more than proximate (even inaccurate) indicators of some student learning, and grading every task can have the effect of demotivating students, then how valuable can collecting numerous grades really be?
Decoupling Grades from Assessment
Although much work has been done to develop better assessments, communicate priorities to students via rubrics, and otherwise refine our grading practices, those involved in the ungrading movement start from the assumption that grading itself is the problem. From this perspective grading is a largely unnecessary appendage to the practices we need to focus on with our students: feedback and assessment. We, like most educators, are professionally obliged to report final grades and to communicate to our students at the start of the semester how we will arrive at those grades. Although there are many ways to implement ungrading, the basic approach seeks to shift student attention from grades to learning by deferring (for as long as possible) that final moment of collapsing the student learning experience–in all its complicated, messy, nonlinear multidimensionality–into a quantitative approximation such as a single letter or number.
Instead, that messiness, the student learning itself, becomes a core component of the course content. Through ungrading students are challenged to directly interrogate their learning as well as respond to feedback from the instructor. Here are a few examples of ungrading practices:
Grade-Free Zones – Designate an early portion or experimental unit of your course as entirely grade free.
Self-Assessment – A few times during the semester students assign their own grades, providing evidence in a separate document they write and/or filling out a rubric. You accept or veto their assessment in dialogue with the student.
Process Letters – Students write directly about their learning process. The instructor provides feedback focused on meeting the course learning goals.
Minimal Grading – Instead of the usual percentage and letter schemes, assign one or more assessments complete/incomplete, pass/fail, or another simplified schema.
Authentic Assessment – Students are assigned a relevant real-world task that serves as their primary or sole graded activity, such as a cinema class organizing a film festival.
Contract Grading – Students are given “to-do” lists for achieving an A, a B, and a C in the course. At the start of the semester they choose which list they complete for the assigned final grade.
Portfolios – Students build portfolios over the course of the semester, which serve as the sole object of assessment for the course. The portfolio must supply evidence of having achieved the learning goals of the course.
Peer Assessment – Similar to self-assessment (with which it can be paired), students work in small groups to discuss and evaluate each other’s contributions to the learning in the course.
Student-Made Rubrics – A few times during the semester you devote a day of class time to collaborate on building a rubric which the students will then use to self- or peer-assess their progress in the course.
(Adapted from Stommel, “How to Ungrade” in Blum 36-8.)
Moving away from a constant stream of grading and points to a minimal number of graded objects does not mean you stop giving feedback! In fact, you will probably find yourself giving more feedback to students and having more conversations with them regarding their progress in the course. It is critically important to approach these interactions with respect for the student and a firm belief that they will want to learn if provided the right support.
Another important thing to keep in mind is the fact that ungrading is slower than many traditional assessment methods. Meeting with students to discuss their progress towards the course goals, reading reflective essays, writing specific feedback, etc. all takes time. You may find it necessary to pare back the total number of assessments you offer, especially if you normally use tools like auto-graded quizzes on Canvas.
However, this slowing down is a feature, not a bug. Learning requires time to process and reflect. Guiding that learning also necessitates extra time. Resist the ever-present urge to rush through the semester. Accept the invitation to slow down and linger over what really matters in the classroom: our students and their learning.
Getting Started
If ungrading interests you but you find the prospect of revising your course overwhelming, please know that you don’t have to ‘ungrade’ your whole course! You can try it for just one week or just one unit without upsetting the apple cart for the rest of the semester. If you try ungrading at a small scale, it will be important to clearly communicate that you are trying something new for a portion of the course and to unambiguously identify what portion of the course will be ungraded.
Messaging is always important when introducing an innovation of any sort to your learning environment. Remember, your students have years of experience informing their idea of what teaching and learning is “supposed” to look like! As Stephen Brookfield notes in The Skillful Teacher, student resistance in the face of change is “normal, natural, and inevitable” (238). Expect it and plan a response to it.
Temple faculty wanting support in implementing ungrading in their own courses can book an appointment for a one-on-one consultation with a pedagogy specialist. For any other teaching-related needs you can email the Center for the Advancement of Teaching at cat@temple.edu.
Resources
Bain, Ken. What the Best College Teachers Do. Cambridge, Mass: Harvard University Press, 2004.
Blum, Susan D., ed. Ungrading: Why Rating Students Undermines Learning (and What to Do Instead). West Virginia UP, 2020.
Brookfield, Stephen. The Skillful Teacher: On Technique, Trust, and Responsiveness in the Classroom. Jossey-Bass, 2006.
Kohn, Alfie. Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A’s, Praise, and Other Bribes. Boston: Houghton Mifflin Co, 1993.
Sackstein, Starr. Hacking Assessment: 10 Ways to Go Gradeless in a Traditional Grades School. Times 10 Publications, 2015.
Jeff Rients serves as Associate Director of Teaching and Learning Innovation at Temple University’s Center for the Advancement of Teaching.
It’s National Teacher Appreciation Week (May 8-May 12) and while you may be thinking about the ways you can show appreciation for the work of K-12 teachers in your life, we here at the CAT are thinking about ways we can recognize you for the unique and wonderful ways you contribute to the educational mission of Temple University. In that spirit, I would like to dedicate this end-of-year blog post to reflecting on the many ways our CAT team appreciates the work you do:
To the faculty who have attended our 12-hour (I know, oof! That’s a lot of time!) Teaching for Equity Institute, or a multi-part custom workshop series on teaching for equity offered in your school or college, we see you! Your willingness to learn new ways of thinking about teaching equitably shows a real commitment to your students. Our wish is that both you and your students benefit from the deep dive you’ve taken into discussions on equity. If you have not yet had an opportunity to join us for the Institute, look for our fall 2023 workshop announcements. If you wish to speak to us about a custom program for your school or college, please fill out this custom workshop request form.
To the faculty who participated in the Annual Faculty Conference on Teaching Excellence in January by presenting in a breakout, lightning round, or poster session, thanks for sharing! You brought new ideas for your colleagues to consider and jumpstarted conversations on how we might teach our students in ways that achieve rigor without the mortis. And for those of you who attended the conference, thanks for joining in the conversation! The conference takes place every January; look for our announcement for next year’s conference and join us!
To the faculty involved in frequent meetings to revise the curricula for their programs so that they remain fresh and relevant and so that their students reach truly meaningful goals in their courses, kudos! It’s not easy to redesign curricula, but it’s important work and will go a long way towards making an education at Temple a worthwhile endeavor. If you have not yet considered reviewing your curriculum, the CAT can guide you in how to get started. Just email cat@temple.edu and ask for an appointment with an educational developer.
To the faculty who try new technologies in an effort to engage your students in deep learning, provide a variety of ways for your students to participate in class activities, and spark motivation, we admire your willingness to experiment! We have had workshops or consultations with you on tools such as Padlet, Panopto, Perusall, and Jamboard, and have seen you develop exciting new ways of reaching and teaching students. Still others applied for the Innovative Teaching With Makerspace Technology Grant or participated in the Faculty Learning Community on Integrating Advanced Digital Methods and Tools. You found innovative ways to use digital fabrication, virtual reality, and physical computing to improve student learning. If you would like to consider new teaching technologies, book an appointment with an educational technology or digital media specialist at catbooking.temple.edu.
To the faculty who are members of so many task forces, committees, and work groups related to our educational mission either at the university level or in departments, schools, or colleges, thanks for lending your voices to these endeavors! We at the CAT serve on so many of these groups with you where we tackle issues like textbook affordability, student success, accessibility, and technology adoption, and see firsthand how important your perspective is in those sessions. Thank you for dedicating your time to these service duties.
To faculty who engaged with the CAT for support in conducting scholarship of teaching and learning, we appreciate what we have learned from you! Some of you participated in our Faculty Learning Community on Scholarship of Teaching and Learning (SoTL), exploring ways to engage in a type of scholarship that is not native to your disciplines, but that can reap ample rewards in helping us discover ever more impactful ways of teaching. Others took advantage of the Umbrella IRB offered through the CAT to streamline your approval process as you began to implement studies. If you need assistance with SoTL, we’re here to help. Just contact cat@temple.edu to request a consultation.
To faculty who have been immersed in trying to figure out the impact of AI on teaching and learning, have attended our workshops on AI, read our Using P.I. to Manage A.I. EDvice Exchange blog series or watched our CAT Tips video series, participated in departmental meetings to brainstorm future directions, or simply stayed informed by playing with these tools to see what they can do, we appreciate your proactive stance! Together, we will explore new directions in teaching and learning and determine what works best for our students. If you have not yet begun to think about AI and learning in your courses, please do check out the above resources as well as the sample syllabus statements we created to assist you in writing your syllabi. Also, if you are interested in creating assignments that intentionally use AI, consider applying for our Teaching Circle on Using ChatGPT Intentionally for Teaching and Learning.
To faculty who joined us in person at the CAT in workshops, consultations, or at one of our on-campus drop-in locations at HSC, Main, and Ambler, we were so glad to see you! While we are glad to see you in any way you can join us, we must admit we love to see our faculty community visiting us at our CAT locations. After all of that pandemic separation, it felt great to give a hug or a handshake to old friends, meet new colleagues, and engage in passionate discussion face-to-face. If you haven’t had a chance to visit us, remember we are here Monday through Friday, 8:30 am to 5:00 pm. We are also available for virtual workshops, consultations, and drop-in help and are happy to meet with you in any way you can join us!
We celebrate all of the many other ways you enrich the educational mission of Temple University, and enrich our work at the CAT! We feel so lucky to know you. Please remember how important it is to take some time away from Temple to enjoy your lives, unplug, rest and recharge. Best wishes for a great summer from your friends at the CAT!
Stephanie Laggini Fiore, Ph.D. is Associate Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.
Over the course of this series, we have presented what we’re calling pedagogical intelligence as an approach to addressing the impact of technologies such as ChatGPT in our classrooms. It is more important than ever that we apply time- (and research-) tested practices shown to engage students in authentic learning. We have recommended:
Building in opportunities for iterative work to help students see learning in terms of the process rather than the product and to give you a window into the development of their knowledge and skills over time;
At the foundation of these suggestions is the belief that the majority of our students want to learn and will dedicate themselves to assigned activities if they see the value of course content, believe they are capable of succeeding, and see learning as a process of incremental improvement requiring practice and recalibration on the basis of feedback.
We are not yet at the point where we can reliably detect the use of text-generating technology in writing and we may never get to that point. At the same time, the capacity of large language models to replicate human intelligence will continue to improve. We can’t beat it but we also don’t want to join it! Our task as educators is to show our students the value of finding and using their voice and the importance of their insight and creativity as living, feeling humans.
To this end, our final recommendation for applying pedagogical intelligence in an age of emerging artificial intelligence (AI) is to talk with your students about the implications of text-generation and other AI tools for their learning and for their post-graduation lives. Talk to your students about how these tools work – their potential and possible benefits but also their limitations and the risks they pose. Ask your students to stake out their own approach to the use of these tools. What is gained by learning to use them? What is lost by relying on them? Be transparent about your expectations for student use of AI tools. To help, the CAT has created a range of syllabus guidelines to assist you in articulating your own stance toward student use of AI tools in your classes.
If you’d like assistance in incorporating more iterative work in your class, you can make an appointment with a CAT staff member to talk through your plan. You can also arrange to have someone from the CAT come to your in-person or virtual class to give you feedback.
If you are intentionally using ChatGPT to teach in your classrooms this semester, please email us at cat@temple.edu and tell us about it. Consider that you can engage in the Scholarship of Teaching and Learning (SoTL) by designing a classroom study to evaluate the impact of ChatGPT on student learning. If you want to learn how to design a study related to your use of ChatGPT in the classroom, contact Benjamin Brock at bbrock@temple.edu for assistance.
Finally, continue the conversation on the Faculty Commons! We’ve added a new discussion category for talking about the impact of ChatGPT and other similar tools in the classroom.
Dana Dawson serves as Associate Director of Teaching & Learning at Temple’s Center for the Advancement of Teaching.
How do we know our students are learning, and how do students themselves gain insight into their own learning progress? One way is to use formative assessments throughout the course. Formative assessment both helps faculty understand what students are learning and helps students see their own progress and identify gaps in their learning. Formative assessment prioritizes the learning process, providing immediate feedback to both instructors and students during a learning activity. This information is then used to modify subsequent learning activities to promote new content understanding or revisit prior content knowledge. We have already introduced you to learning assessment techniques in part III of this series that provide ways to implement formative assessments. Consider also that incorporating technology into formative assessments can provide valuable insights into students’ progress and comprehension, making it easier to identify areas where students are excelling and where additional support is needed. Beyond its primary purpose of giving students (and instructors) an opportunity to assess their learning, formative assessment can also help build student confidence by developing the skills and knowledge necessary to meet the goals of the course [1].
Digital assessment tools can immediately show where things are working very well in a class, where a student requires more support for their learning, or where faculty might consider changing their own pedagogical approach. But what does a technology tool for formative assessment look like? Here are just a few technology tools and assessment ideas that work well with small groups of students followed by those that work well with the whole class.
FORMATIVE ASSESSMENT FOR SMALL GROUPS
Google Docs: Google Docs can be used to facilitate small group activities both in person and online that involve collaborative writing, peer editing, research and collaboration, and group projects. In-person students can work together on the same document, while online students can meet in breakout rooms in Zoom to discuss. As the student groups work, the instructor can also participate by monitoring progress in the Google Doc for each group and assessing where each group is making connections to the content and where additional support might be necessary for better understanding. Google Docs allows a focus on the process of collaborative work instead of just the product, thereby short-circuiting the usefulness of generative AI tools.
One way to create a formative assessment is using Google Docs and the jigsaw technique. In this approach, students are initially divided into small groups (round 1), and each group is given a specific section of a larger text or article to read and summarize. Once each group has completed its task, groups are reshuffled (round 2) so that each new group contains one representative from each section, who is considered the expert on that topic. These experts then teach their respective sections to the new group, sharing their insights and understanding to help build a collective understanding of the material. This technique encourages students to work collaboratively, helps to foster deeper learning, and provides opportunities for instructors to recalibrate the next activity.
Google Slides: Just like Google Docs, Google Slides can be an effective tool to show how students are implementing concepts from your course content. Students can use it to create collaborative presentations where they share their knowledge with one another and experience different perspectives on the course material. They can see others’ changes as they make them, and every change is automatically saved. In-person students can work on the same slide deck, while online students can work together in a Zoom breakout room to collaborate and edit the presentation. Google Slide activities can promote and encourage student collaboration, creativity, critical thinking, and problem-solving skills while also providing invaluable assessment opportunities to the instructor (particularly if the instructor is also dropping into Google Slides, as a participant, where they can offer praise or course correction in real-time).
FORMATIVE ASSESSMENT FOR THE WHOLE CLASS
VoiceThread: VoiceThread is a collaboration tool that reinforces students’ interaction with one another and provides peer and instructor feedback about where there might be knowledge gaps. Used effectively, by asking the right questions, it shines a light on students’ own life experiences as they tie to the course content. It allows students to respond using the media tool of their choice. The most effective VoiceThread activities tie course content to students’ own life experiences and provide space for all students’ voices, both of which are incentives not to rely on ChatGPT and similar recent AI technologies.
Padlet: Padlet is a highly engaging online tool that allows instructors to create a virtual canvas for collaborative activities such as brainstorming, discussion, sharing resources, creating multimedia content, exit tickets, and interactive icebreakers. Instructors can design the Padlet to allow students to respond anonymously and upvote other students’ contributions. By allowing the students to respond anonymously, it provides opportunities for more honest responses that will allow instructors to better gauge content understanding across the classroom.
Poll Everywhere: Poll Everywhere is an online polling tool that allows instructors to create a variety of interactive activities that allow for formative assessment in the classroom, including live polls, surveys, quizzes, word clouds, and open-ended questions. Students can respond anonymously to questions using their own devices, and instructors can view the results in real time. This enables instructors to quickly assess students’ understanding of the material and revisit the content to clarify muddy areas or reinforce accurate understanding of concepts. It also provides an opportunity for students to reply honestly, interact when they might otherwise be too shy, gauge where their responses are situated within the student group, and develop community.
Custom Google Maps: Custom Google Maps allow you and your students to create maps with markers, routes, and layers. They can be a desirable alternative to ChatGPT, as they allow for a more interactive and immersive learning experience where students get to explore, document, and present information in a spatially meaningful way. Students can work on the same custom map to collaborate on a class project or research assignment. For example, history and geography students can create maps highlighting important locations, events, or landmarks related to their topic and/or their own lived experiences. This tool is especially useful when the assessment addresses the relationship between a learned experience and a personal experience. The information can provide insight into culturally and socially relevant ties and/or departures related to course content.
As instructors, it is important to remember that assessments serve a broader purpose than just assigning grades to students. In fact, many formative assessments work better when offered as low- or no-stakes activities. When we intentionally incorporate technology into assessments, not just because the tool is convenient but because it serves a learning objective, we help students become familiar with the technology while creating formative assessments that provide more immediate feedback for both faculty and students. And remember, when introducing assignments that allow the use of innovative technologies, you might also want to provide resources and guidance on how to use the tools ethically and responsibly. This will better prepare students for a world where evolving and innovative technology is a constant reality.
If you’d like to learn more about using and/or adopting one of these tools or exploring other tools, feel free to schedule a consultation with the CAT.
Footnotes:
[1] Angelo, Thomas & Cross, Patricia. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd ed., Jossey-Bass, 1993.
Follow our companion Using PI to Manage AI CAT Tips Video Series.
Emtinan Alqurashi is Assistant Director of Online and Digital Learning and Jennifer Zaylea is Digital Media Specialist at Temple’s Center for the Advancement of Teaching.
I have such a vivid memory of a student, upon my returning his exam, quickly looking at the grade, crumpling the paper into a wad, and making a 3-point shot into the wastebasket–right in front of me! Why did I spend so much time providing meaningful feedback on the exam? Even more exasperating than seeing my written feedback ignored was seeing similar errors on his next exam! UGH! It never dawned on me that I could do more than say to my students, “Take some time to review my comments so that you can see where you can improve, and don’t hesitate to come to office hours if you have any questions about the comments.”
Unbeknownst to me at that time, there was a term and theory in cognitive and developmental psychology literature called metacognition. Proposed by John Flavell, a professor at Stanford University at the time, metacognition is often referred to as “thinking about one’s own thinking,” and includes a process of self-monitoring, self-evaluation, and self-regulation that leads to more intentional learning practices. While metacognitive strategies can be used in a number of ways to support our students’ learning, we can use our usually higher-stakes summative assessments (exams, projects, etc.) to help students “think about their thinking.” More specifically, we can help students to explicitly reflect on elements of their exam preparation in order to make a plan to improve on the next high-stakes assessment.
Enter the metacognitive activity of exam wrappers. Dr. Marsha Lovett and her colleagues at Carnegie Mellon University are credited with creating the exam wrapper technique. Wrappers were developed in reaction to their findings that many successful high school students were arriving at college with study habits that are ineffective for higher order learning. Exam wrappers ask students to answer a series of questions that require them to reflect on how they prepared for the exam, whether the results were what they expected, and how they might prepare differently for the next exam.
Saundra McGuire, author of Teach Students How to Learn: Strategies You Can Incorporate Into Any Course to Improve Student Metacognition, Study Skills, and Motivation, states that there is a metacognitive equity gap. Often students from under-resourced public schools may not have yet learned how to reflect on their learning and thus continue to employ study strategies that are not as effective as they could be. While all students can benefit from an exam wrapper, the activity may be especially valuable for these students.
The Protocol
Faculty prepare a series of metacognitive questions that help students think about their preparation and performance on a summative assessment. Carnegie Mellon’s Eberly Center has some example exam wrappers for different disciplines. What’s nice is that you can tailor these questions to the unique nature of your summative assessment: typical questions get at the specifics of students’ study and exam preparation strategies, ask them to assess the types of errors they made on the exam, and ask them to develop a concrete plan to improve.
Exams are returned to students wrapped in a piece of paper with the metacognitive questions. Students are then asked to reflect on the questions and develop a plan for the next exam. They submit a copy of the plan to you for homework or extra credit–whichever way you think works best. You review the plan and seek clarification if responses are not specific enough. You can then remind them of their plan and perhaps check in with the class occasionally to make sure they are following the plan as they prepare for the next exam. If you give your exams online or if you’re a fan of reducing paper, the questions can be posed in Canvas.
Lisa Kurz at Indiana University’s Center for Innovative Teaching and Learning points out that exam wrappers can also be given to students before they take an exam. While the name is a little bit of a misnomer for this use, she states, “The exam wrapper provided before the exam might ask students to create exam questions at different levels of Bloom’s taxonomy, predict what major topics will appear on the exam, how much of the exam will be devoted to each topic, and what kinds of questions will be asked. Students can then use this ‘exam blueprint’ as they study.” This works well as a solo, pairs, or small group activity.
If you’d like to learn more about using or adopting one of these exam wrappers or creating your own, feel free to schedule a consultation with the CAT. So, wrap your exam and help your students become metacognitive pros!
The CAT’s STEM Educators’ Lecture, held on March 28, featured guest speaker Dr. Cynthia Bauerle (they/them), who engaged faculty from Temple and other regional institutions in this year’s topic, “Utilizing an Ethical Reasoning Framework to Create More Equitable STEM Education.” Dr. Bauerle is a Professor of Biology at James Madison University, formerly holding positions as Dean of the College of Science and Mathematics and Vice Provost for Faculty. Dr. Bauerle is a molecular biologist by training, publishing widely in both scientific and science education journals. Their career interweaves scientific expertise, passion for inclusive teaching, and a commitment to improving STEM education nationally.
In Dr. Bauerle’s interactive presentation, they introduced the ethical reasoning instrument (ERI), a tool that uses a framework developed collaboratively with colleagues Dr. Laura Bottomley from North Carolina State University, Dr. Carrie Hall from the National Science Foundation, and Dr. Lisette Torres-Gerald from TERC (originally standing for Technical Education Research Centers, but now just known as TERC), a private organization that supports STEM projects and research.
The ERI guides STEM faculty through a series of questions related to eight dimensions of ethical reasoning. Faculty re-center ethical reasoning by using the instrument as they think about the design of their courses, i.e., their pedagogy, classroom activities, and assessments. Important to note is that these questions ask us to consider more than just bringing in course content that addresses ethical issues. Rather, the ERI enables us to reflect on how our course design can be more equitable. To that end, Dr. Bauerle reminded us that this reflective process also requires us to consider the various identities we bring into the classroom, as well as those of the students, with the belief that diverse identities provide richness. It also enables us to explore more intentionally the biases we may hold so that they don’t enter the classroom.
Users of the ERI should be able to incorporate at least some of the eight characteristics of ethical reasoning (listed below) into their STEM courses. Under each characteristic are sub-questions we can consider in our course design. For example:
Fairness – How can I (we) act justly, equitably, and balance legitimate interests?
Here, the sub-questions we can consider are if our courses provide opportunities for students to learn about inequities in science or the consequences of ignoring inequities in the practice of science. We can also consider if our course includes principles of Universal Design for Learning, including access and accommodation.
The other seven dimensions have useful sub-questions as well. Click on each one to see these sub-questions.
Outcomes – What possible actions achieve the best short- and long-term outcomes for me and all others?
Character – What actions help me (us) become my (our) ideal self (selves)?
Liberty – How do I (we) show respect for personal freedom, autonomy, and consent?
Empathy – How would I (we) act if I (we) cared about all involved?
Authority – What do legitimate authorities (e.g. experts, law, my religion/god) expect?
Rights – What rights, if any, (e.g. innate, legal, social) apply?
Faculty can also use the ERI to evaluate the success of their course implementation. The instrument provides scaffolding for assessment and activity development, as well as examples of each of the eight key characteristics of ethical reasoning.
Dr. Bauerle explained that the ERI is being implemented in a variety of STEM courses. Because the ERI is in the beta phase, the research team encourages participants to share any feedback with them regarding its implementation and usefulness. An important reminder was that the ERI is not just a valuable tool for STEM courses–ethical issues that encompass equity are valuable for any course!
If you’d like to keep the ERI discussion going, post a comment on our Faculty Teaching Commons. What are your thoughts about–and experiences with–incorporating one or more of these ethical dimensions? Inquiring minds want to know! As always, if you’d like assistance planning your courses to incorporate elements of the ERI, our CAT staff is ready to help. Make a consultation appointment or email a CAT staff member directly.
We sometimes tend to imagine education as a linear, cumulative process. As we learn new knowledge and master new skills, we grow towards our future selves. Our education system reinforces this idea as we move from 1st grade to 2nd grade, sophomore to junior, or undergrad to graduate student via a linear progression. However, new learning always exists in the context of our prior learning. What we learned before is not something passive and fixed that we build upon; instead, our new learning is in conversation with old learning. The new selves that learning produces don’t exist separate from our old selves, but because of them.
We can create better learning opportunities for our students, building upon the relationships between past and present learning, if we structure our courses not as linear movement through topics or units, but as a series of feedback loops where prior learning is explicitly invoked and connected to present learning. This iterative work takes the form of embedding every new learning opportunity into a context.
Questions to Connect
For example, some instructors start a course in a sequence with an overview of the key topics of the prior course or some sort of review activity. An iterative approach would then take that review and ask students to explicitly connect it to future learning, with a prompt such as “Looking over the key takeaways of the prior semester, which of those topics do you see coming into play with each of the units outlined on the syllabus?” This can also be done at the individual unit or project level, with prompts such as “What skills or knowledge do you already possess that will be crucial to your success at this task?” Explicitly asking these questions creates new connections between old learning and new, strengthening both in the students’ minds.
Draft and Revision
An iterative approach also provides more opportunities for students to receive feedback. The classic example of this is the semester-long paper, where students hand in pieces (thesis statement, bibliography, conclusion, etc.) and/or drafts prior to the final submission. Each of the earlier submissions receives peer and/or instructor feedback, giving the students the information that they need to improve their approach for the next submission. In this way small mistakes can be pruned away and misunderstandings of the task addressed before the final submission.
Recurring Examples
The use of recurring examples brings earlier content into later contexts. For instance, a case study introduced early in the semester can be redeployed, either with changes to challenge the students’ new knowledge or as an opportunity to reflect on what they have learned since the case originally appeared. Simple prompts like “How would you handle that old case differently, knowing what you know now?” will help make students’ new learning visible. Students can also be asked to critique earlier, intentionally simpler examples to reveal the complexities those earlier cases glossed over. You could also task students with writing a more complete version of the case, one that includes the new nuances they have learned since the original instruction of the case.
Revisiting Prior Work
Many times our students hand things in to us on a “fire-and-forget” model; the ordeal of the task is over and they would just as soon not think about it again. Painful as it might be, we need to help them do that thinking while showing them that their skills and knowledge are growing. For example, you could ask students to do a short writing activity describing how they would do an earlier project differently, knowing what they know now. This can even be formalized into a system where students are given the opportunity to revise earlier submissions for a higher grade, provided that they incorporate one or more new skills and concepts introduced after the original assignment.
Iterative work benefits instructors as well as students. If our classes feature more tasks with built-in continuity, it will be easier for us to see our students in a more comprehensive way. Their growth as thinkers and their individual professional or scholarly identities will become clearer to us. That puts us in a better position to offer feedback that makes sense in the context of the individual learner. It also allows us to see when a student’s thinking or writerly voice suddenly changes. Maybe the student has stepped up to the challenge of the course in a new way. Or maybe the shift is due to some sort of distress and the student needs help (possibly including a referral to the CARE Team). Or perhaps the change in student voice is an indicator that the student is getting unacknowledged assistance. Which of these is true, and whether the unacknowledged assistance, if any, rises to the level of cheating, requires careful consideration and investigation. But without the record that iterative work provides, these issues are much harder to identify.
Note that iterative work is not offered here only as an “anti-cheating” technique. Easier detection of changes in student writing is just one benefit of iterative work. The real gain to be made by this approach is that your students will be less likely to see your syllabus as a series of randomly ordered individual tasks and more likely to see the grand trajectory of their learning. We all too often assume that students can see all the linkages we made between various content and activities, but those linkages are only readily visible to experts such as ourselves. Iterative work helps students see how the moving parts of the course fit together and how the various things they have learned form a coherent education.
If you’d like assistance in incorporating more iterative work in your class, you can make an appointment with a CAT staff member to talk through your plan. You can also arrange to have someone from the CAT come to your in-person or virtual class to give you feedback.
Follow our Using PI to Manage AI blog series at EdVice Exchange