Using P.I. To Manage A.I. pt. 1: Introduction

Stephanie Laggini Fiore, Ph.D.

We are all teaching in a new reality created by powerful text-generation tools like ChatGPT that allow us and our students to compose text on demand. Lori Salem, Assistant Vice Provost and Director of the Student Success Center, and I wrote an initial post about this last semester. As instructors, we will all need to think hard about how to manage and harness the power of this tool. I use the words “manage” and “harness” intentionally here, as we cannot pretend that we can entirely ban these tools, nor can we rely on anti-AI detectors (which, I can assure you, will not be foolproof). In addition, we have a responsibility as educators to guide our students in the ethical and effective use of AI tools that will be available to them beyond the university, in their workplaces and in their daily lives. There is no body of research (yet) that can guide us in using AI for teaching, so we are all feeling our way along by reading, debating, and experimenting with some best ways forward.

While the teams at the CAT and the Student Success Center are working toward a set of guiding principles for managing AI in our classrooms, the way to start right now is by considering how PI can help us manage AI. What is this magical PI, you say? Does it have something to do with Tom Selleck (if you’re my age, you get that joke)? Is it a fancy new counter-AI robot that will solve all of our problems? No, my dear colleagues, it is simply an invitation to examine the fundamentals: the Pedagogical Intelligence that should be the first stop on the road to a set of principles for thinking about teaching in the presence of Artificial Intelligence. In the CAT’s new spring series, Using PI to Manage AI, we will explore these pedagogical fundamentals, both on our blog and in our CAT Tips series on social media, as a way to start this conversation. The topics we will explore on our EDvice Exchange blog are all evidence-based approaches to designing assessments of student learning that encourage academic honesty, motivation, and a desire to learn. We will follow each blog post with a CAT Tips video on social media outlining a few concrete ways to implement these assessment strategies in your classes. If you have not recently done a deep dive into how well your assessments evaluate learning–and further learning by engaging students in meaningful learning tasks–now is the time!

We will start the series by exploring how to design assessments that are meaningful for students, allowing them to connect to what we are teaching in ways that help them see the value of engaging in the work. The following post in the series will discuss how to use learning assessments to build student self-efficacy, helping students do the work well and feel confident in what they are learning. Then we will unpack iterative work that provides feedback and allows for revision along the way. We will subsequently examine summative assessments and strategies for supporting students as they think reflectively about how they prepare for these usually higher-stakes assessments. Finally, we will complete the series by introducing some educational technology tools that can assist us in implementing better assessment protocols.

It will be important to approach this new challenge as an opportunity. It will necessarily push all of us to think deeply about how we are teaching and how we are assessing learning, and in so doing, lead us to more effective practices. We may surprise ourselves by discovering that AI itself can be useful in exciting new ways for learning. In the meantime, know that we at the CAT are on this journey with you, and will be working to support you as you support our students’ learning.  

Note: If you are intentionally using ChatGPT to teach in your classrooms this semester, please email us at cat@temple.edu and tell us about it. Consider also that you can engage in the Scholarship of Teaching and Learning (SoTL) by designing a classroom study to evaluate the impact of ChatGPT on student learning. If you want to learn how to design a study related to your use of ChatGPT in the classroom, contact Benjamin Brock at bbrock@temple.edu for assistance. 

Follow our Using PI to Manage AI blog series at EDvice Exchange

Follow our companion Using PI to Manage AI CAT Tips Video Series

Stephanie Laggini Fiore, Ph.D., is Associate Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part V: Assessment of Student Learning

Dana Dawson & Benjamin Brock

The earlier posts in this series discussed how to apply the lenses of self, colleagues, students, and the scholarly literature to the evaluation of teaching. In this final post in the series, we reflect on how assessment of student learning at the course and program levels allows us to take a step back and ask whether what we’re doing is working. Assessment of student learning in our classes helps us evaluate whether students have met the learning goals for the course: it tells us what our students know and can do, what they have yet to learn and are still working on, and whether our instructional decisions have been effective. Assessment of student learning at the program level helps us evaluate whether students have met the learning outcomes of the program (the curricular requirements, degree, or certificate): it tells us whether the program is designed to deliver the promised outcomes and is structured coherently, so that information needed in later courses is adequately scaffolded in earlier ones. When thoughtfully designed, course and program assessment can foster reflection and dialogue that ultimately benefit the students in our classes and programs.

In this post we discuss student learning goals, learning outcomes, and the relationship between the two. At Temple, learning goals refer to what a student should know or be able to do at the end of a single course, whereas student learning outcomes generally refer to outcomes at the major, minor, certificate or curricular (e.g., GenEd or Writing Intensive) level. Assessment of student learning is most useful when it takes into account both course learning goals and student learning outcomes. For this reason, we encourage you to consider the following factors when you design assessments.

Curriculum Alignment

  • Ideally, course assessment and program assessment begin with student learning outcomes, or the overall goals of the degree program or curricular sequence your course is embedded within. While you may not have control over course sequence or student learning outcomes, simply knowing where your course fits into the bigger picture can help you thoughtfully design assessments aligned not just with your own course goals, but with the trajectory of student learning both before and after your course. The learning goals specific to your course should be designed to deliver the larger student learning outcomes of the program your course is nested within. 

Identify or Construct Learning Goals and Outcomes

  • Course learning goals and program-level student learning outcomes inform your overall course content, class activities and assessments. For this reason, it’s important that they are specific and measurable. By “measurable” we don’t mean that you need to be able to quantify student learning in relation to all of your goals. Rather, a goal should be written in such a way that you can devise a method to determine whether a student is making progress toward it and whether it has been met. For more information on writing course goals, visit our EDvice Exchange post, Learning Goals: Dream Big! 

When Designing Classroom Assessments, Begin with the Goals

  • Create assessments and activities that allow students to demonstrate whether they have met the learning goals you have established. Your assessments are an opportunity for your students to highlight their learning and development across the semester, and an indicator of how effectively you have taught the content you set out to deliver. To learn more, see our post Looking for Evidence in all the Right Places: Aligning Assessments with Goals.

Consider Program Assessment

  • Course-based assessments may then be used in program assessments. You may want to work with colleagues in your program to review these artifacts using a rubric that aligns with one or more program-level student learning outcomes. Colleagues may also be called upon to help you interpret your students’ gains across the semester. While reviewing course-based artifacts such as exams, essays and written reflections, aim to identify areas of the course or curriculum in need of revision for future semesters. In other words, be sure to use the results of your assessments.

The Scholarship of Teaching and Learning (SoTL)

  • If this is beginning to sound a bit like research, that is the idea: once this all becomes more systematic, we can move into what is called the Scholarship of Teaching and Learning (SoTL). Systematically inquiring into our pedagogical practices allows us, as instructors, to make evidence-based decisions about our teaching, our classroom activities, and our assessments. Engaging in SoTL helps ensure our students are learning and developing as best they can in our classrooms (Brock & Rouder, in press).

Whereas Student Feedback Forms (SFFs) rest with our students, peer review requires the input of our colleagues, and assessment of courses and student learning generally falls to the department or program. When we focus on the instructor in the classroom, however, we realize that this systematic, continuous evaluative process aimed at pedagogical improvement is solely in our own hands as faculty. This process allows us the opportunity to provide evidence that we are performing highly as teachers and that our students are, in fact, learning. It is also an opportunity to consider why we might want to routinely assess our teaching and our students’ learning: are our actions aimed at developing (as opposed to demonstrating) our pedagogical knowledge, competencies, and skills, and how might this further our motivation to do so over time? We can think of assessment of student learning as a means to communicate empirical evidence regarding our instructional practices and our students’ experiences. It can be used to demonstrate how we are consistently evolving our pedagogical practices so that our teaching can be as impactful as possible.

For support with designing assessments, schedule a consultation with a CAT specialist. For help with developing SoTL projects, look for SoTL consultations on our CAT consultations page.

  • Brock, B., & Rouder, C. (in press). Celebrating the Scholarship of Teaching and Learning (SoTL). Faculty Herald, Temple University.
  • Linnenbrink-Garcia, L., & Patall, E. A. (2015). Motivation. In L. Corno & E. M. Anderman (Eds.), Handbook of Educational Psychology (3rd ed., pp. 91-103). Routledge. https://doi.org/10.4324/9781315688244

Dana Dawson and Benjamin Brock work at Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part IV: The Literature on Teaching and Learning

Stephanie Laggini Fiore, Ph.D.

While reflection on one’s teaching and feedback from students and colleagues are better-known methods for evaluating teaching, perhaps the most overlooked method is to consider how we use the scholarly literature on teaching and learning to improve our teaching. Instructors who engage with the scholarship of teaching and learning develop a vocabulary and a way of thinking that move them beyond replicating the teaching methods they experienced as students or were taught as teaching assistants or junior faculty. Familiarity with this literature allows us to engage in reflection and experimentation that continually evolve our teaching practices. The insights gained from this extensive body of work include validation of effective practices we may already have been using and, of course, new ways of teaching, designing curriculum, assessing learning, and supporting students that we may never have considered. It also clarifies for us why certain methods may work better than others.

It is clear to me why this criterion is often overlooked. When I started working at our teaching center after having taught for over 25 years, I was introduced for the first time to the scholarly literature on teaching and learning. I had dabbled a bit with very specific literature on teaching English as a second language, and I had read a little bit about oral proficiency methods for teaching world languages, but I never moved beyond these limited forays into this kind of scholarship. I don’t think my lack of awareness was unusual. Immersed in my disciplinary research, as most faculty are, I had never had occasion to explore the wealth of scholarship that provides guidance and evidence on how students learn. In my new role at the center, a whole world opened up to me that I never knew existed.

I remember in particular a brand new book that had come out just as I started my role at the center—How Learning Works: Seven Research-Based Principles for Smart Teaching. It was an excellent entry point, as each chapter pulled together the research on a teaching and learning topic in coherent form and then suggested strategies we can employ in the classroom. The chapter on student motivation was transformational for me. It validated much of what I had been doing, especially around creating a positive environment for learning, but also provided many ideas for supporting student learning in more effective ways. When I went back into the classroom, my newfound knowledge helped me rethink my teaching and implement concrete changes that saw exciting results. If I had been asked to demonstrate how I utilized the literature on teaching and learning to improve student learning as part of a process for evaluating teaching, I could have pointed clearly to the changes I made as a result of this book and the impact those changes had on student engagement and motivation.

So how can you use this lens to evaluate teaching? In particular, you can demonstrate how you have engaged in a process of continual scholarly teaching by taking advantage of professional development opportunities that allow you to delve into the literature on teaching and learning. For instance, have you attended workshops at the CAT, met with an educational development or educational technology consultant at the CAT, or attended other similar programming offered by professional organizations in your discipline? Have you taken a deeper dive by enrolling in longer-term, intensive opportunities focused on particular aspects of teaching and learning? For instance, perhaps you have attended our 12-hour Teaching for Equity series, or you have met monthly with a cross-disciplinary group to explore a teaching topic in a faculty learning community. Maybe you have simply gotten your hands on some excellent literature (the CAT has a lending library available on all kinds of topics!) and have made changes to your teaching based on what you have read. And, of course, taking this a step further, you might contribute to the scholarship on teaching and learning by investigating how teaching or curricular changes you have implemented have impacted student learning, and then presenting or publishing on those findings.

If you have never before considered this particular lens, I urge you to give it a try! Faculty who begin that journey into the scholarship on teaching and learning find it a fascinating and energizing way to evolve their teaching and curricular practices.

Stephanie Fiore is Assistant Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part III: Formative Peer Review of Teaching that Enhances Teaching and Builds Community

Stephanie Fiore and Linda Hasunuma

Peer review of teaching gets a bad rap. It conjures up images of being judged, of one’s teaching put under a microscope. Faculty express discomfort and nervousness at being observed in class, and, interestingly, they also resist the idea that they are “qualified” to provide feedback on a colleague’s teaching. That is, of course, if they even give feedback. I have a distinct memory of my chair coming into my class (unannounced), sitting in the back and writing furiously the whole time. Afterwards, I never received any feedback, but I knew that his mysterious impressions of my teaching were written in a report and filed somewhere with my name on it. And, of course, while faculty need a letter written by a peer reviewer for certain summative purposes, such as promotion, merit, or awards, these letters are often little more than a checkbox exercise written by a well-meaning colleague, and certainly aren’t intended to improve teaching.

But it doesn’t have to be this way!

Formative peer review of teaching (and by formative I mean peer review intended to support continued growth in teaching excellence) should contribute to what Shulman calls making teaching community property. Just as we would never evaluate scholarly research on the basis of offhand comments made around the water cooler, nor should we evaluate teaching in this way. A community of colleagues can provide feedback in both our research and teaching worlds to help us improve the quality of our work. This word—community—is so important here. Done well, peer review should build community in your departments and colleges as you talk to each other about teaching and learning, promote shared educational goals, and of course, create natural support structures when our teaching goes sideways. Within this community of colleagues, a well-designed peer review process helps to encourage reflection and more intentionality in teaching, and energizes us as instructors as we gain more insight into our practices. Note that peer review can take the form of classroom observations, as well as review of a Canvas course, a syllabus, or other teaching artifacts (such as assignments, assessments, and materials). If your department is considering peer review as a professional development practice, the CAT can help you create a protocol that works for your specific department’s needs.

Well-designed peer classroom observations should be a rewarding collaboration that contributes to the professional development of both the reviewer and the reviewed, as both gain insight into effective teaching practices through this process. There are three stages to an effective peer classroom observation: the pre-observation discussion, the observation, and the post-observation debrief. 

The Pre-Observation Discussion

Before the observation, the colleague conducting the review should learn as much as possible about the class goals and other helpful details, as well as any specific areas of concern the instructor may have about their teaching, so that the reviewer can pay special attention to those areas during the observation and provide targeted feedback.

The Observation

For the observation itself, it is very helpful to use an instrument to guide the reviewer. The CAT has recently created a new comprehensive instrument that may be useful for your peer observations, and there are other models we can share as well. Here are some helpful recommendations for conducting the observation, adapted from “Twelve Tips for Peer Observation of Teaching” (Siddiqui et al., 2007):

  • Be objective. Focus on specific teaching techniques and methods that were outlined in the instrument. You should communicate your observations, not your judgments.
  • Resist the urge to compare with your own teaching style. Being peers does not necessarily mean that the two of you will have the same teaching style. Concentrate on the teaching style of the person and the interactions that you observe.
  • Respect confidentiality. Your professionalism and trustworthiness are essential in building a peer review relationship with your partner, so confidentiality is important.
  • Make it a learning experience. For the reviewer too, the process of conducting a peer observation is a learning experience, which both builds the reviewer’s skill at providing constructive feedback, and may spark new ideas useful for the reviewer’s teaching.  

The Post-Observation Debrief

Providing supportive and constructive feedback in a timely manner is key to making this experience meaningful to your colleague’s professional development. But this is, of course, the part that worries faculty most. We often advise reviewers to think of the debrief as a discussion between colleagues, focused more on asking questions than on telling a colleague what went right or wrong. The guidelines below will help you provide useful feedback in peer observations:

  • Give your colleague an opportunity first to self-assess what they did well, what they have questions about, and what they might do differently. 
  • Limit the amount of feedback to what the receiver can use rather than the amount you would like to give (we recommend no more than three strengths and three areas for discussion and improvement).
  • Base your feedback on observations rather than inferences.
  • Provide your feedback in descriptive rather than evaluative language, using “I” statements rather than “you” statements: “I saw that some students in the back were disengaged,” rather than “You should have really done something about the disengaged students in the back.”
  • Begin with some (genuine) positive comments. 
  • Offer constructive ideas, framed as possibilities for consideration. It can help to frame these ideas as questions. “Have you considered trying…?”
  • Invite dialogue about your comments and questions. 
Adapted from:
Ende, J. (1983). Feedback in clinical medical education. JAMA, 250, 777-781; and Oxford Learning Institute, Giving and Receiving Feedback. http://www.learning.ox.ac.uk/rsv.php?page=319

Peer review can be a rewarding and meaningful part of our professional development if designed with care and transparency and in the spirit of doing our best to support student learning. It can help us build community with our colleagues through a shared sense of responsibility and mentorship about our development as teachers, and encourage personal reflection about our teaching practice. Ultimately, of course, its purpose is to deepen student learning, a goal we share as educators. 

In the next part of this series, we’ll discuss how the scholarly literature on teaching and learning can inform the evaluation of teaching.

Stephanie Fiore is Assistant Vice Provost of Temple’s Center for the Advancement of Teaching and Linda Hasunuma serves as an Assistant Director at the CAT.

“Students are using AI to write their papers, because of course they are.”

Lori Salem and Stephanie Fiore

So says the title of a recent article in Vice that has been making the rounds at Temple. The article describes a tool called OpenAI Playground that generates text on demand. Playground uses GPT-3, a recently developed machine-learning language model, to compose the text. GPT-3 is also the power behind ShortlyAI, another text-generation tool offering a somewhat different set of features. The sentences generated by both programs are surprisingly good: they flow, and they have a clear and simple prose style. A student could theoretically type their essay prompt into Playground or Shortly, and the program would generate the essay for them. And because the sentences produced by GPT-3 are entirely original, the resulting text would not be flagged by a plagiarism detector like Turnitin.

So, is this the end of writing instruction as we know it?  We think not.  But these new programs do have implications for teaching, and that’s our focus in this post.    

We tested both tools to get a sense of what they can do and what it is like to use them. Both make it easy to produce short (paragraph-long) texts that clearly and coherently state a few relevant facts. It’s possible to imagine a student using them to produce short “blog-post”-type essays, which is exactly what the students in the Vice article say they do. At least for now, neither program makes it easy to produce a longer text, or one that is argument-driven rather than factual.

But more importantly, these programs don’t—and can’t—help with the real work of writing. They can create sentences out of sentences that have already been written, but they can’t help writers find the words to express the ideas that they themselves want to express. If the purpose of writing were simply to fill a page with words, then the AI tools would suffice. But if the writer wants to communicate something, and therefore cares what ideas and arguments are being expressed, then AI writing tools are not helpful.

Don’t take our word for this.  In the sidebar, we provide information about how to access and use Playground and Shortly.  Try them and see if you can get them to write something that you can genuinely use.

If you find, as we did, that AI writing tools are not useful when the writer cares about the content of the writing, then we’re halfway to solving the problem of students using AI tools to plagiarize.

The Plagiarism Arms Race

Just because AI-generated texts are undetectable right now doesn’t mean that will always be the case. Someone somewhere is probably already working on a tool that will detect texts written by GPT-3, because of course they are. Students figure out ways to cheat, companies invent tools to catch them, and then they sell their inventions to us. This is just the latest iteration of that cycle.
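
One naive idea behind such detectors, for the curious: text sampled from a language model tends to look less “surprising” to a similar model than human prose does, so unusually low perplexity can serve as a weak signal of machine generation. The sketch below is a toy illustration of that idea, not a description of any real detection product; it assumes the Hugging Face transformers and torch packages, and it is easily fooled.

```python
# A toy perplexity check, assuming the Hugging Face `transformers` and
# `torch` packages. Low perplexity is only a weak, easily fooled signal
# of machine-generated text -- do not treat this as a real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score text with GPT-2; lower means the model finds it less surprising."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Illustrative use: compare scores for a student draft and a model-written one.
print(f"Perplexity: {perplexity('The French Revolution began in 1789.'):.1f}")
```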

To that point, have you seen the YouTube videos instructing students on how to beat Proctorio at its own game? The same Proctorio for which we pay a hefty annual subscription fee?

There has to be a better way, right?

A better way, part I: Encourage Academic Honesty by Creating Better Assignments

This new AI tool is a “threat” to academia only insofar as we ask students to complete purposeless writing assignments, or assignments that rely on lower-level thinking skills and merely ask students to reiterate factual information. The real answer to ever-more-sophisticated cheating systems is to create better assessments and to create conditions in our classrooms that encourage academic honesty.

There is some very good research on what works to encourage academic honesty. This is a longer discussion than we can take up here, but in essence, we should think about the factors that lead to cheating behaviors and work to reduce them. These include 1) an emphasis on performance (rather than learning); 2) high stakes riding on the outcome; 3) an extrinsic motivation for success; and 4) a low expectation of success. There are very intentional steps that we as instructors can take to reduce these factors, including adjusting our assessment protocols to rely less heavily on high-stakes, one-and-done writing assignments, centering writing assignments on issues students care about, and scaffolding writing assignments to allow for feedback and revision.

We also need to look at the kinds of assessments we are using in our courses. The more we move toward authentic assessments and grounded assessments (assessments designed to be unique to the course you are teaching in the moment, often including time, place, personal, or interdisciplinary elements that make them hard to replicate), the better off we are. There is a lot of work to be done here, as we often rely on the kinds of assessments we had as students, very few of which were either authentic or grounded. It is much harder to cheat on these kinds of assessments.

Finally, findings from some interesting research on academic honesty suggest that communicating with students about academic honesty works better than you would think, reminding them of their ethical core and focusing on what academic honesty looks like and why it is expected. This is especially effective when timed close to an assessment.

Try it for yourself!
OpenAI Playground

How to try it: Use the link above to open the website and make a free account. From the home screen, click on the “Playground” tab (top right). Then enter an “instruction” in the main text box. The instruction might be something like “Describe [topic you are writing about]” or “Explain [something you are trying to explain].” Click “submit,” and your results will appear. If you don’t get what you were looking for, you can keep refining and resubmitting your instructions.

ShortlyAI

How to use it: Use the link above to open the website and make a free account. Enter a title and a sentence or two and set the output length to “a lot.” Then click the “write for me” button. If you like the way the text is going, you can type another sentence or two and click “write for me.” Or you can refine your original title and first sentence and start over.

Please share your results! Copy the text(s) that you “write” and email them to Lori.salem@temple.edu along with any comments you care to offer about the texts or your experience producing them.
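
If you prefer to experiment programmatically, the same GPT-3 models behind Playground were also exposed through OpenAI’s API. Here is a minimal sketch using the openai Python package as it existed at the time (the pre-1.0 interface); the model name, prompt, and settings are illustrative, not a recipe we tested:

```python
# Minimal sketch of generating text with GPT-3 via OpenAI's API, assuming
# the pre-1.0 `openai` Python package current when this post was written.
# The model name, prompt, and settings below are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # from your OpenAI account

response = openai.Completion.create(
    model="text-davinci-003",     # a GPT-3-family model
    prompt="Describe the causes of the French Revolution.",
    max_tokens=200,               # roughly a short paragraph
    temperature=0.7,              # higher values produce more varied prose
)
print(response.choices[0].text.strip())
```

As with the web interface, refining and resubmitting the prompt is the whole game; the first output is rarely the one a writer would keep.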

A better way, part II:  Adapt instruction to reflect new writing practices

Once upon a time, writing instruction centered around penmanship and spelling.  Those days are gone because developments in the technology of writing (from pens, to typewriters, to word-processors) drove changes in writerly practice, which eventually led to changes in writing instruction. 

Automated text generators are just the latest technological innovation, and they have already changed the practice of writing in journalism, online marketing, and email. And why not? There is great value in making certain kinds of writing more efficient.

Our approach to writing instruction will need to adapt to this new reality. It’s not hard to imagine a future in which universities teach students how to use AI tools to generate text for some situations, even as they disallow the use of AI tools for others.

Lori Salem serves as Assistant Vice Provost and Director of the Temple University Student Success Center. Stephanie Fiore is Assistant Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part II: Reflective Practice

Jeff Rients and Cliff Rouder


In Part I of this series, Stephanie Fiore outlined Brookfield’s four lenses of reflective practice: an autobiographical lens, our students’ lens, our colleagues’ lens, and the lens of theoretical literature. Today we’re going to look at the first lens, our own autobiographical understanding of what is happening in our courses. Reflecting on our own practices and the behaviors of our students is an important component of evaluating our teaching for four key reasons:

  • The single instructor model of the classroom sometimes makes teaching a lonely business. We only occasionally have a qualified professional in the room to give us feedback (more on that in the next installment). If we don’t take the time to seriously interrogate our daily practices, there’s simply no one else around to do the job.
  • A huge amount of the craft of teaching takes place inside your head! Instructors are constantly evaluating and adapting to the inherently fluid situation that arises when real people wrestle with complex topics. No one else can capture this valuable data, because only you know which thoughts drove your in-the-moment decisions. The only way to make sense of it all after the fact is through reflection.
  • Although our students’ opinions and insights are invaluable, if we uncritically accept their thoughts and suggestions then we run the risk of spending our teaching careers incoherently zigzagging from one extreme to another. That does neither us nor our next group of students any good.
  • We want our students to be reflective learners, so they can apply their learning in new ways and new situations. Well, we need to practice what we preach! If we are not reflective practitioners then our efforts to teach the principles of reflective learning will come off as inauthentic, because that’s what they will be.

But developing a reflective practice can be hard. For one thing, we might wince a little when we think back on mistakes we’ve made or times when our students just didn’t connect with what we were trying to teach them. For another, we’re all busy and it can seem like a luxury to take the time needed to stop what we’re doing, think about what’s working and what’s not, and revise our future actions. But the only way to understand ourselves and grow as instructors is to invest the time in ourselves that we need to turn our past misadventures into future successes.

The key to a solid reflective practice is to develop a specific, regular discipline that works for you. Ideally, you would have a few minutes after every class session to reflect on the events of the immediate past, but a time set aside at the end of each day, on certain days of the week, or even one day a week can work. The longer the gap between the end of the class session and your formal reflection time, the more important it becomes to scribble some notes to yourself during class so you can remind yourself later what transpired. Additionally, consider making an appointment with yourself in your Outlook calendar or whatever scheduling tool you use. Not only will that serve as a reminder to do the reflection, but an appointment with yourself makes the task feel “more real” to a lot of us. If you find yourself regularly canceling or moving the appointment for other things, that may be a signal that you need to choose a different time.

Once you can sit down–preferably alone and in a relatively tranquil space–you will need a reflection method. Here are a few possibilities:

Mark Up Your Lesson

In this technique you add comments directly to your lesson plan and/or slide show. This can be helpful if you teach similar material from semester to semester, provided that you review each lesson well enough in advance that you can implement changes the following semester.

Journaling

We talked about this topic in another EDvice Exchange post. One major advantage of a journal, whether ink-and-paper or electronic, is that it collects all your thoughts together in one place for easy review.

Audio/Video Options

Talking out loud to yourself may sound weird, but it can help you process what is going on in your class. For audio only you can use a voice recorder app on your phone, or something like Audacity. For a video recording, a Zoom room of one and the record feature do the job nicely. Of course, if you’re feeling brave you could publish your ongoing reflections via YouTube or SoundCloud or TikTok! Not enough of us talk publicly about what is happening in our classrooms.

Two other things you’ll want to consider as part of your reflective practice: The first is talking to somebody. A regular debrief with a colleague (or a staff member at the CAT!) can help you put your thoughts into perspective. Even getting together once a month to talk about your teaching can help. The second is that at the end of each semester you should consider a reflection session where you go over everything that has happened in your course and try to synthesize what your big takeaways are. You may even find it useful to write a memo to yourself, with a page or two of ideas of how you want to do things differently next semester.

Whichever options you choose, make sure to go back and review your reflections when you receive your SFFs and when you sit down to revise your course. The former is important because you’ll be able to compare your own insights with those of your students, while the latter ensures that all your reflective work pays off in your future teaching.

In the next installment of this series, we’ll be looking at how our colleagues can assist us in evaluating our teaching.

Cliff Rouder and Jeff Rients both work at Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part I: Developing a Holistic Approach to Teaching Evaluation

Stephanie Laggini Fiore, Ph.D.

Evaluation without development is punitive, and development without evaluation is guesswork. (Theall, 2017)

Lee Shulman, past president of the Carnegie Foundation for the Advancement of Teaching and professor emeritus at Stanford University, recounts his surprise that his vision of faculty life as a combination of quiet, solitary scholarly activity and vibrant, collegial interactions with a community of teachers was backward. Says Shulman (1993), “We close the classroom door and experience pedagogical solitude, whereas in our life as scholars, we are members of active communities: communities of conversation, communities of evaluation, communities in which we gather with others in our invisible colleges to exchange our findings, our methods, and our excuses.” In fact, when I speak with faculty about the possibility of implementing new methods of teaching evaluation (such as peer review) that will break down that isolation and begin to develop synergies among faculty for development in teaching and learning, they may fall prey to imposter syndrome, claiming not to be expert enough to provide feedback to colleagues. At the same time, they reveal a sense of vulnerability at the idea of having others observe their teaching.  

But a remarkable thing happened during the shift to remote learning during COVID-19. Faculty began to emerge from their isolation, connecting with each other to talk about teaching and to brainstorm solutions to teaching challenges together. New Facebook pages dedicated to pedagogy sprang up (the Pandemic Pedagogy group has 31K followers), national disciplinary organizations put information on their websites and circulated it through listservs, department meetings were dedicated to teaching and learning, and faculty spoke with students about what worked. In short, because we were pushed into the deep end without a lifejacket, we focused our attention on teaching. And we grew by learning from each other and from our students!

Evaluation of teaching has long been practiced as a mechanism for summative decisions regarding promotion or contract renewal, and faculty will complain (often rightfully so) that it can be either a checkbox exercise devoid of real meaning or based heavily on student feedback. Evaluation of teaching should be so much more! It should create the kind of community that the pandemic briefly afforded us, one in which we as professionals reflect on our own teaching, discuss our practices with colleagues, learn from each other, from our students, and from how well students meet our learning goals, and move towards continual, formative improvement. Stephen Brookfield (2005) suggests that we look at our teaching through four lenses: an autobiographical lens, our students’ lens, our colleagues’ lens, and the lens of theoretical literature. We might also think about how we assess whether our students are reaching the learning goals we’ve set out for them, and what changes we might make to try to improve their ability to succeed in our courses. As Berk (2018) points out, multiple sources can be both more accurate and more comprehensive in evaluating a professional activity as complex as teaching. These multiple sources can be deployed for summative purposes, of course, but more importantly, they can be useful as a holistic tool to help us continue our growth as educators, and our effectiveness in supporting student learning.

We already have a long history of employing the student lens through student feedback forms (SFFs), so this series will not separately discuss this method of evaluation. However, I will mention here how important it is to be mindful of best practices in using SFF data in order for it to provide helpful information toward the improvement of teaching. The Temple University Assessment of Instruction Committee has just put out a very helpful guide to using SFF data, Recommendations for the Use of Student Feedback Form (SFF) Data at Temple University. This comprehensive guidance includes a good overview of the purpose of SFFs, what they are and are not, advice for instructors on how to use SFF data, and advice for evaluators on how to use SFFs responsibly and effectively for evaluation purposes. See also How to Read Those SFFs and Flip the Switch: Making the Most of Student Feedback Forms for guidance on the best ways for faculty to use student feedback to improve teaching. And, of course, you can make an appointment with a faculty developer at the CAT to discuss your SFFs.

Remember also that SFFs are not the only way to receive student feedback. I strongly recommend gathering mid-semester feedback as a check-in with your students while there is still time to make changes in the semester. It has the added bonus of having students reflect on their learning and consider changes they may want to make in order to achieve better results. You can also ask the CAT to perform a mid-semester small (or large) group instructional diagnosis.

This blog series will continue throughout the fall semester with an exploration into the other teaching evaluation methods that can be used to both assess teaching practices and grow teaching excellence. Stay tuned for the following upcoming topics:

Part II: Reflective Practice

Part III: Peer Review of Teaching

Part IV: The Literature on Teaching and Learning

Part V: Assessment of Student Learning

At the end of this series, my sincerest wish for you is that you find new ways to think about your teaching practices, that you engage with your colleagues (and with the CAT!) in productive and enlightening conversations about teaching, that you find a favorite resource on teaching, and that you connect with your students in ways that help them to learn deeply.

Stephanie Fiore serves as Assistant Vice Provost of Temple’s Center for the Advancement of Teaching. 

Looking for Evidence in all the Right Places: Aligning Assessments with Goals

Dana Dawson

You’ve written the learning goals for your course and are now ready to design learning assessments that align with your course goals, offer opportunities for formative feedback and are educative. Well-designed learning assessments will:

  • Provide evidence that students have met your learning goals;
  • Support students in progressing toward accomplishing your learning goals;
  • Allow students to assess their learning process and progress; and
  • Help you discern whether your learning materials and activities are effective.

Learning assessments are often described as formative or summative. Formative assessments are designed to give students feedback they can use in future work; they are most commonly low stakes and assigned early and often in a unit or course. Summative assessments provide a snapshot of a student’s learning at a point in time (at the end of a unit or course, for example). Another way to frame this is Dee Fink’s distinction between auditive and educative assessments. Auditive assessments are backward-looking and are used to determine whether students “got it.” Educative assessments have clear criteria and standards (through the use of rubrics, for example), help us ascertain whether students are ready for a future activity, and provide opportunities for high-quality feedback from the instructor and self-assessment by the student.

Here are some things to keep in mind as you design your assessments.

Start with your goals

You have determined what you want students to be able to do or to know by the end of your course and articulated those ambitions as learning goals. Now you must determine which activity or product would provide the best evidence of whether your students have reached a particular goal. What can your students do or create to demonstrate they have gained facility with the content or skills the course promises to deliver?

The previous post in this series outlined the six categories of goals that constitute Fink’s Taxonomy of Significant Learning (see the table below). The type of assessment you select will depend on the nature of the learning goal it is designed to address. For example, while a multiple choice quiz may be a good option for assessing foundational knowledge, it may not be a good fit for integration or caring goals. Here are some suggestions for types of assessments or assessment strategies that align with the dimensions of Fink’s Taxonomy of Learning. Note that many of the suggestions listed below will address more than one dimension. For example, a carefully constructed research poster assignment might assess how students define key concepts or methods (foundational knowledge), use communication skills (application), articulate the significance of the project (caring), consider their audience in designing the poster (human dimension) and pull together research skills taught and practiced throughout the semester into a coherent whole (integration).

Elements of Fink’s Taxonomy of Significant Learning, with Examples of Assessments

  • Foundational Knowledge: What key information is important for students to understand in this course or in the future? Example assessments: multiple-choice quizzes, guided notes, classroom polling, quotation summaries.
  • Application: What kinds of thinking are important for students to learn? What important skills do they need to gain? Example assessments: briefing papers, dyadic essays, lab reports, annotated bibliographies, problem-based learning.
  • Integration: What connections (similarities and interactions) should students recognize and make in this course and with other courses or areas of learning? Or within their own personal lives? Example assessments: reading prompts, learning portfolios, case studies, research posters.
  • Human Dimension: What could or should students learn about themselves and others? Example assessments: asset-mapping, role play, test-taking teams, student peer review, dyadic interviews.
  • Caring: What changes/values/passions do you hope your students will adopt? Example assessments: positive projects, contemporary issues journals, “what, so what, now what” journals, class participation, critiques, Wikipedia assignments.
  • Learning How to Learn: What would you like for your students to learn about how to be a good student, learn in this subject, become self-directed learners, and develop skills for lifelong learning? Example assessments: asking students to prioritize areas of feedback, advance organizers, self-reflection assignments, two-stage exams.

Use this worksheet to reflect on assessments that align with your goals and whether your goals and assessments address all six elements of the Taxonomy of Significant Learning.

Don’t forget those situational factors

Assessments designed for first-semester undergraduates ought to differ from those assigned to graduate students. When designing your assessments, you will need to put on your own Human Dimension hat and step back into the shoes of a learner taking their first lab, completing their BFA exit portfolio, doing rotations, and so forth. You may also need to design assessments that align with department, program, or accreditor goals and assessment efforts. Factors such as the number of students in your section and the instructional modality will influence assessment decisions as well.

Use assessment to support student learning

If assessments are infrequent or completed only at the end of a unit or course, they will not give students an opportunity to practice prior to summative assessments or to use your feedback. Remember that learning assessments do not have to be graded. There may be times that the primary purpose of an assessment activity is to help students gauge their own understanding or for you to get a big-picture sense of whether students are following you. In-class or low stakes Learning Assessment Techniques can be used throughout the semester to give students immediate feedback. Consider whether there are opportunities to build revision into your assignment design.

Assessments give you information – use it!

A classroom polling activity may tell you that your lecture on a topic didn’t land with a significant number of your students and that you need to spend a bit more time on it in the next session. A series of ineffectual peer reviews or critiques may tell you that you need to provide more guidance on how to conduct peer reviews or critiques. Learning assessments provide feedback on our students’ progress and on our own work as educators. Take time to reflect on what assessment results tell you not only about your students’ learning but also about your instructional strategies.

When your assessments align with your learning goals, accommodate situational factors, address the six elements of Fink’s Taxonomy, and guide future effort, they will be an essential component of successful course delivery.

For support in designing learning assessments, don’t hesitate to book a consultation with a CAT specialist.

Dana Dawson is Associate Director of Temple’s Center for the Advancement of Teaching.

Learning Goals: Dream Big!

Linda Hasunuma, Ph.D.

Take a close look at your syllabus. What do your learning goals (if you have them) say about what students are going to learn and achieve in your course? Often, our goals or course descriptions focus entirely on foundational knowledge and some application of that knowledge, but what about learning goals that go beyond facts, concepts, formulas, and theories? In this blog post, the third in our summer series on course design, we focus on how we can articulate learning goals that integrate our highest aspirations for learning and what Dee Fink calls our Big Dream for our students. What do we want students to take away, do, and remember years later from their time with us? Fink reminds us in his guide to creating courses for significant learning that we should lead our course design not with the content we will cover but instead with the goals we are hoping our students will reach.

So, what is your Big Dream and how can you craft that into a learning goal? Fink created a taxonomy to help you do just that. Fink’s Taxonomy of Significant Learning encourages instructors to think broadly about their goals for their students. A course goal might be focused on basic information you need students to know or on applying that foundational knowledge (the right side of the taxonomy), but goals focused on learning about oneself or others, or learning how to learn are equally important (the left side of the taxonomy). See below and think about where your current course learning goals are versus where you could go if you dared to dream big and include more of what is on the left side of his taxonomy. Most of us build goals in the foundational knowledge and application areas, but what can we do to include integration, the human dimensions, and caring into the learning experiences we create for our students?

By articulating goals that include more pieces of this pie, we can challenge ourselves to develop new and creative activities, assignments, and assessments that help our students make connections to one another and to the world. We can make our course content more meaningful to our students and their lives and can intentionally and thoughtfully build transformative and significant learning experiences.

The following questions can also help you brainstorm and draft learning goals that reach toward more of the goals on the left side of the pie:

  • Big Dream: A year or more after this course is over, what do you want and hope your students will do?
  • Foundational Knowledge: What key information (facts, formulas, terms, concepts, relationships, etc.) is important for students to understand in this course or in the future?
  • Application Goals: What kinds of thinking are important for students to learn (critical thinking, in which students analyze and evaluate; creative thinking, in which students imagine and create; and practical thinking, in which students solve problems and make decisions)? What important skills do they need to gain?
  • Integration Goals: What connections (similarities and interactions) should students recognize and make in this course and with other courses or areas of learning? Or within their own personal lives?
  • Human Dimension Goals: What could or should students learn about themselves and others?
  • Caring Goals: What changes/values/passions do you hope your students will adopt?
  • Learning How to Learn Goals: What would you like for your students to learn about how to be a good student, learn in this subject, become self-directed learners, and develop skills for lifelong learning?

As we expand our understanding of learning goals to make them more ambitious and think about what we want students to actually DO, the verbs we choose to write the goals make all the difference in helping to create an authentic, transformative and significant learning experience. At the CAT, we suggest using Noyd’s 2008 table of verbs based on Fink’s Taxonomy as you think about developing, revising, or refining your own learning goals for your classes and students.

After brainstorming some draft goals, you may want to review them with a colleague to make sure they are effective and clear. Are your draft goals too narrow? Are they written in language your students will understand? Do they motivate and challenge your students? Which areas of the pie are represented in each learning goal? We don’t just teach content; we teach human beings. Though we may not have been encouraged to include the human and caring dimensions in our syllabi and courses during our own education and training, this framework and taxonomy remind us to keep the bigger picture in mind and to be bold in articulating our dreams for our students. Those dreams and hopes can be part of your learning goals!

Now that we have provided a framework for thinking about and designing your course and learning goals, we turn to assessments for the next post in this series. Working backwards from that Big Dream and our more ambitious learning goals, how can you evaluate learning and progress toward those goals?

References:

  • Fink, L. Dee. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. San Francisco: Jossey-Bass, 2013 (pp. 83-84).
  • Noyd, Robert K., and the Staff of the Center for Educational Excellence. Primer on Writing Effective Learning-Centered Course Goals (White Paper 08-01). Colorado Springs, CO: US Air Force Academy, 2008.

Linda Hasunuma serves as an Assistant Director at Temple’s Center for the Advancement of Teaching.

Context Matters: Considering Situational Factors in Course Design

H. Naomie Nyanungo

Imagine trying to plan a trip with limited knowledge of your destination. Maybe you know the dates of your departure and return, but not much else. You don’t know the weather at your destination, or even how you will get there. You don’t know how many travel companions you will have or anything about them. If you are like me and like to feel prepared before embarking on any adventure, this sounds like a nightmare. I hope you can see where I am going with this – it is hard to plan for something without considering the context. This is as true for planning a trip as it is for designing a course.

The courses we design and teach take place in specific contexts; they do not happen in a vacuum. The situational factors in our context should inform the decisions we make about learning goals, activities, assessments, and feedback strategies. For example, the types of teaching and learning activities that I use in an asynchronous online course will be different from those in an in-person course. A well-designed course takes relevant contextual factors into consideration. When we fail to consider situational factors in the process of designing courses, we run the risk of setting unrealistic expectations for student performance and alienating our students. It can also result in poor alignment with standards set by departments, programs, or accrediting agencies. Ultimately, it leads to frustration for both instructors and students.

Consideration of situational factors is the first step of Dee Fink’s Integrated Course Design Model. The model identifies five categories of contextual factors listed below (with examples of questions for each category):

  • Specific context factors: E.g. what classroom will be used for the course, how many students will enroll, how often will the class meet, and how will instruction be delivered?
  • Expectations of others: E.g. what are the expectations placed on this course by the university, department, accreditation agencies, and the students?
  • Nature of the subject: E.g. is the subject primarily theoretical, practical, applied, or some combination?
  • Characteristics of the students: E.g. what are the characteristics of students who take this class? Are they working professionals? Are they majors in this field?
  • Characteristics of the teacher: E.g. what are the factors about your approach to teaching that are relevant to this course? What is your level of knowledge or familiarity with the subject? What is your level of comfort teaching in the specific modality?

It is important to note that not all of these factors are relevant to all teaching situations. You will need to determine which of these are relevant for you. As teachers we usually don’t determine which students will enroll in our class, the classrooms we will teach in, or the expectations of accrediting agencies. The challenge for us is to design good courses knowing the parameters beyond our control in the teaching context.

We encourage you to think about issues of equity and inclusion when assessing the situational factors of your course. In Inclusion by Design: Tool Helps Faculty Examine Their Teaching Practices, Moore and his colleagues share some helpful questions to guide our thinking about equity and inclusion in situational factors.

With some knowledge of the contextual factors in our teaching situation, we can be more confident about the decisions we will make when designing our courses, starting with the next step in this process – Setting Learning Goals.

H. Naomie Nyanungo is Director of Educational Technology at Temple’s Center for the Advancement of Teaching.