Beyond SFFs: A Series on Evaluating Teaching – Part IV: The Literature on Teaching and Learning

Stephanie Laggini Fiore, Ph.D.

While reflection on one’s teaching, along with feedback from students and colleagues, is among the better-known methods for evaluating teaching, perhaps the most overlooked method is to consider how we use the scholarly literature on teaching and learning to improve our teaching. Instructors who engage with the scholarship of teaching and learning develop a vocabulary and way of thinking that moves them beyond replicating the teaching methods they experienced as students or were taught as teaching assistants or junior faculty. Familiarity with this literature allows us to engage in reflection and experimentation that continually evolves our teaching practices. The insights gained from engaging with this extensive body of work include validating effective practices we may already have been using and, of course, opening up new ways of teaching, designing curriculum, assessing learning, and supporting students that we may never have considered. Engaging with the literature also clarifies for us why certain methods may work better than others.

It is clear to me why this criterion is often overlooked. When I started working at our teaching center after having taught for over 25 years, I was introduced for the first time to the scholarly literature on teaching and learning. I had dabbled a bit with very specific literature on teaching English as a second language, and I had read a little bit about oral proficiency methods for teaching world languages, but I never moved beyond these limited forays into this kind of scholarship. I don’t think my lack of awareness was unusual. Immersed in my disciplinary research, as most faculty are, I had never had occasion to explore the wealth of scholarship that provides guidance and evidence on how students learn. In my new role at the center, a whole world opened up to me that I never knew existed.

I remember in particular a brand new book that had come out just as I started my role at the center—How Learning Works: Seven Research-Based Principles for Smart Teaching. It was an excellent entry point: each chapter pulled together the research on teaching and learning on a particular topic in coherent form and then suggested strategies to employ in the classroom. The chapter on student motivation was transformational for me. It validated much of what I had been doing, especially around creating a positive environment for learning, but also provided so many ideas for how to support student learning in more effective ways. When I went back into the classroom, my newfound knowledge really helped me rethink my teaching and implement concrete changes that saw exciting results. If I had been asked to demonstrate how I utilized the literature on teaching and learning to improve student learning as part of a process for evaluating teaching, I could have pointed clearly to the changes I made as a result of this book and the impact those changes had on student engagement and motivation.

So how can you use this lens to evaluate teaching? In particular, you can demonstrate how you have engaged in a process of continual scholarly teaching by taking advantage of professional development opportunities that allow you to delve into the literature on teaching and learning. For instance, have you attended workshops at the CAT, met with an educational development or educational technology consultant at the CAT, or attended other similar programming offered by professional organizations in your discipline? Have you taken a deeper dive by enrolling in longer-term, intensive opportunities focused on particular aspects of teaching and learning? For instance, perhaps you have attended our 12-hour Teaching for Equity series, or you have met monthly with a cross-disciplinary group to explore a teaching topic in a faculty learning community. Maybe you have simply gotten your hands on some excellent literature (the CAT has a lending library available on all kinds of topics!) and have made changes to your teaching based on what you have read. And, of course, taking this a step further, you might contribute to the scholarship on teaching and learning by investigating how teaching or curricular changes you have implemented have impacted student learning, and then presenting or publishing on those findings.

If you have never before considered this particular lens, I urge you to give it a try! Faculty who begin that journey into the scholarship on teaching and learning find it a fascinating and energizing way to evolve their teaching and curricular practices.

Stephanie Fiore is Assistant Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part III: Formative Peer Review of Teaching that Enhances Teaching and Builds Community

Stephanie Fiore and Linda Hasunuma

Peer review of teaching gets a bad rap. It conjures up images of being judged, of one’s teaching put under a microscope. Faculty express discomfort and nervousness at being observed in class, and, interestingly, they also resist the idea that they are “qualified” to provide feedback on a colleague’s teaching. That is, of course, if they even give feedback. I have a distinct memory of my chair coming into my class (unannounced), sitting in the back and writing furiously the whole time. Afterwards, I never received any feedback, but I knew that his mysterious impressions of my teaching were written in a report and filed somewhere with my name on it. And, of course, while faculty need a letter written by a peer reviewer for certain summative purposes, such as promotion, merit, or awards, these letters are often little more than a checkbox exercise written by a well-meaning colleague, and certainly aren’t intended to improve teaching.

But it doesn’t have to be this way!

Formative peer review of teaching (by formative I mean peer review intended to support continued growth in teaching excellence) should contribute to what Shulman calls making teaching community property. Just as we would never evaluate scholarly research on the basis of offhand comments made around the water cooler, we should not evaluate teaching that way either. A community of colleagues can provide feedback in both our research and teaching worlds to help us improve the quality of our work. This word—community—is so important here. Done well, peer review should build community in your departments and colleges as you talk to each other about teaching and learning, promote shared educational goals, and, of course, create natural support structures when our teaching goes sideways. Within this community of colleagues, a well-designed peer review process encourages reflection and more intentionality in teaching, and energizes us as instructors as we gain more insight into our practices. Note that peer review can take the form of classroom observations, as well as review of a Canvas course, a syllabus, or other teaching artifacts (such as assignments, assessments, and materials). If your department is considering peer review as a professional development practice, the CAT can help you create a protocol that works for your specific department’s needs.

Well-designed peer classroom observations should be a rewarding collaboration that contributes to the professional development of both the reviewer and the reviewed, as both gain insight into effective teaching practices through this process. There are three stages to an effective peer classroom observation: the pre-observation discussion, the observation, and the post-observation debrief. 

The Pre-Observation Discussion

Before the observation, the colleague conducting the review should try to learn as much as possible about the class goals and other helpful details, and any specific areas of concern the instructor may have about their teaching so that the reviewer can pay special attention to those areas during the observation and provide targeted feedback. 

The Observation

For the observation itself, it is very helpful to use an instrument to guide the reviewer. The CAT has recently created a new comprehensive instrument that may be useful for your peer observations, and there are other models we can share as well. Here are some helpful recommendations for conducting the observation, adapted from “Twelve Tips for Peer Observation of Teaching” (Siddiqui et al., 2007):

  • Be objective. Focus on specific teaching techniques and methods that were outlined in the instrument. You should communicate your observations, not your judgments.
  • Resist the urge to compare with your own teaching style. Being peers does not necessarily mean that the two of you will have the same teaching style. Concentrate on the teaching style of the person and the interactions that you observe.
  • Respect confidentiality. Your professionalism and trustworthiness are essential in building a peer review relationship with your partner, so confidentiality is important.
  • Make it a learning experience. For the reviewer too, the process of conducting a peer observation is a learning experience, which both builds the reviewer’s skill at providing constructive feedback, and may spark new ideas useful for the reviewer’s teaching.  

The Post-Observation Debrief

Providing supportive and constructive feedback in a timely manner is key to making this experience meaningful to your colleague’s professional development. But this is, of course, the part that worries faculty most. We often advise reviewers to think of the debrief as a discussion between colleagues, focused more on asking questions than on telling a colleague what went right or wrong. The guidelines below will help you offer useful feedback in peer observations:

  • Give your colleague an opportunity first to self-assess what they did well, what they have questions about, and what they might do differently. 
  • Limit the amount of feedback to what the receiver can use rather than the amount you would like to give (we recommend no more than three strengths and three areas for discussion and improvement).
  • Base your feedback on observations rather than inferences.
  • Provide your feedback in descriptive rather than evaluative language, using “I” statements rather than “you” statements: “I saw that some students in the back were disengaged,” rather than “You really should have done something about the disengaged students in the back.”
  • Begin with some (genuine) positive comments. 
  • Offer constructive ideas, framed as possibilities for consideration. It can help to frame these ideas as questions. “Have you considered trying…?”
  • Invite dialogue about your comments and questions. 
Adapted from: 
Ende, J. (1983). Feedback in Clinical Medical Education. JAMA, 250: 777–781; and Oxford Learning Institute, Giving and Receiving Feedback. http://www.learning.ox.ac.uk/rsv.php?page=319

Peer review can be a rewarding and meaningful part of our professional development if designed with care and transparency and in the spirit of doing our best to support student learning. It can help us build community with our colleagues through a shared sense of responsibility and mentorship about our development as teachers, and encourage personal reflection about our teaching practice. Ultimately, of course, its purpose is to deepen student learning, a goal we share as educators. 

In the next part of this series, we’ll discuss evaluating teaching using outcomes and assessments.

Stephanie Fiore is Assistant Vice Provost of Temple’s Center for the Advancement of Teaching and Linda Hasunuma serves as an Assistant Director at the CAT.

“Students are using AI to write their papers, because of course they are.”

Lori Salem and Stephanie Fiore

So says the title of a recent article in Vice that has been making the rounds at Temple. The article describes a new tool called OpenAI Playground that generates text on demand. Playground uses GPT-3, a newly developed machine-learning language model, to compose the text. GPT-3 is also the power behind ShortlyAI, another text-generation tool offering a somewhat different set of features. The sentences generated by both programs are surprisingly good: they flow, and they have a clear and simple prose style. A student could theoretically type their essay prompt into Playground or ShortlyAI, and the program would generate the essay for them. And because the sentences produced by GPT-3 are entirely original, the resulting text would not be flagged by a plagiarism detector like Turnitin.

So, is this the end of writing instruction as we know it?  We think not.  But these new programs do have implications for teaching, and that’s our focus in this post.    

We tested both tools to get a sense of what they can do and what it is like to use them. Both tools make it easy to produce short (paragraph-long) texts that clearly and coherently state a few relevant facts. It’s possible to imagine a student using them to produce short “blog-post”-type essays, which is exactly what the students in the Vice article say they do. At least for now, neither program makes it easy to produce a longer text, or one that is argument-driven rather than purely factual.

But more importantly, these programs don’t—and can’t—help with the real work of writing.  They can create sentences out of sentences that have already been written, but they can’t help writers find the words to express the ideas that they themselves want to express.  If the purpose of writing was simply to fill a page with words, then the AI tools would suffice.  But if the writer wants to communicate something, and therefore cares what ideas and arguments are being expressed, then AI writing tools are not helpful.  

Don’t take our word for this.  In the sidebar, we provide information about how to access and use Playground and Shortly.  Try them and see if you can get them to write something that you can genuinely use.

If you find, as we did, that AI writing tools are not useful when the writer cares about the content of the writing, then we’re halfway to solving the problem of students using AI tools to plagiarize.

The Plagiarism Arms Race

Just because AI generated texts are undetectable right now, doesn’t mean that will always be the case. Someone somewhere is probably already working on a tool that will detect texts written by GPT-3, because of course they are. Students figure out ways to cheat, and companies invent tools to catch them, and then they sell their inventions to us. This is just the latest iteration of that cycle.

To that point, have you seen the YouTube videos instructing students on how to beat Proctorio at its own game? The same Proctorio for which we pay a hefty annual subscription fee?

There has to be a better way, right?

A better way, part I: Encourage Academic Honesty by Creating Better Assignments

This new AI tool is a “threat” to academia only insofar as we ask students to complete purposeless writing assignments, ones that rely on lower-level thinking skills and ask students merely to reiterate factual information. The real answer to cheating systems that keep growing more sophisticated is to create better assessments and to create conditions in our classrooms that encourage academic honesty.

There is some very good research on what works to encourage academic honesty. This is a longer discussion than we can take up here, but in essence, we should think about the factors that lead to cheating behaviors and work to reduce them. These include 1) an emphasis on performance (rather than learning); 2) high stakes riding on the outcome; 3) an extrinsic motivation for success; and 4) a low expectation of success. There are very intentional steps that we as instructors can take to reduce these factors, including adjusting our assessment protocols to rely less heavily on high-stakes, one-and-done writing assignments, centering writing assignments on issues students care about, and scaffolding writing assignments to allow for feedback and revision.

We also need to look at the kinds of assessments we are using in our courses. The more we move towards authentic assessments and grounded assessments (assessments designed to be unique to the course you are teaching in the moment, often incorporating time, place, personal, or interdisciplinary elements that make them difficult to replicate), the better off we are. There is a lot of work to be done here, as we often rely on the kinds of assessments we had as students, very few of which were either authentic or grounded. It is much harder to cheat on these kinds of assessments.

Finally, findings from some interesting research on academic honesty suggest that communicating with students about academic honesty works better than you would think, reminding them of their ethical core and focusing on what academic honesty looks like and why it is expected. This is especially effective when timed close to an assessment.

Try it for yourself!

OpenAI Playground

How to try it: Use the link above to open the website and make a free account. From the home screen, click on the “Playground” tab (top right). Then enter an “instruction” in the main text box. The instruction might be something like “Describe [topic you are writing about]” or “Explain [something you are trying to explain].” Click “submit,” and your results will appear. If you don’t get what you were looking for, you can keep refining and resubmitting your instructions.

ShortlyAI

How to use it: Use the link above to open the website and make a free account. Enter a title and a sentence or two and set the output length to “a lot.” Then click the “write for me” button. If you like the way the text is going, you can type another sentence or two and click “write for me.” Or you can refine your original title and first sentence and start over.

Please share your results! Copy the text(s) that you “write” and email them to Lori.salem@temple.edu along with any comments you care to offer about the texts or your experience producing them.

A better way, part II:  Adapt instruction to reflect new writing practices

Once upon a time, writing instruction centered around penmanship and spelling.  Those days are gone because developments in the technology of writing (from pens, to typewriters, to word-processors) drove changes in writerly practice, which eventually led to changes in writing instruction. 

Automated text generators are just the latest technological innovation, and they have already changed the practice of writing in journalism, online marketing, and email. And why not? There is great value in making certain kinds of writing more efficient.

Our approach to writing instruction will need to adapt to this new reality. It’s not hard to imagine a future in which universities teach students how to use AI tools to generate text for some situations, even as they disallow the use of AI tools for others.

Lori Salem serves as Assistant Vice Provost and Director for the Temple University Student Success Center. Stephanie Fiore is Assistant Vice Provost and Senior Director of Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part II: Reflective Practice

Jeff Rients and Cliff Rouder


In Part I of this series, Stephanie Fiore outlined Brookfield’s four lenses of reflective practice: an autobiographical lens, our students’ lens, our colleagues’ lens, and the lens of theoretical literature. Today we’re going to look at the first lens, our own autobiographical understanding of what is happening in our courses. Reflecting on our own practices and the behaviors of our students is an important component of evaluating our teaching for four key reasons:

  • The single instructor model of the classroom sometimes makes teaching a lonely business. We only occasionally have a qualified professional in the room to give us feedback (more on that in the next installment). If we don’t take the time to seriously interrogate our daily practices, there’s simply no one else around to do the job.
  • A huge amount of the craft of teaching takes place inside your head! Instructors are constantly evaluating and adapting to the inherently fluid situation that arises when real people wrestle with complex topics. No one else can capture this valuable data, because only you know which thoughts drove your in-the-moment decisions. The only way to make sense of it all after the fact is through reflection.
  • Although our students’ opinions and insights are invaluable, if we uncritically accept their thoughts and suggestions then we run the risk of spending our teaching careers incoherently zigzagging from one extreme to another. That does neither us nor our next group of students any good.
  • We want our students to be reflective learners, so they can apply their learning in new ways and new situations. Well, we need to practice what we preach! If we are not reflective practitioners then our efforts to teach the principles of reflective learning will come off as inauthentic, because that’s what they will be.

But developing a reflective practice can be hard. For one thing, we might wince a little when we think back on mistakes we’ve made or times when our students just didn’t connect with what we were trying to teach them. For another, we’re all busy and it can seem like a luxury to take the time needed to stop what we’re doing, think about what’s working and what’s not, and revise our future actions. But the only way to understand ourselves and grow as instructors is to invest the time in ourselves that we need to turn our past misadventures into future successes.

The key to a solid reflective practice is to develop a specific, regular discipline that works for you. Ideally, you would have a few minutes after every class session to reflect on the events of the immediate past, but a time set aside at the end of each day, on certain days of the week, or even one day a week can work. The longer the gap between the end of the class session and your formal reflection time, the more important it becomes to scribble some notes to yourself during class, so you can remind yourself later what transpired. Additionally, you should consider making an appointment with yourself in your Outlook calendar or whatever scheduling tool you use. Not only will that serve as a reminder to do the reflection, but an appointment with yourself makes the task feel “more real” to a lot of us. If you find yourself regularly canceling or moving the appointment for other things, that may be a signal that you need to choose a different time.

Once you sit down (preferably alone and in a relatively tranquil space), you will need a reflection method. Here are a few possibilities:

Mark Up Your Lesson

In this technique you add comments directly to your lesson plan and/or slide show. This can be helpful if you teach similar material from semester to semester, provided that you review each lesson well enough in advance that you can implement changes the following semester.

Journaling

We talked about this topic in another EDvice Exchange post. One major advantage of a journal, whether ink-and-paper or electronic, is that it collects all your thoughts together in one place for easy review.

Audio/Video Options

Talking out loud to yourself may sound weird, but it can help you process what is going on in your class. For audio only you can use a voice recorder app on your phone, or something like Audacity. For a video recording, a Zoom room of one and the record feature do the job nicely. Of course, if you’re feeling brave you could publish your ongoing reflections via YouTube or SoundCloud or TikTok! Not enough of us talk publicly about what is happening in our classrooms.

Two other things you’ll want to consider as part of your reflective practice: The first is talking to somebody. A regular debrief with a colleague (or a staff member at the CAT!) can help you put your thoughts into perspective. Even getting together once a month to talk about your teaching can help. The second is that at the end of each semester you should consider a reflection session where you go over everything that has happened in your course and try to synthesize what your big takeaways are. You may even find it useful to write a memo to yourself, with a page or two of ideas of how you want to do things differently next semester.

Whichever options you choose, make sure to go back and review your reflections when you receive your SFFs and when you sit down to revise your course. The former is important because you’ll be able to compare your own insights with those of your students, while the latter ensures that all your reflective work pays off in your future teaching.

In the next installment of this series, we’ll be looking at how our colleagues can assist us in evaluating our teaching.

Cliff Rouder and Jeff Rients both work at Temple’s Center for the Advancement of Teaching.

Beyond SFFs: A Series on Evaluating Teaching – Part I: Developing a Holistic Approach to Teaching Evaluation

Stephanie Laggini Fiore, Ph.D.

Evaluation without development is punitive, and development without evaluation is guesswork. (Theall, 2017)

Lee Shulman, past president of the Carnegie Foundation for the Advancement of Teaching and professor emeritus at Stanford University, recounts his surprise at discovering that his vision of faculty life (quiet, solitary scholarly activity paired with vibrant, collegial interaction within a community of teachers) was exactly backward. Says Shulman (1993), “We close the classroom door and experience pedagogical solitude, whereas in our life as scholars, we are members of active communities: communities of conversation, communities of evaluation, communities in which we gather with others in our invisible colleges to exchange our findings, our methods, and our excuses.” In fact, when I speak with faculty about the possibility of implementing new methods of teaching evaluation (such as peer review) that would break down that isolation and begin to develop synergies among faculty for growth in teaching and learning, they may fall prey to imposter syndrome, claiming not to be expert enough to provide feedback to colleagues. At the same time, they reveal a sense of vulnerability at the idea of having others observe their teaching.

But a remarkable thing happened during the shift to remote learning during COVID-19. Faculty began to emerge from their isolation, connecting with each other to talk about teaching and brainstorming solutions to teaching challenges together. New Facebook pages dedicated to pedagogy sprang up (the Pandemic Pedagogy group has 31K followers), national disciplinary organizations put information on their websites and circulated it through listservs, department meetings were dedicated to teaching and learning, and faculty spoke with students about what worked. In short, because we were pushed into the deep end without a lifejacket, we focused our attention on teaching. And we grew by learning from each other and from our students!

Evaluation of teaching has long been practiced as a mechanism for summative decisions regarding promotion or contract renewal, and faculty will complain (often rightfully so) that it can be either a checkbox exercise devoid of real meaning or based heavily on student feedback. Evaluation of teaching should be so much more! It should create the kind of community that the pandemic briefly afforded us, one in which we as professionals reflect on our own teaching, discuss our practices with colleagues, learn from each other, from our students, and from how well students meet our learning goals, and move towards continual, formative improvement. Stephen Brookfield (2005) suggests that we look at our teaching through four lenses: an autobiographical lens, our students’ lens, our colleagues’ lens, and the lens of theoretical literature. We might also think about how we assess whether our students are reaching the learning goals we’ve set out for them, and what changes we might make to try to improve their ability to succeed in our courses. As Berk (2018) points out, multiple sources can be both more accurate and more comprehensive in evaluating a professional activity as complex as teaching. These multiple sources can be deployed for summative purposes, of course, but more importantly, they can be useful as a holistic tool to help us continue our growth as educators, and our effectiveness in supporting student learning.

We already have a long history of employing the student lens through student feedback forms (SFFs), so this series will not separately discuss this method of evaluation. However, I will mention here how important it is to be mindful of best practices in using SFF data in order for it to provide helpful information towards improvement of teaching. The Temple University Assessment of Instruction Committee has just put out a very helpful guide to using SFF data, Recommendations for the Use of Student Feedback Form (SFF) Data at Temple University. This comprehensive guidance includes a good overview of the purpose of SFFs, what they are and are not, advice for instructors on how to use SFF data, and advice for evaluators on how to use SFFs responsibly and effectively for evaluation purposes. See also How to Read Those SFFs and Flip the Switch: Making the Most of Student Feedback Forms for guidance on the best ways for faculty to use student feedback to improve teaching. And, of course, you can make an appointment with a faculty developer at the CAT to discuss your SFFs.

Remember also that SFFs are not the only way to receive student feedback. I strongly recommend gathering mid-semester feedback as a check-in with your students while there is still time to make changes in the semester. It has the added bonus of having students reflect on their learning and consider changes they may want to make in order to achieve better results. You can also ask the CAT to perform a mid-semester small (or large) group instructional diagnosis.

This blog series will continue throughout the fall semester with an exploration into the other teaching evaluation methods that can be used to both assess teaching practices and grow teaching excellence. Stay tuned for the following upcoming topics:

Part II: Reflective Practice

Part III: Peer Review of Teaching

Part IV: Assessment of Learning Outcomes

Part V:  Literature on Teaching and Learning

At the end of this series, my sincerest wish for you is that you find new ways to think about your teaching practices, that you engage with your colleagues (and with the CAT!) in productive and enlightening conversations about teaching, that you find a favorite resource on teaching, and that you connect with your students in ways that help them to learn deeply.

Stephanie Fiore serves as Assistant Vice Provost of Temple’s Center for the Advancement of Teaching. 

Looking for Evidence in all the Right Places: Aligning Assessments with Goals

Dana Dawson

You’ve written the learning goals for your course and are now ready to design learning assessments that align with your course goals, offer opportunities for formative feedback and are educative. Well-designed learning assessments will:

  • Provide evidence that students have met your learning goals;
  • Support students in progressing toward accomplishing your learning goals;
  • Allow students to assess their learning process and progress; and
  • Help you discern whether your learning materials and activities are effective.

Learning assessments are often described as formative or summative. Formative assessments are designed to give students feedback they can use for future work and are most commonly low stakes and assigned early and often in a unit or course. Summative assessments provide a snapshot of a student’s learning at a point in time (at the end of a unit or course, for example). Another way to frame this is with Dee Fink’s distinction between auditive and educative assessments. Auditive assessments are backward-looking and are used to determine whether students “got it.” Educative assessments have clear criteria and standards (through the use of rubrics, for example), help us ascertain whether students are ready for a future activity, and provide opportunities for high-quality feedback from the instructor and self-assessment on the part of the student.

Here are some things to keep in mind as you design your assessments.

Start with your goals

You have determined what you want students to be able to do or to know by the end of your course and articulated those ambitions as learning goals. Now you must determine which activity or product would provide the best evidence as to whether your students have reached a particular goal. What can your students do or create to demonstrate they have gained facility with the content or skills the course promises to deliver?

The previous post in this series outlined the six categories of goals that constitute Fink’s Taxonomy of Significant Learning (see the table below). The type of assessment you select will depend on the nature of the learning goal it is designed to address. For example, while a multiple choice quiz may be a good option for assessing foundational knowledge, it may not be a good fit for integration or caring goals. Here are some suggestions for types of assessments or assessment strategies that align with the dimensions of Fink’s Taxonomy of Learning. Note that many of the suggestions listed below will address more than one dimension. For example, a carefully constructed research poster assignment might assess how students define key concepts or methods (foundational knowledge), use communication skills (application), articulate the significance of the project (caring), consider their audience in designing the poster (human dimension) and pull together research skills taught and practiced throughout the semester into a coherent whole (integration).

Elements of Fink’s Taxonomy of Significant Learning, with examples of assessments for each:

  • Foundational Knowledge: What key information is important for students to understand in this course or in the future? Example assessments: multiple choice quiz, guided notes, classroom polling, quotation summaries
  • Application: What kinds of thinking are important for students to learn? What important skills do they need to gain? Example assessments: briefing paper, dyadic essay, lab report, annotated bibliography, problem-based learning
  • Integration: What connections (similarities and interactions) should students recognize and make in this course and with other courses or areas of learning? Or within their own personal lives? Example assessments: reading prompts, learning portfolio, case study, research poster
  • Human Dimension: What could or should students learn about themselves and others? Example assessments: asset-mapping, role play, test-taking teams, student peer review, dyadic interviews
  • Caring: What changes/values/passions do you hope your students will adopt? Example assessments: positive projects, contemporary issues journal, “what, so what, now what” journal, class participation, critiques, Wikipedia assignment
  • Learning How to Learn: What would you like for your students to learn about how to be a good student, learn in this subject, become self-directed learners, and develop skills for lifelong learning? Example assessments: ask students to prioritize areas of feedback, advance organizers, self-reflection assignments, two-stage exams

Use this worksheet to reflect on assessments that align with your goals and whether your goals and assessments address all six elements of the Taxonomy of Significant Learning.

Don’t forget those situational factors

Assessments designed for first-semester undergraduates ought to differ from those assigned to graduate students. When designing your assessments, you will need to put on your own Human Dimension hat and transport yourself back into the shoes of a learner taking their first lab, completing their BFA exit portfolio, doing rotations, and so forth. You may need to design assessments that also align with department, program or accreditor goals and assessment efforts. Factors such as the number of students in your section and instructional modality will influence assessment decisions.

Use assessment to support student learning

If assessments are infrequent or completed only at the end of a unit or course, they will not give students an opportunity to practice prior to summative assessments or to use your feedback. Remember that learning assessments do not have to be graded. There may be times that the primary purpose of an assessment activity is to help students gauge their own understanding or for you to get a big-picture sense of whether students are following you. In-class or low stakes Learning Assessment Techniques can be used throughout the semester to give students immediate feedback. Consider whether there are opportunities to build revision into your assignment design.

Assessments give you information – use it!

A classroom polling activity may tell you that your lecture on a topic didn’t land with a significant number of your students and that you need to spend a bit more time on it in the next session. A series of ineffectual peer reviews or critiques may tell you that you need to provide more guidance on how to conduct peer reviews or critiques. Learning assessments provide feedback on our students’ progress and on our own work as educators. Take time to reflect on what assessment results tell you not only about your students’ learning but also about your instructional strategies.

When your assessments align with your learning goals, accommodate situational factors, address the six elements of Fink’s Taxonomy, and guide future effort, they will be an essential component of successful course delivery.

For support in designing learning assessments, don’t hesitate to book a consultation with a CAT specialist.

Dana Dawson is Associate Director of Temple’s Center for the Advancement of Teaching.

Learning Goals: Dream Big!

Linda Hasunuma, Ph.D.

Take a close look at your syllabus. What do your learning goals (if you have them) say about what students are going to learn and achieve in your course? Often, our goals or course descriptions focus entirely on foundational knowledge and some application of that knowledge, but what about learning goals that go beyond facts, concepts, formulas, and theories? In this blog post, the third in our summer series on course design, we focus on how we can articulate learning goals that integrate our highest aspirations for learning and what Dee Fink calls our Big Dream for our students. What do we want students to take away, do, and remember years later from their time with us? Fink reminds us in his guide to creating courses for significant learning that we should lead our course design not with the content we will cover but instead with the goals we are hoping our students will reach.

So, what is your Big Dream and how can you craft that into a learning goal? Fink created a taxonomy to help you do just that. Fink’s Taxonomy of Significant Learning encourages instructors to think broadly about their goals for their students. A course goal might be focused on basic information you need students to know or on applying that foundational knowledge (the right side of the taxonomy), but goals focused on learning about oneself or others, or learning how to learn are equally important (the left side of the taxonomy). See below and think about where your current course learning goals are versus where you could go if you dared to dream big and include more of what is on the left side of his taxonomy. Most of us build goals in the foundational knowledge and application areas, but what can we do to include integration, the human dimensions, and caring into the learning experiences we create for our students?

By articulating goals that include more pieces of this pie, we can challenge ourselves to develop new and creative activities, assignments, and assessments that help our students make connections to one another and to the world. We can make our course content more meaningful to our students and their lives and can intentionally and thoughtfully build transformative and significant learning experiences.

The following questions can also help you brainstorm and draft learning goals that aim to have students reach more of the goals on the left side of the pie:

  • Big Dream: A year or more after this course is over, what do you want and hope your students will do?
  • Foundational Knowledge: What key information (facts, formula, terms, concepts, relationships, etc.) is/are important for students to understand in this course or in the future?
  • Application Goals: What kinds of thinking are important for students to learn (critical thinking, in which students analyze and evaluate; creative thinking, in which students imagine and create; and practical thinking, in which students solve problems and make decisions)? What important skills do they need to gain?
  • Integration Goals: What connections (similarities and interactions) should students recognize and make in this course and with other courses or areas of learning? Or within their own personal lives?
  • Human Dimension Goals: What could or should students learn about themselves and others?
  • Caring Goals: What changes/values/passions do you hope your students will adopt?
  • Learning How to Learn Goals: What would you like for your students to learn about how to be a good student, learn in this subject, become self-directed learners, and develop skills for lifelong learning?

As we expand our understanding of learning goals to make them more ambitious and think about what we want students to actually DO, the verbs we choose to write the goals make all the difference in helping to create an authentic, transformative and significant learning experience. At the CAT, we suggest using Noyd’s 2008 table of verbs based on Fink’s Taxonomy as you think about developing, revising, or refining your own learning goals for your classes and students.

After brainstorming some draft goals, you may want to review them with a colleague to make sure they are effective and clear. Are your draft goals too narrow? Are they written in language your students will understand? Do they motivate and challenge your students? Which areas of the pie are represented in that learning goal? We don’t just teach content; we teach human beings. Though we may not have been encouraged to include the human and caring dimensions in our syllabi and courses during our own education and training, this framework and taxonomy remind us to keep the bigger picture in mind and to be bold in  articulating our dreams for our students. Those dreams and hopes can be part of your learning goals!

Now that we have provided a framework for thinking about and designing your course and learning goals, we turn to assessments for the next post in this series. Working backwards from that Big Dream and our more ambitious learning goals, how can you evaluate learning and progress toward those goals?

References:

  • Fink, L. Dee. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. San Francisco: Jossey-Bass, 2013, pp. 83–84.
  • Noyd, Robert K., and the Staff of the Center for Educational Excellence. Primer on Writing Effective Learning-Centered Course Goals (White Paper 08-01). Colorado Springs, CO: US Air Force Academy, 2008.

Linda Hasunuma serves as an Assistant Director at Temple’s Center for the Advancement of Teaching.

Context Matters: Considering Situational Factors in Course Design

H. Naomie Nyanungo

Imagine trying to plan a trip with limited knowledge of your destination. Maybe you know the dates of your departure and return and that you will have some travel companions, but not much else. You don’t know the weather at your destination, or even how you will get there. You don’t know how many travel companions you will have or anything about them. If you are like me and like to feel prepared before embarking on any adventure, this sounds like a nightmare. I hope you can see where I am going with this: it is hard to plan for something without considering the context. This is as true for planning a trip as it is for designing a course.

The courses we design and teach take place in specific contexts; they do not happen in a vacuum. The situational factors in our context should inform the decisions we make about our learning goals, activities, assessments, and feedback strategies. For example, the types of teaching and learning activities that I use in an asynchronous online course will be different from those in an in-person course. A well-designed course is one that takes relevant contextual factors into consideration. When we fail to consider situational factors in the process of designing courses, we run the risk of setting unrealistic expectations of student performance and alienating our students. It can also result in poor alignment with standards set by departments, programs, or accrediting agencies. Ultimately, it leads to frustration for both instructors and students.

Consideration of situational factors is the first step of Dee Fink’s Integrated Course Design Model. The model identifies five categories of contextual factors listed below (with examples of questions for each category):

  • Specific context factors: E.g. what classroom will be used for the course, how many students will enroll, how often will the class meet, and how will instruction be delivered?
  • Expectations of others: E.g. what are the expectations placed on this course by the university, department, accreditation agencies, and the students?
  • Nature of the subject: E.g. is the subject primarily theoretical, practical, applied, or some combination?
  • Characteristics of the students: E.g. what are the characteristics of students who take this class? Are they working professionals? Are they majors in this field?
  • Characteristics of the teacher: E.g. what are the factors about your approach to teaching that are relevant to this course? What is your level of knowledge or familiarity with the subject? What is your level of comfort teaching in the specific modality?

It is important to note that not all of these factors are relevant to all teaching situations. You will need to determine which of these are relevant for you. As teachers we usually don’t determine which students will enroll in our class, the classrooms we will teach in, or the expectations of accrediting agencies. The challenge for us is to design good courses knowing the parameters beyond our control in the teaching context.

We encourage you to think about issues of equity and inclusion when assessing the situational factors of your course. In Inclusion by Design: Tool Helps Faculty Examine Their Teaching Practices, Moore and his colleagues share some helpful questions to guide our thinking about equity and inclusion in situational factors.

With some knowledge of the contextual factors in our teaching situation, we can be more confident about the decisions we will make when designing our courses, starting with the next step in this process – Setting Learning Goals.

H. Naomie Nyanungo is Director of Educational Technology at Temple’s Center for the Advancement of Teaching.

Design Your Course for Significant Learning! A Step-By-Step Guide

Stephanie Fiore

The first time I taught, I was a first-year graduate student assigned to teach a section of an upper-level multi-section Italian Cinema course. I had never taught before, and certainly didn’t feel that I possessed the expertise to teach upperclassmen about the subject. The faculty fed me notes from their lectures and handed me the syllabus I would need to follow. It was the usual syllabus with a list of policies, a schedule of topics—i.e. the list of films that we would discuss—and the dates of two midterms and the final exam. There was no explanation about why the course was structured in this particular way (except that it was chronologically organized by date of film) nor was there any instruction on how to teach it effectively. I muddled through as so many do when we begin teaching in higher ed, but I must admit that I had little idea what the overarching goals of the course were and what I was hoping my students would achieve by the end of it. I knew only that I needed to do a lot of research on each film and then ask interesting questions to get a discussion going, focusing on some of the salient points from my research. 

My experience is not unusual. The truth is that very few of us are provided guidance in graduate school about designing courses that will lead our students to what L. Dee Fink calls “significant learning.” As Fink explains it, “for learning to occur, there has to be some kind of change in the learner. No change, no learning. And significant learning requires that there be some kind of lasting change that is important in terms of the learner’s life” (Fink, 2013). But how to achieve that significant learning? This summer’s EDvice Exchange Course Design Blog Series offers a step-by-step guide to Fink’s Integrated Course Design model in order to lead you and your students towards that significant learning experience we all wish for. 

In Fink’s model, we begin by considering the situational factors that define the learning environment in which we are teaching. Then we start designing our course, starting with learning goals. This is a meaningful change from the content-first way we often plan our courses (What content do we have to cover? What order is the textbook in?), shifting our focus instead towards the learning goals we hope our students will achieve. We then identify assessments that provide evidence students have reached the learning goals, and activities that provide the practice students need to achieve mastery. This is not a linear process but instead an integrated one in which goals, assessments and activities work together to provide the significant learning we seek. We will discuss each of these elements of course design in greater depth as we move through the series, providing time between each piece for you to apply what you are learning to your course(s).

EDvice Exchange Course Design Series:

Part 2: Considering Situational Factors  

Part 3: Articulating Meaningful Goals

Part 4: Aligning Assessments with Goals

Part 5: Developing Teaching Activities

Part 6: Implementing Your Course Design  

This year, I taught a course similar to the one I taught all those years ago. Using Fink’s model, I designed the course by focusing on the goals I wanted my students to achieve. Of course, I wanted them to develop an appreciation for Italian culture and film. But I also wanted them to develop analytical skills for critically discussing and writing about film. I wanted them to explore the complex interactions between film and its historical, cultural, and political context. I also wanted them to learn how to build new knowledge through productive and collaborative discussions. By clarifying for myself exactly what the learning goals were for the course, I was able to design learning activities that would get them there and assessments that would provide evidence that they had indeed achieved those goals. Student self-assessments at the end of the semester reflected their own awareness of having experienced significant learning. They spoke about formulating new ways of thinking that challenged their previous perspectives, looking at stories as an expression of culture, and learning to critically analyze film through an understanding of historical, cultural, and political factors. One commented, “discussions allowed us to practice analyzing these films, and hearing what other people noticed broadened our perspectives of not only the film itself, but the director, their choices, and the process of filmmaking.” The important thing to notice here is that designing our courses for significant learning means both we and our students understand where the learning journey is taking us and how it will fundamentally change us along the way. That is a powerful outcome for any one of us!


Interested in reading more about designing courses for significant learning? See L. Dee Fink’s Self-Directed Guide to Significant Learning.

Stephanie Fiore, Ph.D., is Assistant Vice Provost at Temple’s Center for the Advancement of Teaching.

How to Read Those SFFs

Kyle Vitale

Read for patterns, ideas, and humor

The Student Feedback Form can be a valuable source of information about your teaching and your courses. Just as often, it can be a site of frustration, confusion, and irrelevant non sequiturs. How do you read your SFFs with an eye to learning about your teaching? We at the CAT would like to share a few approaches that can help you cut through the noise, find that helpful information, and even laugh off those cruder moments.

Read for patterns

The SFF is an inherently subjective object: students typically fill them out with their own personal experiences of your course in mind. While that doesn’t invalidate their observations, it does mean handling them with care. In this context, it makes more sense to look for patterns, where multiple students signal a shared experience about your course. Most of us have experienced that one student who had a terrible time and lets you know about it in the SFF. That single student’s strong language says far less than two, ten, or twenty students arriving at the same conclusion.

So, read for patterns. Do multiple students comment on course organization, or how accessible you are? That probably means a majority of students found you and the course to be clear. This kind of reading is broad, not deep: you resist getting mired in a single comment and instead survey your responses and start picking out keywords or subjects that recur. Make a list of those subjects, and only then return for deeper reading. This approach ensures you maximize your time with material that obviously deserves attention (either for compliment or revision).

Read for ideas

SFFs are not a statement of adequacy. We’ll repeat that so you can read it out loud to yourself and anybody nearby: SFFs are not a statement of adequacy. Research indicates that students sometimes respond to questions for a variety of reasons having little to do with your teaching ability, and of course, SFFs are not built to include evidence and citation. It is therefore best to approach them with an eye toward small ideas for improvement, rather than with your self-worth on the line.

Do students share anything they wish they’d seen more of? Do they mention (and remember, look for those patterns) activities they liked? Are there any particularly kind and thoughtful comments that offer suggestions for changes to in-class activities or assignments? Treat these not as personal critiques, but as free advice for you to consider and (maybe, if it makes sense) adopt.

Read for humor

The internet has made us all familiar with the dangers of the “comment section,” and we have seen some crazy things in SFFs: professors accused of knowing nothing, ruining a major, and picking the wrong career. There are only two possible responses to such claims: believe them, or laugh them off. Comments like these prey on our innate insecurities, and almost always come as one-offs, not patterns. While they can hurt and be difficult to forget, keep in mind that they are almost always groundless and operate on an emotional, rather than curricular, level.

So, find a friend and laugh it off! Tell someone “they just have to hear this.” Have a party where you try to outdo each other with the funniest, oddest, or worst comment out there. We promise that the moment you hear those statements hanging in the air, they’ll lose their power and you’ll find them easier to dismiss out of hand. If no one comes to mind, reach out to us — we’d love to share a chuckle with you!

Keep it in context 

Remember that SFFs are just one perspective on your teaching. Many others exist, including peer observation, your own reflections, more informal conversations with students, and asking a CAT staffer to come visit. SFFs are not comprehensive, and should be digested in concert with other evidence for a fuller picture of your ongoing development as a teacher. As always, the CAT is here to help: if you’d like us to help you decipher your SFFs, feel free to make an appointment and we’d be happy to read them with you.

Kyle Vitale, PhD, is Associate Director of Temple’s Center for the Advancement of Teaching.