This post was made possible by the excellent notes and input from Laura Chance, temporary art librarian at Paley. Thanks, Laura!
Last Friday, 12 librarians gathered to talk about feedback received from students participating in the Analytical Reading and Writing library workshops. Our “text” for analysis consisted of several thousand comments in response to questions such as:
- What is one new thing you learned in today’s workshop?
- Which part(s) of the research process are you still nervous about?
The session was the first in a series of “Real World Assessment” meetings for library staff, facilitated by Caitlin Shanley and Jackie Sipes and designed to give the group hands-on practice with an assessment exercise.
The AR&W workshops provide multiple opportunities for learning assessment, from clicker responses, to the worksheets students complete during the session, to the online feedback form students submit at the end of the two instruction sessions. The data generated via this form was the topic of the Real World Assessment workshop with librarians.
At the beginning of the year, Caitlin and Jackie tasked themselves with evaluating the existing feedback form, with an eye toward getting beyond the “Did you like us?” questions and probing the more complex question of what learning takes place in the instruction workshops. They considered the essential learning outcome for the class, “How do you evaluate the credibility of a source?” Ideally, the assessment process should help us understand whether students are grasping that concept.
The discussion touched on the challenges of learning assessment:
- Is it meaningful when students can essentially regurgitate information shared with them during the session immediately prior?
- Could there be a pre- and post- test?
Recognizing that the assessment of learning is difficult to do immediately after a session, we asked:
- How do we measure the impact of our workshops outside the workshop itself?
- What if we could look at the bibliographies students include in their papers? One challenge is that instructors evaluate the papers differently, depending on their own vision for the goals of the class. The first-year adjuncts grade papers as part of a peer process; what if we could sit in on those sessions as they talk about normalizing paper grades?
- What are the ethics (privacy, sampling, providing differential services) for students who participate in studies like these?
Designing the Assessment Instrument
It also became clear that the assessment instrument dictates the kind of analysis that can be conducted. The feedback consisted of quantifiable data (limited-option questions) and more qualitative, free-text data. Analyzing free text is more difficult, and quantifying it with a tool like Voyant can be interesting but may not be meaningful since, again, students tend to repeat back the language provided by the instructor.
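For anyone curious what “quantifying text” looks like in practice, here is a minimal sketch in Python of the kind of word-frequency counting a tool like Voyant performs. The sample comments and stopword list are invented placeholders for illustration, not our actual data or Voyant’s implementation:

```python
from collections import Counter
import re

# Hypothetical sample comments -- placeholders, not real student feedback.
comments = [
    "I learned how to evaluate the credibility of a source",
    "Evaluating the credibility of sources was new to me",
    "I am still nervous about citing sources in my paper",
]

# A toy stopword list; a real analysis would use a fuller one.
STOPWORDS = {"i", "the", "of", "to", "a", "am", "was", "in", "my", "me",
             "how", "about", "still"}

def word_frequencies(texts):
    """Count words across all comments, skipping stopwords."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

# The top terms tend to mirror the instructor's own vocabulary.
print(word_frequencies(comments).most_common(5))
```

Even a simple count like this makes the echo effect visible: the most frequent terms in the feedback are often the very words the instructor used during the session.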
Our session generated more questions than answers, but the discussion brought home important issues for those of us designing assessments of learning. We learned, in a practical way, that:
1. Assessment is an iterative process.
2. It can be hard to know how best to approach it until you gather the data and look at what you get.
3. Defining what you want to learn is essential before you begin, but the answer may differ depending on your role and how you plan to use the data you collect.
One way or the other, it was a fun and useful meeting, and a great inaugural workshop for the Assessment in the Real World series.