Are There Any Meetings on Library Assessment?

Assessment is a growing topic of interest at American Library Association meetings, and last weekend I had the privilege of participating in several sessions to discuss trends and challenges.

Look at How Far We’ve Come: Successes

Assessment practice is evolving from the work of a solo librarian to assessment conducted across multiple domains: user experience, collections analysis, and space design. We started the ACRL Assessment Discussion by sharing successes. Grace YoungJoo Jeon at Tulane demonstrates how much one librarian can accomplish. In her first year as Assessment and User Experience Librarian, she talked with everyone about assessment, learned about their needs, created a list of potential activities, and began to prioritize the work ahead. Grace described reaching out to other units on campus, including the Office of International Students and Strategic Summer Programs, and working with them to design and moderate focus groups with international students. All in one year!

Penn State Libraries’ success this year is a growing assessment and metrics department headed by Steve Borelli. Prioritizing assessment needs through the lens of budgetary operations, they are currently advocating for a collections assessment position that would make it a department of four.

Joe Zucca at the University of Pennsylvania is using the Resource Sharing Assessment Tool (Metridoc) as a space for collecting interlibrary loan statistics, enhanced with MARC data from the consortium’s individual library holdings. With connections to Tableau, data visualization enhances the ability to evaluate inventory and use, and it opens the door to collection development at a collaborative level.

We Still Have Some Challenges

In the example of RSAT, merging data from 13 institutions creates some challenges. There is a “near total absence” of data governance, including some 600 different designations for academic departments. This lack of standardization makes cross-institutional analysis very difficult.
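To make the problem concrete, here is a minimal sketch of the kind of crosswalk such a project might need before cross-institutional numbers can be compared. The institutions, department strings, and canonical names below are invented for illustration, not drawn from RSAT itself.

```python
import pandas as pd

# Hypothetical loan records: three institutions, three spellings of one department.
loans = pd.DataFrame({
    "institution": ["Penn", "Tulane", "Penn State"],
    "department":  ["Dept. of History", "HISTORY", "History Dept"],
    "loans":       [120, 85, 97],
})

# One way to tame hundreds of free-text designations: a shared crosswalk
# mapping each local string (lowercased, trimmed) to a canonical name.
crosswalk = {
    "dept. of history": "History",
    "history":          "History",
    "history dept":     "History",
}

loans["department_canonical"] = (
    loans["department"].str.lower().str.strip().map(crosswalk)
)

# With one canonical key, a cross-institutional total becomes a simple groupby.
print(loans.groupby("department_canonical")["loans"].sum())
```

The hard part, of course, is not the groupby but agreeing on (and maintaining) the crosswalk itself, which is exactly the governance work that was described as absent.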

Of course this isn’t just a problem for large-scale analysis across libraries. One assessment librarian discovered that her public services departments had a “home-grown” system for tracking reference and directional questions. While standard definitions provided by ACRL and ARL can offer some guidance, libraries may not want to be limited to these more traditional metrics alone. There is a spectrum of opinions about what to count and how to count it. How best to define a transaction?

This lack of agreement about counting has ramifications down the line, particularly if these metrics are used in performance review. What is to prevent someone from “bumping up” her numbers? We talked quite a bit about how a library “reduced to bean counting” is no way to tell our story. Librarians may very well feel that a focus on counting diminishes the work that they do.

The Rearview Mirror

We shared concern that assessment practice is “always looking through the rearview mirror.” When we look at trends only at annual review time, we fail to use those trends to plan for the future. We may even prefer to ignore them. We tend to keep our data siloed, making it difficult to see the full picture or the interrelationships. A great example: fewer questions about finding Huckleberry Finn (a decrease in numbers at the reference desk) could mean that our discovery systems are working even better. Fewer page views on our website may result from a more efficient, user-friendly interface. We need to look at our numbers in a more integrated way.
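As a toy illustration of what “integrated” might look like, the sketch below simply puts two siloed numbers side by side. The monthly figures and column names are invented; real data would come from the reference desk log and the discovery system’s analytics.

```python
import pandas as pd

# Invented monthly figures for illustration only.
metrics = pd.DataFrame({
    "month":              ["2016-01", "2016-02", "2016-03", "2016-04"],
    "reference_questions": [310, 280, 250, 230],
    "discovery_searches":  [5200, 5900, 6400, 7100],
})

# Viewed together, falling desk questions paired with rising discovery
# searches reads as a success story, not a decline.
print(metrics)
print(metrics["reference_questions"].corr(metrics["discovery_searches"]))
```

Alone, the falling reference count looks like bad news; next to the rising search count, it tells a very different story.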

It was good to talk about our challenges, successes, and best practices with a group of understanding peers. Then it was on to the next meeting: the LLAMA Assessment Community of Practice Hot Topics discussion!
