Assessment in Atlanta

I recently attended the Southeastern Library Assessment Conference, a great forum that provides us an opportunity to hear about (mostly) academic library assessment activities in the Southeast and beyond. Several sessions related to a topic of special interest to libraries these days: How do we identify the measures that demonstrate value – that are meaningful for libraries?

The conference organizers were quite brave in kicking off the conference with comedic duo Consilience with Pete and Charlie. Both have day jobs at Georgia Tech. Pete Ludovice is an engineering professor by day, a stand-up comic by night. Charlie Bennett is a humanities librarian.

Provided with Amos Lakos’ article on creating a culture of assessment as fodder, their back-and-forth riff on the future of libraries (is there really a need?) and on assessment was provocative and fun at the same time. Are there ways in which assessment actually hinders innovation? The take-away: start-ups take time. If you want innovation and creative thinking to thrive at your library, keep assessment loose at first. Don’t be Draconian about the numbers at the start. Assessment is not just about statistical significance. Embrace failure – we fail all the time, and we need to learn from it.

That said, we assessment librarians do like to collect numbers. So how do we manage all of the data that we are collecting?

Wrangling the Megalith – Mapping the Data Ecosystem

Mark Shelton reported on such a data identification and management project at Harvard University, where he is Assessment Librarian. Understanding what data is available, where and how it is collected, and how its elements relate to one another is, Mark suggests, a taxonomy problem – and one that librarians are good at solving. His approach thus far has been to examine many sources of data and to meet with staff to talk about assessment and the data they are collecting. The data being wrangled comes from:

  • 73 libraries
  • 931 staff
  • 1,364 circulating collections
  • 78 borrower types

And the data taxonomy comes with its own clever acronym:

COLLAPSE

  • Collection
  • Object Type – books, film, coins
  • Library – physical places associated with schools, rooms, carrels
  • Loot – Projected, real, strategic alignment – Harvard has thousands of endowments
  • Activity – reference, learning, digitization
  • People – 78 borrower types
  • Systems – Internal, external, citation, collaboration – data in all these systems
  • Events – transactions for reference, article downloading

With this kind of complexity, even analyzing a data set as seemingly straightforward as circulation data is not for the faint of heart. But it’s essential work. We must map our data to what we say are our values. We must also recognize how this data can contribute to strategic planning efforts, although different stakeholders will want to look at and interpret the data in different ways.

Metrics with Meaning

The need to recognize our different stakeholders was a message of another presentation, from Lisa Horowitz, Massachusetts Institute of Technology; Kirsten Kinsley, Florida State University; Zsuzsa Koltay, Cornell University; and Zoltán Szentkirályi, Southern Methodist University.

Their session had two goals, the first being to discuss strategies for working with publishers as they update their requests to libraries for metrics that are meaningful – to libraries and to their respective audiences. The impetus for the question was a request from Peterson’s College Guide for data on the number of microforms owned by the library. Does the data we publish about our libraries help or hinder our marketing of them?

Publishers have interests too. Their audiences are different from those of librarians and institutional administrators. Our challenge is to work with publishers to identify metrics that are useful to prospective students, for instance.

Through a set of messages to listservs, the authors gathered ideas about library metrics that could be useful to prospective students. A sampling:

  • Is there 24/7 available space for study and access to library resources?
  • Are laptops or other equipment available to borrow at the library? Are textbooks placed on reserve for student use?
  • What kind of reference support is available, and how – in person, virtual?
  • Students might prefer to know about the use of the web site and institutional repository before they’d care about the number of microfilms.
  • What are the unique collections owned by the library? What is the ratio of collection size to student population?

It’s clear that there is no “one size fits all” set of metrics. A school with a focus on distance education attracts students with facts such as online journal holdings and virtual reference, while on-campus students will look for study and collaborative work space in the library.

The presenters pointed out how metrics project an image. What we want to avoid is the potential for an outdated image of library services and resources to persist, influencing how people perceive the library. The challenge is overcoming inertia, re-considering the collection of entrenched statistics, and coming to consensus on what new measures should look like.

Many of the presentations are available on the Georgia State institutional repository at: http://scholarworks.gsu.edu/southeasternlac/2015/

