Lighting the Path with Assessment


Every two years, librarians engaged with assessment gather to share stories, methods, and research findings. We inspire one another as we work toward creating a culture of assessment at our institutions. This year 600+ of us met in Crystal City (between Arlington and Pentagon City) at the Library Assessment Conference, sponsored by the University of Washington and the Association of Research Libraries. Conference organizers invited keynote speakers with the expectation of providing us with provocative food for thought, and these two did not disappoint.

Lisa Hinchliffe (University of Illinois at Urbana-Champaign) spoke on Sensemaking and Decision-Making.

Hinchliffe noted that higher education is experiencing increasing competition and financial pressures. This environment requires libraries to “pivot”: to reconsider what is central to our work and what we can leave behind.

  • What is the new work we can do?
  • Are we prepared to step up quickly enough for our funders to see our value?

The value the library creates is not just economic. Shared services and collections DO create economic value, but just as important are the values of equality-building (i.e., inclusion, equity and social justice).

And assessment can serve as a map, or compass, toward the future: a kind of strategic guide. While an assessment program allows us to see the different directions that are possible, it cannot tell us which path to choose. The path must be selected based on how best we align our resources to our goals, and how best to demonstrate, with evidence, our outcomes and value. Yes.

The next day we heard from Brian Nosek, University of Virginia, on Promoting an Open Research Culture. Nosek also directs the Center for Open Science (COS).

Through several participatory activities, Nosek demonstrated that we cannot help but experience the world through our own minds. Once we see a picture one way, it can be hard to see it another way. We all looked at the Horse and Frog Illusion: while half the room saw a horse, the others saw a frog.

In the research world, this idea relates to open access to data. Crowdsourcing the analysis of data makes for a more accurate and neutral picture of reality. Silberzahn and Uhlmann reported on an experiment with 29 teams of researchers, all answering the same research question with the same data set.

They found that the overall group consensus was “much more tentative than would be expected from a single-team analysis.” Crowdsourcing research, or bringing together many teams of researchers, can “balance discussions, validate scientific findings and better inform policymakers.” (See the article in Nature.)
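
To make the idea concrete, here is a toy sketch of my own (not Nosek's, and not the Silberzahn and Uhlmann data): three hypothetical "teams" analyze the same synthetic dataset with different, equally defensible modeling choices, and arrive at noticeably different estimates of the same effect.

    import numpy as np

    # Build one shared (synthetic) dataset: outcome y depends on x,
    # plus a confounding variable z that also influences x.
    rng = np.random.default_rng(42)
    n = 500
    z = rng.normal(size=n)                       # confounder
    x = 0.5 * z + rng.normal(size=n)             # predictor of interest
    y = 0.3 * x + 0.8 * z + rng.normal(size=n)   # outcome

    def slope_of_x(predictors, outcome):
        """Ordinary least squares; returns the coefficient on the first predictor."""
        X = np.column_stack([np.ones(len(outcome))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
        return beta[1]

    keep = np.abs(x) < 2  # Team C excludes extreme x values before fitting

    estimates = {
        "Team A (y ~ x)":          slope_of_x([x], y),
        "Team B (y ~ x + z)":      slope_of_x([x, z], y),
        "Team C (y ~ x, trimmed)": slope_of_x([x[keep]], y[keep]),
    }

    for team, est in estimates.items():
        print(f"{team}: estimated effect of x = {est:.2f}")
    # One dataset, three honest analyses, three different answers.

Team A's simple regression absorbs the confounder and overstates the effect; Team B adjusts for it; Team C's trimming shifts the estimate again. None of the teams is cheating, which is exactly why pooling many independent analyses gives a more tentative, and more honest, picture.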

Nosek went on to describe the Open Science Framework as an infrastructure for creating more open workflows that increase process transparency, accountability, reproducibility, collaboration, inclusivity and innovation. Exciting and important work.

So how does this apply to assessment? If nothing else, perhaps it will make us more humble as we talk about decision-making with data. We need to recognize that the data can tell many stories, and if we are to be honest and diligent in our work, we need to be open to the many ways in which those data can be interpreted and used.
