January is a time of reflection – this post is just that: some ideas that sparked my interest last year, with hopes of delving into them more deeply in 2017.
Sunday morning’s radio listening was doubly rewarding, as I heard two of my favorite women in media: Krista Tippett, host of On Being: The Big Questions of Meaning, interviewing Maria Popova, the creator of Brain Pickings, an amazing weekly compilation of her reflections on vast and deep reading across a range of literature, from science and philosophy to poetry and children’s books. Popova covers deep territory in the interview, from the perpetual process of identifying self to the balance between acquiring new information (easy) and thinking and knowing (hard). She has real skepticism about our pursuit of productivity, or the illusion of busyness as real productivity. But back to assessment.
Are We Using the Right Data?
Towards the end of the conversation, Tippett asks Popova how she measures success and what external success might look like. My ears perked up. This is a question I am continually asking of my colleagues as a way of considering appropriate measures for assessment. Popova describes how she used to pay attention to, and “hang her sanity” on, metrics such as Facebook likes and retweets. They are “so tempting and so easy because they’re concrete. They’re concrete substitutes for things that are inherently nebulous.”
But now, she says, the “one thing that I’ve done for myself, which is probably the most sanity-inducing thing that I’ve done in the last few years, is to never look at statistics and such sort of externalities. But I do read all of the emails and letters — I also get letters from readers. And to me, that really is the metric of what we mean to one another and how we connect and that aspect of communion.”
Popova’s words eloquently express thoughts I’ve had this year related to data and meaningful metrics. We talk of data-driven decision making, but is the data we are using the right data? Can numbers alone measure success?
If we are going to make changes based on evidence, whether qualitative or quantitative, we need to agree on those measures of success. Decision making for organizational change comes about through a collaborative negotiation of shared program goals and agreement on how success will be evaluated.
Assessment and Organizational Structures
I’ve also been thinking a good deal about how assessment and organizational structure are connected. This year I participated in many teams (Single Service Desk Design, Physical Collections, the Ithaka Religious Studies Faculty project, the Data Dashboard group). That makes sense, as much of our work in assessment necessitates a team-based organization.
I think these projects work well – they are exciting and promote mutual learning – because of a few factors:
- there is a common goal – sometimes there is a formal charge, but not always.
- they bring together interested staff members who contribute expertise, but also motivation and belief in the project at hand
- they also allow for a less departmentalized, “silo’ed” approach to innovation and problem solving – teams work best when there is partnership and collegiality rather than hierarchy
- and ideally, teams engaged with research and assessment use their findings to promote organizational change
But that organizational change only comes about with agreed upon evidence for those changes.
So in my own role as assessment librarian, I wrestle with these two almost contradictory things all the time. How do we balance our commitment to data-driven decision making while recognizing that these measures are imperfect in describing the complexity of real life and what is truly meaningful? Your thoughts welcome!