Finding the Sweet Spot for Library Instruction

This month’s issue of The Journal of Academic Librarianship features the research findings of Barbara Junisbai, M. Sara Lowe and our own Natalie Tagge, Education Services Librarian at the Ginsburg Health Sciences Library.  Natalie and I talked about her compelling assessment project and its implications for practicing a more strategic approach to library instruction.

Nancy: What was the research question that you had?

Natalie: Well, the abstract to the paper nicely sums it up. We were looking to assess the impact of programmatic changes and librarian course integration on students’ information literacy skills.

Nancy: Since the research was conducted at your former library, can you tell me a bit about the school and the context for the research?

Natalie: Sure. Our study was conducted at Pitzer College, one of five undergraduate colleges that make up The Claremont Colleges. The library serves as the academic core for all the colleges. Of course each of the colleges has a personality; Pitzer is informally known as the “hippy” school! It’s a small (1,000 students) liberal arts college with strengths in environmental and interdisciplinary studies. All students across Claremont are required to take a First Year Seminar, which includes broad learning goals such as information literacy.

Nancy: How did you decide on your methodology?

Natalie: We wanted to use a rubric for a couple of reasons. We knew we wanted to look at research papers, the final product of the First Year Seminar, and rubric assessment is a good way for multiple people to evaluate and score work consistently. Papers were collected from three consecutive years of First Year Seminar classes, a total of 337 papers from 44 courses. The librarian component for these classes changed between 2011, 2012, and 2013, allowing us to gauge the impact of librarian engagement with a class on the quality of student work, as expressed in the final paper.

Nancy: How did you develop the rubric you used?

Natalie: We didn’t want to re-invent the wheel. Carleton College had already developed a rubric that was well aligned with the ACRL (Association of College and Research Libraries) standards for information literacy. The rubric is designed to assess written work in three areas: attribution, evaluation of sources, and communication of evidence (integration of sources into the paper). We asked for permission to edit and use Carleton’s tool, and also had the opportunity to review it with Megan Oakleaf, an expert in rubrics and instruction assessment (Meganoakleaf.info/publications.html).

Natalie: In addition to applying a rubric to the papers, the final product of these classes, the degree of librarian involvement was scored on a scale of 1 to 4, with 1 being no involvement and 4 representing practically a co-teaching arrangement. This let us determine the level of librarian involvement that yielded the best results on student papers.
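
For the data-curious, here is a minimal sketch of what such an analysis might look like in Python. The data and column names are hypothetical illustrations, not the study’s actual code or results.

```python
import pandas as pd

# Hypothetical data: one row per paper, with an overall rubric score
# (e.g., averaged across attribution, evaluation, and communication)
# and the 1-4 librarian involvement level of that paper's course.
papers = pd.DataFrame({
    "involvement":  [1, 2, 3, 3, 4, 2, 3, 1, 4, 2],
    "rubric_score": [2.1, 2.8, 3.4, 3.2, 3.0, 2.9, 3.5, 2.0, 2.9, 3.1],
})

# Mean rubric score at each involvement level; a peak at an
# intermediate level would point to a "sweet spot."
print(papers.groupby("involvement")["rubric_score"].agg(["mean", "count"]))
```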

Nancy: Will you tell me about your findings?

Natalie: Faculty collaboration with a librarian had a demonstrable impact on students’ information literacy skills. But there is a faculty-librarian collaboration “sweet spot”: the “intermediate level” of collaboration.

This can include many things. For instance, the librarian is listed on the course syllabus as a trusted resource. There is often an online course guide. Ideally, the librarian visits the class twice: the first time to introduce the library, and the second time to conduct an assignment-related session. Often there will be a course assignment that includes use of the library, like a bibliography attached to the paper. This proved to be the optimal level of involvement, the most strategic. Librarians self-reported this information, but we also used our instruction statistics.

Faculty also needed to hear this message: that assignment design, paired with a strategic instruction session, is what most improves students’ information literacy skills.

Nancy: How did you use the results of the research?

Natalie: We presented the findings to faculty as a way of advocating for the benefits of library instruction, and the research served as a basis for discussions with them. For instance, a single short session does not appear to be the most effective approach; a bit more librarian involvement leads to real gains in information literacy.

Nancy: Do you have thoughts on how this research would translate here at Temple?

Natalie: Temple has a very different population, particularly the students served by the health sciences library. So we need to look at the classes and the possibilities within the curriculum, being strategic about which classes we target for outreach.

Nancy: Thanks, Natalie. This is important research, and I especially like that you used it to open a dialogue with faculty about the different, and most strategic, ways librarians could work with their classes and students. Thanks for sharing.


Assessment Anytime, Anywhere


Yesterday’s staff carnival was a fun affair – lots of opportunity to meet new colleagues and learn about all the different areas of the libraries and press. Thank you Continuing Education Committee!

The Library’s Assessment Committee hosted a table, nicely appointed with displays of Association of Research Libraries reports (to which we contribute), examples of data visualizations, and this blog featured live on a laptop. We had the opportunity to introduce many staff members, and not just new ones, to the work we do.

We sponsored a numbers quiz as well. You can take it now (see below), but you won’t get a “prize” of peanut M&Ms cleverly dispensed from a gumball “machine.” As is typical of our collaborative efforts, the dispenser was on loan from Cynthia Schwarz and crafted by her grandfather! To tell the truth, it was the highlight of the table.

The quiz seemed like a fun way of challenging and impressing staff with the levels of activity here at the library, from the numbers of downloaded e-journal articles to the reach of our programs into the university and community.

After the event, I realized that I might have used this “assessment” in a different way: tallying the results and presenting them here, as a fun demonstration of a kind of assessment.

So this time around we’ll do the quiz online and save the answers, conducting a brief assessment of how well staff have memorized these numbers. Just kidding, but do:


Take the quiz

And let us know if these numbers surprise you.

Speaking of Scholarly Communication: Interviews with Faculty

Last week staff from Reference & Instruction, Access Services, the Press, the Digital Scholarship Center, Special Collections, Digital Library Initiatives, and Library Administration gathered for a conversation to share findings from a series of interviews we librarians conducted this past spring. Annie Johnson, with Greg McKinney, Rebecca Lloyd, Steven Bell, and Kristina DeVoe, talked with seven faculty members in the humanities and social sciences about their use of social media, how they view open access, and new trends in scholarly communication.

In a separate project, as part of the Ithaka S+R disciplinary research, Fred Rowland, Rebecca Lloyd, Justin Hill and I interviewed 12 faculty from Temple’s Religion Department. Our interests were similar: How do faculty choose where to publish their scholarship? How do they view new forms of scholarly communication? Are they using Twitter or other social media to share their research?

Here is some of what we learned:

Choosing a Publisher

Faculty continue to select publication venues based on the prestige of the publisher and journal. While there is no codified “list”, it is common knowledge in a discipline. The benefit of publishing with traditional publishers continues beyond tenure and throughout a scholar’s professional career – as merit points are assigned based on this prestige factor. When seeking an outlet, faculty “do not take chances” – typically, smaller, more focused journals are not ranked as highly.

Faculty seek a press that will actively market their book. They favor publications with an efficient turnaround time, particularly when they are on the tenure “clock”. This urge for expedited publication means they tend not to pay much attention to copyright and license agreements when signing off on the rights to their work. And the journal subscription cost is not a particular concern.

Open Access

The faculty we spoke with, all from the humanities and social sciences, do not consider open access journals to have the degree of prestige they seek. Many feel their tweeting and blogging serve the purpose of making their research widely accessible, and that this very public activity makes it less incumbent on them to publish in formally open access journals.

That said, there was a wide range of attitudes about open access, from one scholar who is an advocate and very deliberate about his choice of open scholarship, to those who understand it to be the equivalent of posting one’s scholarship on a “random web site.” Graduate students are interested in new models for disseminating their research, but are put off by the idea of an Article Processing Charge (APC). Although this business model is not used in the humanities, the misinformation persists.

Discussing the various business models for journals led to a lively discussion about the library’s role in this kind of work. Many libraries, ours included, have a fund to support the APCs associated with publishing in an open access journal; our cost for a subscription has turned into a different type of cost. As Annie, our Scholarly Communication specialist, attests, “the Library supports dissemination of research to the world,” and this is how open access fits within the Library’s mission. It is an expansion of the library’s role in supporting the scholarly apparatus.

New Modes of Scholarly Communication

For some, Twitter has replaced academic conferences for learning of trends in the field. A faculty member referred to it as the “new water cooler”. We spoke with a faculty member who archives his tweets to use as a journal for tracking his scholarly path. Other scholars were wary of social media – as scholars thinking about sensitive topics in religion, they may feel vulnerable about posting on potentially controversial topics. The disciplines certainly have different cultures of social media use.

Research into Practice

Many Library/Press staff already follow Temple faculty and Temple authors on Twitter. It’s a way of providing support, of staying attuned to news and trends. We discussed tools like “If This Then That” and Hootsuite for managing social media more efficiently.

To better support faculty and their questions about publishing issues, and in particular license agreements, we discussed a kind of knowledge base or closed online archive that would provide Temple scholars with suggestions for alternative license language, particularly for areas that are negotiable. The tool, useful for librarians and faculty alike, could also provide definitions for better understanding of pre-prints and post-prints. While faculty are aware that some agreements are “better and worse” than others, they have little free time to think about them.

Thanks to all who participated in the conversation. It was great to share our findings with colleagues and to discuss practical implications  based on the research. For those interested in hearing more of the Ithaka interviews with religious studies faculty, please take a look at our Final Report.


LibGuides Usability Testing

Temple Libraries has over 500 LibGuides, or Research Guides. The purpose of the guides is to help library users with some aspect of the research process. Most guides fall into one of three categories – those that offer links or helpful information related to specific disciplinary resources (subject guides), those with course-related information or resources (course guides), or those that provide general information about the library and its services (“How Do I” guides). A typical course guide, for instance, might include links to databases, books, websites, or instructional videos intended to help students complete research or other course assignments. The guides, authored by a variety of library staff, make up a large percentage of the library’s online content.

Last year the Code Rascals group, consisting of me, Jenifer Baldwin, Brian Boling, John Pyle, and Caitlin Shanley, turned its attention to user experience, and LibGuides seemed like one online space ripe for analysis. We decided to conduct usability testing to learn whether and how the guides are working for undergraduates, a primary audience for many of our guides. Our goal was to identify usability issues and to address them through guide- and system-level design improvements, better content curation, and better web writing. With so many content creators, we knew we would likely need a set of guidelines to accompany our findings, so another goal of the usability testing was to establish ongoing best practices for guide authors.

Preparation & Methods

We recruited five participants, all first- or second-year undergraduates with a variety of majors. During the sessions we asked participants to perform research tasks and to “think aloud,” or talk us through what they were thinking as they performed the tasks. We designed each task to give us insight into how users fared with the system homepage, guide navigation, and finding resources, such as databases for research, within a guide. Participants engaged with all three types of guides: course, subject and “How Do I’s”. We also asked participants to freely explore the Research Guides for a few minutes and give us their overall impressions.

Analysis & Findings

We recorded a screen and audio capture of each session for more thorough analysis later. In our analysis we reviewed each recorded session as a group, noting usability problems and generating a list of potential solutions or best practices along the way. Our full report details observations and the full list of best practices. Below is a sampling of our observations.

Homepage & Guide Discoverability Issues

Users have trouble selecting a guide that can help them with a broad topic, and they may not realize they’re on a Research Guide once they arrive.

When asked to select a guide to research the topic “public art,” participants expressed uncertainty in the absence of a subject link or guide explicitly titled “public art.” This indicated to us a need to improve discoverability of guides through a more prominent site search on the homepage and better metadata at the guide level. Even after selecting a guide to research public art, participants remained uncertain that they had landed on a research guide. One commented that she was not sure she was on a guide, and another asked if the guide she had selected was in fact a guide. To us, this demonstrated a need to better brand the guides as a tool for research.

Guide-Level Issues

Users choose databases that are familiar or at the top of a list. Users also spend time reading database descriptions.

In tasks where participants had to find books and articles, we observed that participants did spend time reading database descriptions; however, they opted to search in databases listed toward the top of a guide page or the top of a list of databases. Some participants mentioned selecting a specific database, such as JSTOR, because they had encountered it before in high school or previous courses. Our recommended best practice is for guide authors to list no more than three databases and to place lesser known databases towards the top to help build familiarity with resources students may not have previously used.

Extraneous information distracts users from finding what they need.

One task asked participants to find information in guides on how to cite a book in APA style. Some participants read explanatory text boxes and watched video tutorials on the guide’s homepage before moving on to the APA tab of the guide. Significant time was spent viewing this explanatory content before deeming it unhelpful. This indicated to us a need for How Do I guides to be oriented in a way that helps users complete specific tasks quickly. Also, we need to review video content to make sure it is up to date, short and to the point, and relevant to users’ needs.

Large headings help users scan the page to identify content that is most useful.

Though participants were sometimes distracted by content not immediately relevant to the task, we observed that large headings helped users quickly scan through guide content to locate what they needed. On the Audre Lorde Seminar course guide, which included large section headings at the time of testing, participants quickly scrolled to the appropriate area of the guide when selecting a database for research. We plan to increase the font size of box headings at the system level to create a better hierarchy of text on all guides and make the contents of the page easier to scan.

Novice users have difficulty with guides.

We observed that participants who were novice researchers struggled to find the resources and information they needed in LibGuides. This highlighted the need to design guides that work for audiences with a wide range of research skill levels. Guide authors might consider using language or visuals that instruct users on how and/or why to use a resource.

Future Steps

We’ve shared the final report and best practices generated from the first round of usability testing. Our overall study consists of three parts, two rounds of “think aloud” usability testing and one round of card sorting to learn more about the language and structure of guides. Card sorting was completed in the Spring, and we are currently analyzing the results. For the next round of usability testing, we plan to create two or three model guides based on findings so far and test the usability of those guides. At the conclusion of the usability study, we plan to create guide templates that reflect our best practices.


Reports from the Field: Assessment Discussion Group at ALA


The ACRL Assessment Discussion Group meeting is always a bright spot at the ALA conference. The group, with a discussion list of over 400 members, suggests the topics of interest. This year we talked about the assessment of space in libraries and how it relates to student learning, shared thoughts and best practices for assessing student learning outcomes, and discussed visualization and dashboards for presenting library data. Three topics, three table discussions: here are some highlights.

Space Use Assessment

Libraries are using a variety of tools to gather data on how students use space, from snapshots of head counts and high-traffic areas to tracking of wifi connections. Librarians continue to explore high-tech tools (infrared, pinging analysis, and cameras that can blank out faces) to understand space use. But they also noted that decisions about library consolidation and closing are more often due to politics than based on library-collected data.

The University of Missouri–Kansas City (UMKC) used a “photo elicitation” technique (made popular by Nancy Foster’s ethnographic work). As the article about the research makes clear, surveys and observation studies, “by design, … usually stay within existing boxes, making them suboptimal for holistic examinations required to lead major innovation or uncover new and unfamiliar aspects of user needs.” That idea and the project in full are described in: Nara L. Newcomer, David Lindahl & Stephanie A. Harriman (2016), “Picture the Music: Performing Arts Library Planning with Photo Elicitation,” Music Reference Services Quarterly, 19:1, 18-62.

How do libraries connect space use with student success? Surveys on use of collaborative space, interviews with students about their “favorite” places to work, and correlations between computer use and GPAs are all methods in use. While these relationships may be tenuous, it’s clear that student success and retention can be correlated with student engagement, and this “sense of belonging” may be enriched by soft seating, a coffee shop, or the availability of partner services at the library, like a writing or tutoring center.

Learning Outcomes Assessment

Coordinators of information literacy programs must balance a standardized approach to assessment with their colleagues’ desire and need to maintain autonomy as teachers. End-of-session worksheets to be completed by students may “stifle” teaching styles, and yet, without standard measures that can be used throughout the program, it is difficult to get meaningful data about best practice.

All agreed on the necessity of learning outcomes shared by both the academic instructor and the librarian. To ensure there is common understanding, the University of Alabama requires a 30-minute meeting prior to an instruction session. But in classes where instructors turn over each year, and where there is a high percentage of adjuncts, this kind of extended collaboration is a challenge.

Finally, librarians engaged in assessment need resources for good questions. Blackboard and LibWizard can support quizzes, and the Information Literacy & Assessment Project out of Canada provides a good question bank for assessment of student learning outcomes.

While most of the librarians are using rubrics and pre/post testing, some see more value in understanding student practice than in a standardized test. Are there qualitative methods that might help us come to this understanding? And if so, how best to report the results of that assessment? Telling the story of our impact through instruction is a critical piece of demonstrating our value to the institution.

Data Visualization and Dashboards

Discussion at the data visualization table revolved around the various tools that libraries now use for making library data more accessible through visualization: Oracle, Tableau, R, PowerBI and others were mentioned.  Whatever the tool, a challenge shared by all is finding ways of presenting that data in a clear, digestible and usable form. How do we best tell the library’s story to its multiple stakeholders? From fact sheets and infographics to interactive story boards, this growing area of assessment practice merits further discussion and will be the subject of a future post.

The discussion group was lively and useful. Thanks go to the many participants and especially the meeting recorders who helped to capture all that transpired. The full set of notes is available at: ACRL Discussion Group Minutes


Assessment and Games Intersect with Diamond Eyes

This spring Temple University Libraries commissioned a special project as part of the programming year’s theme of Games and Gaming. Nicole Restaino, Manager of Communications and Public Programming, worked with Drexel’s Entrepreneurial Game Studio (EGS) as it developed a hybrid work of theatre and games, “The Diamond Eye Conspiracy.” The interactive work was staged at Paley Library this April, to great success.

EGS integrates video game design, physical theatre, and dance. As they were developing the game, collaborators Daniel Park, Arianna Gass, and Joseph Ahmed collected data about how students used and perceived the library. They conducted brief in-person interviews and surveys with students, using traditional methods of assessment but applying them in a different way. Intrigued by their process, I interviewed a member of the creative team, Daniel Park.

NBT: Tell us a bit about your project, what was your intent, or your charge?

DP: Our primary charge was to create something that would help reveal the library’s resources, and make it feel like a unique and special place. Within that we were interested in creating a community, shining a light on the individual lives of the people that frequent the library, and of course, making a fun experience.

NBT: What about the data gathering process? What did you collect and how?

DP: When we were gathering data, we wanted to focus on questions that would help inspire the content of the piece. This included basic information like who uses the library and why, but also less traditional questions. We created a list of actions that might happen inside the library, ranging from checking out a book and studying, to checking out a person and sleeping, and asked the people we surveyed to check off which they had done inside of Paley. We also asked people to tell us something they would want to do inside of the library, but felt like they weren’t allowed to. We thought we could use the performance as an excuse to let people break the usual rules of the space. We also conducted one-on-one recorded interviews, as a way of collecting more stories about the library itself. One story in particular, about a student who had to live inside of the library, became the basis for the conflict of The Diamond Eye Conspiracy.

NBT: How did you use the data you collected in your production? How did it inform your work?

DP: The data became a major part of both the story of the piece, as well as informing a little bit of the content and aesthetic. Because we didn’t have a specific starting point for the piece, other than that it needed to happen inside of the library, pretty much all of our inspiration was garnered from the data we collected, and a small bit of research on the history of Temple and the library.

NBT: Did anything you learned surprise you, about students and how they viewed the library, or about what behaviors were expected and what wasn’t “allowed”?

DP: Sleeping in the library is really divisive. A lot of students have done it, and a lot of them want to, but about half of the group we surveyed felt that it was, “Against the rules.”

NBT: Are there differences in how you’d use what you learned as artists versus how library staff might use it in making decisions about library design or service?

DP: Yes and no. I’d say striving towards the outcome of making the library a communal space, arranging it around how people actually use it and how they want to use it, would be a goal in common. But we had more flexibility to do something impermanent that could be a little bit more interrupting. I think more questions of sustainability and practical usage would need to come into play. But I think keeping a value of playfulness, and activating the space in unexpected ways, could result in some cool design and service projects.

Overall, this is a great example of how creativity and assessment can work together to provide a more user-friendly, user-aware library environment. Thanks to Nicole for getting behind this edgy, playful, and ultimately very successful public program.

Supporting the Needs of Faculty – The Research Services Forum

The Research Services Forum met last week to discuss the recent Ithaka Faculty Survey. Ithaka S+R has conducted a survey of faculty every three years since 2000, providing libraries with a “snapshot of practices and perceptions related to scholarly communications and information usage.” This year, the survey population consisted of faculty members from all disciplines in the arts, sciences, and most professions at U.S. colleges and universities (offering a bachelor’s degree or higher). This translates to a survey distribution of 145,500 faculty members, yielding a response of 9,203 (6.3%). The Ithaka surveys rightfully receive a good deal of attention in the higher education and library press as providing useful data on trends in faculty perceptions and needs, trends that are essential for librarians to know about and consider in their own work.

Ithaka Faculty Survey: A Key Finding

Discovery starting points remain in flux. After faculty members expressed strongly preferring starting their research with a specific electronic research resource/database as compared to other starting points in previous cycles of this survey, they are now reporting being equally as likely to begin with a general purpose search engine as they are with a specific electronic research resource/database. Furthermore, the online library website/catalog has become increasingly important for conducting research since the previous cycle of the survey.

This finding was based on the following survey question: “Below are four possible starting points for research in academic literature. Typically, when you are conducting academic research, which of these four starting points do you use to begin locating information for your research?”

This finding is notable as an indicator of changing research behaviors, as well as for its potential implications for library work. But it’s a bit more complicated.

Local Research with Religious Studies Faculty

The timing of Ithaka’s publication was fortuitous, as here at Temple Libraries we are also talking with faculty about research practice and beginning to synthesize our local findings. Ours is also an Ithaka-sponsored project, with a different methodology: a series of structured, open-ended interviews with 12 faculty in the Religious Studies department. The local research team is made up of Fred Rowland, Rebecca Lloyd, Justin Hill, and myself. Fred did a great job of contacting faculty and helping to arrange the interviews; the research team shared the work of conducting interviews and reviewing the transcribed recordings. Update: the Final Report is now posted here: templeuniversity_religiousstudies_finalreport.

The project is part of a broader investigation into discipline-specific practice (the final report will include 40 libraries), but covers similar ground to understand how faculty:

  • Position their work in the academy
  • Locate their sources
  • Select where to publish, including use of open access
  • Manage and store data
  • Use social media
  • Keep up with trends in the field

A question posed to Temple faculty was related to the survey’s “starting point” question but worded somewhat differently. We asked, “How do you locate the primary and/or secondary source materials you use in your research?” There was tremendous variation in how this question was answered, and in how it was understood. For instance, faculty members do not have a fixed, single “starting” point; it depends on many factors: the specific project, the seniority of the researcher, and the scholar’s relationship to the librarian. Faculty members are often unclear as to where they are, and what tools they are using, when looking for information. They describe “the portal” or the “humanities search engine”. Is this Summon? JSTOR? The library’s website?

But the greatest challenge, for our religious studies scholars, is tracking down archival and primary source material that may reside in schools, theological libraries and historical societies. Senior scholars tend to use their own social/professional networks when exploring a new topic. Others may reach out to their liaison librarian for guidance in the specific databases and search strategies appropriate to their topic.

Methods

The value of a qualitative interview for these kinds of questions is the possibility of probing deeper with the participant. What seems like a simple question, “How do you begin your research?”, yielded complex and at times confounding responses. Can a survey adequately get at this? Are there ways in which the library should be acting upon these findings? Fortunately, the Research Forum provides an opportunity to share and discuss this kind of research and its implications for library services and practice.

Thanks to the organizing group (Brian Boling, Kristina DeVoe, Andrea Goldstein, Josue Hurtado and Margaret Janz) for inviting us to talk about the Ithaka research projects. More to come.


Report from the Field: NISO Conference on Library Value and Assessment

Steven Bell and I were privileged to participate this week in the NISO virtual conference on Library Value & Assessment. The conference slides are available at: http://www.niso.org/news/events/2016/virtual_conference/apr20_virtualconf/

Presenting virtually, for me at least, was a bit nervous-making. Although we presented slides with audio, and I didn’t have to worry about my hair, it’s disconcerting to “present” to an invisible audience. Are they sitting with rapt attention or yawning? Fortunately our presentation, Why Library Assessment? A Look at Current Practice, went smoothly, and then we could relax and enjoy hearing about great work from our assessment colleagues. Here is a brief recap:

Jocylyn Wilk, University Archivist at Columbia University, discussed the assessment tools in use at Columbia’s Rare Book and Manuscript Library in her talk, Why Is This Assessment Different from All the Others? The Archival Metrics Toolkits provide a rich set of survey and focus group instruments for assessing the user research experience, special collections websites and online finding aids, and instruction with special collections. The advantages of using these tools: they are pre-validated, and findings can more easily be shared with other institutions.

Elizabeth Brown (Director of Assessment and Scholarly Communications, Binghamton University) spoke about Leveraging and Interpreting Library Assessment: Pulling the Wheat from the Chaff. Brown provided some good rules for telling a story with data:

  • Remember your audience
  • Match content to knowledge level
  • Make message succinct
  • Link analysis to existing data
  • Show trends clearly – use simple graphs and charts
  • Use quotes to present qualitative data

Ken Varnum (University of Michigan Library) spoke about Information Resources: Justifying the Expense, covering the kinds of data that libraries can use to demonstrate value. He raised some provocative questions related to our understanding of use data. For instance, does access (i.e., the download of an article as reflected in a COUNTER report) equal use? Is some use more valuable than other use, e.g., the use of collections by faculty rather than adjuncts? What about the kind of collection use that leads to new research? Or helps the faculty member get a grant?

Varnum noted that there is a big black hole, from the analytics perspective, in the use of the free web. Not only are we unable to track this usage, but we don’t know the scope of usage relative to what we license and pay for. To what extent do proxy users represent all users? While these are difficult, at times uncomfortable questions, they may be important ones to address as collection funds become limited.

In We’re Not Alone, Jan Fransen of the University of Minnesota reported on that institution’s exciting work to understand the relationships between use of the library and student success. Their research reflects years of relationship building and data collection, combining library data with student demographic and success data from the Office of Institutional Research and Academic Advising. The library can confidently demonstrate, for instance, that using the library at least once increases the odds of a student staying in school. More on their research is located at: z.umn.edu/LDSS

Kristi Holmes (Northwestern Medicine) updated us on the Library-based Metrics & Impact Core. Northwestern’s Impact Core is a new kind of service that is a perfect fit for the library and also supports the university’s reputation-building capacity. With expertise in bibliometrics, data visualization, continuous improvement, information systems, and alternative metrics, staff in the Impact Core provide extensive advisory services for researchers, as well as for departments and university administration. With classes for researchers on bibliometrics and on increasing the impact of their research, the library has also become the “go to” place for publication data used not only in promotion and tenure but in award nominations and visa applications. An exciting and fitting role for the library.

In Planning the Plan: Collaboratively Aligning Strategic Plan Initiatives and Assessment, Starr Hoffman (UNLV Libraries) provided a detailed look at the process she used with senior staff to ensure that the libraries’ assessment plan was in line with the strategic goals of the library and, as importantly, that staff take ownership of those assessment activities.

In Why Measure That When We Need to Show This, Carl Grant (University of Oklahoma) provided the day’s wrap-up. He spoke of the disconnect between the kinds of statistics we collect and libraries’ new roles: we need to be using metrics that align with what the university values. Oklahoma has created a Data Governance Committee that supports the work of bridging data silos across Institutional Reporting, IT, the Libraries, Finance, and Administration. Grant asks, “How do we create this culture of assessment?” and asserts that we need to “get in front of assessment. Part of change management is telling staff where they’re going to end up, how you will train them, and how you will make them successful.”

This recap does not do justice to the rich assessment work presented. I encourage you to explore the slides, freely available at: http://www.niso.org/news/events/2016/virtual_conference/apr20_virtualconf/

Exploring Circulation Data: Who’s Checking Out What?

When I began as Collections Analysis Librarian at Temple last fall, I met with Subject Specialists to learn more about their work and to discuss how my work might help them. Several said they wanted to know more about how our library collection is being used. One librarian was curious to see how borrowing varies by user group, such as graduate students, undergraduates, or faculty. Several others wondered more generally if the items they order are getting used.

I had warned the Subject Specialists that I probably would not be able to immediately provide the information they were asking for, but that hearing from them would help me plan future projects. Usage was not only one of the most common metrics the Subject Specialists asked about, it was also the one for which I had the best access to data, as the Data Dashboard team had begun collecting circulation reports and information on user status before I started my position. One of the Dashboard Project’s goals is to provide metrics on library performance that allow Temple University’s Schools and Colleges to see how their constituents are using the library, and the data it contains can of course be relevant to us within the library as well. To answer my colleagues’ questions about usage, I began with data that others had collected for the Dashboard, and arranged it in a way that I thought would be most useful to Subject Specialists. I presented the data to a room full of librarians in February.

The Data

In the Data Dashboard, we have information on circulation transactions that includes the call number of each borrowed item and the status (graduate, undergraduate, faculty, ILL, etc.) and school of the person who borrowed it. For this presentation, I took a year’s worth of circulation data and added information on the subjects of the borrowed items. Using the call numbers, I mapped each item to a broad subject such as Art or Physics as well as a narrower subject like Drawing, Painting, Optics, or Thermodynamics. The mapping was based on the approval plans and the various call number ranges on each plan. This is more granular than the way the data is reported in the Dashboard, where it is sorted by college so that it can be reported to the colleges. For this audience, I wanted a more detailed look at specific subjects. The report I shared included 45 broad subjects and 380 narrower subjects.
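
As a rough illustration of that mapping step, here is a minimal sketch in Python. The call number ranges, subjects, and helper function below are invented stand-ins; the real mapping came from the approval plans and involved many more ranges.

```python
import re

# Hypothetical (class letters, low number, high number) ranges;
# the actual list, derived from the approval plans, is much longer.
RANGES = [
    ("NC", 1, 9999, "Art", "Drawing"),
    ("ND", 1, 9999, "Art", "Painting"),
    ("QC", 350, 467, "Physics", "Optics"),
    ("QC", 310, 319, "Physics", "Thermodynamics"),
]

def map_subject(call_number):
    """Return a (broad, narrow) subject pair for an LC call number."""
    m = re.match(r"([A-Z]+)\s*(\d+)", call_number)
    if not m:
        return None
    letters, number = m.group(1), int(m.group(2))
    for cls, low, high, broad, narrow in RANGES:
        if letters == cls and low <= number <= high:
            return (broad, narrow)
    return None

print(map_subject("QC355.2 .H43 2011"))  # -> ('Physics', 'Optics')
```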

With large tables, it can be hard to know where to start. I find it helpful to focus on a particular question so that I know which numbers to look at, and then make comparisons between numbers to see what stands out as unusually high or low. For instance, I suggested that one way to look at the data would be to pick a subject and ask if, within that subject, there are some sub-topics in which graduate students borrow more books than undergraduates and other sub-topics where undergraduates borrow more.
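
A sketch of that kind of comparison, again with invented transactions, using a cross-tabulation in pandas:

```python
import pandas as pd

# Hypothetical checkouts: one row per transaction, with the item's
# sub-topic and the borrower's status.
checkouts = pd.DataFrame({
    "sub_topic": ["Ethnology", "Ethnology", "Manners & Customs",
                  "Manners & Customs", "Ethnology", "Manners & Customs"],
    "status":    ["graduate", "graduate", "undergraduate",
                  "undergraduate", "undergraduate", "graduate"],
})

# Checkout counts by sub-topic and borrower status, then each group's
# share of a sub-topic's checkouts; unusually high or low shares are
# the numbers that stand out.
print(pd.crosstab(checkouts["sub_topic"], checkouts["status"]))
print(pd.crosstab(checkouts["sub_topic"], checkouts["status"],
                  normalize="index").round(3))
```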

Some Examples of Findings

To take one example, Anthropology had exactly the same number of books checked out by undergraduates as by graduate students (318 each). There are more undergraduate majors than graduate students, so the latter probably borrow more books per student. Interestingly, though, the different groups of students seem to be borrowing books on different subjects. For ethnology, 27.52% of checkouts were to grad students compared to 10.32% to undergrads. For manners and customs, undergraduates borrow more, making up 27.77% of checkouts compared to 10.66% for grad students. This might reflect a difference in the kinds of books found within each sub-topic: books in the ethnology section probably contain more in-depth studies, which would be more relevant to graduate students, whereas books on manners and customs might be written for a more general audience. Looking at these numbers, therefore, not only tells us something about students’ needs but is also an inroad into understanding the sub-topics of a discipline.

Sometimes the checkout patterns raise additional questions about who is using the books. For Linguistics books, 34% of checkouts were to graduate students, although there are no graduate students in Linguistics. These numbers can make the case that books purchased for one program often benefit others as well. It is possible, for instance, that graduate students in Speech and Language Sciences are using Linguistics books.

Future Uses of Data

Most of the subject specialists who asked about usage statistics did not have a specific question but wanted to add to their general knowledge of the collection. Numbers can provide an entry point for looking at how our print collection is used, and this could prompt more specific questions about the curriculum, enrollment, or our holdings. The report I sent out after the presentation included the full list of titles that were checked out, so anyone who is curious about the usage of a particular section can dig deeper to see exact titles.

There are more questions that we will be able to answer with circulation data in the future.  For example, since we have information on what college the users are from, we could see whether some of the usage is coming from people we might not expect. Looking at undergraduates by year could show us if print monograph use increases as students progress through their careers. I had simplified the user information to make my presentation more digestible, but questions from the audience showed they would be interested in details. Another suggestion was to compare print usage to e-book usage for various subjects, which is also something I am starting to look into.

As the Data Dashboard comes together, I will continue to look for ways that the data can be useful to Subject Specialists, and I will of course have other collections analysis projects that serve this purpose as well. I hope this will be a beginning, with more information to come.


An Assessment of the 2016 Strategic Planning Retreat

Assessment and strategic planning go together – we can’t assess when we don’t know what we’re aiming for. In that spirit, this post reports on the Library/Press 2nd Annual Strategic Action Planning Retreat, hosted by the SAWG16 working group.

The SAWG16 group includes Justin Hill, Lauri Fennell, Margaret Janz, Sara Jo Cohen, as well as myself and Steven Bell as co-chairs. The March 3 event, to which all staff were invited, drew a healthy number – 65 participants from 12 departments. The working group had three goals for the day:

  • Provide opportunities for staff members to get to know one another – not just in a social way but collaboratively –  working towards a library-wide goal
  • Learn about the ongoing strategic plan objectives for the current year – all that’s been accomplished so far
  • Begin brainstorming and defining potential objectives for the upcoming year

At the end of the day, we conducted an assessment of the Retreat itself. I will admit that even as someone who loves assessment I had some trepidation about soliciting feedback! I’ve organized enough conferences and meetings to know that for every one who loves the event there will be others with less positive things to say. But for the sake of continuous improvement we developed a quick evaluation form, providing the SAWG with excellent feedback and many ideas to consider for future events.

The Speed Idea Generation activity was the most popular. Participants had 5 minutes to brainstorm ideas about a question, like “What are ways we might foster more flexibility and support staff in developing new skills?” and “What new services could the library and press provide to help address the needs of faculty as they conduct and publish their research?”

We asked if participants found this activity worthwhile. Out of 38 responses, 100% found it either very worthwhile (30) or somewhat worthwhile (8). We thought it was fun, and it was gratifying to know that participants did too.

We asked how well the retreat met the outcomes stated above: getting to know other library staff members, learning about ongoing objectives, and thinking about future ones. Again, results were quite positive:

Getting to know other library staff members

Response              Count   %
Very successful       17      44.74%
Somewhat successful   17      44.74%
Neutral               4       10.53%
Total                 38      100.00%

Learning about last year’s strategic action objectives

Response              Count   %
Very successful       15      39.47%
Somewhat successful   17      44.74%
Neutral               6       15.79%
Total                 38      100.00%

Helping me think about upcoming annual objectives

Response              Count   %
Very successful       22      57.89%
Somewhat successful   12      31.58%
Neutral               4       10.53%
Total                 38      100.00%

 
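Incidentally, the percentage columns in tables like these can be generated straight from the raw responses. A minimal sketch in Python (pandas), using the response counts from the first table above:

```python
import pandas as pd

# Raw responses to one retreat outcome question (counts as reported
# in the first table above).
responses = pd.Series(
    ["Very successful"] * 17 + ["Somewhat successful"] * 17 + ["Neutral"] * 4
)

# Tally counts and percentages per response option.
summary = pd.DataFrame({
    "Count": responses.value_counts(),
    "%": (responses.value_counts(normalize=True) * 100).round(2),
})
print(summary)
```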

The only way to improve is to listen to all the feedback, consider it with an open mind, and make adjustments for the future.  There was excellent, constructive feedback:

  • The lightning round, planned to update staff on progress towards our current objectives, should have allowed more time for presenters and given every department an opportunity to be represented. And let’s get our microphone working.
  • We need to provide more food options at future staff retreats.
  • Some suggested shorter breaks; others felt the time spent was just right. Most of us enjoyed being in the Owl Cove, while a few would have preferred to meet in the Paley building.
  • “I would love to see a problem solving section, where we can discuss problems staff see in the library in an open way, and brainstorm solutions”

And one of our favorite comments,

“I’ve been in many strategic planning meetings over the years and have never seen one end both early and burn-out free. Kudos to the team!”

All of this will be taken into account as future meetings for staff are planned. Thanks again to all who contributed.