The Future on Pause: Reflections on the “How We’re Working at Charles” Project

Last week the Assessment Community of Practice gathered virtually to hear more about the Envisioning our Future project. The session was hosted by research team members Karen Kohn, Rebecca Lloyd, Caitlin Shanley, and me.

The project was conducted as part of an assessment initiative sponsored by the Association of Research Libraries to understand the impact of library spaces on innovative research, creative thinking, and problem solving. Coinciding with the opening of the Charles Library at Temple, we focused our research on how changes in library space affect the work of staff: their work as individuals, their work with colleagues, and their work with users.

Prior to the move, we asked staff members, in one-on-one interviews, to imagine how their work would change upon moving to the new facility, with spaces that support a quite different approach to service and resource delivery. A second set of interviews was conducted in early 2020, after we'd been in the space for a semester. Then in March 2020, the Libraries closed all of their buildings. While many of our findings seem part of a now distant past, others went beyond the use of physical space and are as relevant as ever.

The Community of Practice session was an opportunity for the research team to share insights and reflections on the project. The full report was shared with staff (see the July 23 email), so the discussion focused on the approach. Insights and reflections from the discussion are paraphrased here:

What were some of the benefits for you in participating in this project?

It was helpful to know that our personal experiences were, in many cases, shared by our colleagues. From the control of window shades to norms for talking in shared spaces, it’s good to know that we’re not alone in our feelings of uncertainty. 

Being part of a research team provides access to a level of detail and complexity about the issues. Seeing patterns in the interviews helped us to think about solutions.

It was also nice to be part of a project that participants felt was supportive, providing an opportunity for staff to express their feelings about Charles in a safe way.

What were the challenges experienced by the team members? 

Qualitative research produces a rich body of text, and while we were appreciative of participants' willingness to be candid, open, and trusting of us with their thoughts, it can be challenging to distill that material without losing the richness of the sentiments that were shared. And people are human, so they'd say contradictory things, even in the course of one interview.

We were close to the research. When interviewing our colleagues, it could be hard to keep a distance, be an observer. Oftentimes we’d empathize with what was being said, and yet we had to stay objective when listening and when presenting the material. In conducting the interviews, it was necessary to build trust in a short period of time. That’s a skill that will be helpful in other contexts.

It is also good to know that we are part of an ARL research cohort. We're hopeful that our work will be helpful to other libraries and will contribute to the efforts of colleagues at other institutions conducting similar projects. Libraries have a lot to learn about self-reflection and about thinking of themselves as organizations.

Other thoughts from the Community? 

We noted that the report's findings related to communication around change continue to resonate, as powerfully now as then. We are operating in a volatile working environment, requiring us to be thoughtful in how we ensure direct and effective communication at all levels of the organization. Some of us are working in the Charles physical spaces, but most are not. While the physical spaces didn't allow us all to be together, the virtual space does! This unexpected future provides opportunities to be creative in communicating, connecting, and establishing work norms together in new, and even more inclusive, ways.

Posted in library spaces, qualitative research

What Counts as Reference?

Reference Desk, 1982, Paley Library. From Temple University Libraries, Special Collections Research Center.

Last month I completed six years of service on the editorial board of ACRL's Academic Library Trends and Statistics Survey. Our meetings involved much discussion of how best to provide clear instructions to survey participants, debates over the wording of trends questions, and work with ACRL staff on recruitment efforts to ensure a robust response rate (this last year it was 1,676). We focused mostly on new metrics of potential interest, like the number of computers provided by the library, or new formats for instruction. We didn't talk much about the definition of a Reference Transaction – a number also requested by the Association of Research Libraries, the Association of Academic Health Sciences Libraries, and IPEDS, the Integrated Postsecondary Education Data System.

The definition all of these surveys use is one modified from the ANSI/NISO Z39.7 definition, last updated in 2004.

An information contact that involves the knowledge, use, recommendations, interpretation, or instruction in the use [or creation of] one or more information sources by a member of the library staff. The term includes information and referral service. Information sources include (a) printed and nonprinted materials; (b) machine-readable databases (including computer-assisted instruction); (c) the library’s own catalogs and other holdings records; (d) other libraries and institutions through communication or referral; and (e) persons both inside and outside the library. When a staff member uses information gained from previous use of information sources to answer a question, the [transaction] is reported as a [reference transaction] even if the source is not consulted again.

Survey instructions make very clear that we do not count “directional” questions, those questions about the “logistical use” of the library. Examples of directional questions are:

  • “Which way is the restroom?” 
  • “Where is the nearest printer?”

Reference is counted when we are “looking up” a piece of information:

  • “Does the Library have a copy of Ivanhoe?”  
  • “What are the library’s hours today?”

Time and complexity don't really count. ACRL has us distinguish between "reference" and "consultation," but ARL does not. Some libraries (like ours) define a consultation as a transaction that is complex and takes time. Others define a consultation as a transaction for which a patron makes an appointment.

When we tally it all up, should looking up the hours on the library website count the same as a one-hour consultation on the use of R at the Scholars Studio? Does working with a faculty member on defining the parameters of a systematic review count the same as looking up a known item in Library Search? What about instructing a faculty member in how to place an item on reserve in Canvas? ARL counts these the same, although one may take a staff member less than a minute and another hours. Some reference questions require more training and specialized expertise.

Our patrons are sometimes surprised to learn that behind the "curtain" of our online Chat Service, or our Digital Request Form, a human being is waiting to assist. As my colleagues described so well in the last blogpost, our numbers for reference are exploding, and we have many, many thankful patrons helped by expert searchers drawing on their knowledge of the many information sources available.

But what if that automatically populated form was sent in an automated way to a federated search across the many information resources we provide? And no human was involved? And the patron found just what they were looking for? Would our robot get to count that as a reference question? 

What if our easy-to-use FAQ, or our LibAnswers, were so well-developed that a patron’s search query always mapped to the answer they sought? Do we get to count that? Is the only thing that counts a transaction that involves a human being?  

There was a time when most reference required the patron to go to the physical reference desk. Behind the desk were massive shelves of non-circulating reference books, and the job of reference was to match the question to the appropriate volume. As a reference librarian, I'd often go to the shelves myself and serve up the book to the [thrilled] patron. The dark ages of librarianship!

Perhaps it's time to rethink how we develop, measure, and assess the ways in which our expertise supports the reference services we provide, whether physical or virtual.

Posted in statistics

Supporting Online Learning and Research: Assessing our Virtual Reference Activities

Today’s post is contributed by Olivia Given Castello, Tom Ipri, Kristina De Voe and Jackie Sipes. Thank you!

The sudden move to all-online learning at Temple University presented a unique challenge to the Libraries and provided a great opportunity to enhance and assess our virtual reference services. Staff from Library Technology Development and Learning and Research Services (LRS) put into place a more visible chat widget and a request button for getting help finding digital copies of inaccessible physical items.

Learning & Research Services librarians Olivia Given Castello, Tom Ipri, and Kristina De Voe, and User Experience Librarian Jackie Sipes have been involved in this work.

Ways we provide virtual reference assistance

We are providing virtual reference assistance largely as we already did pre-COVID-19. We offer immediate help for quick questions via chat and text, asynchronous help via email, and in-depth help via online appointments. See the library’s Contact Us page for links to the many ways to get in touch with us and get personal help.

Our chat service now integrates Zoom video chat and screensharing. That was part of a planned migration that was completed just before the unexpected switch to all-virtual learning.

Since going online-only, we have also launched a new access point to our email service. By clicking the “Get help finding a digital copy” button on item records in Library Search (Figure 1), patrons can request personal help finding digital copies of physical items that are currently inaccessible to them. 

Figure 1. The “Get help finding a digital copy” button in Library Search, Summer 2020.

Usage this year compared to last year

The main difference we've seen since going online-only has been in the volume of virtual reference assistance we are providing. We added a more visible chat button to the library website and Library Search. Since making that live, we have seen 88% more chat traffic than during the same period last year (Figure 2). The "Get help finding a digital copy" button has also led to an enormous increase in email requests (Figure 3): since it launched, we have seen more than a sevenfold increase in email reference. At the height of the Spring semester, we received 347 of these requests in one week.
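Year-over-year comparisons like these reduce to a simple percent-change calculation. A minimal sketch in Python, using hypothetical counts rather than our actual figures:

```python
def percent_change(previous: int, current: int) -> float:
    """Percent change from the earlier period to the later one."""
    return (current - previous) / previous * 100

# Hypothetical weekly counts for the same week in 2019 and 2020 --
# illustrative numbers only, not our actual traffic.
chats_2019, chats_2020 = 200, 376
emails_2019, emails_2020 = 40, 300

print(f"Chat traffic change: {percent_change(chats_2019, chats_2020):.0f}%")
print(f"Email reference ratio: {emails_2020 / emails_2019:.1f}x")
```

The same arithmetic underlies the figures: an 88% increase means nearly doubling, while a sevenfold increase is a ratio, not a percentage, which is why the two are reported differently.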

Figure 2. Volume of chat reference transactions compared for the same weeks of Spring/Summer semester in 2019 and 2020.



Figure 3. Volume of email reference transactions compared for the same weeks of Spring/Summer semester in 2019 and 2020.

Our team has handled this increased volume very well. When we first went online-only, we decided to double-staff our chat service, and that turned out to be wise. Staff from other departments (Access Services and Health Sciences) also single-staff their own chat services, so that we can transfer them any chats they are best placed to handle, or route chats their way when many patrons are waiting.

Email reference handling is part of the chat duty assignment, so the double-staffing has also served to help handle the increased email volume. Outside of chat duty shifts, the two other disciplinary unit heads (Tom Ipri and Jenny Pierce) and I are doing extra work to handle emails that come in overnight. Our two part-time librarians, Sarah Araujo and Matt Ainslie, both handle a large volume of chat and email reference and we are grateful for their support.

Types of questions we receive 

Since April, about 75% of email reference requests have been for help finding digital copies of books and media, submitted through our "Get help finding a digital copy" button, which is embedded in only one place: Library Search records. The remaining 25% include a diverse range of questions about the library and library e-resources.

The topics patrons ask about in chat and non-“Get help finding a digital copy” email reference vary somewhat depending on the time of year. Overall, about 40% of the questions patrons ask are about access to materials and resources, particularly articles. About 5% of questions appear to come from our alumni, visitors, and guests, which shows that outside communities seek virtual support from us.

During the online-only period, we have received questions that mirror the proportions we've seen all year long. However, about 45% of the alumni/visitor/guest questions, and about 44% of the media questions, that we have received this year came in during the online-only period.

Analyzing virtual reference transactions to understand user needs

Our part-time librarians, Sarah Araujo and Matt Ainslie, led by librarian Kristina De Voe, have created and defined content tags for email tickets and chat transcripts. They systematically tag them on a monthly basis, focusing on the initial patron question presented, and have also undertaken retrospective tagging projects. The tagging helps to reveal patterns of user needs over time. For example, reviewing the tags from questions asked during the first week of the Fall semester in both 2018 and 2019 shows a marked increase in questions related to 'Borrowing, Renewing, Returning, and Fines' in 2019 compared to the prior year. This makes sense given the move to the new Charles Library, the implementation of the BookBot, and the updated processes for obtaining and checking out materials (Figures 4 and 5).
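A tag-frequency comparison like the one described can be sketched in a few lines of Python. The 'Borrowing, Renewing, Returning, and Fines' tag appears in the text above; the other tag names and all counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical tag lists for the first week of each Fall semester --
# invented data, not our actual transcripts.
fall_2018_tags = ["Research Help", "E-resources", "Hours",
                  "Borrowing, Renewing, Returning, and Fines", "Research Help"]
fall_2019_tags = ["Borrowing, Renewing, Returning, and Fines", "BookBot",
                  "Borrowing, Renewing, Returning, and Fines", "Hours",
                  "Research Help"]

counts_2018 = Counter(fall_2018_tags)
counts_2019 = Counter(fall_2019_tags)

# Report the year-over-year change for every tag seen in either year
for tag in sorted(set(counts_2018) | set(counts_2019)):
    print(f"{tag}: {counts_2018[tag]} -> {counts_2019[tag]}")
```

Because tags are applied consistently from a defined list, simple counts like these surface shifts in user needs (here, the jump in borrowing-related questions) without re-reading every transcript.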

Figure 4. Tagged topics represented during the first week of Fall 2019 semester [Aug 26 - Sept 1, 2019]. Number of chats & tickets: 125. Number of tags used: 144.


Figure 5. Tagged topics represented during the first week of Fall 2018 semester [Aug 27 - Sept 2, 2018]. Number of chats & tickets: 107. Number of tags used: 136.


Analyzing virtual reference transactions also allows us to aggregate and analyze the language patrons use when searching and communicating with us — via text analysis or simple word cloud tools. Understanding the language used can better inform us of how users interpret our services, as well as how we might communicate with them more effectively across various platforms.
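A word-frequency pass like the one that feeds a word cloud can be done with the standard library alone. A sketch, with a deliberately tiny stopword list and made-up transcript snippets (a real analysis would use a fuller stopword list and actual transcripts):

```python
import re
from collections import Counter

# Toy stopword list for illustration only
STOPWORDS = {"the", "a", "an", "to", "is", "i", "you", "and", "of", "for", "it", "do"}

def top_words(transcripts, n=100):
    """Return the n most frequent non-stopword terms across transcripts."""
    words = []
    for text in transcripts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

# Hypothetical transcript snippets (not actual patron chats)
chats = ["I need the full text of an article", "How do I renew a book?"]
print(top_words(chats, n=5))
```

The resulting (word, count) pairs are exactly what word cloud tools consume, so the same counting step supports both the visualization and more careful text analysis.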


Figure 6. Word cloud of approximately 100 most frequently used words in chat transcripts during the online-only period (March 16 – June 30, 2020).

We have reviewed "Get help finding a digital copy" requests at two points in time to ascertain how often we are able to find a digital copy (about 50% of the time), to see what other suggestions we make to patrons (such as referring them to Interlibrary Loan for chapters, or to a subject librarian to find alternative, available books), and to fine-tune our request handling.

With a colleague from Access Services, Kathy Lehman, we also analyzed email transcripts from this academic year in order to refine our process for passing reference requests between LRS and Access Services.

We do not systematically re-read email and chat transcripts beyond discrete projects like these, except when a new development related to a particular request requires that we review the request history.

We analyze anonymous patron chat ratings and feedback comments, as well as patron ratings and comments from the feedback form that is embedded in our e-mail reference replies. Some librarians also send a post-appointment follow-up survey, and we analyze the patron ratings and comments submitted to those as well. Patron feedback from all these sources has so far been overwhelmingly positive.

Changes to the service based on our analyses 

We have refined our routing of email requests and chat follow-up tickets based on the requests we are seeing and the experiences of staff. In our two reviews of "Get help finding a digital copy" requests, we made suggestions to staff members after the first review and later found improved request handling at the second, as a result of those adjustments.

We developed a suite of answers, as part of our larger FAQ system, and engineered them to come up automatically when we answer an email ticket. This saves our staff time, since they can easily insert and customize the text in their replies to patrons.

Guidance for referring to Access Services was improved, particularly when it came to referring patrons to ILLiad for book chapter and journal article requests and Course Reserves for making readings available to students in Canvas. We have also streamlined how we route requests that turn into library ebook purchases or Direct to Patron Print purchases, and we are working with Acquisitions on a new workflow that will proactively mine past “Get help finding a digital copy” request data for purchase consideration.

Using virtual reference data to learn about the usability of library services 

Analyzing virtual reference transactions can also provide insight into how users are interacting with library services more generally, beyond just learning and research services. Throughout spring and summer, virtual reference data has informed design decisions for the website and Library Search. 

One example is the recent work of the BookBot requests UX group. The group, led by Karen Kohn and charged with improving item requests across physical and digital touch points, used virtual reference data to better understand the issues users encounter when accessing our physical collections. This spring, we focused on how we might clarify which items are requestable in Library Search and which items require a visit to our open stacks — an on-going point of confusion for users since Charles Library opened.

The data confirmed that the request button does create an expectation that users can request any physical item. Looking at the transactions, we also saw that users did not mind having to go to the stacks, but they simply didn’t always understand the process. We realized that our request policies are based on the idea of self-service — if a user can get an item themselves, it is typically not requestable. One outcome of this work is new language in the Library Search request menu that instructs users about how to get items from the browsing stacks themselves. 

Next steps for assessing virtual reference service

We are working on several other initiatives this summer. One is a project to test patrons' ability to find self-service help on our website; we hope it will lead to suggested improvements to our self-service resources and to the placement of online help access points. We have also revised the "Get help finding a digital copy" request form based on staff feedback, and changes to the placement of the request button are planned in connection with our Aug. 3 main building re-opening. It will be helpful to test these from the user perspective once they are live.

Posted in process improvement, research work practice, service assessment, usability

We All Make Mistakes

Last week I learned a lesson about making mistakes, and it was both humbling and helpful. Just one day before the deadline for locking the University's numbers into the IPEDS system (statistics for the U.S. Department of Education) for FY18-19, I was contacted by the University's director of data analysis and reporting: "Where are the libraries' numbers?" After a brief email exchange, password in hand, I was quickly able to input those numbers – prepared months ago for other purposes: ARL, ACRL, AAHSL.

Annoyingly, each agency asks for data sliced and diced in different ways. Sometimes we separate out physical and digital titles; sometimes we separate articles and books in reporting interlibrary loan transactions. Reference can be the most complicated, with data coming from multiple systems and channels (SMS, chat, analytics, LibAnswers), Excel spreadsheets, and manual reports.

But I felt confident in my IPEDS numbers, and the data input was easily completed. I reported back to Institutional Research and Assessment and pointed them to the required backup documentation – multiple spreadsheets, reports from Alma, read-me files. From year to year, if numbers seem out of line with previous years' reports, those anomalies need explanation. For instance, I noted that physical circulation declined due to the closing of Paley Library. All good. Next step: the data is AGAIN verified by the University Data Verification Unit. DVU provides another thorough audit, also requiring documentation to back up each number.

At 4:45 on the day the numbers were due, DVU discovered an error. In calculating the total monographic expenditures for the Law, HSL, and Main libraries, I had double-counted two figures. I felt stupid, of course. But the error was easily fixed and verified by the Unit, and at 5:07 pm, the University's data was locked. Yay!
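A lightweight version of the cross-check that caught this can be scripted before submission: recompute the grand total from the branch subtotals and flag any mismatch. A minimal sketch in Python, with invented figures rather than the actual numbers reported:

```python
# Hypothetical monographic expenditures by library -- invented figures,
# not the actual numbers submitted to IPEDS.
subtotals = {"Main": 1_200_000, "Law": 300_000, "HSL": 450_000}
reported_total = 1_950_000  # the figure typed into the survey form

# Recompute the total from the subtotals and compare to what was reported
computed_total = sum(subtotals.values())
if computed_total != reported_total:
    raise ValueError(f"Mismatch: reported {reported_total:,}, "
                     f"computed {computed_total:,}")
print("Totals verified.")
```

A double-counted figure inflates the reported total above the recomputed one, so this kind of consistency check fails loudly instead of slipping through to the verification unit at 4:45 pm.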

A long story, but I learned a couple of things. While the University-mandated data verification process is sometimes annoying, especially when time is tight, there is real value in having an external reader double-check formulas, data input, and logic. No one is perfect. Internally, here in the Libraries, I've begun a verification practice of my own: I double-check the numbers provided to me. I want to understand them so I can explain them to others.

It is equally important for those "on the ground" to help me understand the numbers. Why did circulation go down? Why did interlibrary loan go up? What happened on March 19 that caused our gate count to plummet? What was the impact of making our webchat more visible?

When we ask for documentation, it is not to be mistrustful or to create extra work. It ensures that the data we report for surveys, to accrediting bodies, for funding agencies, and to our professional associations is as accurate and reliable as we can make it.

I don’t believe that mistakes are a good thing. But I learn more from my mistakes than pretending I don’t make them. I’m much better off when I am willing to ask for help, allow time for others to check my work, and consider the perspectives (and expertise) of my colleagues. And next time, maybe I’ll remember to keep my thumb off the camera lens. 

Posted in statistics

Using Social Media to Engage Library Users

Today’s very special post is authored by Kaitlyn Semborski and Geneva Heffernan, from Library Outreach and Communications at Temple Libraries. 

At Temple Libraries, we use social media to build and maintain relationships with library stakeholders. Daily, our Instagram, Twitter, and Facebook platforms allow us to engage with students, faculty, staff, and community members through posts, replies, comments, and more. As we grow our audience, it is increasingly important to regularly track and evaluate our strategies and interactions on each platform to best serve them. 

We learn a great deal from tracking social metrics. First, we gain insight into our audience. Who are they? What do they like? What content do they interact with the most? What questions do they have for us? When are they online? Knowing our audience directly informs the type of content we create and share. The things faculty seek on social media often differ from the things undergraduate students seek. We work to cater to those disparate interests. 

Some topics span audiences, though. For example, the number one shared interest of our followers on Twitter is dogs. This tells us that posting about National Love Your Pet Day or National Puppy Day will likely go over well with our followers (and both did).

Given our segmented audience, we commit to having some content for each subgroup, rather than trying to make all our content interest everyone. For example, posts promoting specialized workshops tend not to perform well in terms of engagement. This may be because the majority of our followers do not fit into the small niche of a particular subject (most might not know what PubMed, Gephi, or QGIS are, for example) and are therefore less likely to engage with a post promoting it. Does this mean we should stop promoting the variety of workshops offered through the Libraries? We think not. The metrics show us that the audience segment interested in those posts is smaller, but we still value promoting our opportunities to all the Libraries' patrons.

It is important to note that our sole goal for social media is not to get the most “likes.” If that were the case, we would only post photos of puppies reading books all day. While we like to increase engagement on our social platforms, our ultimate goal is to use social media to increase engagement with our online and in-person services across our libraries. Social media is one of the most direct ways we have to engage with our users online, and we want to inform them about what the Libraries are doing to serve them. 

Each social media platform we use (Twitter, Facebook, and Instagram) provides native analytics and each scheduling platform we use (Later and Hootsuite) has its own metrics collection. In order to stay consistent with what we track, we collect our metrics in a spreadsheet, independent from the platforms themselves. This gives us the freedom to evaluate and compare data across platforms.  
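Keeping the numbers in one platform-independent table is what makes cross-platform comparison possible. A minimal sketch of that idea in Python; the column names and values below are invented for illustration, not our actual spreadsheet schema:

```python
import csv
import io

# Hypothetical rows exported from a platform-independent tracking
# spreadsheet; columns and numbers are made up.
data = """platform,week,likes,shares,comments,reach
Twitter,2020-06-01,120,45,30,5000
Instagram,2020-06-01,300,10,25,4000
Facebook,2020-06-01,80,20,15,3500
"""

# Engagement rate = interactions / reach, a metric comparable across platforms
for row in csv.DictReader(io.StringIO(data)):
    interactions = int(row["likes"]) + int(row["shares"]) + int(row["comments"])
    rate = interactions / int(row["reach"]) * 100
    print(f"{row['platform']}: {rate:.1f}% engagement")
```

Normalizing to a shared metric like engagement rate is the point of the independent spreadsheet: each platform's native analytics define "engagement" differently, so a common formula is needed before the numbers can be compared side by side.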

Likes and followers by platform

Assessment practice at-a-glance:

  • Weekly updating of metrics spreadsheet
  • Monthly tracking of followers on each platform
  • Twice yearly thorough review of each platform and evaluation of what is working well and what is not

While tracking varies by platform, here are samples of what goes into our spreadsheet:

  • Likes
  • Shares
  • Engagement rate
  • Comments
  • Reach
  • Content type

Instagram posts over time

So, what changes have we made, and what have we learned, from our analytics insights? We learned that Facebook is for storytelling: posts about university and library news, as well as post-event updates, are what our audience wants.


Facebook Insights Example 1


Facebook Insights Example 2


Twitter is a platform for news and conversation. It is where announcements are made and questions are asked. We have learned to go to Twitter first to spread pithy, important information, such as the closing of all our physical locations. It is also a place where followers can ask questions of us and know they will get accurate responses. 

Twitter Post Example 1

Twitter Post Example 2

Twitter Post Example 3


We've learned that on Instagram, people want pretty pictures. It is a visual platform, and people engage with a post only when they are drawn in by the visual. Because of this, we have been emphasizing photos taken by the university photographers, or user-generated content we are tagged in that is already visually strong.

Instagram Post Example 2

Most of all, social media metrics tracking is a form of feedback about the Libraries as a whole. When we evaluate our interactions with our community on social media, we learn what they need from us and what they like about our work. The metrics reflect interest in the Libraries: people using our resources are more likely to engage with our social media presence. As our number of users grows, so does our number of followers. As buzz grew around the opening of Charles Library, engagement with our content reflected that buzz. We work hard to show off the great work being done by our staff, and that great work brings more attention to our channels of communication. There is always room for improvement, and we will keep striving for it.

Posted in data-driven decision making, statistics, web analytics

A New Day for Assessment Practice?

sunrise from airplane

It is difficult to believe that in early March we convened the Assessment Community of Practice, joining Margery Sly and Matt Shoemaker to talk about changing needs for assessment measures as we develop new library services. The new Charles Library affords us the opportunity to offer more facilities, technologies, and expertise. We talked about how best to assess the impact of those new types of spaces on our community. We agreed that by necessity, much of our "assessment" is counting: the numbers of visitors to the reading room, attendance at instruction and workshops, use of physical collections, use of computers and specialized software in the Scholars Studio.

We talked about differences among academic departments with more or less interest in our offerings. Some faculty take advantage of special collections and the instruction offered on the use of primary resources. Others find value in new types of research questions and collaborations made possible through the Scholars Studio.

In just three weeks, this important discussion seems less relevant. The questions continue to be useful – how best to gauge the usefulness and long-term impact of our services on the students, faculty, and community we support? But even the most basic of measurements – gate counts, use of physical materials, attendance at in-person workshops and instruction sessions – are no longer available to us. There are no physical bodies to count. There are no hands-on workshops to evaluate.

This is a loss, of course. (I hate to think how library trend-trackers like the Association of Research Libraries will accommodate this year's statistical anomaly.) But for Temple, it provides an opportunity to explore our questions in new ways, with new tools. We are impelled to think about how to mine our web analytics data more deeply. We continue to have access to data on the use of the website, our discovery systems, our licensed resources, and the many channels of social media output from the library. Springshare and EZproxy provide us with data on the use of library-curated content and collections.

Demonstrating the use of our expertise in providing access, research, and instruction support takes a very different shape now. It also provides us with a testing ground for many of the initiatives that are already underway. Instructors of English 802 will be in a much better position to help us improve our online version of that library workshop. The Health Sciences Libraries quickly transitioned to Zoom versions of their popular workshops – perhaps making these even more accessible to busy students and faculty. Jackie Sipes is exploring ways of doing remote usability testing of Library Search and other online discovery tools.

Just a week ago the libraries had a physical space to which students, faculty, and community could come. We were solid. Our buildings and physical spaces, staffed with humans, had a presence that signified the essential place of the Library on campus. Now that place may not be as obvious to our users. At least for the near term, we will need to re-imagine how the library positions itself and how we demonstrate continued impact and value to our community.

Posted in library spaces, statistics, web analytics

When a Marker is More than a Marker 

Picture Credit: Zombeiete from Flickr Creative Commons

User experience is all around us. In libraries, we often think the assessment of user experience relates to web interfaces or building wayfinding and navigation. We might ask, “Is the language that we use on the website clear to non-librarians?” or “When visitors come into the library, are they provided sufficient affordances for orientation to the services and spaces available?”

Of course these are questions we already have on our plate for exploration, particularly now as we deal with issues of user experience in a very new library building, the Charles. 

But dry erase board markers? That seems like a pretty small operational decision. We either make them available for checkout, or we don’t. But when the option of providing markers to students arose, it got a bit more complicated, and everyone had an opinion.

Charles Library has 36 study rooms, each equipped with whiteboards. These are quite popular, as evidenced by the sprawling, specialized, and creative work we see in the rooms. It is gratifying to see how this simple tool sparks collaboration among students: exactly the behaviors we hoped to see in these new library spaces.

In providing study rooms, there are operational decisions to be made, from how we manage room reservations to policies on use of the rooms.   When the rooms opened, the issue of markers was raised. Should we provide them? And how? Multiple options were discussed, and each might be evaluated on a kind of user experience. 


  • Make markers always available in study rooms
  • Make markers freely available at the service desk, but don’t check them out
  • Check out markers at the service desk
  • Make markers available for purchase in a vending machine
  • Make students responsible for bringing markers for use in study rooms


There may be other solutions, of course. It’s clear that there is a range of options, and each has implications for the user experience. Each option needs to be balanced against library operational concerns, including staff time and effort (creating records in the catalog for checkout, preparing the material for checkout, time for the transaction at checkout, collecting fines for lost markers) and, of course, the outright cost of the markers.

We may decide that while students might love to have each study room supplied with an array of colored markers, all full of ink, each time they visit, that may not be the experience we can afford to provide, given other organizational priorities and expectations.

Fortunately, students seem happy to bring their own markers, as we see many wonderful expressions of collaborative work in the study rooms. While there is no right or wrong answer as to whether to provide markers, it’s always useful to remind ourselves that 1) there is a range of solutions available to us and 2) the solution we choose may impact user experience.

Posted in library spaces, usability, user experience

Are There Any Meetings on Library Assessment?

Assessment is a growing topic of interest at American Library Association meetings, and last weekend I had the privilege of participating in several meetings to discuss trends and challenges.

Look at How Far We’ve Come: Successes

Assessment practice is evolving from the solo librarian to assessment conducted in multiple domains: user experience, collections analysis, space design. We started the ACRL Assessment Discussion by sharing successes. Grace YoungJoo Jeon at Tulane demonstrates that one librarian can accomplish a lot. In her first year as Assessment and User Experience Librarian, she talked with everyone about assessment, learned about their needs, created a list of potential activities, and began to prioritize the work ahead. Grace described reaching out to other units on campus, including the Office of International Students and Strategic Summer Programs. She worked with them to design and moderate focus groups with international students. All in one year!

Penn State Libraries’ success this year is a growing department for assessment and metrics, headed up by Steve Borelli. Prioritizing assessment needs through the lens of budgetary operations, they are currently advocating for a position in collections assessment for a department of four.

Joe Zucca at the University of Pennsylvania is using the Resource Sharing Assessment Tool (Metridoc) as a space for collecting interlibrary loan statistics, enhanced with MARC data from the consortium’s individual library holdings. With connections to Tableau, data visualization enhances the ability to evaluate inventory and use, and provides potential for collection development at a collaborative level.

We Still Have Some Challenges

In the example of RSAT, merging data from 13 institutions creates some challenges. There is a “near total absence” of data governance, including some 600 designations for academic departments. This lack of standardization makes cross-institutional analysis very difficult to do. 

Of course this isn’t just a problem for large-scale analysis across libraries. One assessment librarian discovered her public services departments have a “home-grown” system for tracking reference and directional questions. While standard definitions provided by ACRL and ARL can provide some guidance, libraries may not want to be limited to these more traditional metrics alone. There is a spectrum of opinions as to how to count and what to count. How best to define a transaction?

This lack of agreement related to counting has ramifications down the line, particularly if these metrics are used in performance review. What is to prevent someone from “bumping up” her numbers?  We talked quite a bit about how the library “reduced to bean counting” is no way to tell our story. Librarians may very well feel that a focus on counting diminishes the work that they do. 

The Rearview Mirror

We shared concern that assessment practice is “always looking through the rearview mirror.” When we look at trends only at annual review time, we fail to understand those trends in time to plan for the future. We may prefer to ignore the trends. We tend to keep our data siloed, making it difficult to see the full picture or the inter-relationships. A great example: fewer questions about finding Huckleberry Finn (a decrease in numbers at the reference desk) could mean that our discovery systems are working even better. Fewer page views on our website may result from a more efficient, user-friendly interface. We need to look at our numbers in a more integrated way.

It was good to talk about our challenges, our successes, and best practices with a group of understanding peers. Then on to the next meeting, LLAMA Assessment Community of Practice, Hot Topics!

Posted in uncategorized

Furniture feedback in Charles Library

When we opened Charles Library in August of 2019, we knew right away that we needed to increase the seating capacity in the building. During the day, a walk through the upper floors of the building gives the impression that we are at, or quickly approaching, full capacity.

The first physical space UX project I did in Charles was to gather data about the current furniture. In the fall, I worked with Rachel Cox, Nancy Turner, and Evan Weinstein to collect data on furniture use and preferences. We’re using that data now to recommend additional furniture and space improvements.


To kick things off, Rachel and I conducted floor-by-floor walkthroughs, counting the number of seats occupied and recording observations about furniture use. We carried out the daily walkthroughs for one full week in October. We entered the headcounts and observation notes on a worksheet. The student building supervisors handled data collection at night and on the weekend.

Worksheet for recording observational data

The sweeps gave us a sense of how full the building was at different hours and how students were using the space. But we also wanted to know if students liked the new furniture.

To find out more about this, Rachel and I built a survey that asked what students came to the library for, if the furniture met their needs, and what they liked and disliked about it.

survey p. 1

We got 213 responses. Rachel and I analyzed over 600 survey comments over a couple of days, sorting the comments into categories.

Survey analysis
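Our sorting was done by hand, but a first pass over comments like these can also be automated. The sketch below is purely illustrative, not what we actually ran: it tags comments with categories drawn from the findings reported in this post, using keyword lists that are invented for the example.

```python
from collections import defaultdict

# Illustrative sketch only: the real analysis was manual category sorting.
# Categories come from this post's findings; keyword lists are invented.
CATEGORIES = {
    "more seating": ["more seats", "no seat", "crowded"],
    "comfort": ["comfortable", "cozy", "couch", "armchair"],
    "individual study": ["alone", "cubicle", "carrel", "private"],
    "noise/distraction": ["noise", "loud", "quiet", "distract"],
}

def tag_comments(comments):
    """Group comments under each category whose keywords they mention."""
    tagged = defaultdict(list)
    for comment in comments:
        lowered = comment.lower()
        for category, keywords in CATEGORIES.items():
            if any(kw in lowered for kw in keywords):
                tagged[category].append(comment)
    return tagged

tagged = tag_comments([
    "Please provide some cubicles or private study spaces.",
    "The furniture is very uncomfortable for long work sessions.",
])
print({category: len(comments) for category, comments in tagged.items()})
```

A keyword pass like this can only surface candidate themes; judging tone and intent, as we did over those couple of days, still requires human readers.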


The majority of students (78.4%) came to the library to study alone, but studying as a group was also very common (48%). Other reasons to visit included resting between classes, finding a book, going to the cafe, visiting the Scholars Studio, Graduate Study Area, Special Collections, or Student Success Center, and hanging out with friends. 54% responded that the furniture didn’t meet their needs.

The over 600 survey comments provided us with more detailed information about what students wanted:

  • More seating generally
  • Comfortable seating and “cozy” spaces
  • More spaces to support individual study
  • An environment mostly free of distraction that also allowed some level of chatting/interacting with others

More seating

The headcounts revealed, somewhat surprisingly, that we rarely reached over 50% occupancy on any floor of the building, even during the day when the building looks full. Survey comments told us that students sometimes cannot find a seat in the library, and a few asked specifically for more seats. Students also described the existing seating as being “too close” to other students and too small for spreading out their work. Lack of personal space at the large open tables was a common thread throughout the survey comments.
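The arithmetic behind that occupancy figure is simple: occupied seats divided by total seats on a floor. The sketch below shows the calculation; the floor names, seat totals, and counts are invented for illustration, since the post reports only that occupancy rarely exceeded 50% on any floor.

```python
# Occupancy rate per walkthrough: occupied seats / total seats on a floor.
# All numbers below are hypothetical examples, not our actual sweep data.

def occupancy_rate(occupied, total_seats):
    """Return occupancy as a percentage of available seats."""
    return round(100 * occupied / total_seats, 1)

# Hypothetical floor sweep: (floor, occupied seats, total seats)
sweep = [("2nd", 180, 420), ("3rd", 150, 380), ("4th", 96, 310)]
for floor, occupied, total in sweep:
    print(floor, occupancy_rate(occupied, total))
```

A floor can feel full well below 50% occupancy when the remaining seats are single open spots at large shared tables, which is consistent with the "too close" comments above.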


Right now, the open study areas in Charles Library are furnished mostly with open seating tables. This is a dramatic shift from Paley Library where students were accustomed to semi-private, partitioned desks and armchairs with individual side desks. Students miss the private seating they found in Paley; one commented,

“Please, provide some cubicles or private study spaces. It takes so much time to book a study room because they’re rarely available. The cubicles that used to be in Paley fit my needs well and I have not been nearly as productive since moving to Charles.”

Others spoke directly about distractions, both noise and visual, at the open table seating,

“…perhaps these spaces could be furnished with … divided desks to remove distraction from the (sometimes embarrassingly noisy) library environment and to center focus on the work I came here to do…”


Another common theme was the desire for comfortable seating. Students frequently asked that we add “comfortable” furniture and make the space cozier. “Comfortable” and “cozy” were used to describe soft seating like beanbags and couches, as well as upright tables and chairs. The comments overall conveyed a strong preference for furniture that supports work, and in many cases, “comfort” meant seating that is ergonomically designed for sitting and doing work over extended periods,

“[I want] couches and seating like you would find at a cozy coffee shop. The furniture I have used has been very uncomfortable and I cannot do work for a long period of time without something in body aching. The library is absolutely gorgeous, but the furniture is probably the biggest complaint…”

Even our lounge furniture doesn’t meet students’ expectations for comfort. They described the lounge chairs and small polygon tables as too “low to the ground” and lacking “function.” One student demonstrated their distaste for the lounge furniture with a drawing of a stick figure hunched uncomfortably in one of our lounge seats to reach a laptop on the small polygon table. Others asked for armchairs and other soft seating,

“I don’t really feel that there are any comfortable arm-chair like seats at the library like there were at Paley. I can’t get comfortable anywhere, and as a result, I don’t feel welcome to stay long…”

During the building walkthroughs, we mostly saw students using the lounge seating to do work, rather than for socializing or other short-term activities more suitable to lounge seating. But the lack of a work surface at those seats meant that students either hunched over the small tables or created work surfaces using benches, laps, window ledges, or a second lounge chair. The survey made it clear that students do not like the lounge seating because it’s not ergonomically suited for doing work.


Despite increases in group and collaborative work, student space needs are still strongly tied to individual study. The survey showed that students primarily come to the library to study alone and they want to do that in a comfortable environment that is relatively free of distraction.

The survey provides us with evidence for prioritizing future purchases of furniture, including pods or carrels for individual study. But we’re also exploring ways of re-configuring our current furniture to provide semi-private spaces with minimal distractions. Table partitions and different furniture arrangements can create barriers, providing students with a psychological sense of privacy.

tables around ledge with students studying

Extra seating for exam period

When we added the individual table seating around the ledge during exams, it was immediately popular. Though these seats are in high-traffic areas, they allow students to face away from others, providing a buffer from visual distractions.

We also want to get a better understanding of study room use and continue to gather student input as we select potential new furniture.

Posted in library spaces, service assessment, surveys

We Don’t Want to Work with Mummies

At Charles Library we are experiencing a more open office environment. I saw an extreme version at the Penn Museum this weekend; the conservator’s workspace is actually in the gallery, on view several hours a day. But the office mate, a mummy, is very, very quiet.

These issues of privacy, of noise, and of how we move around our space without disturbing co-workers surfaced in several workplace norms conversations that I conducted in September. The meetings, organized by work area, were designed to surface issues of concern about the kinds of environment (and behaviors) that would enhance, or detract from, the “ideal” working conditions. Would it be a problem to eat tuna salad at one’s desk? How can we signal to colleagues that we are not “interruptible”?

Some anxiety about establishing workplace norms emerged when we talked one-on-one with staff as part of the Envisioning our Future project. Even before the move, staff were concerned about the more open workplace environment and the need to set guidelines for behaviors. They also recognized that multiple types of work would be taking place in our new work spaces. Would this lead to more in-person collaboration because of casual interactions, or just more use of electronic communication?

The discussions were summarized and shared with all of the staff in that workspace, including those who were not at the in-person conversations. A summary email was sent to all staff.

Did the workplace norms conversations make a difference? Last week I launched a short survey to staff to gauge the effectiveness of the process. It’s never easy to ask for feedback, risking the possibility that it did not make a difference at all, or worse, that it was considered a waste of time. While that didn’t happen, I did learn some lessons about the process and about assessment.

About 40 staff members participated in the five sessions. Fifteen of those people responded to the survey. They were from all work areas:

  • 66.7%  participated in the session
  • 86.7% read the document resulting from the session
  • 66.7% read the email to staff

I asked three additional questions:

From your perspective, what degree of change has occurred in attending to workplace norms in your area? (0 is “none at all” and 5 “a good bit”)

From your perspective, what degree of change has occurred in attending to workplace norms in your area based on issues discussed in the facilitated conversations/shared documents? (0 is “none at all” and 5 “a good bit”)


Although this is a very small sample, it seems that time in our new spaces has had more of an impact than the actual conversations. (I base this on the fact that the average rating goes down for the second question.) The comments provide a bit more insight into this interpretation:

The process elicited quite a bit of variation between work areas in some aspects of the work environment, like tolerance for noise. Some were disappointed that the outcome did not result in concrete decisions about policy. Others felt that the conversations led to more comfort in discussing workplace norms. But the conversations were not the real impetus for the changes in norms that staff are perceiving; those arise from just being together. And there are outstanding areas of negotiation, like the use of the breakout rooms.

What did I learn as a facilitator? That remaining neutral in these situations can be a challenge for me, and that I need to follow my own ground rules in providing equal voice to all participants.

But overall, I feel positive about:

  • providing an opportunity for staff to get together and
  • opening up a dialogue that may not have come about as readily without these meetings

Mainly, I am appreciative of my colleagues who participated in this effort. We don’t necessarily want mummies as our work mates, but it’d be nice to play our music as loud as we like.  

Posted in organization culture and assessment, process improvement