This month’s post is from Sarah Bauman, Head of the Charles A. Kraus Library at Temple’s School of Podiatric Medicine. Sarah was so excited by the recent conference she attended that I asked her to post about it.
How do we select an appropriate study design for a research question?
How do we find evidence?
How do we create a good clinical foreground question using the PICOTT approach (Patient, Intervention, Comparison, Outcome, Type of Question, and Type of Study)?
The workshop consisted of lectures and small group work, where we learned to assess standard research methods used in the health sciences, like systematic reviews. For our critical appraisal we used a traditional CASP checklist, looking at whether the review addressed a sensible clinical question, whether the search was relevant and exhaustive, whether the studies found were of high methodological quality, and whether the selection and assessment of studies were reproducible.
Librarians in the health sciences are not usually required to evaluate the calculations used in medical research. Still, we learned some “basics,” like how to use nomograms to apply likelihood ratios, and how to build 2×2 tables to calculate sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios. Understanding what’s behind these calculations helps one appreciate how the efficacy of trials and treatments for patient care is rigorously evaluated before publication – or at least how best practice says it should be.
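For readers curious about the arithmetic behind those “basics,” the standard 2×2-table statistics can be sketched in a few lines of Python. The counts below are invented purely for illustration; the formulas are the conventional definitions.

```python
# Sketch: diagnostic-test statistics from a 2x2 table.
# The counts here are made-up numbers for illustration only.
tp, fp = 90, 30    # test positive: true positives, false positives
fn, tn = 10, 170   # test negative: false negatives, true negatives

sensitivity = tp / (tp + fn)   # P(test+ | disease present)
specificity = tn / (tn + fp)   # P(test- | disease absent)
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

# Likelihood ratios, the inputs one would carry to a nomogram
lr_positive = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
print(f"PPV: {ppv:.2f}, NPV: {npv:.2f}")
print(f"LR+: {lr_positive:.2f}, LR-: {lr_negative:.2f}")
```

With these example counts, sensitivity comes out to 0.90 and the positive likelihood ratio to 6.0 – the kind of figure one would then run through a Fagan nomogram to move from pre-test to post-test probability.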
I liked learning about this so much that I plan to take the Critical Appraisal Institute for Librarians course this spring semester to delve deeper into the topic. And the workshop was not all work – I also enjoyed restaurant “dine-arounds” and even a ghost tour of Charleston.
Ghostly Graveyard (Photo by N Turner, Charleston, SC, 2016)
In my capacity as coordinator of library assessment, I’m often consulted about survey design. I even represent the Libraries on the University’s Survey Coordinating Committee. So I should know and use best practices.
But frequently we want to toss out a quick survey to gauge reactions to a situation – surveys are generally considered a straightforward and suitable method for this. I wanted to gather feedback from library and press staff on the perceived effectiveness of activities put into place to build staff engagement. The survey followed our staff engagement event (retreat), conducted in December. Six months post-event seemed like a good time to conduct a brief assessment using examples of staff activities taken directly from the report’s recommended next steps.
It was a short survey. The first question,
On a scale of 1-5, how effective is this activity in helping you connect to others in the organization?
A minor misstep.
The question references a scale of 1 to 5, but the options were not numbered! I doubt this confused too many people, but it was, admittedly, a bit sloppy on my part.
But the second question did cause some confusion. Using the same set of activities, the survey asked,
On a scale of 1-5, how effective is this activity for communication and building trust?
This is an example of a double-barreled question, a common survey mistake: asking about two things at once. Communication is not always the same as building trust. When a survey question is not clear and precise, respondents are justly confused, and the data collected is difficult to analyze because the question is not understood in the way it was intended.
Fortunately, the stakes here were not too high, and the survey still yielded important feedback and suggestions for improvement. But for large-scale efforts, the University Survey Coordinating Committee and the Office of Institutional Research & Assessment serve an essential purpose. This office protects Temple students and faculty from being over-surveyed. Additionally, its review process puts lots of expert eyes on surveys that go out to our community. Ultimately, our committee work supports better response rates by preventing survey fatigue, and the careful review helps ensure the survey data we do collect is sound.
And of course, it’s a lesson for me and other assessment professionals — to always know there is room for improvement!
The crowds are back! For ALA’s annual conference, almost 16,000 registrants gathered at Chicago’s McCormick Place Convention Center. The return to fully in-person meetings, coupled with sessions on the new hybrid work environment, has me revisiting the question of how space, both physical and digital, contributes to fostering connection and community within an organization. The topic has interested me since the move to Charles Library’s open office spaces and the subsequent pandemic, as it impacts our work, our communication, and our sense of cohesion as an organization.
We had perfect weather for a tour of library renovations sponsored by ALA CORE’s Buildings for College and University Libraries Committee, on Innovative Learning Spaces: A Tour of Chicago-Area Academic Institutions.
I was most interested in the staff space renovation taking place at Regenstein Library, a 1970s brutalist structure of almost 600,000 square feet housing more than three million volumes (including an onsite ASRS). John Carey, Director of Building Services and Operations, described a post-pandemic shift in building use. The library still sees over 5,000 visitors a day, but increasingly they are pandemic-weary undergraduates seeking in-person experiences as well as wifi and power outlets.
I was particularly interested in the staff areas. The renovation plan deliberately included an “investment” in staff spaces, with the goal of all librarians having an “enclosed” space. We toured these new office suites, including both larger offices (for supervisors) and smaller ones for librarians. There are consultation spaces for work with patrons, a staff kitchenette, a conference room, and a common table for informal unit gatherings. While enclosed, the suite is located near open stacks and study areas and is available for drop-in service. Only one staff member occupied the space on our Friday visit, but according to a UC colleague, staff are using up vacation time. Plus, the upcoming NASCAR race was wreaking havoc with commutes! So it will be interesting to see how these spaces work when classes are in session and library staff are onsite once again.
Renovated space for librarian offices at the University of Chicago.
Hybrid Work Culture Post-Pandemic: Challenges and Opportunities from Geographically Diverse Libraries was a panel discussion on how pandemic lessons are informing our increasingly hybrid work environment. Panelists from North Carolina, Virginia and Miami shared experiences and best practices.
From the University of Miami:
Think creatively about the library organization structure
Expect to train managers of hybrid teams
Understand that achieving fairness and inclusion may be more difficult than expected
Maintaining a healthy work/life balance and engaging in networking can also be challenging
Glen Wiley, Associate Dean for Access and Discovery Services
North Carolina State University also:
has a renewed emphasis on staff well-being, touching base regularly with “How are you doing?” surveys
is rethinking office and meeting space to ensure that meeting rooms are hybrid-enabled
is reconsidering the current practice of assigning office space in a “hierarchical” manner
has established in-person days for many departments, when everyone is onsite
makes an effort to find remote work for all staff, with managers typically working on site at a higher level, “as a way of supporting staff.”
David Goldsmith, Associate Director for Collections & Research Services
Space for Community and Connection
Whether our workspaces are physical or digital, in the end it’s not the space, per se, but how our space use supports communication, connection and community. Whether we are in-person, working in a more hybrid environment, or fully remote, we need to design our spaces to support new communication modes and recognize that optimal use of these modes requires new skills. And for some kinds of community-building at work, there continues to be value in the face-to-face, in-the-flesh connection.
Chairs can be checked out at the Illinois Institute of Technology
Walking into the vast atrium space of Charles Library is awe-inspiring. It can also be overwhelming, particularly to new students when the semester first starts. We have many ways of counteracting that feeling – helpful security guards, Emily Schiller’s friendly and creative whiteboards – and in October 2022, Access Services added another: a greeter program.
Student greeter Kyler Epstein
Access Services’ Alex Rosen supervises the students who work at the One Stop Assistance Desk (OSAD) and now the kiosk, providing two complementary service points on the first floor.
Working with Jackie Sipes, they developed an assessment plan using both metrics and qualitative data to understand and improve the greeter service. They carefully analyzed transaction counts, question types, and traffic patterns to optimize the use of students at the two service locations. In addition, students regularly provide reflections on their experience through an online form designed by Jackie.
I spoke with Alex last week about the project, what they’re learning and the new kinds of questions the assessment process generates.
Nancy: Tell me more about how you decide where to deploy students.
Alex: As part of their regular shifts at the desk, student workers are deployed at the kiosk. Originally, we hired someone expressly for the kiosk, but out of necessity, each student is now assigned one hour a week, with the option to pick up more. Mostly we base this on the data tracker numbers (transactions) and averages, considering the traffic and balancing it against interactions at the OSAD, which can be very busy. So we try to make sure the kiosk is staffed when we expect it to be busy elsewhere.
We are also learning about how important it is to place the kiosk where it is visible, especially because students don’t always see the OSAD. The kiosk gets the most visibility when it’s outside the Events Space, but that can interfere with traffic to the space itself, or attendees think the kiosk is for information about the event. It’s uncomfortable when the kiosk is stuck inside a tour group of high school students. What works best is pairing the white board with the kiosk.
One advantage of the portability of the kiosk is we can try different locations, and also be adaptable to other things happening in the atrium. Combined, time and placement influence the traffic the most.
Nancy: Do the students get special training? Is this something they opt into or is it part of the regular duties of the OSAD student?
Alex: Each student is scheduled to work at the kiosk one hour each week, but some prefer working at the main desk, where they feel less “exposed”. The kiosk has been useful in helping students to navigate the atrium area’s services. Interaction statistics indicate that the majority (61%) of questions are directional, but students also handle a good number of basic reference questions.
Question Types
The students’ reflections provide additional insight into what is going on at the desk:
“I noticed a lot of people who seem to be hesitant to ask for help at the OSAD desk are a lot more willing to ask for help at the greeter kiosk. I assume because the experience is more intimate for them without a desk and plexiglass in between us. That is probably the most rewarding part of the greeter kiosk, as it is easier to help and connect with people who are more shy/timid”
“what is hard is hearing how busy OSAD is and not being able to help”
Alex: The irony, too, is that currently, students working at the kiosk are limited in the kind of information they are supposed to provide – so that can be frustrating to them. But the potential for side-by-side review of the computer screen makes some kinds of information sharing easier. Patrons assume that everyone does the same thing and can be frustrated with the response, “I’m not a librarian, let me give you three handouts.”
Nancy: The challenge for you is to figure out how best to balance the types of services available at each of the service points – in a way that is natural to the space and the traffic flow as well as the kind of information needs that you’re seeing. So we avoid bouncing students from one desk to another!
It sounds as though you are learning a lot. The assessment process is providing some useful insights into scheduling needs, but your analysis informs our understanding of the different types of interactions supported by different physical setups. Thanks for sharing your experience with us.
If you’ve ever set foot in a research library, you know that it’s a magical place filled with books, articles, and information galore. But what makes a research library successful? How do they know what their patrons need and how to provide it? The answer lies in assessment.
Assessment is the process of gathering and analyzing data to inform decision-making. In research libraries, assessment helps us understand how our resources and services are being used, and how we can improve them to better meet the needs of our patrons.
Take Temple University Libraries, for example. They use assessment to better understand their patrons and to make data-driven decisions that improve their services. Let’s explore some examples of their assessment efforts.
One of Temple’s most impressive assessment initiatives is their annual user survey. The survey asks patrons to rate their satisfaction with various aspects of the library, such as the collections, the physical space, and the staff. The survey also includes open-ended questions that allow patrons to provide feedback on specific areas of interest. This data is then used to inform decision-making around resource allocation, service improvements, and strategic planning.
Another example of Temple’s assessment efforts is their analysis of circulation data. By tracking the number of times a book or article is checked out, the library can better understand which resources are most popular and which ones may need to be removed or replaced. This analysis also helps the library identify trends in usage, such as an increase in demand for resources related to a specific subject area.
In addition to these assessment initiatives, Temple also uses focus groups and usability testing to gather feedback on specific projects or services. For example, they conducted a focus group to gather feedback on their website redesign, and used the feedback to make improvements to the site’s navigation and search features.
All of these assessment efforts help Temple University Libraries better understand their patrons and improve their services. But assessment isn’t just important for research libraries. It’s also important for anyone who wants to make data-driven decisions.
Whether you’re a business owner, a student, or just someone who wants to improve their own productivity, assessment can help you understand what’s working and what’s not. By gathering and analyzing data, you can identify areas for improvement and make changes that will help you reach your goals.
So, next time you’re in a research library, take a moment to appreciate the assessment efforts that go into making it a success. And if you’re looking to improve your own performance, consider using assessment to unlock the secrets to your own success.
*To the reader: This post was written entirely by ChatGPT 3.5, using the prompt, “write a blog post on the topic of assessment in research libraries for a novice audience in a intriguing style with examples from Temple University Libraries at https://sites.temple.edu/assessment/ “
Consider this: How long did it take you to discern the difference between this AI-produced post and a “human” written one? What were the clues? While this exercise was “just for fun”, in the future I’d like to explore ways in which AI might support our work in library assessment – identifying themes in survey comments, polishing a post, designing images, or conducting quick literature reviews. It’s both exciting and scary!
DALL-E Prompt: Research library in the style of Edward Munch
When the planning of Charles Library began in 2013 there were literally hundreds of design considerations and decisions to make. From where to locate service units to the configuration of service desks to the technology in instruction rooms, a multitude of design proposals needed vetting and final decisions. Among those many design factors and features, there was only limited discussion of how biophilic design could be incorporated into the planning of the new library building.
Despite that, Charles Library ended up with one of the most significant examples of biophilia that can be designed into a building’s structure – its green roof.
Green roof aerial view
Even though the greenery of plants is conspicuously absent within Charles, we are fortunate to have an amazing display of biophilic design – one that is unique among academic libraries. While we primarily engage with the green roof as part of our fourth-floor experience, it is important to recognize the entire roof of the building is covered in plant life. Here is a photograph of that rarely seen upper-level green roof – quite impressive!
Biophilic design is a simple concept. It is the inclusion of elements of nature into designed structures, whether they be houses, retail shops, hospitals, hotels or even libraries. Adding plants to our built environment is one obvious form of biophilia. A less obvious one might be Charles’ extensive incorporation of large windows. Providing these external views of nature, such as it exists in our densely populated urban environment, is another way to incorporate biophilic design elements.
Charles’ terrace could be included in that definition. Libraries often incorporate plants, trees, water ponds, fountains and other examples of nature into their outdoor spaces, when available. When they are not, everything from small potted plants to large plant walls bring nature into our physical spaces. Here is an example of interior biophilic design from Penn State’s main Pattee Library after its most recent renovation in 2019:
PSU library green wall
On regular walks through Charles Library’s fourth floor, I observed that areas proximate to the green roof, be they study rooms or tables, were most frequently occupied. In the spring of 2022, I serendipitously discovered an article about biophilic design, which up to that point was an unknown topic to me. It made me wonder if there could be a connection between the green roof and students being drawn to it. I became curious about the possible biophilic design qualities of the green roof and how it might affect a student’s study space choice. The green roof was designed into the building for its sustainability features. Did the architects also intend to leverage the roof to provide the benefits of biophilic design?
The primary value of biophilic design is wellness. Whether it invites us to relax, achieve calmness, meditate or practice mindfulness, focus or simply smile, inserting nature and greenery into our built spaces does us physical and mental good. Research done at health care facilities suggests that patients exposed to biophilic design heal more quickly and have improved vital signs, like lower blood pressure, resulting in shorter hospital stays. As the current mental health crisis among college students is of great concern, anything we can do to provide spaces that contribute to wellness is of benefit. These values were reinforced at our fall 2022 Designing Libraries Conference, where a team of architects presented on well buildings. This session spoke to the importance of biophilic design’s contribution to the health and well-being of a well building’s inhabitants.
To better understand the possible connection between our green roof and student preference for study spaces, I constructed a simple survey instrument to gather information about student preferences for study space on the 4th floor of Charles Library. Nancy Turner and Jackie Sipes reviewed my survey instrument, and offered feedback and insights on what might work best to obtain useful results. We also discussed possible strategies for the distribution and collection of the surveys. How might we get the survey to students at the point of relevance without disturbing their study? There were several possibilities – all with potential challenges.
With Andrew Diamond’s assistance, we decided to hand-distribute a paper survey to students as they studied at tables and cubicles on the fourth floor. The survey instructions indicated that if they chose to respond, students should leave the survey on the floor. Andrew then waited approximately 30 minutes after the distribution to collect completed surveys. This method was used multiple times during a two-week period near the end of the fall semester. In a second phase, during February 2023, students in study rooms were targeted. Again, paper survey forms were made available for students to complete and leave for later collection. This method yielded 209 usable surveys, 158 from students sitting in open study areas and 51 from students in study rooms. Andrew then transferred the data from the paper surveys to an online form.
After analyzing the survey results, it appeared that there was a modest connection between the green roof and student study space preference. Students were given a list of eight factors and asked to rate the importance of each in their study space selection. The green roof was the fourth highest ranked factor, falling below (in rank order) quiet, access to an outlet, and “my go-to spot.” “Most convenient” was a close fifth (see Fig. 1):
Figure 1. Why did you choose this study location?
Then students were asked how important several study space features were to them in choosing their preferred space. Fewer students identified the green roof as important in choosing a study space compared to other features (see Fig. 2):
Figure 2. How important is a view of the green roof?
However, when asked a similar question about proximity to a window or natural light sources, the response was quite different. A significant number of students, over seventy percent, indicated it was important to their choice of study space (see Fig. 3):
Figure 3. How important is being close to a window or natural light?
My big takeaway from this research is that we should not discount or underestimate the importance of the biophilic design features of Charles Library, the primary two being our unique green roof and the presence of abundant natural light and outdoor views. In the future, the addition of seasonal planters to the terrace could add a third biophilic design element.
While quiet and device charging are significantly important factors in student preference for study space at Charles, we have a great opportunity to further capitalize on the benefits of biophilic design. Based on this study, I would advocate for the addition of more greenery around the building. Even artificial plants, given concerns about the maintenance of real ones, could contribute to student and staff wellness.
As a next step I plan to do some additional survey collection in later spring and summer months when the green roof is in full bloom. The responses may differ when the green roof is at its peak as a home for bio-diverse plant life, pollinators, and small birds. It may also be of interest to conduct the survey on other floors to see if access to natural light and outdoor views is a factor there – or perhaps not so much.
Assessing our services and resources to understand how they impact those who use them should always be at the core of our mission. In doing so we improve our capacity to advance the success of our students, researchers and instructors. It ultimately depends on our capacity, willingness and drive to act on what we learn from assessment as we strive to design better library experiences.
Contributed by Steven J. Bell
Acknowledgements
Many thanks to Andrew Diamond for his assistance with the distribution and collection of surveys, as well as for the input of data into the online survey form.
Kudos to Evan Weinstein for sharing aerial photos of the upper-level green roof.
Survey question feedback and suggestions from Nancy Turner and Jackie Sipes were greatly appreciated.
Thanks to Temple Libraries colleagues who stopped by at ACRL to visit my poster session and share their comments and questions.
Gillis, K.; Gatersleben, B. A Review of Psychological Literature on the Health and Wellbeing Benefits of Biophilic Design. Buildings 2015, 5, 948-963. https://doi.org/10.3390/buildings5030948
Salih, K.; Saeed, Z.O.; Almukhtar, A. Lessons from New York High Line Green Roof: Conserving Biodiversity and Reconnecting with Nature. Urban Sci. 2022, 6, 2. https://doi.org/10.3390/urbansci6010002
Tekin, B.H.; Corcoran, R.; Gutiérrez, R.U. A Systematic Review and Conceptual Framework of Biophilic Design Parameters in Clinical Environments. HERD: Health Environments Research & Design Journal 2023, 16(1), 233-250. https://doi.org/10.1177/19375867221118675
Back in November of 2020, Michael Krasulski (Philadelphia Community College) approached Justin Hill and me about contributing to a new edition of ACRL’s Twenty-First Century Access Services. The 2013 edition was in desperate need of a refresh – so much has changed in the ways that we manage our collections and provide services to users.
Justin and I worked furiously to get our chapter submitted on time, and two years later, we were thrilled to see this announcement:
We can’t wait to read the entire volume, but in the meantime, we share some key points from our own chapter on assessment. We structured our section as a series of questions:
How do we get started with assessment?
Start your assessment journey by identifying and getting to know the data you currently collect and have easy access to: Circulation statistics, interlibrary loan data, user feedback, space use statistics. Make sure you have a systematic, centralized approach to data collection and retention.
How do we compare to our peer institutions?
Learn about the external surveys your library contributes to – ARL, ACRL, PLA, IPEDS – and use these comparative datasets for peer analysis. Consider comparing ratios, like circulation rate per student FTE or study carrel use by school and college; these will be more useful to you than raw transaction counts. Consider your purpose: are your peers institutions like your own in terms of size or budget, or aspirational peers – libraries you’d like to emulate?
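The ratio comparison described above can be sketched in a few lines of Python. All of the institution names and figures below are invented for illustration; the point is simply that a per-FTE rate makes libraries of different sizes comparable where raw counts do not.

```python
# Sketch: comparing a normalized ratio (circulation per student FTE)
# across peer institutions. All figures are invented for illustration.
peers = {
    "Our Library":  {"circulation": 120_000, "student_fte": 30_000},
    "Peer A":       {"circulation": 200_000, "student_fte": 45_000},
    "Aspirational": {"circulation": 350_000, "student_fte": 40_000},
}

# Raw transaction counts favor larger schools; a per-FTE rate is comparable.
rates = {name: d["circulation"] / d["student_fte"] for name, d in peers.items()}

for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate:.1f} circulations per student FTE")
```

Note that “Peer A” circulates far more items in absolute terms than “Our Library,” yet the two look nearly identical once normalized – exactly the kind of insight a raw count would hide.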
How do we best manage our local collections?
Analyze data on circulation, in-house and reserve material use to establish patterns of usage.
For instance, understanding peak times for circulation of physical materials can inform staffing models at the service desk.
Usage patterns by specific user populations may suggest unidentified needs and potential for new services.
How can we expand access to content for our community?
Interlibrary loan data can shed light on gaps in the local collection and guidance for the selection of e-journal backfiles for purchase.
Solicit feedback from users through suggestion forms, conversations with faculty, and regular collaboration with liaison libraries to share insights about disciplinary trends.
How can we be more efficient in our workflow?
Gather staff around a whiteboard and diagram a complicated access services process. Figure out the pain points. Temple Libraries explored its Rush Reserves process using this approach; that project is profiled here: Rush Reserves: A Collaborative Workflow Analysis – a simple model that yields good results.
How can we improve the user experience?
How is the library experienced by a first-year student? Imagine navigating the Library of Congress shelving system for the first time. To learn more about how services can be improved for users, consider usability testing, focus groups, observation, or applied ethnography. An excellent model is the work done at the University of Rochester: Studying Students: The Undergraduate Research Project.
What is the story we want to tell?
Find captivating ways of telling your library’s story. Many libraries, both public and academic, use data dashboards to show the public what they are doing. Data awareness can also be a staff training issue – Kennesaw State University applied a scaffolded approach to training staff, ranging from beginners using Excel to apply replacement costs to lost items, to advanced staff preparing analyses to share with administration.
How do we develop an organization where assessment thrives?
Assessment thrives in organizations that encourage continuous improvement and are willing to change based on evidence. A culture of assessment is supported by sharing data and assessment findings openly, using those findings to generate new questions, and taking a collaborative, team-based approach that helps to break down silos in the organization.
While getting from draft to final publication seemed like an eternity, Justin and I think our chapter on assessment in access services will be a valuable contribution to our colleagues in the field – it’s meant to start library staff thinking about the many ways they can improve services to our communities, as well as empower staff at all levels to engage with these questions.
Our chapter provides extensive examples, references and links. While we can only hope that those links still work, we’re confident that the general guidance we provide remains sound and relevant until it’s time for ACRL to publish the third edition.
In the assessment world, we generally like to see numbers that trend upwards. Increases in gate count, use of study rooms, the number of programs and participants – all positive. But the reality is that numbers don’t always rise, and that trend may be harder to understand. Physical circulation in academic libraries is a good example of typical decline – replaced by rising use of e-books. Numbers for reference questions may be decreasing, but we’re spending more time with research consultations.
A recent post at the Scholarly Kitchen blog by David Crotty was frank about the declining number of page views on that important site. He refers to a “significant drop in readership” – in spite of an impressive 232 posts over the year. While daily posts remain as popular as ever, the significant content that makes up the archive is getting less use. He theorizes, “Some of this may stem from changes in search engine algorithms, and some from our older material becoming increasingly obsolete.”
Assessment on the Ground is not nearly as widely read as the Scholarly Kitchen, but Crotty’s transparency and thoughts about numbers prompted me to explore our statistics more deeply.
Google Analytics provides us with some interesting information about “reach”. Our core audience is here at Temple, and the Philadelphia area, but we had viewers last year from Texas, Wyoming and London, a total of 804 unique visitors.
Has readership of Assessment on the Ground dropped due to stale content? We have steadily turned out posts since 2014 – and I’d like to think most of those have not aged too badly. Certainly not “obsolete” as Crotty so bluntly describes older posts at Scholarly Kitchen. The first substantial story was an interview with Katy Rawdon on processing special collections. Still relevant and important.
Newer posts do get the most attention, with a bump each time a new post is published.
Out of 1,188 sessions in 2022, over 87% viewed just one page, but others explored a little more deeply and viewed two pages or more.
Big or little numbers don’t immediately translate to impact, but trends need attention if we are to understand the meaning behind them – along with a willingness to be humble. Why the declining numbers?
Is there more “competition” for good information on library assessment?
Have viewers learned that the quality of these posts is not worth their time?
Over the years, our readership has remained pretty steady. Comparing this year with last, the number of visitors has gone down by 12%, but the number of pageviews has increased by 14%. And we continue to abide by our core purpose, “Profiling assessment activities at Temple University Libraries and beyond” – celebrating the work of our staff in assessment and continuous improvement. So there, Chicken Little.
Chicken Little at the library, as imagined by DALL-E 2
Library Search, Temple University Libraries’ discovery layer, recently underwent a significant update to the user interface. In Fall 2023, the Discovery Oversight Group, the group responsible for providing guidance and setting priorities for the development of Library Search, began closely examining our approach to the display of MARC record data in Library Search. Dating back to the physical card catalog, International Standard Bibliographic Description (ISBD) punctuation has been used as a way of recognizing and displaying distinct data elements in catalog records, acting as a visual delimiter between MARC subfields.
Screenshot of a record for Hidden Figures (film), in which the title statement and statement of responsibility are treated as one string of text, with the punctuation “/” preceding the statement of responsibility
With this configuration, item records with long title statements or multiple subfields (like the one above) can become lengthy and hard to decipher, making search results hard to scan and distinguish from one another. It’s not ideal for how users read online, scanning quickly for the information most relevant to them. We know from our own past user research that users tend to scan through results in Library Search quickly, in search of key bibliographic information like author, title, or date, that serve as sign posts to help them find what they need and discard what they don’t. We’re also aware of how challenging it can be to distinguish one item from another when looking through dozens of results.
With Library of Congress moving towards omitting ISBD punctuation from MARC records, we needed to rethink how we are indexing and delimiting between different MARC subfields for relevant Library Search displays, such as title statement, imprint, etc. With the physical space of a 3×5 card catalog record no longer a limiting factor in how we display information about works, we can explore alternative approaches that do not rely on prescribed punctuation, such as using the infrastructure of our discovery systems to create a human-readable display.
Design sprint
The Discovery Oversight Group formed a working group to investigate these opportunities and challenges. Led by Emily Toner (Library Technology Development), the group included Leanne Finnegan (Metadata and Digitization Services), Jackie Sipes (user experience), Holly Tomren (Metadata and Digitization Services), and Joi Waller (graphic design). Ahead of the fall Library Search development sprint, the working group held a week-long design sprint to explore how we might display MARC data without the use of ISBD punctuation. The design sprint was held ahead of the development sprint to allow ample time for collecting user feedback before moving forward with significant design changes.
We completed the design sprint across 5 days, with group members spending anywhere from 1 to 3 hours per day on meetings or other sprint activities. Focusing on highly visible data, such as title, author, and publication information, that provides essential information to our users when they are reviewing search results and record pages, we proposed three “How might we” questions to guide us as we brainstormed solutions:
How might we convey the different levels of information for record details (ex. hierarchy of title/subtitle, author/creator/contributor, etc.)?
How might we help users to disambiguate records?
How might we make finding and identifying items quick and intuitive for users?
A few of our ideas included:
Separate title, subtitle, statement of responsibility
Highlight fields with background color
Make different levels of information visually distinct — different font sizes, line breaks, etc.
Strip out unnecessary punctuation and use other visual indicators
Create separate field for secondary creator/contributors (rather than displaying that info alongside primary author)
Prominently display information that helps users distinguish between records (resource type, location)
Simplify brief record displays
Based on these ideas, Joi created 6 different prototypes, each with different configurations of the title, subtitle, and statement of responsibility.
For the prototypes, we used the search “Hidden Figures,” which provides more than one example of the problems we wanted to solve – helping users disambiguate between different items with the same title in multiple formats, as well as containing items with longer title statements where ISBD punctuation is used to separate MARC data.
User feedback
To gather feedback, Joi and I (with lots of help from John Pyle) set up a table and rolling whiteboard in the Charles atrium. Noticing that traffic in the building felt unusually low that day, we hauled our setup outside and positioned ourselves under the overhang near the 13th street entrance on a perfect, sunny, fall day.
Exterior of Charles Library showing where we set up
We had 8 participants in total, including 3 undergraduates, 3 graduate students, 1 faculty member, and 1 staff member. All but two had used Library Search previously, and those who hadn’t had experience using library catalogs at their previous institutions. Participants came from a range of TU schools and colleges including Biology, English, French, Music, and Public Health.
To get feedback, we used a combination of visual preference testing, informal interviews, and a brief survey.
For the preference testing, we showed participants the 6 prototypes printed on large 11×17 paper and taped to the white board for easy comparison. Participants were given the following prompt:
Here’s a scenario. You’ve just searched for the book Hidden Figures, and these are the first three results you see. Spend a minute or so examining the prototypes. There are subtle differences; don’t get too caught up trying to look for those. Just think about what would be most helpful to you if you were looking at these results and trying to evaluate them.
Title statements
Of our prototypes, most participants preferred E, with F a close second. Though design differences were subtle throughout the prototypes, participants agreed that the blue background in E and F helped draw them to the item title right away. The response to the updated displays was generally very positive, confirming our decision to separate the statement of responsibility (which contains author information, as well as any other persons, corporate bodies, etc. that contributed to the content of the source) out from the title section.
Original: Hidden Figures, film
Prototype E/F: Hidden Figures, film, title and statement of responsibility displayed hierarchically on two separate lines
To implement the separated title and statement of responsibility in Library Search, we created separate fields for indexing the title and the statement of responsibility and configured the display of those fields through design elements and page structure. Shifting the statement of responsibility below the title suggests a hierarchy to the information presented, while additional styling elements, such as the high-contrast font, royal-blue background color, and reconfigured (larger!) font sizes in the title area, help users hone in on the details needed to identify and disambiguate sources as they browse search results.
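To illustrate the general approach (a hypothetical sketch, not the actual Library Search code), a display layer can map MARC 245 subfields to separate, labeled elements and strip the trailing ISBD punctuation, instead of relying on that punctuation as a visual delimiter:

```python
# Hypothetical sketch: build labeled display elements from MARC 245
# subfields rather than relying on ISBD punctuation as a delimiter.
# Subfield codes follow MARC 21: $a title, $b remainder of title
# (e.g. subtitle), $c statement of responsibility.

ISBD_TRAILING = " /:;=."  # ISBD marks that trail a subfield's value


def strip_isbd(value: str) -> str:
    """Remove trailing ISBD punctuation and surrounding whitespace.

    (A production version would need to guard against stripping
    meaningful final periods, e.g. abbreviations; this sketch does not.)
    """
    return value.rstrip(ISBD_TRAILING).strip()


def title_display(subfields: dict[str, str]) -> dict[str, str]:
    """Map 245 subfields to separate display elements."""
    labels = {"a": "title", "b": "subtitle", "c": "responsibility"}
    return {
        labels[code]: strip_isbd(value)
        for code, value in subfields.items()
        if code in labels
    }


# Example 245 field for the Hidden Figures book, with ISBD marks intact
record_245 = {
    "a": "Hidden figures :",
    "b": "the American dream and the untold story of the Black women "
         "mathematicians who helped win the space race /",
    "c": "Margot Lee Shetterly.",
}
print(title_display(record_245))
```

Each element can then be rendered on its own line with its own styling, which is what makes the hierarchical title/statement-of-responsibility display possible.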
There was less consensus from users around the display of subtitles. Some participants found prototype E (with the subtitle on a separate line) to be less cluttered and easier to read. A participant who preferred F commented that she liked seeing title and subtitle together on a single line, as it provided context for how the item differed from the titles above and below.
Prototype E: Hidden Figures, book, title and subtitle displayed hierarchically on two separate lines
Prototype F: Hidden Figures, book, title and subtitle displayed on one line
We initially planned to implement the separated subtitle (prototype E), but once the development sprint started, we realized the need for a careful and more thorough investigation before proceeding with additional metadata mapping updates. So, that change is on hold for now.
Summary previews and other improvements
The near consensus from users on the visual design was interesting, but not surprising. Breaking the statement of responsibility out of the title statement was a clear improvement for readability. Talking with users also helped us clarify what item information they find most useful in the search results display. We asked participants to review a list of 10 items and indicate which 5 they rely on most when scanning search results.
Summary (brief overview): 8/8
Resource type: 8/8
Author(s) / Creator(s): 7/8
Title: 6/8
Availability: 6/8
Date: 5/8
Physical description: 1/8
Information about the publisher: 1/8
Table of contents: 0/8
Contributor(s) (e.g., editor, illustrator, etc.): 0/8
Information that users rely on most when selecting items from search results
All participants selected summary and resource type, neither of which was displayed very prominently in Library Search before this update. In response, we added a summary preview to the brief record display in the search results, with a toggle to expand to the full summary (this wasn’t the first time we’d heard users express a desire for making this information more prominent). We also moved resource type to the title section to make it more noticeable.
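A summary preview with an expand toggle boils down to truncating the stored summary and signaling whether a toggle is needed. A minimal sketch of that truncation logic (hypothetical, with an assumed 200-character limit; the real display length and toggle behavior live in the discovery layer’s templates):

```python
def summary_preview(summary: str, limit: int = 200) -> tuple[str, bool]:
    """Return a preview of the summary and whether it was truncated.

    The preview is cut at the last word boundary before `limit`
    characters; the boolean tells the template whether to render
    an "expand" toggle for the full text.
    """
    if len(summary) <= limit:
        return summary, False
    cut = summary.rfind(" ", 0, limit)
    if cut == -1:          # no space found; hard-cut at the limit
        cut = limit
    return summary[:cut].rstrip() + "…", True


# Example: a long summary gets a preview plus a toggle flag
text = "Hidden Figures tells the story of the women mathematicians at NASA. " * 5
preview, needs_toggle = summary_preview(text)
# needs_toggle is True; preview ends with "…"
```

Cutting at a word boundary keeps the preview scannable, which matters given how quickly users move through search results.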
Final design
Before the update, item records mostly lacked any significant visual indicators that might signal hierarchical or other relationships between data. Scrolling through results that share similar or exact titles or exist in multiple formats (see: Hidden Figures, Macbeth, Jaws), it was easy to overlook important details such as resource type, statement of responsibility, or date. Moving towards omitting ISBD punctuation from MARC records offers the opportunity to build a more readable, usable display.
Sharing our designs with users and talking with them about their search habits was immensely helpful as we grappled with the complexities on both the cataloging end and user end. As we moved through the end of our design sprint and later into a full development sprint with the Blacklight team, the user feedback helped to ground various technical and design decisions that arose along the way.
This last week I had the opportunity to experience two very special conferences in two formats – the Library Assessment Conference was held virtually from November 1 to November 3. As co-chair, I was deeply involved in planning a positive experience for presenters and attendees alike. A few days later, Temple Libraries hosted the Designing Libraries IX conference here in Philadelphia. Again, I shared in planning the conference experience, as a member of the local committee and a moderator of two panels. After a very, very busy two weeks, I have a much richer understanding of the benefits and challenges of these conference formats for learning, sharing, and networking.
As we started to plan the 2022 Library Assessment Conference, there was uncertainty about COVID, travel, and the risk of planning for an in-person conference many months into the future. Our Steering Committee believed that a virtual experience would be more inclusive and accessible to our targeted audience. Virtual conferences allow for lower registration fees, and of course, no expenses are incurred for travel. Participants are not taken away from regular work and home responsibilities and can elect a level of participation that suits their needs: turning video on or off, chatting and engaging in question-and-answer sessions or not. These many benefits resulted in a high participation rate: 102 papers, posters, “challenge talks” and mini-workshops we called Assessment Accelerators, presented in both plenary and concurrent sessions, attracting over 550 attendees.
What was the user experience? For the Steering Committee, it was hectic, with a constant stream of messages over WhatsApp while managing the live conference. The Zoom events platform was clunky, and throughout the conference moderators battled frozen computer screens and Zoom crashes. Chat provided the primary vehicle for exchange among attendees, and this more ephemeral content is more difficult to capture, much less synthesize. The formal feedback has not yet been analyzed, but our expectation is that participants will report positive experiences with the rich content and the lively interactions through chat – but also an experience requiring the user’s attention and focus to take full advantage, and patience with the technology.
The Designing Libraries conference was in person. Place DOES make a difference, and Philadelphia always delivers, in spite of no ticker-tape parade for the Phillies. The program was filled with experiences that took full advantage of being in person, from a welcome reception with live music, food and drinks, to tours of the Charles Library that wowed our attendees.
The in-person conference is not without its own constraints, however. Space limited us to 270 attendees, leaving many would-be participants disappointed. As a self-sustaining conference, most costs were supported by registration fees. In real life, people need food and transportation. Travel to the conference was impacted by flight cancellations, traffic jams held up bus tours, and myriad details, from lanyards to luggage, kept us awake at night. But based on the level of interaction and engagement, it seemed our attendees were hungry for the type of connection facilitated by in-person experiences.
Whatever the format, a successful conference requires top-notch content, a program structure that flows, excellent technology support, and robust channels for participant interchange. Ideally, our conferences will capture the best of both formats – a seamless user experience that is accessible and inclusive to all.
If only those apple cider doughnuts could be transported through the web!