LibGuides Usability Testing

Temple Libraries has over 500 LibGuides, or Research Guides. The purpose of the guides is to help library users with some aspect of the research process. Most guides fall into one of three categories – those that offer links or helpful information related to specific disciplinary resources (subject guides), those with course-related information or resources (course guides), or those that provide general information about the library and its services (“How Do I” guides). A typical course guide, for instance, might include links to databases, books, websites, or instructional videos intended to help students complete research or other course assignments. The guides, authored by a variety of library staff, make up a large percentage of the library’s online content.

Last year, the Code Rascals group (Jenifer Baldwin, Brian Boling, John Pyle, Caitlin Shanley, and me) turned its attention to user experience, and LibGuides seemed like one online space ripe for analysis. We decided to conduct usability testing to learn whether and how the guides are working for undergraduates, a primary audience for many of our guides. Our goal was to identify usability issues and to address them through guide- and system-level design improvements, better content curation, and better web writing. With so many content creators, we knew we would likely need a set of guidelines to accompany our findings, so another goal of the usability testing was to establish ongoing best practices for guide authors.

Preparation & Methods

We recruited five participants, all first- or second-year undergraduates with a variety of majors. During the sessions we asked participants to perform research tasks and to “think aloud,” talking us through what they were thinking as they performed each task. We designed each task to give us insight into how users fared with the system homepage, guide navigation, and finding resources, such as databases for research, within a guide. Participants engaged with all three types of guides: course, subject, and “How Do I.” We also asked participants to freely explore the Research Guides for a few minutes and give us their overall impressions.

Analysis & Findings

We recorded screen and audio captures of each session for more thorough analysis later. In our analysis we reviewed each recording as a group, noting usability problems and generating a list of potential solutions and best practices along the way. Our full report details the observations and the complete list of best practices; a sampling appears below.

Homepage & Guide Discoverability Issues

Users have trouble selecting a guide that can help them with a broad topic, and they may not realize they’re on a Research Guide once they arrive.

When asked to select a guide to research the topic “public art,” participants expressed uncertainty in the absence of a subject link or guide explicitly titled “public art.” This indicated to us a need to improve discoverability of guides through a more prominent site search on the homepage and better metadata at the guide level. Even after selecting a guide to research public art, participants remained uncertain that they had landed on a research guide. One commented that she was not sure she was on a guide, and another asked if the guide she had selected was in fact a guide. To us, this demonstrated a need to better brand the guides as tools for research.

Guide-Level Issues

Users choose databases that are familiar or at the top of a list. Users also spend time reading database descriptions.

In tasks where participants had to find books and articles, we observed that participants did spend time reading database descriptions, but they ultimately opted to search databases listed toward the top of a guide page or the top of a list of databases. Some participants mentioned selecting a specific database, such as JSTOR, because they had encountered it before in high school or previous courses. Our recommended best practice is for guide authors to list no more than three databases and to place lesser-known databases toward the top to help build familiarity with resources students may not have used before.

Extraneous information distracts users from finding what they need.

One task asked participants to find information in guides on how to cite a book in APA style. Some participants read explanatory text boxes and watched video tutorials on the guide’s homepage before moving on to the APA tab of the guide, spending significant time on this explanatory content before deeming it unhelpful. This indicated to us a need for “How Do I” guides to be organized so that users can complete specific tasks quickly. We also need to review video content to make sure it is up to date, short and to the point, and relevant to users’ needs.

Large headings help users scan the page to identify content that is most useful.

Though participants were sometimes distracted by content not immediately relevant to the task, we observed that large headings helped users quickly scan through guide content to locate what they needed. On the Audre Lorde Seminar course guide, which included large section headings at the time of testing, participants quickly scrolled to the appropriate area of the guide when selecting a database for research. We plan to increase the font size of box headings at the system level to create a better hierarchy of text on all guides and make the contents of the page easier to scan.
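To make this concrete, here is a minimal sketch of the kind of system-level rule that could go in the LibGuides custom CSS settings. The .s-lib-box-title selector is an assumption based on common LibGuides 2 markup, not something our report specifies; the actual class names should be verified against the rendered HTML of your own guides.

    /* Hypothetical custom CSS to enlarge box headings system-wide.
       Selector is an assumption; confirm class names in your guides' markup. */
    .s-lib-box .s-lib-box-title {
      font-size: 1.5rem;  /* larger headings create a clearer hierarchy of text */
      font-weight: 600;   /* extra weight helps headings stand out when scanning */
    }

A rule like this would typically be added once at the system level so that every guide inherits it, rather than edited guide by guide.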

Novice users have difficulty with guides.

We observed that participants who were novice researchers struggled to find the resources and information they needed in LibGuides. This highlighted the need to design guides that work for audiences with a wide range of research skill levels. Guide authors might consider using language or visuals that show users how and why to use a resource.

Future Steps

We’ve shared the final report and best practices generated from the first round of usability testing. Our overall study consists of three parts: two rounds of “think aloud” usability testing and one round of card sorting to learn more about the language and structure of the guides. Card sorting was completed in the spring, and we are currently analyzing the results. For the next round of usability testing, we plan to create two or three model guides based on our findings so far and test the usability of those guides. At the conclusion of the study, we plan to create guide templates that reflect our best practices.
