Usability Testing: Round 1

The Code Rascals turned their attention to user experience in late Spring with the intention of conducting a usability study of the Libraries’ Research Guides over the next year. To prepare, we took a break from our more tech-focused tutorials and worked through Lynda’s Foundations of UX: Usability Testing tutorial and Treehouse’s Usability Foundations course.

Overview

The purpose of our usability study is to understand how undergraduate users conduct research and discover resources in our Research Guides, to address system-level usability issues, and to generate a list of best practices for guide creators. We chose to run two to three rounds of testing with only five participants at a time. This decision was based on findings from the Nielsen Norman Group that “the best results come from testing no more than 5 users and running as many small tests as you can afford.” The idea is that the same usability issues pop up again and again with each participant, and you eventually stop discovering new problems. This approach also gives us the flexibility to make small changes to Research Guides along the way and to redesign each test to probe more deeply into usability issues. To design our study, we began by generating a list of research questions to guide the creation of usability test tasks and to pin down exactly what we wanted to know about the way users interact with Research Guides.
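
The “no more than 5 users” heuristic comes from Nielsen and Landauer’s problem-discovery model, in which the proportion of usability problems found by n testers is 1 − (1 − L)^n, where L is the probability that a single user encounters any given problem (about 31% on average in Nielsen’s data). Here is a rough sketch of that curve in Python, assuming the average L:

```python
# Nielsen & Landauer problem-discovery model:
# share of problems found by n users = 1 - (1 - L)**n,
# where L is the chance that one user surfaces a given problem.
L = 0.31  # Nielsen's reported average; any given site will vary

for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n} users: ~{found:.0%} of problems found")
```

With L = 0.31, five users surface roughly 85% of the problems; beyond that, each additional tester mostly re-confirms issues you have already seen, which is why several small rounds beat one large one.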

Recruitment & Testing

In October, we concluded the first round of usability testing. We recruited in early Fall semester using fliers, social media posts, and digital signage. Interested undergraduates submitted their names via a Google Form, and Jackie coordinated final meeting times over email. Participants received $50 Barnes & Noble gift cards for their time.

Five users participated in usability tests that ran from 35 minutes to an hour. Each session took place in a breakout room in the Libraries’ new Digital Scholarship Center. Our setup included a Mac laptop, mouse, and external monitor so the facilitators could observe as the participant navigated through the usability tasks. The sessions were also broadcast to a library conference room, where other library staff gathered to observe live. We asked observers in the conference room to note usability issues and gave them the opportunity to ask follow-up questions of the participant. Two Code Rascals facilitated the test in the DSC breakout room while at least two others facilitated observation in the conference room. Sessions were also recorded using QuickTime so that we could conduct a more thorough analysis later. While each test highlighted a number of usability problems right away, we also wanted the ability to re-watch the tests in co-viewing sessions, since this method yields rich insights that can be overlooked during a live session.

Participants included:

  • 1 Freshman Biology major
  • 1 Sophomore Film major
  • 1 Sophomore Actuarial Science major/Spanish minor
  • 1 Freshman Secondary Education/English major
  • 1 Sophomore Psychology major/Italian minor

Overview of the Test Protocol

We began with a few basic questions, including:

  • What is your major and department?
  • Have you used the library website to conduct research before?
  • Have you attended any library workshops since you’ve been at Temple?

We then asked participants to complete a series of tasks using the “think-aloud” method, asking them to “literally talk us through what you’re doing and what you’re thinking.” Each participant explored Research Guides freely for 2-3 minutes and gave us their general impressions, completed seven research-based tasks set in different scenarios, and completed an “XO” test in which they circled things they liked and crossed out things they disliked on printouts of a subject guide and a course guide in their major or minor. The full usability script is available here.

The Rascals are still in the midst of analyzing each recording. We’ll report some preliminary findings here, though look for a more thorough analysis and report at the end of the academic year. Next steps for us include designing a second round of usability testing for Spring semester and generating best practices for guide creators based on the findings by early Summer.
