There is a good deal of talk about creating a user experience, but how would you assess that user experience to determine if its design is producing the desired outcome? Intel is a corporation that is developing extensive expertise in creating and evaluating user experiences. Intel is also taking the lead in using ethnographic research techniques to identify and understand how users relate to their products. So I wasn’t surprised to find that a recent issue of the Intel Technology Journal featured an article on the topic of “Assessing the Quality of User Experience.”
While there is some technical complexity to this article (I will need to read it a few more times), it offers some valuable insights into understanding the user experience. For example, for those who might confuse usability with user experience, the authors point out that usability focuses on task efficiency and effectiveness while user experience concerns itself with emotional and perceptual components across time. They define the user experience as emotions, attitudes, thoughts and perceptions felt by the users across the usage lifecycle. Having established what the user experience is, user experience quality is defined as (1) the degree to which a system meets the target user’s tacit and explicit expectations for experience; or (2) the measured level of quality of a particular user experience when compared to a specific target.
The article then proceeds to describe the research established to measure the quality of the user experience. Three quality assessment approaches are described. For example, one of the three involved observing people setting up home networking technology. The data collection routines were extensive, involving interviews, photographs, voice recordings and follow-up probes. After initial in-house work with test subjects, their homes were visited as well. Data analysis seemed a bit complicated, but it was clear that the study was valuable to Intel in discovering “clear gaps in features” that will be “used to help prioritize future requests.”
Now, is it possible to take what appears to be a rigorous assessment process designed to determine if users are having a good experience and apply that to the assessment of a library user experience? That’s going to take more pondering. I do think it could be of value to explore the “emotions, attitudes, thoughts and perceptions felt by [library] users across the usage lifecycle.” Of course, we need to get a better handle on what our product is and how that fits into the concept of a usage lifecycle. If our product is identified as “academic success of the student” or “lifelong learning for the community member,” then the usage lifecycle could be the time during which the student moves from entry to exit (hopefully as a graduate) from the institution, or the time during which a community member has access to the public library. I will be thinking more about this article to develop some better ideas about identifying how libraries could create and manage user expectations – and assess the library user experience as well. It should be more than just asking the users “how are we doing?” on the occasional satisfaction survey.