Improving the Discoverability of our Digital Collections

Doreva Belfiore, Digital Projects Librarian, invests much of her time improving access to digital collections and making these collections findable on the open web. I talked with her recently about how her work helps to enhance the discoverability of Temple’s digital collections.

NT: So what is the goal of what you are doing?

DB: Ideally, we would like the digital collections to reach the greatest number of people in the most unhindered way possible. Recognizing how much time and energy go into the preservation, cataloging, and digitization of these collections, it would be a shame not to make them findable as broadly as possible.

NT: Findable to whom and how?

DB: We currently use Google Analytics to gain insight into the traffic on our library web site. Google Analytics is an imperfect tool, but it can tell us how many people access the collections through open searches on the web. It can tell us whether visitors reached our collections through an open search in a search engine such as Google or through a direct link from a blog or other site.

We can learn how users found our site (whether a search was used, whether they typed in a web address, or whether another site, like Wikipedia, referred them to us).
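
To make this concrete, here is a minimal sketch of how referral data like this might be summarized, assuming a hypothetical CSV export from Google Analytics (the file and column names are illustrative, not our actual reports):

    # Minimal sketch: summarizing how visitors reached the digital collections,
    # based on a hypothetical Google Analytics export. Column names are assumptions.
    import pandas as pd

    sessions = pd.read_csv("digital_collections_traffic.csv")  # hypothetical export
    # Assumed columns: "source", "medium", "sessions"
    by_source = (
        sessions.groupby(["source", "medium"])["sessions"]
        .sum()
        .sort_values(ascending=False)
    )
    print(by_source.head(10))  # e.g., google/organic, en.wikipedia.org/referral, (direct)/(none)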

We can identify our most popular items and see whether there is specific interest in particular sets of materials. However, we have limited information on how exactly our users navigate through the site, or even who they are (e.g., age or gender).

Advanced researchers may find our materials when the archival finding aids produced by SCRC are indexed by Google. Theoretically, we could look at these sources as referrers.

So with small, incremental changes we are able to increase discovery and quantify the effect of those changes through Google Analytics, although we’re not always sure what is attributable to the changes we have made. Increases in site visits could also be a natural spike due to publicity, the growth of the collections, or inclusion in third-party sites and portals such as the Digital Public Library of America.

NT: Other than analytics – are there things we could do to assess discoverability?

DB: Colleagues in public services and special collections report back to us on how users find our materials, although this is anecdotal. They report that some students have trouble finding digital materials from within SUMMON. We don’t know if Dublin Core mappings affect SUMMON indexing, but we’re continually looking at how to get our local digital collections to appear higher in the search results.

NT: Recognizing that this is an ongoing initiative, what are some next steps?

DB: Well, as I said, this is an iterative process. Another venue for discovery is Wikipedia, so providing opportunities for library staff to learn about editing Wikipedia would be a positive step. We recently took part in a Wikipedia “edit-a-thon” here in Philadelphia, and interested librarians were encouraged to participate. Here at Temple we’re in the process of getting an interest group off the ground.

NT: Any final words?

DB: Sure. Improving the discoverability of our digital collections means coming at it from many angles, expecting small results that grow incrementally as we improve:

  • cataloging
  • workflow
  • discoverability
  • organization
  • communication

The important thing is to get different people together at the table, as discoverability requires skills from all over the library.



Report from ARCS: Perspectives on Alternative Assessment Metrics

This month I had the good fortune to attend the inaugural Advancing Research Communication and Scholarship meeting here in Philadelphia. The best panels brought together researchers, publishers, service providers and librarians with contrasting, yet complementary perspectives on how best to communicate and share scholarship.

As an assessment librarian looking at ways of demonstrating the impact of scholarship, I was particularly excited by the conversations surrounding “altmetrics” – alternative assessment metrics. Coined by Jason Priem in 2010, the term describes metrics that are not traditional citation counts or impact factors; instead, altmetrics count “attention” on social media platforms like Twitter. Conference participants debated the limitations and value of these new measures for understanding impact, and explored their use for scholarly networking and for tracking the use of non-traditional research outputs like data sets and software.

Todd Carpenter (National Information Standards Organization) reported on the NISO initiative to draft standards for alternative metrics. Carpenter pointed out that altmetrics will not replace traditional methods for assessing impact, but he asked, “Would a researcher focus on only one data source or methodological approach?” In measuring the impact of different formats, we need to ensure that everyone is counting things in the same way. The initiative’s first data-gathering phase has resulted in a whitepaper synthesizing hundreds of stakeholder comments and suggesting potential next steps toward a more standardized approach to alternative measurements. For more information about the initiative and to review the whitepaper, visit the project site at http://www.niso.org/topics/tl/altmetrics_initiative/

SUNY Stony Brook’s School of Medicine has been very aggressive in its approach to making researcher activity and impact visible. Andrew White (Associate CIO for Health Sciences, Senior Director for Research Computing) spoke about the School’s use of altmetrics in building faculty “scorecards,” standardized profiles showing education, publications, and grants activity. White sees altmetrics as a way of enriching these faculty profiles, providing evidence of impact in areas not anticipated, like citations in policy documents. Through altmetrics they’ve unearthed additional media coverage (perhaps in the popular press) as well as evidence of the global reach of their faculty and clinicians.

Stefano Tonzani (Manager of Open Access Business Development at John Wiley) provided us with a publisher’s perspective. Wiley has embedded altmetrics data into 1,600 of its online journals. This can be a marketing tool for editors, who “get to know their readership one by one,” as well as a draw for authors, who like to know who is reading, and tweeting about, their research. Tonzani suggests that authors use altmetrics to discover and network with researchers interested in their work, or to understand more deeply how their paper influences the scientific community.

On the other hand, according to the NISO research, only 5% of authors even know about altmetrics! As librarians, we have a good deal of work to do in educating faculty and students about developing and attending to their online presence in order to broaden the reach of their scholarly communication.


Library Assessment Goings-on in the Neighborhood

Philadelphia-area librarians practicing assessment have a rich resource here in our own backyard. PLAD (the Philadelphia Library Assessment Discussion) is a network of librarians who meet in person and virtually to connect with and learn from one another.

We met this month at Drexel’s Library Learning Terrace to share lightning (five-minute) presentations on a range of topics, from using a rubric to assess learning spaces to conducting ethnographic research with students.

John Wiggins (Drexel) started us off with Tuning Our Ear to the Voice of the Customer. Drexel gathers data from students in a variety of ways, from a link on the library website to regular meetings with student leadership groups. A strong feedback loop lets the library know what students are happy, concerned or confused about – like why the library’s hours changed – and provides an opportunity for library staff to let students know why those decisions were made. (The hours change was based on transaction data.)

Merrill Stein (Villanova) spoke about Villanova’s regular surveys of faculty and students and the challenge of writing questions that are clearly understood. While asking the same questions over time allows for trend analysis, sometimes we need to ditch the questions that just don’t work.

For a school with a small staff, Bryn Mawr’s librarians are very active in assessment. Melissa Cresswell spoke about using mixed methods (participatory design activities with students, photo diaries) to gather feedback on how library space is used. While they make changes to the space incrementally, the rich qualitative data is used over and over, and although the sample sizes are small, themes emerge. The overarching theme is that students want to be comfortable in the library.

Olivia Castello (Bryn Mawr) presented her research on the impact of the flipped classroom on library instruction. She wanted to know if this model, where students are provided with a tutorial as “homework” prior to their session at the library, had an impact on student success with an information literacy “quiz” after the session. While she’s found a correlation between  the tutorial and student success, she’ll need a more controlled study to demonstrate real causation.

Danuta Nitecki (Drexel) introduced us to a rubric designed by the Learning Space Collaboratory for assessing learning spaces. We talked about which behaviors signify active learning and the kinds of space and furniture that best facilitate those behaviors.

My own presentation was on cultivating a culture of assessment – how challenges can be turned into opportunities. I used this blog as an example of one such opportunity: Assessment on the Ground serves as a vehicle for sharing best practices and generating a conversation about library assessment in all its forms.

Marc Meola (Community College of Philadelphia) introduced us to Opt Out, the national movement against standardized testing. He suggested that for instruction assessment, frequent low-stakes testing may be a better method, one where students and teachers see the results of tests and can learn from them.

ACRL’s Assessment in Action program has a growing presence in our community, as evidenced by two presentations related to that initiative. Caitlin Shanley (Temple Libraries’ own Instruction Librarian and Team Leader) spoke to the goals of the program: helping librarians build relationships with external campus partners and become part of a cohort of librarians practicing assessment. Elise Ferer’s (Drexel) project proposes to improve our understanding of how Drexel’s co-op experience relates to workplace information literacy. The proposal builds on the strong relationship between Drexel’s library and the career center.

The meeting also provided ample opportunity for small group conversations about the presentations. We discussed the iterative nature of assessment – from tweaking a survey from year to year to using mixed methods for a more robust picture of user experience. We are all challenged to design good surveys and have come to recognize the limitations, at times, of pre-packaged surveys for informing local questions. All of us struggle with the impact of increasing survey fatigue. Perhaps this is an opportunity to be more creative in how we do assessment. In generating and fostering that creativity, the PLAD group provides an excellent and fun way of supporting and learning from our colleagues.

Thanks to everyone who participated!


How Are Our Books Being Used?

Last week’s final Assessment in the Real World workshop began with a report on book circulation – a quarter million data points detailing the circulation of books published in English and purchased from 2003 to 2013. Fred Rowland (RIS librarian for classics, religion and philosophy) has been working with this data for some time. The workshop drew the largest crowd yet for the series (18 of us crowded into Room 309), with representation from Cataloging, Acquisitions, Access Services, HSL, SEL, SCRC, and RIS. We had a lively discussion on data, collection use, and decision-making.

Fred’s initial interest was to learn about patterns of use in the collection. He wanted to know whether the books we purchase, through approval plans or firm orders, are getting checked out. He assumed that use would be a reasonable measure of success for his collection development decisions.

Working with Brian Schooler, Carla Cunningham and Angela Cirucci to develop the query, extract the report and clean up the data, Fred was able to get some answers to his questions. He calculated that of the books purchased during the ten years from 2003 to 2013, over 55% had circulated at least once by the summer of 2014. Indeed, of the books purchased during the 2003–2004 academic year, over 78% had circulated by the summer of 2014. He was also able to determine that books have a use “life cycle” that varies by discipline. While most books peak in use during the first five years after purchase, art books show use that is more sustained over time. This makes sense, given the nature of browsing in that discipline.
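
A minimal sketch of the kind of calculation behind these figures, assuming a hypothetical extract of purchase and circulation records (the file and column names are invented for illustration):

    # Minimal sketch: percent of purchased books that circulated at least once,
    # by purchase year and by discipline. File and column names are assumptions.
    import pandas as pd

    books = pd.read_csv("purchases_2003_2013.csv")  # hypothetical extract
    # Assumed columns: "purchase_year", "discipline", "total_checkouts"
    books["circulated"] = books["total_checkouts"] > 0

    by_year = books.groupby("purchase_year")["circulated"].mean().mul(100).round(1)
    print(by_year)  # share of each purchase-year cohort that has circulated

    by_discipline = books.groupby("discipline")["circulated"].mean().mul(100).round(1)
    print(by_discipline.sort_values(ascending=False))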

But as is often the case, the analysis led to more questions! While these data were interesting, we also discussed their limitations for really understanding patterns of collection use. For instance:

  • Is circulation even a good metric for determining the value of a research library’s collection?
  • How does use of e-books affect the circulation of print, and does this effect vary by subject area?
  • Are there differences in use by patron type and by discipline?
  • Is there a relationship between use of interlibrary loan services and local collection use?
  • How does the availability of electronic book records in the catalog factor into how our print collection is used?
  • And the big question: How can we best spend our limited collection funds effectively, balancing electronic and print in ways that make sense?

We recognized that there are limitations in getting data from this one source and analyzing patterns in a vacuum. Interlibrary loan (ILLiad) and PALCI use statistics should be taken into account, as well as e-book usage statistics and use of reserves (Blackboard systems). That’s four additional sets of numbers and a much more complex picture.
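
To give a sense of what that more complex picture might involve, here is a minimal sketch of joining several usage sources on a shared bibliographic identifier (the file names, columns, and join key are assumptions, not our actual systems):

    # Minimal sketch: combining print circulation with other usage sources.
    # File names, columns, and the "bib_id" join key are assumptions.
    import pandas as pd

    circ = pd.read_csv("circulation.csv")    # assumed columns: bib_id, total_checkouts
    ill = pd.read_csv("ill_requests.csv")    # assumed columns: bib_id, ill_requests
    ebook = pd.read_csv("ebook_usage.csv")   # assumed columns: bib_id, ebook_views

    usage = (
        circ.merge(ill, on="bib_id", how="left")
            .merge(ebook, on="bib_id", how="left")
            .fillna(0)
    )
    # Titles that never circulated in print but show e-book or ILL activity
    hidden_use = usage[(usage["total_checkouts"] == 0) &
                       ((usage["ebook_views"] > 0) | (usage["ill_requests"] > 0))]
    print(len(hidden_use), "titles with non-print activity only")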

It’s clear that simple cost-per-use analysis is not sufficient for making decisions about collections. Here at Temple, librarians are using a variety of methods to inform themselves about collection use, from tracking faculty research needs and aligning purchases with the curriculum to keeping an eye on the sorting shelves for use trends.

The discussion proved a very useful opportunity for sharing information about what data was available, its limitations, and strategies library staff are using to make informed decisions as they purchase and retain information resources.


Gathering Feedback on the Library’s Public Programs

The latest Assessment in the Real World discussion focused on the feedback survey that goes out to attendees at the Libraries’ Beyond the Page public programs. Eleven library staff members participated in a discussion with Nicole Restaino (Manager, Library Communications & Public Programming) about the survey, the results, and the challenges of developing an assessment tool that collects feedback for actionable, data-driven decision-making.

Nicole wants to learn several things about who attends the programs:

  • Who are they? Undergraduate or graduate student? Faculty? Staff? Community members?
  • Why do they come? Do they receive class credit? Is it a topic that interests them or have they heard of the speaker?
  • What time of day works best for their schedule?
  • How do they learn of the program? From their instructor? A listserv? A poster at the Library?

This last question is important, since it helps Nicole make decisions about where to post announcements, what formats work for which audiences, and how to optimize her limited “marketing” budget. If students hear about a program through the radio, then it makes sense to pay for spots there.

The current data (1,000+ responses going back to Fall 2012) indicate that 62% of our attendees are undergraduates. Thirty-one percent learn about a library program from their instructor, and 39% attend because they receive extra credit or are required to attend for a class. This is not unexpected, since Nicole works with faculty, particularly in Gen Ed and at Tyler, to develop programs that support the curriculum.
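
As an illustration of how quickly this kind of tally can be produced once the paper forms are entered, here is a minimal sketch (the spreadsheet and column names are hypothetical):

    # Minimal sketch: turning categorical survey responses into percentages.
    # The file and column names are assumptions, not the actual survey export.
    import pandas as pd

    responses = pd.read_csv("beyond_the_page_feedback.csv")  # hypothetical export
    # e.g., a "status" column: Undergraduate, Graduate, Faculty, Staff, Community
    print(responses["status"].value_counts(normalize=True).mul(100).round(0))
    # and a "heard_from" column: Instructor, Listserv, Poster, Website, Radio
    print(responses["heard_from"].value_counts(normalize=True).mul(100).round(0))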


Achieving a good survey return rate is a challenge. We talked about some ways to increase the response rate. The personal “touch” is effective at Blockson, where the short survey is passed out by hand. That space is also more confined and the audience is typically more community-based – factors that might also lead to a better feedback response. What if we added an online form as a way of gathering feedback? The response rate after instruction workshops is excellent, although those students are in a classroom environment and may feel more incentive to complete the evaluation. Would we get a better response rate if the form were shorter?

We talked about what questions work best and which ones don’t lead to actionable data. And since learning is an important goal for these programs, we’ve added a question related to what was learned.

The compilation and categorization of the paper surveys is time-consuming, but it provides us with a story to tell that’s backed by numbers. Of over 1,000 respondents, more than 96% would attend another program at Temple University Libraries. That’s popular programming.



Assessment in the Real World

This post was made possible by the excellent notes and input from Laura Chance, temporary art librarian at Paley. Thanks, Laura!

Last Friday 12 librarians gathered to talk about feedback received from students participating in the Analytical Reading and Writing library workshops. Our “text” for analysis consisted of several thousand comments in response to questions, including:

  • What is one new thing you learned in today’s workshop?
  • Which part(s) of the research process are you still nervous about?

The session was the first in the series of “real world assessment” meetings for library staff, facilitated by Caitlin Shanley and Jackie Sipes, designed to practice a hands-on assessment exercise in a group setting.

The AR&W workshops provide multiple opportunities for learning assessment, from the clicker responses, to the worksheets students complete during the session, to the online feedback form students submit at the end of two instruction sessions. The data generated via this form was the topic of the Real World Assessment workshop with librarians.

At the beginning of the year, Caitlin and Jackie tasked themselves with evaluating the existing feedback form, with an eye toward getting beyond the “Did you like us?” questions and probing the more complex question of what learning takes place through the instruction workshops. They considered the essential learning outcome for the class, “How do you evaluate the credibility of a source?” Ideally, the assessment process should help us understand whether students are grasping that concept.

The discussion touched on the challenges of learning assessment.

  • Is it meaningful when students can essentially regurgitate information shared with them during the session immediately prior?
  • Could there be a pre- and post- test?

Recognizing that the assessment of learning is difficult to do immediately after a session, we asked:

  • How do we measure the impact of our workshops outside the workshop itself?
  • What if we could look at the bibliographies students include as part of their papers? A challenge is that instructors evaluate the papers differently, depending on their own vision for the goals of the class. The first-year adjuncts grade papers as part of a peer process – what if we could sit in on these sessions as they talk about normalizing the paper grades?
  • What are the ethics (privacy, sampling, providing differential services) for students who participate in studies like these?

Designing the Assessment Instrument

It also became clear that the assessment instrument dictates the kind of analysis that can be conducted. The feedback consisted of quantifiable data (limited-option questions) and more qualitative, free-text data. Analyzing free text can be more difficult, and quantifying text, using a tool like Voyant, can be interesting but may not be meaningful since, again, students tend to repeat back the language provided by the instructor.
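
For a sense of what that quantification looks like, here is a minimal sketch of a simple word-frequency count over free-text responses, the sort of tally a tool like Voyant automates (the sample comments and stop-word list are invented for illustration):

    # Minimal sketch: word frequencies in free-text feedback.
    # The sample comments and stop words are illustrative only.
    import re
    from collections import Counter

    comments = [
        "I learned how to narrow my search with subject terms",
        "I am still nervous about citing my sources",
    ]
    stop_words = {"i", "the", "to", "a", "about", "am", "with", "my", "how", "still"}

    words = Counter()
    for comment in comments:
        tokens = re.findall(r"[a-z']+", comment.lower())
        words.update(t for t in tokens if t not in stop_words)

    print(words.most_common(10))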

Our session generated more questions than answers, but the discussion brought home important issues for those of us engaged in designing an assessment of learning. We learned, in a practical way, that:

1. Assessment is an iterative process

2. It can be hard to know how best to approach it until you gather the data and look at what you get.

3. Defining what you want to learn is essential before you begin, but that might be different depending on your role and/or how you plan to use the data that you collect.

One way or the other, it was a fun and useful meeting – and a great inaugural workshop for the Assessment in the Real World series.


There’s a Blizzard but Library Assessment Slogs On

It’s good to be back home from the ALA Midwinter conference. Chicago was not only windy last weekend but also snowy and awfully cold! Well, it is winter…and librarians cannot be stopped.


The meetings are always informative and it’s great to catch up with what other libraries are thinking about and doing in the field of library assessment.

ARL Library Assessment Forum

At the ARL-hosted Library Assessment Forum there was lots of discussion, and some fretting, about the new procedure for submitting library statistics annually as part of the university-wide IPEDS (Integrated Postsecondary Education Data System) statistics program. The concern is over instructions for library data that don’t fully align with the standardized definitions used elsewhere (ARL, for example, or COUNTER for e-usage). There was positive discussion and practical advice about how best to handle this situation (thanks to David Larsen and Elizabeth Edwards at the University of Chicago and Bob Dugan at the University of Florida). Over time, and with input from librarians to IPEDS, we can help provide clearer definitions and more meaningful data for analyzing trends with this widely used tool for college data. This is one way the Library Assessment Forum supports information sharing among professionals in the library data world.

So what should we be counting?

Related to this topic of collecting meaningful statistics, Martha Kyrillidou updated us on several current ARL initiatives. After conducting an extensive listening tour with library directors, new ARL head Elliott Shore proposed that “libraries shift their assessment focus from description to prediction, from inputs to outputs, from quantity to quality.” Library directors suggested some interesting new measures that would support the case they make to their institutions for funding. How about a:

  • Collaboration index
  • Enterprise fit index
  • Cost-avoidance index (Temple Libraries’ open educational resources (OER) program would fit in nicely here)

Library Interest in Qualitative Methods of Assessment

To balance out the numbers-oriented approach to assessment, I also attended (and convened) the ACRL Assessment Discussion Group. There is currently a good deal of interest in the use of the personas method for understanding user needs. Personas are a way of putting a face on a user type (Ken, the tenured economics professor, or Stacy, the undergraduate art major). Grounded in real data, personas may be developed through focus groups or interviews with users; that research is compiled into a set of “archetypes,” or library user types. They can help the Library explore the user experience from multiple perspectives.

  • What path would Ken take when looking for a journal article on the library’s web site?
  • What path would Stacy take when searching for a book on architecture at the Library?

Libraries are using the persona method to develop new services and to tell compelling stories about how the Library is used. Cornell was one of the first libraries to use this method (http://ecommons.cornell.edu/bitstream/1813/8302/2/cul_personas_final2.pdf) in designing its web site, but it’s been used as well by the University of Washington, BYU, and DePaul. Exciting.

Related to wayfinding, Florida State recently gave students ProCams to document their search for materials in the stacks. The recordings (both video and audio) pretty quickly exposed the problems students had with navigation. For staff, it was eye-opening to see for themselves the (sometimes) utter confusion students experienced between the catalog and the shelf. That recognition of a problem is the first step in making improvements.

For more information on any of these items, do not hesitate to ask!



Faculty Seeking Course Content – A Qualitative Research Project

Many Temple Libraries staff, particularly those in RIS, are already familiar with the qualitative research project that Jenifer Baldwin, Anne Harlow and Rick Lezenby have been working on for the last two years. (Yes, qualitative research, at this depth, takes time!) They have already presented on their work several times at Temple Libraries and conducted a workshop on their interviewing approach at the Maryland Library Association conference. Currently, they are wrapping up the data collection phase, which included interviews with faculty and co-viewing sessions with peer librarians. Using a grounded theory approach, they’ve been analyzing their data all along. Now they’re ready to talk about it.

NT: What was your question?

JB, AH, RL: Our first question related to how faculty used Blackboard to provide course content to students. As we interviewed participants, our question evolved into a more general exploration of how faculty choose and share course content.

This content is predominantly course readings, from published articles or book chapters to lecture and lab notes, or reflections. We became interested in how faculty understand their own expertise and how they model that to their students.

NT: What method did you use and why?

JB, AH, RL: We were inspired by the work of Nancy Foster at the University of Rochester. I (Jenifer) took a CLIR workshop on the ethnographic method at MIT along with Peter Hanley from Instructional Support. Then we hosted a CLIR workshop here at Temple, so Anne and Rick were able to take advantage of this training as well. We already had lots of quantitative information here at Temple; we wanted to see what the qualitative research process would look like. We wanted to use in-depth, semi-structured interviews with faculty about their work practice. We had a method and we needed to find a question to apply it to!

Once we developed our questions, we went through the Institutional Review Board for approval to conduct the research. In the interview we asked the faculty member to show some recent examples of materials they used for a class. We would ask them to talk through the process of deciding what content to use and to demonstrate how they located the materials and organized them. We asked them what they expected the outcome would be for their students. Each interview was videotaped, and we then conducted co-viewing sessions with librarians.

We had the recordings transcribed and we coded these (using Atlas.ti) for themes that emerged from the interview texts. We employed a grounded theory approach – where the theory emerges from the research rather than applying the research to a pre-formed theory. The co-viewing process became part of the data and analysis as well. For instance, one of the products was a spreadsheet of suggestions of initiatives and projects that might be developed out of needs expressed in the interviews.

NT: Tell me about your results. What did you learn?

JB, AH, RL: Several themes emerged. One is the tacit knowledge that scholars bring to their work. There is a lot that they know about their discipline that they might have trouble articulating to a student new to that discipline. When asked how they knew to include a particular article, they might say, “Well, I just know.” Often they find out about something in what seems to them a serendipitous way, but in fact they are predisposed to the literature, part of an invisible college, attuned to the environment that relates to their work.

So they might say to a student, “just go and browse the shelves” – a behavior that works for them because they know what to look for. But a student might have a very different, less successful, experience with this kind of “browsing.” Faculty try many ways of modeling these “expert” behaviors for their students.

We heard lots and lots of stories about faculty experiencing serendipitous discovery. The use of video and popular culture is pretty ubiquitous, so faculty might get ideas for class by watching cartoons or movies, at a social event, wandering a bookstore, or reading the newspaper or magazines.

We learned some interesting things about expectations for reading. There are differences between the kind of reading expected of undergraduates and of graduate students, and they were not what you’d intuit. Undergraduates are expected to read in a more transformative and analytical way; graduate students need to read more broadly. If something is really important, it would be distributed in class on paper, perhaps even read together.

NT: Is there anything you’ll change based on your findings?

JB, AH, RL: We hope to have a full discussion of implications and ideas for service initiatives and outreach this spring, as one of several products of the project. The co-viewing process provided us with practical ideas for outreach and push notifications, for example. Most faculty talked about new book announcements that had an impact on them. This led us to ask what other kinds of things we could push.

Faculty consult with their peers for ideas about content. If librarians have well-grounded relationships with faculty, faculty will take our suggestions seriously as well.

Another example – we saw a need to create easily and rapidly accessible resources for students who are also practitioners in the field – resources that they can readily access when they themselves are in a classroom or other practitioner setting. We think we might be able to help out in this area.

NT: If you did this again, what would you do differently?

JB, AH, RL: Atlas.ti, which we used for textual analysis and coding, was cumbersome, particularly in a networked environment. We’d like to interview a broader group of faculty, a less “self-selected” group of participants. And our library staff is currently small and stretched, so we don’t have as much time as we’d like to focus on projects such as these. But the rewards are worth it.


Gathering Patron Feedback at the Charles L. Blockson Afro-American Collection

This month I met with Diane Turner, Curator of the Charles L. Blockson Afro-American Collection at Temple University Libraries. This post illustrates the idea that assessment doesn’t have to be complicated to be useful and it doesn’t need to take a lot of time. It can serve as a gauge of program success and audience engagement, as well as demonstrate learning and provide feedback for future planning.

The Blockson Collection hosted a two-day symposium as part of the city-wide festival of the Underground Railroad held in Philadelphia this October. The sessions at Blockson included lectures, panel discussions and musical performances.

Prior to the symposium, I met with Diane to talk about ways she might assess the effectiveness of the program in terms of one of the Blockson Collection’s key goals: “To contribute to the education of the Temple University community and general public about African-American history and culture, particularly the Black experience in Philadelphia and Pennsylvania.” As curator of this significant collection, Diane also wanted to get feedback and suggestions for other types of programs and topics that would be of interest.

Assessing learning outcomes can be tricky, often requiring pre- and post-tests. We did something a little less complicated. Diane designed a simple half-sheet feedback form to distribute to program participants and asked them directly about what they learned. We won’t be sending attendees a follow-up quiz, but their responses to this question provide excellent documentation of the key takeaways, what surprised participants, and what they’d like to learn more about. It was clear from the enthusiastic survey responses that attendees gained new knowledge and that the program inspired many to learn more – as evidenced by these responses to the question, “What did you learn?”

“Dr. Blockson taught me a lot about the various people who were involved in the Underground Railroad but aren’t mentioned much in history.”

“Have to review my 10 pages of notes to answer this.”

“More than I can write. I have so much reading to do”

Participants had many suggestions for what they’d like to see in future programming. Workshops on genealogy came up more than once, as well as themes related to Dr. Blockson’s talk – the lesser-known history of African Americans, particularly early American history (18th-19th century) and Philadelphia’s role in the Underground Railroad.

The feedback tool was simple and straightforward, and because participants were particularly engaged with the program, 73 of the 104 participants returned surveys – an excellent return rate of about 70%.

Still, we are always learning better ways to phrase questions. In this case, the question, “Where are you from?” did not yield the expected responses, i.e. institutional affiliation. The lesson learned is to be specific about what type of information is required. If geographic location is important (as it might be in a survey like this), asking for a zip code provides more useful information for understanding reach into the community.


So What’s Up with that Assessment Committee?

The Assessment Committee at Temple University Libraries has been active now for almost six months. I’d like to update you on what the Committee has been doing since Joe Lucia charged this group with “providing advice, support, and the development of projects and initiatives related to measuring, evaluation, and the demonstration of value across all areas of the TUL enterprise.” It’s a big charge and a big group, with members from all units of the Libraries including Law, Ambler, Blockson, SCRC, Paley, SEL and Health Sciences/Podiatry.

One of our first orders of business was to conduct a “data audit” – identifying the (mostly) quantitative data that is currently being collected on a routine basis throughout the Libraries. The great thing is that we’re all collecting data, and lots of it. From statistics on electronic journal usage to interlibrary loans to use of library computer workstations to gate counts to feedback from instruction sessions to library web site traffic – the list goes on and on.

But unsurprisingly, these data sets are stored in different areas, neither centralized nor widely accessible. Similar data, like reference transactions, are collected in various ways – data entry via the web, manual record keeping, spreadsheets. The Assessment Committee is reviewing this environment with an eye towards making data collection, storage and access more standardized, systematic and easy to do.

We’ve engaged in several brainstorming or “visioning” sessions in which we work in small groups to address big questions:

  • If we had no restrictions on money or time, what would a library data repository look like and what would it do?
  • If we could know anything about our patrons and their use of the library, what would we want to know?

We’ve also explored the question of what metrics and assessment methods need to be in place to evaluate our effectiveness as an organization – using the strategic actions document as a starting place. This is a tough one. It’s easy to count circulation, but much harder to measure our impact on faculty awareness of new methods for scholarly communication.

As for actual assessment taking place, we’re piloting two patron surveys this month. A customer satisfaction survey is helping us learn how patrons perceive our service at public desks. The second survey is a follow-up to research consultations, in which patrons are sent an online survey one week after meeting with a librarian. This will help us learn whether students are retaining the research skills used during the consultation.

Upcoming blog posts will profile additional assessment underway here at Temple, including:

  • SEL’s focus group session with engineering students to learn how the Library contributes to their success here at Temple
  • Caitlin Shanley’s ACRL Assessment in Action project on the effectiveness of library instruction towards student learning
  • RIS’s qualitative research project on faculty research assignments
  • DLI’s assessment of tools for improved discoverability of digital library resources

The Assessment Committee has been instrumental in cultivating a culture of assessment here at TUL. Just as important is the participation and engagement of all staff in our efforts. If you have an idea, a comment or a question, please contact me or one of our members!

Nancy Turner, Chair

Jenifer Baldwin, RIS

Steven Bell, RIS

Lauri Fennell, HSL

Leanne Finnigan, CAMS

Eugene Hsue, Law

Doreva Belfiore, DLI

Jessica Lydon, SCRC

Nicole Restaino, Communication/Events

Brian Schoolar, Collections

Cynthia Schwarz, LTS/HSL

Caitlin Shanley, RIS

Gretchen Sneff, SEL

Diane Turner, Blockson Collection

Sandi Thompson, Ambler

John Oram, Access

