3. Administrative Strategies for Libraries

Josh Welker edited this page Jun 18, 2015 · 20 revisions

Summary

This section discusses management strategies libraries can use to build effective websites.

Development Cycles and Agile Methodology

Sources


Bordac, Sarah, and Jean Rainwater. 2008. "User-Centered Design in Practice: The Brown University Experience." Journal Of Web Librarianship 2, no. 2/3: 109-138. Library, Information Science & Technology Abstracts with Full Text, EBSCOhost (accessed June 9, 2015).


Chang, May. 2010. An Agile Approach to Library IT Innovations. Library Hi Tech 28, no. 4: 672-689.


Critchlow, M., Friedman, L., & Suchy, D. (2010). Using an Agile-based Approach to Develop a Library Mobile Website. Code4lib Journal, (12), 1-8.


Cunningham, Ward. 2001. Agile Manifesto. Retrieved from http://agilemanifesto.org/


Ellis, S., & Callahan, M. (2012). Prototyping as a Process for Improved User Experience with Library and Archives Websites. Code4lib Journal, (18), 1-14.


Forsman, D. (2014). Introducing agile principles and management to a library organization. IATUL Annual Conference Proceedings, (35), 1-11.


Loranger, Hoa. 2015. Radical Redesign or Incremental Change? Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/radical-incremental-redesign/

Use Cases and Personas

Sources


Becker, Danielle A., and Lauren Yannotta. "Modeling a Library Website Redesign Process: Developing a User-Centered Website through Usability Testing." Information Technology and Libraries (Online) 32.1 (2013): 6-22.


Bedford, Aurora. 2014. Segment Analytics Data Using Personas. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/analytics-persona-segment/


Bedford, Aurora. 2015. Personas Make Users Memorable for Product Team Members. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/persona/


Butow, Eric. 2007. Analyzing Your Users. In User Interface Design for Mere Mortals: A Hands-On Guide to User Interface Design (Software-Independent Approach). Boston: Addison-Wesley.

The entire five-step Goal-Directed Design Process combines ethnography... research, modeling, and design into five phases, in the following order:

  1. Research - This phase uses observational and contextual testing as well as interviews to learn more about potential and actual users of the product. One of the primary outcomes of research is the discovery of usage patterns, which suggest the goals and motivations for using the product...
  2. Modeling - After the research is completed, the modeling phase analyzes the research for user and workflow patterns, and from that creates user models based on those patterns. Those models are based on groupings of user goals, motivations, and behavioral patterns. From these user models, or personas, the project team determines how much influence each persona will have on the interface design...
  3. Requirements - In this phase, the project team creates requirements that meet the needs of one or more of the personas you identified in the modeling phase...
  4. Framework - Designers create an interaction framework that produces a structure for the program so they can add the remainder of the code later. This framework melds general interaction design principles with interaction design patterns to create a flow and behavior for the product...
  5. Refinement - This phase refines the framework and includes detailed documentation of the design as well as a form and behavior specification. This phase defines what the design should do to meet the goals of each persona identified in the Modeling phase as well as the business that employs the persona.

p144-145

Personas connect three different dimensions of information into one cohesive persona:

  • Demographics - This segments some of the persona features. For example, demographic data shows such data as the user's gender, location, and income.
  • Psychographics - This segments some of the persona needs and determines questions that each persona may ask. For example, a spontaneous type and a competitive type will ask different questions and will want different types of information.
  • Topology - This allows you to segment by determining how complex the persuasion process is; that complexity is based on a customer's perceptions and experiences.

p148
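
Butow's three dimensions map naturally onto a simple record type. Below is a minimal sketch in Python; the field names and the example library persona are illustrative assumptions, not taken from the book.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One persona combining the three dimensions described above."""
    name: str
    # Demographics: who the user is (gender, location, income, ...)
    demographics: dict = field(default_factory=dict)
    # Psychographics: needs, decision style, and the questions this
    # persona will ask (a spontaneous type asks different questions
    # than a competitive type)
    decision_style: str = "methodical"
    typical_questions: list = field(default_factory=list)
    # Topology: how complex the persuasion process is for this persona,
    # based on the customer's perceptions and experiences
    persuasion_complexity: str = "low"   # "low" | "medium" | "high"

# Hypothetical library persona for illustration only:
commuter = Persona(
    name="Commuter Student Carla",
    demographics={"location": "off campus", "income": "low"},
    decision_style="spontaneous",
    typical_questions=["Can I renew my books online?"],
    persuasion_complexity="medium",
)
```

Keeping the three dimensions as explicit fields makes it easy for a project team to compare personas side by side when deciding how much influence each one should have on the design.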


Krug, Steve. (2014). Don't Make Me Think, Revisited: a Common Sense Approach to Web Usability. Berkeley, CA: New Riders.

One of the very few well-documented facts about Web use is that people tend to spend very little time reading most Web pages. Instead, we scan (or skim) them, looking for words or phrases that catch our eye... We're usually on a mission. Most Web use involves trying to get something done, and usually done quickly.

p22


Krug, Steve. (2009). Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems. Berkeley, CA: New Riders.

“Do-it-yourself” usability tests are definitely qualitative. The purpose isn’t to prove anything; it’s to get insights that enable you to improve what you’re building.

As a result, do-it-yourself tests can be much more informal and, well, unscientific. This means you can test fewer users (as long as you get the insights you need), and you can even change the protocol mid-test. For instance, if the first participant can’t complete a particular task and the reason why is obvious, you can alter the task—or even skip it—for the remaining participants. You can’t do that in a quantitative test because it would invalidate the results.

Basically, a facilitator sits in a room with the participant, gives him some tasks to do, and asks him to think out loud while he does them.

There’s no data gathering involved. Instead, members of the development team, stakeholders, and any other interested parties observe the session from another room, using screen sharing software. After the tests are finished, the observers have a debriefing session where they compare notes and decide what problems should be fixed and how to fix them.

Ch1, section "So, what's 'Do-It-Yourself Usability Testing'?"

Here's the best advice I can give you about when to test: Start earlier than you think makes sense.

Your natural instinct will be to wait, which is the worst thing you can do. There’s an inherent paradox: the worse shape it’s in, the less you want to show it—and the more you can benefit if you do.

Ch4, "What do you test, and when do you test it?: Why the Hardest Part is Starting Early Enough"

I’m not saying that you shouldn’t try to recruit people who are like your actual users. When you do need “actual users,” by all means get them. I’m just saying don’t obsess about it. For some sites you’ll have no problem finding actual users, but for others it can make the process much more time-consuming and costly—and it’s not always necessary.

Yes, there are things you can learn only by watching a target audience use the site. But there are many things you can learn by watching almost anyone use it. When you begin doing usability testing, your site will probably contain a lot of serious problems that “almost anybody” will encounter, so you can recruit much more loosely in the beginning. As time goes on, you’ll want to lean more in the direction of actual users. But even then I would try to recruit one “ringer” in each round.

I also find that people who aren’t from your target audience will sometimes reveal things about your site that you won’t learn from watching “real” users, just because they have an outsider’s perspective—the emperor’s new clothes effect. And I’d rather have one articulate outsider with reasonable common sense who’s comfortable talking than ten “real” users who are tense, quirky, etc.

I've had a motto about recruiting for years: Recruit loosely and grade on a curve.

What this means is try to find users who reflect your audience, but don’t get hung up about it. Instead, try to make allowances for the differences between the people you test with and your real users.

When a participant has a problem, just ask yourself: Would our users have that problem? Or was it only a problem because the participant wasn’t familiar with the jargon or didn’t know the subject matter—a problem we’re sure our actual users wouldn’t have?

Ch5, Section "Who do you test with?"

The first step is to jot down a list of the most important tasks that people need to be able to do on your site.

Ch6, Section "First, come up with a list of tasks"

Once you’ve decided which tasks people are going to do, you have a writing job ahead of you: converting the simple description of the task into a script that the user can read, understand, and follow.

The scenario is like a card you might be handed for an improvisation exercise in an acting class: it gives you your character, your motivation, what you need to do, and a few details.

...

A scenario provides some context (“You are...,” “You need to...”) and supplies information the user needs to know, but doesn’t (e.g., username and password for a test account). Don’t go overboard: trim any detail that doesn’t contribute.

There’s really only one thing that’s hard about this: Not giving clues in the scenario.

You have to phrase it so that it’s clear, unambiguous, and easy to understand, and you have to do it without using uncommon or unique words that appear on the screen. If you do, you turn the task into a simple game of word-finding.

Ch6, section "Make the tasks into scenarios"


Nielsen, Jakob. (2001). Success Rate: The Simplest Usability Metric. In Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/success-rate-the-simplest-usability-metric/.


Nielsen Norman Group. (2014). Availability in the Cross-Channel Experience. In Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/available-cross-channel/.

Usability Testing

Sources


Becker, D. A., & Yannotta, L. 2013. Modeling a Library Website Redesign Process: Developing a User-Centered Website Through Usability Testing. Information Technology & Libraries, 32(1), 6-22.


Boag, Paul. (2015). All You Need to Know About Customer Journey Mapping. In Smashing Magazine. Retrieved from http://www.smashingmagazine.com/2015/01/15/all-about-customer-journey-mapping/.


Bordac, Sarah, and Jean Rainwater. 2008. "User-Centered Design in Practice: The Brown University Experience." Journal Of Web Librarianship 2, no. 2/3: 109-138. Library, Information Science & Technology Abstracts with Full Text, EBSCOhost (accessed June 9, 2015).


Chopra, Paras. (2010). The Ultimate Guide to A/B Testing. In Smashing Magazine. Retrieved from http://www.smashingmagazine.com/2010/06/24/the-ultimate-guide-to-a-b-testing/.

You have two designs of a website: A and B. Typically, A is the existing design (called the control), and B is the new design. You split your website traffic between these two versions and measure their performance using metrics that you care about (conversion rate, sales, bounce rate, etc.). In the end, you select the version that performs best.

Even though every A/B test is unique, certain elements are usually tested:

  • The call to action’s (i.e. the button’s) wording, size, color and placement,
  • Headline or product description,
  • Form’s length and types of fields,
  • Layout and style of website,
  • Product pricing and promotional offers,
  • Images on landing and product pages,
  • Amount of text on the page (short vs. long).

When doing A/B testing, never ever wait to test the variation until after you’ve tested the control. Always test both versions simultaneously. If you test one version one week and the second the next, you’re doing it wrong. It’s possible that version B was actually worse but you just happened to have better sales while testing it. Always split traffic between two versions.
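Chopra's simultaneous-split rule can be sketched in a few lines of Python. Hash-based bucketing assigns every visitor to A or B at the same time (and a returning visitor always sees the same version), and the chosen metric is compared at the end. The function names and traffic numbers here are illustrative assumptions, not from the article.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a visitor into A or B (50/50 split).

    Hashing the user ID splits traffic between both versions
    simultaneously, rather than running A one week and B the next.
    """
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "A" if bucket < 50 else "B"

def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversions divided by visitors; the metric compared at the end."""
    return conversions / visitors if visitors else 0.0

# Illustrative end-of-test comparison (numbers are made up):
rate_a = conversion_rate(120, 2400)  # control:   5.0%
rate_b = conversion_rate(147, 2450)  # variation: 6.0%
winner = "B" if rate_b > rate_a else "A"
```

In a real test you would also check statistical significance before declaring a winner, since a small difference in rates can easily be noise.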


Ellis, S., & Callahan, M. 2012. Prototyping as a Process for Improved User Experience with Library and Archives Websites. Code4lib Journal, (18), 1-14.


Gallant, J. W., & Wright, L. B. (2014). Planning for Iteration-Focused User Experience Testing in an Academic Library. Internet Reference Services Quarterly, 19(1), 49-64.


Krug, Steve. (2014). Don't Make Me Think, Revisited: a Common Sense Approach to Web Usability. Berkeley, CA: New Riders.

Focus groups can be great for determining what your audience wants, needs, and likes--in the abstract... But they're not good for learning about whether your site works and how to improve it.

p113

I think every Web development team should spend one morning a month doing usability testing.

p118

I think the ideal number of participants for each round of do-it-yourself testing is three.

p119

Try to find users who reflect your audience, but don't get hung up about it... In fact, I'm in favor of always using some participants who aren't from your target audience.

p120


Loranger, Hoa. 2015. Practical Advice for Testing Content on Websites. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/testing-content-websites/


Nielsen, Jakob. 2000. Why You Only Need to Test with 5 Users. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
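
Nielsen's article models the share of usability problems found by n test users as 1 − (1 − L)^n, where L is the proportion of problems a single user uncovers (about 31% in his data). A quick calculation shows why five users go a long way:

```python
def share_of_problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Expected share of usability problems found by n test users,
    per Nielsen's 1 - (1 - L)**n model with L ~= 0.31."""
    return 1 - (1 - discovery_rate) ** n_users

five = share_of_problems_found(5)      # ~0.84: five users find ~85% of problems
fifteen = share_of_problems_found(15)  # ~0.996: steeply diminishing returns
```

The diminishing returns after five users are why the article recommends running several small rounds of testing rather than one large one.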


Nielsen, Jakob. 2004. Card Sorting: How Many Users to Test. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/card-sorting-how-many-users-to-test/.


Nielsen, Jakob. 2009. Card Sorting: Pushing Users Beyond Terminology Matches. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/card-sorting-terminology-matches/.


Nielsen, Jakob. 2012. Usability 101: Introduction to Usability. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/usability-101-introduction-to-usability/


Nielsen Norman Group. 2003. How to Recruit Participants for Usability Studies. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/reports/how-to-recruit-participants-usability-studies/.


Pernice, Kara. 2014. Talking with Participants During a Usability Study. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/talking-to-users/


Rohrer, Christian. 2014. When to Use Which User-Experience Research Methods. Nielsen Norman Group: Evidence-Based User Experience Research, Training, and Consulting. Retrieved from http://www.nngroup.com/articles/which-ux-research-methods/

Management and Decision-making

Sources


Bordac, Sarah, and Jean Rainwater. 2008. "User-Centered Design in Practice: The Brown University Experience." Journal Of Web Librarianship 2, no. 2/3: 109-138. Library, Information Science & Technology Abstracts with Full Text, EBSCOhost (accessed June 9, 2015).


Critchlow, M., Friedman, L., & Suchy, D. (2010). Using an Agile-based Approach to Develop a Library Mobile Website. Code4lib Journal, (12), 1-8.


IDEO. (2014). Design Thinking for Libraries: A Toolkit for Patron-Centered Design. Retrieved from http://designthinkingforlibraries.com/terms.html


Krug, Steve. (2014). Don't Make Me Think, Revisited: a Common Sense Approach to Web Usability. Berkeley, CA: New Riders.

I usually call these endless discussions [about web design] 'religious debates,' because they have a lot in common with most discussions of religion and politics: They consist largely of people expressing strongly held personal beliefs about things that can't be proven--supposedly in the interest of agreeing on the best way to do something important.

p105

We tend to think that most users are like us.

p105

The point is, it's not productive to ask questions like 'Do most people like pull-down menus?' The right kind of question to ask is 'Does this pull-down, with these items and this wording in this context on this page create a good experience for most people who are likely to use this site?' And there's really only one way to answer that kind of question: testing.

p109


Paul, A., & Erdelez, S. 2013. Implementation and Use of Web Analytics for Academic Library Websites. World Digital Libraries, 6(2), 115-132.

Content Strategy

Sources


Blakiston, R. b. (2013). Developing a Content Strategy for an Academic Library Website. Journal Of Electronic Resources Librarianship, 25(3), 175-191.


Costello, Deirdre. (2014). "UI Content Strategy: Writing Content for Academic Library Users." EBSCO Information Services, Ipswich, MA. Nov 11 2014. Webinar. http://vimeo.com/111773475


Krug, Steve. (2014). Don't Make Me Think, Revisited: a Common Sense Approach to Web Usability. Berkeley, CA: New Riders.

Removing half of the words is actually a realistic goal; I find I have no trouble getting rid of half the words on most Web pages without losing anything of value... It reduces the noise level of the page. It makes the useful content more prominent. It makes the pages shorter, allowing users to see more of each page at a glance without scrolling.

p49


Nielsen, Jakob and Loranger, Hoa. (2006). Prioritizing Web Usability. Berkeley, CA: New Riders.

Out of respect for your users’ time and reading skills, keep your writing simple and concise. Using sophisticated words won’t make you appear smarter or earn points with your users. Most people prefer a conversational tone to a formal tone because it’s more personal and direct. Match your writing to their reading level to ensure maximum readability.

Don’t overwrite. Superfluous verbiage makes people work unnecessarily hard to find the information they need, and convoluted language and fancy words alienate users; choose short words over long ones. For example, rather than use the term “carcinogenic,” you might choose a simpler yet descriptive phrase such as “causes cancer.”

Ch8, Section "Writing for Your Reader"

People prefer factual language and are turned off by anything that sounds overly promotional or exaggerated. Credibility is important on the Web, and organizations need to work hard to earn and keep it. Highly self-congratulatory statements come across as self-serving, and people are repelled by them.

Ch8, Section "Writing for Your Reader"

Ask yourself whether somebody reading the first two sentences on your page will take away the information you want to convey.

Ch8, Section "Writing for Your Reader"