CMSI 370 Interaction Design, Fall 2017
This assignment aims to give you some firsthand experience with collecting usability measurements and assessing how well a device or system complies with its associated guidelines document.
If the course texts are available to you, the following readings will shore up the current material.
- Norman Chapter 1
- Shneiderman/Plaisant Chapter 2
Perform a small-scale usability study comparing two systems of similar utility but with differing user interfaces (usability). Document your study, its results, and your analysis in a report within this assignment’s repository. Use Markdown (.md) format for the report. A report template is included in the repository so that you have a starting point for the document.
Determine Systems and Organize Groups
We will first organize ourselves into teams of up to five (5) students, then assign each team a type of system or application with at least two functionally comparable products.
We are using a Google Sheet to get organized, with one tab per section. Specify the system types across the first row, then put an x in the column for your chosen system type, in the row with your name, to indicate your group. We are targeting groups of five (5) or fewer; students may belong to only one group. Here is the Google Sheet link:
It is private, shared only with you via the same Google account used to share the course’s screengrab videos.
Select Three Tasks
Select three (3) concrete tasks for your test subjects to perform (e.g., “place a call to (424) 555-1978”). These form the basis of your measurement activities.
Document these tasks in an initial draft of your report and acquire my explicit approval for them. The approval will be recorded as a commit comment on your repository.
Record usability metrics for your assigned system for an appropriately chosen cohort of users. You may take measurements from as many people as you like, classmates or otherwise. Due to our limited time and resources, we don’t require a particular number of users, but aim for at least ten (10)—that seems to be a fairly reachable number given that each team has up to five members.
Choose three (3) metrics from this subset—yes, that essentially means eliminating one metric that you don’t think will work well for your group or your chosen system:
- Learnability—Remember that this applies only to subjects who are not familiar with the particular system they are about to use (interface knowledge) but who understand the tasks in your list (domain knowledge). This metric is the time to accomplish tasks without prior training.
- Efficiency—For this metric, use subjects who are familiar with the system they are about to use (the more proficient, the better). If such subjects are hard to find, you can opt to give them some training and/or practice time so that they gain some level of expertise.
- Errors—Remember that in IxD, an error is not a bug, exception, or crash, but an incident where the user does something whose result is not what he or she expected.
- Satisfaction—We will stay very simple here. Just ask your subjects to rate, on a scale of 1 to 10, how much they enjoyed performing each task on their respective devices or systems.
Quick Self-Check: What metric is not included in this list, and what might be a reason for its omission?
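As a sketch of how your team might tally the recorded measurements, here is a minimal Python example. The session data, system names, and record layout are all hypothetical placeholders—your study will supply the real numbers and whichever three metrics you chose:

```python
# Minimal sketch of tallying usability measurements per system.
# All data below is illustrative only; substitute your study's sessions.
from statistics import mean

# Each record: (system, task, seconds_to_complete, error_count, satisfaction_1_to_10)
sessions = [
    ("System A", "place a call", 42.0, 1, 7),
    ("System A", "place a call", 35.5, 0, 8),
    ("System B", "place a call", 58.0, 2, 5),
    ("System B", "place a call", 61.5, 3, 4),
]

def summarize(system):
    """Average the recorded metrics for one system across all sessions."""
    rows = [s for s in sessions if s[0] == system]
    return {
        "mean_seconds": mean(r[2] for r in rows),
        "mean_errors": mean(r[3] for r in rows),
        "mean_satisfaction": mean(r[4] for r in rows),
    }

for system in ("System A", "System B"):
    print(system, summarize(system))
```

A table of these per-system averages, one row per task, is an easy way to present the raw results in your Markdown report.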
In your submission, report the results of your studies and make a judgment call on which device or system you feel performed best. State and explain the priorities you gave to each metric.
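One way to make that priority-weighted judgment call concrete is a simple weighted score. The weights, summary numbers, and normalization scheme below are illustrative assumptions, not values prescribed by the assignment:

```python
# Hypothetical weighted comparison; the weights and per-system summary
# numbers are illustrative assumptions for demonstration purposes.
summaries = {
    "System A": {"learnability": 38.75, "errors": 0.5, "satisfaction": 7.5},
    "System B": {"learnability": 59.75, "errors": 2.5, "satisfaction": 4.5},
}

# The priorities you chose and justified in your report (sum to 1.0 here).
weights = {"learnability": 0.4, "errors": 0.3, "satisfaction": 0.3}

def score(metrics):
    # Lower is better for time-to-learn and error counts, so invert them;
    # higher is better for satisfaction, so scale it to 0..1 directly.
    return (weights["learnability"] * (1 / metrics["learnability"])
            + weights["errors"] * (1 / (1 + metrics["errors"]))
            + weights["satisfaction"] * (metrics["satisfaction"] / 10))

best = max(summaries, key=lambda s: score(summaries[s]))
print("Best-performing system:", best)
```

Whatever scheme you use, the point is to state the weights explicitly so readers can see how your priorities led to your conclusion.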
Explore why you think your assigned systems performed the way they did. Base your discussion on one or more of the following:
- Mental model of the system from its developers’ and users’ perspectives.
- Guidelines documents that correspond or apply to your assigned devices or systems (i.e., how well [or badly] your assigned device or system complies with those guidelines).
- A well-chosen subset of interaction design principles or theories.
- The effectiveness or appropriateness of the predominant interaction style(s) used by the systems.
Statement of Work
End your report with a brief summary of what each group member did for the overall study and final report. Allocate a clear, equitable proportion of the overall study and final report to each group member. If you didn’t know it yet, you’re being told now: git can measure and track everyone’s contribution by commits and lines of text. Group members who contribute inequitably to the overall work will detract from the final score.
Be as concrete and grounded as possible. For example, you can provide screenshots from the actual system to illustrate your points. Refer to specific guideline statements or principles. Connect statements about mental models to specific artifacts of the system image (screenshots again). Et cetera.
And of course, write clearly, with the appropriate style and voice. Proofread a lot—your score will reflect both what you say and how effectively you say it.
Specific Point Allocations
Writing assignments are scored under the overall categories of Content and Writing. Content pertains to the outcomes listed in the syllabus for writing assignments: you want to demonstrate outcomes 1a, 1b, 2a, and 2b, up to the concepts that have been covered in class to this point. Writing pertains to how well these ideas are expressed, and how cleanly. There is also a certain maturity and professionalism of voice that makes your points more compelling and authoritative. The recommended course texts provide models for how this “voice” should sound.