Peer Testing Sessions

The purpose of these sessions is to get user feedback on your system. To keep things simple, your users will be classmates in COSC 499 and you will run a heuristic evaluation with them.

Schedule

Sign up during the week before the sessions happen. Use this shared document (https://docs.google.com/spreadsheets/d/1x47qqVVYdIxAcvLycnP_cBjBshTFPmU4lnXhCG2P6XU/edit?usp=sharing) to organize the sessions. In this document, you will find your project, your team number, the day of the sessions, the administrator in each activity, and the participant in each activity. Each session includes two different peer testing activities (user feedback and thinkaloud feedback); in each session there will be an administrator (a member of the team) and a participant (any student in the class). Each peer testing activity will take up to 40 minutes. Organize with your team and write in the document who will be the administrator (the person who runs the activity) and who will be the participant in each activity.

Number of Sessions

Each member of each team has to run ONE user feedback session and ONE thinkaloud user feedback session. These sessions will be done during class. Students who cannot be in person on Wednesday or Friday will need to communicate this to the other students, record the screen and audio during these activities, and submit the recording links in their report.

Participation Requirements

Each member of each team must participate in TWO sessions. Individuals can participate in either user or thinkaloud activities, depending on what is needed. The only constraint is that each team must have its members participate in sessions that cover ALL the projects in the course (except for the MIM and TMI teams).

Your grade this week will be a function of your readiness and ability to conduct the two testing sessions and your individual participation. It is important that you are prepared and have the system ready to go when your participant arrives. Be respectful of people's time: don't waste your participants' time by making them sit there and watch you set up. Rehearse your sessions and make sure they can be completed in a timely manner; if you run overtime, your participants are NOT obligated to stay beyond the 40 minutes planned.

What if things go wrong during the sessions?

  1. The person running the session you signed up for did not show up or is late. What should I do?
  • If you have done everything you can, then just wait for the duration of the slot. If you have waited 15 minutes and the person is still not there, you can leave. You are not obligated to make up the session. The person who missed the session they were supposed to run will lose participation marks.
  2. You were in a session but the system stopped working properly. What should I do?
  • Continue with the session (or the screen and audio recording if you are remote). Do your best to complete the tasks as originally planned, even if it means restarting your system. Document clearly in the report what you did and where things went wrong, and explain which data point(s) you could not collect for this reason. As long as we can see from the recording that every effort was made to remedy the issue, no marks will be deducted. Note: problems need to be fixed for future sessions.
  3. (Only for remote activities) Everything in the session seems to be working but the Internet connection is causing problems. What should I do?
  • Continue with the screen and audio recording. Speak slowly and repeat yourself so the other party can hear you, and try typing into the chat to facilitate communication. Do your best to complete the tasks as originally planned, and explain which data point(s) you could not collect for this reason. As long as we can see from the recording that every effort was made to remedy the issue, no marks will be deducted.

What's Needed to Run the Heuristic Evaluation

Recall from COSC 341 that a heuristic evaluation is a type of usability evaluation that allows you to find the majority of the issues with your system from just a few participants. The goal of these sessions is to help you identify such issues from "real" users. For your participants to become familiar with your system and give feedback on it, you will develop a list of tasks for them to complete during the session. While the participant completes each task, document your observations and the comments made as part of the qualitative feedback. For example, if your participant gets stuck, even if nothing is said, you can observe that the UI is unintuitive for that task. After the task, you can ask the participant what went wrong or how you could redesign it to make it better. Another example is if your participant tells you they are trying to find a way to do something but cannot see anything obvious for doing it. You can engage in that conversation and help your participant through the task completion. However, you should also note the difficulty your participant had and work towards a better design afterwards.

If your system involves multiple user types (e.g., admin, average user), you must make sure that you have tasks covering all of them by instructing your participant, for example: "Now, consider yourself the administrator of this system. Complete the tasks listed." Adapt the instruction for each user type your participant needs to consider.

After all the tasks are completed, have your participant complete a quick questionnaire to collect quantitative data. The questionnaire uses Nielsen's 10 usability heuristics, each rated on a 5-point Likert scale. See the Google Forms template. Please adhere to the wording and scale used; you may think an alternative wording means the same thing, but it may not. Use the questionnaire as is for data collection purposes.

After you have collected all your data, you will have both qualitative and quantitative feedback. Using this data, identify all the issues your system has; a minimal sketch of one way to record and order these issues follows the list below. For each issue:

  • Provide a clear description of the problem. Make sure it is self-explanatory for someone reading it who was not in the session. Include a screenshot if necessary.
  • Assign it to one of the usability heuristics. If it is a defect that does not fit any of the heuristics, label it as a defect.
  • Assign it a priority of high, medium, or low.
  • Suggest a feasible solution.
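
If it helps, here is a minimal sketch of one way to record and sort the issues in Python; the field names and example entries are illustrative only, not something the course requires.

```python
# Minimal sketch of one way to track and order issues for the report.
# Field names and example entries are illustrative, not required by the course.
from dataclasses import dataclass

PRIORITY_ORDER = {"high": 0, "medium": 1, "low": 2}

@dataclass
class Issue:
    description: str   # self-explanatory problem statement
    heuristic: str     # one of Nielsen's 10 heuristics, or "defect"
    priority: str      # "high", "medium", or "low"
    solution: str      # feasible suggested fix

issues = [
    Issue("Save button gives no confirmation after clicking",
          "Visibility of system status", "medium",
          "Show a toast message when the save succeeds"),
    Issue("Delete action cannot be undone",
          "User control and freedom", "high",
          "Add an undo option or a confirmation dialog"),
]

# Present high first, then medium, then low, as the report requires.
for issue in sorted(issues, key=lambda i: PRIORITY_ORDER[i.priority]):
    print(f"[{issue.priority.upper()}] {issue.heuristic}: {issue.description}")
```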

Session Checklist:

  • Your system is set up and ready to go.
  • You have a description of the system ready to give your participant.
  • You have a list of tasks (use cases) ready for your participant.
  • Note: You may have one list for each user group for your system.
  • Your tasks last about 40 minutes total.
  • You have a notepad ready to document observations you find during the evaluation.
  • You have the online questionnaire ready for your participant to complete.
  • Your online questionnaire may include additional usability questions or open-ended questions.

User vs. Thinkaloud

Earlier, I indicated that each person runs TWO sessions: one user and one thinkaloud. If you are not running the session in class, the test session administrator (the person running the session) and the participant will be in different locations connected via a Zoom meeting. As a shorthand, let's refer to the test administrator as A and the participant as P. The Zoom meeting will be organized by A and joined by P at the mutually agreed upon day and time.

A user session is one where P navigates the system alone.

For remote sessions, P will request remote access to A's desktop. To do this successfully, make sure A is running the session on a desktop or laptop (it will not work on a phone or tablet). Test this with your teammates in advance, before the real session begins. If A is running the session over Zoom, A should be able to simply give remote access to P. Another technology that supports a remote session is Chrome Remote Desktop (https://remotedesktop.google.com/support/); follow the steps required for your role (P wants to gain access, A wants to give access). To give access, download the desktop application, generate a one-time-use code, and give the code to P. To get access, simply enter the code you were given.

A thinkaloud session is one where P talks as much as possible, verbalizing thoughts such as "I am stuck and don't know what to click on", "I want to click on the red button", or "I need to move my mouse to the bottom of the screen". While this is happening, A acts as P's navigational aid to complete the session.

EVERYTHING ELSE in the test session is done exactly the same. It is only the navigation protocol that is different.

Prototype Video Demo (25%)

In order for us to know what you are evaluating, you need to create a short video to demo your prototype. The demo should include all working features to date, so we can see what the target users are expected to do/see in your system. The video should not exceed 5 minutes.

(5%) Professionalism and timeliness

(5%) Overall speech and clarity in explaining the prototype

(5%) Who your target user groups are

(5%) What your users are expected to be able to do and see

(5%) Clear identification of the implemented features to date

Peer Testing Report Expectations (75%)

Based on your findings from the peer testing session, write up a report with the following information and submit it before the deadline:

(10%) Brief description of your system and its current set of features available in the testing session

(10%) Identify the number of participants that completed your study. In particular, include a table with each participant's name, status (completed, partially completed, no show), and type of evaluation (user vs. thinkaloud). For incomplete and no-show statuses, provide an explanation. Indicate who ran each session and who the participant was. For those who ran sessions remotely, include links to all the videos that were recorded.

(15%) The user group and associated list of tasks (use cases) you asked your participants to complete

(20%) List of issues discovered, prioritized and presented in order of high, medium, then low.

(20%) Present the average quantitative scores you received from the participants as a bar graph, with the 5-point scale on the y-axis and the 10 heuristics on the x-axis. Label your axes and indicate the number of data points used in the graph. Be sure to explain the graph using the issues identified and any quoted comments you obtained from the participants. A minimal plotting sketch is shown below.
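
If you prefer to generate this graph in code rather than a spreadsheet, here is a minimal sketch using Python with matplotlib; the response values are placeholders and the heuristic labels are abbreviated, so substitute your own data and wording.

```python
# Minimal sketch: average Likert scores per heuristic as a bar graph.
# Scores below are placeholders; replace them with your participants' responses.
import matplotlib.pyplot as plt
import numpy as np

heuristics = [
    "Visibility of status", "Match to real world", "User control",
    "Consistency", "Error prevention", "Recognition vs. recall",
    "Flexibility", "Aesthetic design", "Error recovery", "Help & docs",
]
# One row per participant, one column per heuristic (1-5 Likert responses).
responses = np.array([
    [4, 3, 5, 4, 2, 4, 3, 4, 3, 2],
    [5, 4, 4, 3, 3, 5, 4, 4, 2, 3],
])

averages = responses.mean(axis=0)

plt.figure(figsize=(10, 4))
plt.bar(heuristics, averages)
plt.ylim(0, 5)
plt.ylabel("Average score (1-5)")
plt.xlabel("Nielsen's 10 usability heuristics")
plt.title(f"Heuristic evaluation averages (n = {len(responses)} participants)")
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.savefig("heuristic_averages.png")
```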

Submit the report in PDF format