
Peer Testing Sessions 2

The purpose of these second peer testing sessions is to get user feedback on your system. Again, your users will be your classmates and you will run a heuristic evaluation with them.

Schedule

Each peer testing activity will be held in the classroom and will take up to 40 minutes. The schedule for each session is available here (https://docs.google.com/spreadsheets/d/1jOD-FS-CZXmYrhu2pVnt639Eda8HOW_Vf_OUELs8T-U/edit#gid=0). On this document, you will find the day and time of each session, your name as the administrator for two of the activities, and your name as a participant for two other activities. Each session consists of two different peer testing activities (user feedback and thinkaloud feedback), and each activity has one administrator and one participant. Students who cannot be in person for the peer testing activities will not appear in the schedule; instead, I will email them the names of their participants for each peer testing activity.

Number of Sessions

Each member of each team has to run ONE user feedback session and ONE thinkaloud user feedback session. These sessions will be done during class. Students who cannot be in person will need to record the screen and audio during these activities and include the links in their report. Please make sure that you record the screen where the participant is interacting with the system, not the administrator's video. Students will lose points if I am not able to see the participant interacting with the system.

Participation Requirements

Each member of each team must participate in TWO sessions. Individuals who do not show up or do not participate in two activities will lose participation marks. Also, your grade this week will be a function of your readiness and ability to conduct the two testing sessions and your individual participation. It is important that you are prepared and have the system ready to go when your participant arrives. Be respectful of people's time: don't waste your participants' time by making them sit there and watch you set up. Rehearse your sessions and make sure they can be done in a timely manner, because if you run overtime, your participants are NOT obligated to stay beyond the 40 minutes planned.

What if things go wrong during the sessions?

  1. The participant/administrator of the session did not show up or is late. What should I do?
  • Please wait 15 minutes for the participant/administrator. If you have waited 15 minutes and the person is still not there, then you are not obligated to make up the session. The person missing will lose participation marks.
  2. You were in a session but the system stopped working properly. What should I do?
  • Continue with the session (or the screen and audio recording if you are remote). Do your best to complete the tasks during the activity as originally planned, even if it means restarting your system. Document clearly in the report what you did and where things went wrong. Explain which data point(s) you could not collect for this reason. As long as we can see from the recording that every effort was made to remedy the issue, no marks will be deducted. Note - problems need to be fixed for future sessions.
  3. (Only for remote activities) Everything in the session seems to be working but the Internet connection is causing problems. What should I do?
  • Continue with the screen and audio recording if you are remote. Try to repeat what you say slowly so the other party can hear. Also, you can try typing into the chat to facilitate the communication. Do your best to complete the tasks in the session as originally planned, and explain which data point(s) you could not collect for this reason. As long as we can see from the recording that every effort was made to remedy the issue, no marks will be deducted.

What's Needed to Run the Heuristic Evaluation

Recall from COSC 341 that a heuristic evaluation is a type of usability evaluation that allows you to find the majority of the issues with your system from just a few participants. So our goal in running these sessions is to help you identify such issues from "real" users. In order for your participants to become familiar with your system and give feedback on it, you will develop a list of tasks that your participant can complete during the session. While the participant completes each task, you can document the observations and comments made as part of the qualitative feedback. For example, if your participant gets stuck, even if nothing is said, you can observe that the UI is unintuitive for that task. After the task, you can ask the participant what was wrong or how to redesign it to make it better. Another example is if your participant tells you they are trying to find a way to do something but can't seem to see anything obvious for doing it. You can engage in that conversation and help your participant through the task completion. However, you should also note the difficulty that your participant had and work towards a better design afterwards.

If your system involves multiple users (e.g., admin, average user), you must make sure that you have tasks covering all these different users by asking your participant: "Now, consider yourself as the administrator of this system. Complete the tasks listed." You can adapt the instruction for each type of user that your participant needs to consider.

After all the tasks are completed, you can have your participant complete a quick questionnaire to collect quantitative data. You do this using Nielsen's 10 usability heuristics, setting them on a 5-point Likert scale. See this Google Forms template. Please adhere to the wording and scale used. You may think an alternative wording means about the same thing, but it may not, so just use the questionnaire as is for data collection purposes.
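If it helps with your analysis, here is a minimal Python sketch (with placeholder heuristic names and ratings, not real data) of how you might average the 5-point Likert responses per heuristic once you export them from the form. Low averages usually point to the heuristics worth a closer look in your qualitative notes.

```python
# Minimal sketch: averaging 5-point Likert ratings per heuristic.
# The heuristic names and ratings below are placeholders, not real data.
from statistics import mean

# One dict per participant: heuristic -> rating (1 = strongly disagree, 5 = strongly agree).
responses = [
    {"Visibility of system status": 4, "User control and freedom": 2},
    {"Visibility of system status": 3, "User control and freedom": 2},
]

for heuristic in responses[0]:
    avg = mean(r[heuristic] for r in responses)
    print(f"{heuristic}: {avg:.1f}")
```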

After you have collected all your data, you will have both qualitative and quantitative feedback. Using this data, identify all the issues your system has. In the peer evaluation report, you MUST write the following for each issue (see the optional sketch after this list for one way to keep track of them):

  • Provide a clear description of the problem. Make sure it is self-explanatory for someone reading it who was not in the session. Include a screenshot if necessary.
  • Assign it to one of the usability heuristics. If it is a defect and does not fit any of the heuristics, label it as a defect.
  • Assign it a priority of high, medium, or low.
  • Suggest a feasible solution.
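As an optional illustration (not a required format), here is a small Python sketch of one way to record the issues you find with the four required pieces of information and sort them by priority before writing them up. The example issue and field names are hypothetical.

```python
# Optional sketch: recording issues with the four required pieces of
# information, then sorting them by priority for the report.
# The example issue below is hypothetical.
issues = [
    {
        "description": "Participant could not tell whether the save action succeeded.",
        "heuristic": "Visibility of system status",  # or "defect" if none fits
        "priority": "high",
        "solution": "Show a confirmation message after saving.",
    },
]

order = {"high": 0, "medium": 1, "low": 2}
for issue in sorted(issues, key=lambda i: order[i["priority"]]):
    print(issue["priority"].upper(), "-", issue["description"])
```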

User vs. Thinkaloud

Earlier, I indicated that each person runs TWO sessions: one user and one thinkaloud. As a shorthand, let's refer to the test administrator as A and the participant as P. (Only for online sessions: the Zoom meeting will be organized by A and joined by P on the mutually agreed upon day and time.)

A user session is run where P navigates the system alone.

For the remote sessions, P will request remote access to A's desktop. To do this successfully, make sure A is running the session on a desktop or laptop (it will not work on a phone or tablet). You will want to test this with your teammates in advance, before the real session begins. Suppose A is running the session on Zoom: you should be able to simply give remote access to P. Another technology to support a remote session is Chrome Remote Desktop (https://remotedesktop.google.com/support/); depending on which role you are (P wants to gain access, A wants to give access), you follow the steps required. For giving access, you download the desktop application and generate a one-time-use code, then give the code to P. For getting access, you simply input the code given.

A thinkaloud session is run where P talks as much as possible, verbalizing thoughts such as "I am stuck and don't know what to click on", "I want to click on the red button", or "I need to move my mouse to the bottom of the screen". While this is happening, A acts as P's navigational aid to complete the session.

EVERYTHING ELSE in the test session is done exactly the same. It is only the navigation protocol that is different.