From fe137e94f9aaab9d7900d7ef0a69fd057ce30d61 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 30 May 2016 11:41:30 -0400
Subject: [PATCH 01/53] cs349: add notes for 5-30

---
 cs349/5-1.md | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 cs349/5-1.md

diff --git a/cs349/5-1.md b/cs349/5-1.md
new file mode 100644
index 0000000..af9ecb3
--- /dev/null
+++ b/cs349/5-1.md
@@ -0,0 +1,24 @@
+# Design - Principles from Everyday Things
+
+CS 349 - User interfaces, LEC 001
+
+5-30-2016
+
+Elvin Yung
+
+## Usefulness vs. Usability
+* **Usefulness** is the ability of an interface to support different tasks and practical use cases.
+* **Usability** is how effectively and efficiently users can achieve tasks with it.
+* The goal is to make interfaces both useful and usable.
+
+* Over time, devices have become more and more capable, but humans have stayed at roughly the same level of capability
+
+!["Simplicity is the ultimate sophistication."](http://archive.computerhistory.org/resources/text/Apple/Apple.II.1977.102637933.fc.lg.jpg)
+
+## The Design of Everyday Things
+A few takeaways from *The Design of Everyday Things* by Don Norman (cofounder of the Nielsen Norman Group, previously VP of Apple's Advanced Technology Group):
+* Help users form the correct mental models.
+* Frequently-used functionality should have explicit controls.
+* Appearance should reflect/suggest use.
+* Rarely-used functions shouldn't have the same level of support/emphasis as more frequently-used ones.
+* Give feedback on operations in progress (i.e. it should be easy to figure out what the system is doing).

From ebf7d40d866781fb1b65190d3e0dc12f0babcf68 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 30 May 2016 11:41:51 -0400
Subject: [PATCH 02/53] Add readmes

---
 README.md       | 9 +++++++++
 cs349/README.md | 3 +++
 2 files changed, 12 insertions(+)
 create mode 100644 README.md
 create mode 100644 cs349/README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..35f726a
--- /dev/null
+++ b/README.md
@@ -0,0 +1,9 @@
+# 1151notes
+### by [Elvin Yung](https://github.com/elvinyung)
+
+Notes for my 1165 term (i.e. Spring 2016) at the University of Waterloo.
+
+In descending order of how likely I am to attend lectures for the course:
+* [CS 341](cs341) - Algorithms
+* [CS 349](cs349) - User Interfaces
+* [CS 350](cs350) - Operating Systems

diff --git a/cs349/README.md b/cs349/README.md
new file mode 100644
index 0000000..8feb75b
--- /dev/null
+++ b/cs349/README.md
@@ -0,0 +1,3 @@
+# CS 349 - User Interfaces
+
+The [slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/schedule.shtml) are pretty good, so these notes mostly serve as supplementary summaries.

From 4cf2eec03a49a8593c465f3a5dcf07e04801ac0c Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 1 Jun 2016 11:32:54 -0400
Subject: [PATCH 03/53] cs349: finish note point

---
 cs349/5-1.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cs349/5-1.md b/cs349/5-1.md
index af9ecb3..8df16a2 100644
--- a/cs349/5-1.md
+++ b/cs349/5-1.md
@@ -11,7 +11,7 @@ Elvin Yung
 * **Usability** is how effectively and efficiently users can achieve tasks with it.
 * The goal is to make interfaces both useful and usable.
 
-* Over time, devices have become more and more capable, but humans have stayed at roughly the same level of capability
+* Over time, devices have become more and more capable, but humans have stayed at roughly the same level of capability. As technology becomes more *useful*, it becomes a challenge to make them *usable.*
 
 !["Simplicity is the ultimate sophistication."](http://archive.computerhistory.org/resources/text/Apple/Apple.II.1977.102637933.fc.lg.jpg)

From 0519ce070f9c5ba47b1bc5b756f7e406fc3c5b78 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 1 Jun 2016 12:19:11 -0400
Subject: [PATCH 04/53] cs349: add notes for 6-1-2016

---
 cs349/5-1.md | 86 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/cs349/5-1.md b/cs349/5-1.md
index 8df16a2..cb4f749 100644
--- a/cs349/5-1.md
+++ b/cs349/5-1.md
@@ -22,3 +22,89 @@ A few takeaways from *The Design of Everyday Things* by Don Norman (cofounder of
 * Appearance should reflect/suggest use.
 * Rarely-used functions shouldn't have the same level of support/emphasis as more frequently-used ones.
 * Give feedback on operations in progress (i.e. it should be easy to figure out what the system is doing).
+
+6-1-2016
+
+* Your understanding of the system is driven by the *mental* model - how the user thinks it works.
+* This is in contrast to the *system* model - how it actually works.
+* These two things can have huge discrepancies.
+
+## Mental Model
+The *mental model* of an interface is described by statements of this form:
+
+> If I do this, then the system will do that.
+
+Often the mental model is very different from the system model.
+
+For example, with refrigerators:
+* Users are inclined to believe that there are two independent temperature controls controlling two different cooling units.
+* In fact, there's just one cooling unit - manipulating either control affects the temperature of both compartments.
+
+## Developer's Model
+* There's a third model - how the *developer* thinks the system works.
+* The developer and the user communicate through the system.
+* Therefore, ideally, all three models should be the same.
+
+## Model of Interaction
+To use a tool, users consider:
+* The goal
+* What is done to the world
+* The world itself
+* The check of the world
+
+Essentially, you're doing something, and then checking it. Doing something is called *execution*. Checking it is called *evaluation*.
+
+## Gulfs
+* The *gulf of execution* represents the difficulty in translating user intention into system action.
+* The *gulf of evaluation* represents problems in checking the state of the system.
+
+Obviously, we want to minimize these gulfs.
+
+## Central Tension
+* The central tension: making a system more *useful* (more capable) tends to make it harder to keep *usable* (simple).
+
+## UI Design Principles
+* *Perceived affordance* is what the user thinks you can do with an object (its "affordance"), based on how it looks.
+* e.g. push and pull handles on a door, or buttons that are slightly beveled to make them look clickable.
+
+## Mappings
+* A *mapping* is a relationship between two things that usually suggests some sort of intuitive effect.
+* Usually, mappings can be separated into three categories: *layout*, *behavior*, and *meaning* (or *convention*).
+
+## Consistency
+* Interface elements should be *consistent*.
+* The user should be able to expect the same behavior across the system.
+* e.g. I should be able to right-click for a context menu, hover the mouse for a tooltip, etc.
+
+## Constraints
+* In general, interfaces have constraints of these types: *physical*, *semantic*, *cultural*, and *logical*.
+* Physical: A greyed-out button means that the action is disabled.
+* Semantic: The meaning of the situation constrains the possible actions - e.g. a windshield only makes sense in front of the rider.
+* Cultural: e.g. the checkmark on the Dropbox folder icon.
+* Logical: In a language that reads from left to right, the back button faces left, and the forward button faces right.
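+
+As a tiny illustration of a physical constraint in code, a control can be greyed out so the disabled action simply can't be invoked. This is a minimal Swing sketch (the `ConstraintDemo` class and button label are made up for illustration, not from the lecture):
+
+```java
+import javax.swing.*;
+
+public class ConstraintDemo {
+    public static void main(String[] args) {
+        SwingUtilities.invokeLater(() -> {
+            JFrame frame = new JFrame("Constraints");
+            JButton save = new JButton("Save");
+            // Greyed out: the user can see the action exists, but can't invoke it.
+            save.setEnabled(false);
+            frame.add(save);
+            frame.pack();
+            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
+            frame.setVisible(true);
+        });
+    }
+}
+```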
+
+## Visibility
+* Controls that are relevant should be more visible.
+
+## Feedback
+* Every user action should result in some sort of feedback.
+* If something will take a long time to do, the UI should indicate progress.
+* This shouldn't be excessive - for example, you shouldn't have a dialog box pop up for every action the user performs.
+* In a GUI, widgets should effectively communicate state.
+* Example of good feedback: search/replace in Sublime Text.
+* Example of bad feedback: creating a symlink in Unix.
+
+## Metaphors
+* In a GUI, simplify interaction by borrowing concepts from another domain.
+* e.g. the desktop metaphor in GUI windowing systems.
+* Common language: e.g. window, recycle bin/trash, folders, files.
+* One of the earliest examples of a GUI with a desktop metaphor is the Apple Lisa.
+
+!["Apple Lisa"](http://www.mac-history.de/wp-content/uploads/2008/09/apple_lisa_screenshot.gif)
+
+* Metaphors can go too far, e.g. Microsoft Bob.
+
+!["Microsoft Bob"](http://toastytech.com/guis/bobhome1p.png)
+
+## Putting it all together
+* The designers of a system sometimes publish a style guide, which dictates what they want the look and feel of the system to be.
+* If you use a GUI builder, it will often apply the design guidelines for you.

From b80bf0f331ae2b3c7bc1e190ddabd7915d0d4a43 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 1 Jun 2016 12:50:04 -0400
Subject: [PATCH 05/53] cs349: add detail for useful/usable

---
 cs349/5-1.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/cs349/5-1.md b/cs349/5-1.md
index cb4f749..62f2c5b 100644
--- a/cs349/5-1.md
+++ b/cs349/5-1.md
@@ -11,7 +11,8 @@ Elvin Yung
 * **Usability** is how effectively and efficiently users can achieve tasks with it.
 * The goal is to make interfaces both useful and usable.
 
-* Over time, devices have become more and more capable, but humans have stayed at roughly the same level of capability. As technology becomes more *useful*, it becomes a challenge to make them *usable.*
+* Over time, devices have become more and more capable, but humans have stayed at roughly the same level of capability.
+* As technology becomes more *useful*, the interface becomes more and more complicated. It becomes a challenge to make it *usable.*
 
 !["Simplicity is the ultimate sophistication."](http://archive.computerhistory.org/resources/text/Apple/Apple.II.1977.102637933.fc.lg.jpg)

From 3fecc3dca2d0957e8254bd98d4d1935ee7aa34c4 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 1 Jun 2016 12:54:13 -0400
Subject: [PATCH 06/53] cs349: add magic cap example

---
 cs349/5-1.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/cs349/5-1.md b/cs349/5-1.md
index 62f2c5b..7ae69b2 100644
--- a/cs349/5-1.md
+++ b/cs349/5-1.md
@@ -106,6 +106,10 @@ * This shouldn't be excessive - for example, you shouldn't have a dialog box po
 
 !["Microsoft Bob"](http://toastytech.com/guis/bobhome1p.png)
 
+* An interesting example: General Magic's Magic Cap OS, which used the desktop metaphor for an early touchscreen interface.
+
+!["Magic Cap"](https://upload.wikimedia.org/wikipedia/en/6/69/Magic_Cap_OS.gif)
+
 ## Putting it all together
 * The designers of a system sometimes publish a style guide, which dictates what they want the look and feel of the system to be.
 * If you use a GUI builder, it will often apply the design guidelines for you.
From 42d40713a85a4713f6e7fb25cf13738011e9615 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Fri, 3 Jun 2016 14:12:18 -0400
Subject: [PATCH 07/53] cs349: Add 5-2 notes

---
 cs349/5-2.md | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 cs349/5-2.md

diff --git a/cs349/5-2.md b/cs349/5-2.md
new file mode 100644
index 0000000..defa99c
--- /dev/null
+++ b/cs349/5-2.md
@@ -0,0 +1,54 @@
+# Design Process
+
+CS 349 - User interfaces, LEC 001
+
+5-30-2016
+
+Elvin Yung
+
+## User Centered Design
+
+How do you build software for people who aren't you?
+
+You have to talk to the customer, and figure out:
+* what they need
+* what problems they have with current solutions
+
+You need to think about the people who use your software, and to test your ideas with them.
+
+*Developers are not typical users.* You are not the user - you can't just build the software to your own specifications.
+
+### History
+
+User-centered design arguably started with the original Macintosh team at Apple.
+
+The original process: bring in users, show them stuff, and iterate.
+
+(citation needed. I can't seem to find any source that actually supports this claim; the closest I could get to was [Andy Hertzfeld's story](http://www.folklore.org/StoryView.py?project=Macintosh&story=Shut_Up.txt) about demoing a Mac prototype to Microsoft.)
+
+### Principles
+* Understand users' needs.
+* Design the UI first - not the architecture. Choose the technology to fit the needs, not [the other way around](http://www.mongodb-is-web-scale.com/). (This isn't exactly a hard and fast rule - sometimes you need to figure out if something is technically feasible.)
+* Iterate!
+* Use your own software - "eat your own dogfood."
+* Look at other people using it - and in real life, not in some sort of lab.
+
+### Iteration Cycle
+
+TODO: add flowchart here
+
+(Caveat: this diagram assumes you won't change the architecture, which isn't always true in real life.)
+
+### Understanding the User
+* *Observe existing solutions.* [It's important to see how the user uses the software.](http://javlaskitsystem.se/2012/02/whats-the-waiter-doing-with-the-computer-screen/) You need to design for the scenario that you're working with, and meet the requirements that users actually have.
+* *List scenarios.* e.g. for an email client, figure out what the flow for sending an email should be like, replying to an email, forwarding an email, creating a mailbox, etc. Scenarios are great because they're a natural place to talk about what the user is doing, in the context that they're doing it. Collect data from a bunch of users - catch a wide range of use cases.
+* *List and prioritize functions.* Figure out what scenarios need what, and prioritize.
+* *List functions by frequency and commonality.* Common functionality should be accessible within a few clicks; more obscure use cases can be hidden in menus.
+
+### Designing the UI
+* *Temporal distribution*: steps in the flow
+* *Spatial distribution*: where components appear on a screen
+
+* Use **storyboards** to mock up the basic flow for typical scenarios.
+* Describe interaction sequences: plan interaction paths like a flowchart.
+* Testing the design: quickly prototype, test with users, and iterate.
From 02bc610e02cd1990024b7e3559164ffbb2aec571 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 12:20:42 -0400
Subject: [PATCH 08/53] cs349: Add design process slides 6-6

---
 cs349/5-2.md | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/cs349/5-2.md b/cs349/5-2.md
index defa99c..d53257a 100644
--- a/cs349/5-2.md
+++ b/cs349/5-2.md
@@ -52,3 +52,48 @@
 * Use **storyboards** to mock up the basic flow for typical scenarios.
 * Describe interaction sequences: plan interaction paths like a flowchart.
 * Testing the design: quickly prototype, test with users, and iterate.
+
+6-6-2016
+
+### Testing with Users
+* AKA: Prototyping
+* Iterate, refine.
+* Specifically, let people interact with the design, and explore its suitability.
+* Goal: [Maximum feedback for minimum effort.](http://theleanstartup.com/principles#develop_mvp)
+* Example: testing the first PDA prototype by making a wooden model of it, writing on it in front of people, and observing their reactions.
+* Build software *last*! It's super expensive and time-consuming.
+
+#### Prototype Objectives
+* A prototype needs to answer this question: does the design work for this scenario?
+
+#### What to prototype?
+* You can prototype anything: concepts, navigation, mental models, layout, technical feasibility, usefulness, etc.
+* This is the same thing as an experiment!
+
+#### Prototype Fidelity
+* The level of detail/care you put into the prototype.
+* *Low* fidelity: Prototype doesn't look a lot like the real thing; the operations might be simulated or slower.
+* *High* fidelity: Prototype looks and acts like the real thing.
+* Even if you're confident in your design, *you want to build a low-fi prototype first*!
+* People are way too nice to give real feedback. If you show people a high-fi prototype first, they will feel bad about giving you more work.
+* If you start with a low-fi prototype, you get more objective/higher-level feedback, because they know you haven't invested much time or effort into it.
+
+#### Paper Prototyping
+* Build a paper version of the interface, with a person playing the computer.
+* A really good (and probably overkill) example: [Hanmail](https://www.youtube.com/watch?v=GrV2SZuRPv0).
+* This is a concept called the *Wizard of Oz technique* or *Wizard of Oz experiment*, where you have a software interface, but a human acting as the backend to simulate the response.
+* [Stripe](http://paulgraham.com/ds.html) does this!
+
+* The interface should be as minimal as possible - just enough to get feedback. Speed is of the essence!
+* Get feedback immediately.
+
+#### When and How to Prototype
+
+TODO: add chart here
+
+#### Breadth vs. Depth
+* You probably won't have enough time to test every flow - be able to cherry-pick and prioritize which scenarios you want to test.
+* Your interactive prototype will probably have spotty coverage in terms of features/scenarios.
+
+## MIDTERM MATERIAL ENDS HERE

From 748912941474af82127f0976e9d370d2835ac42c Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 12:20:58 -0400
Subject: [PATCH 09/53] cs349: Add Visual Design slides

---
 cs349/5-3.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 cs349/5-3.md

diff --git a/cs349/5-3.md b/cs349/5-3.md
new file mode 100644
index 0000000..8d5a538
--- /dev/null
+++ b/cs349/5-3.md
@@ -0,0 +1,21 @@
+# Visual Design
+
+CS 349 - User interfaces, LEC 001
+
+6-6-2016
+
+Elvin Yung
+
+## Why Discuss Visual Design?
+* You need to know how to present your interface to the user.
+* People *shouldn't have to think* when they use your interface - just do!
+
+## Objectives
+* Your interface should be easy to understand - design with the human's conscious and unconscious capabilities in mind.
+* *Pre-attentive processing* happens at a lower level than conscious thought. We unconsciously process a lot of what we see.
+* Keep things *simple*!
+* (But not [*too* simple](http://rhymeswithorange.com/comics/may-23-2013/). You want the user to still be able to *do* the things they want to do.)
+
+![An example of a bad interface](https://diyivorytower.files.wordpress.com/2011/01/2011_01_12-bulk-rename-utility.jpg)
+
+## Gestalt Principles

From d6f4d60b6accd47b5ff602e064dd19ba75121cc6 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:16:15 -0400
Subject: [PATCH 10/53] cs349: proper module number for visual design

---
 cs349/{5-3.md => 6-1.md} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename cs349/{5-3.md => 6-1.md} (100%)

diff --git a/cs349/5-3.md b/cs349/6-1.md
similarity index 100%
rename from cs349/5-3.md
rename to cs349/6-1.md

From 0210b006b1df30a115d3273c8048a11f10f3dfec Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:17:28 -0400
Subject: [PATCH 11/53] cs349: 6-1 bad interface examples section

---
 cs349/6-1.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/cs349/6-1.md b/cs349/6-1.md
index 8d5a538..34b8148 100644
--- a/cs349/6-1.md
+++ b/cs349/6-1.md
@@ -10,12 +10,14 @@ Elvin Yung
 * You need to know how to present your interface to the user.
 * People *shouldn't have to think* when they use your interface - just do!
 
+### Bad interfaces
+![An example of a bad interface](https://diyivorytower.files.wordpress.com/2011/01/2011_01_12-bulk-rename-utility.jpg)
+![At least it's not phallic.](http://www.piedpiper.com/app/themes/pied-piper/dist/images/interface_large.jpg)
+
 ## Objectives
 * Your interface should be easy to understand - design with the human's conscious and unconscious capabilities in mind.
 * *Pre-attentive processing* happens at a lower level than conscious thought. We unconsciously process a lot of what we see.
 * Keep things *simple*!
 * (But not [*too* simple](http://rhymeswithorange.com/comics/may-23-2013/). You want the user to still be able to *do* the things they want to do.)
 
-![An example of a bad interface](https://diyivorytower.files.wordpress.com/2011/01/2011_01_12-bulk-rename-utility.jpg)
-
 ## Gestalt Principles

From dfee8969b1c4c2c49b4bd4c0c3b6e5ee9663eace Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:18:27 -0400
Subject: [PATCH 12/53] cs349: 6-1 clarify point

---
 cs349/6-1.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cs349/6-1.md b/cs349/6-1.md
index 34b8148..57ffa91 100644
--- a/cs349/6-1.md
+++ b/cs349/6-1.md
@@ -8,7 +8,7 @@ Elvin Yung
 
 ## Why Discuss Visual Design?
 * You need to know how to present your interface to the user.
-* People *shouldn't have to think* when they use your interface - just do!
+* People *shouldn't have to think* when they use your interface!
 ### Bad interfaces
 ![An example of a bad interface](https://diyivorytower.files.wordpress.com/2011/01/2011_01_12-bulk-rename-utility.jpg)
 ![At least it's not phallic.](http://www.piedpiper.com/app/themes/pied-piper/dist/images/interface_large.jpg)

From 89d53705f3e4aebb565fa92e24735f53d8ca0d16 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:21:09 -0400
Subject: [PATCH 13/53] cs349: better comic embed

---
 cs349/6-1.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/cs349/6-1.md b/cs349/6-1.md
index 57ffa91..20a9322 100644
--- a/cs349/6-1.md
+++ b/cs349/6-1.md
@@ -18,6 +18,10 @@ Elvin Yung
 * Your interface should be easy to understand - design with the human's conscious and unconscious capabilities in mind.
 * *Pre-attentive processing* happens at a lower level than conscious thought. We unconsciously process a lot of what we see.
 * Keep things *simple*!
-* (But not [*too* simple](http://rhymeswithorange.com/comics/may-23-2013/). You want the user to still be able to *do* the things they want to do.)
+* (But not too simple. You want the user to still be able to *do* the things they want to do.)
+
+![Ultimate sophistication](https://safr.kingfeatures.com/idn/cnfeed/zone/js/content.php?file=aHR0cDovL3NhZnIua2luZ2ZlYXR1cmVzLmNvbS9SaHltZXNXaXRoT3JhbmdlLzIwMTMvMDUvUmh5bWVzX3dpdGhfT3JhbmdlLjIwMTMwNTIzXzkwMC5naWY=)
+
+(Source: http://rhymeswithorange.com/comics/may-23-2013/)

From e2f4bd88756b329041d86842160a11eb20455545 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:24:37 -0400
Subject: [PATCH 14/53] cs349: Add slide link

---
 cs349/5-1.md | 2 ++
 cs349/5-2.md | 2 ++
 cs349/6-1.md | 2 ++
 3 files changed, 6 insertions(+)

diff --git a/cs349/5-1.md b/cs349/5-1.md
index 7ae69b2..513b77c 100644
--- a/cs349/5-1.md
+++ b/cs349/5-1.md
@@ -6,6 +6,8 @@ CS 349 - User interfaces, LEC 001
 
 Elvin Yung
 
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/5.1-design_principles.pdf)
+
 ## Usefulness vs. Usability
 * **Usefulness** is the ability of an interface to support different tasks and practical use cases.
 * **Usability** is how effectively and efficiently users can achieve tasks with it.

diff --git a/cs349/5-2.md b/cs349/5-2.md
index d53257a..accac81 100644
--- a/cs349/5-2.md
+++ b/cs349/5-2.md
@@ -6,6 +6,8 @@ CS 349 - User interfaces, LEC 001
 
 Elvin Yung
 
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/5.2-design_process.pdf)
+
 ## User Centered Design
 
 How do you build software for people who aren't you?

diff --git a/cs349/6-1.md b/cs349/6-1.md
index 20a9322..6df5d8a 100644
--- a/cs349/6-1.md
+++ b/cs349/6-1.md
@@ -6,6 +6,8 @@ CS 349 - User interfaces, LEC 001
 
 Elvin Yung
 
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/6.1-visual_design.pdf)
+
 ## Why Discuss Visual Design?
 * You need to know how to present your interface to the user.
 * People *shouldn't have to think* when they use your interface!

From 37972388967d1cff53cf3c1daf2d2ed1bd1f3635 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:27:07 -0400
Subject: [PATCH 15/53] cs349: add design process charts

---
 cs349/5-2.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/cs349/5-2.md b/cs349/5-2.md
index accac81..7f3a26e 100644
--- a/cs349/5-2.md
+++ b/cs349/5-2.md
@@ -37,7 +37,7 @@ The original process: bring in users, show them stuff, and iterate.
 
 ### Iteration Cycle
 
-TODO: add flowchart here
+![](https://i.imgur.com/KeBkFB6.png)
 
 (Caveat: this diagram assumes you won't change the architecture, which isn't always true in real life.)
@@ -92,7 +92,7 @@
 
 #### When and How to Prototype
 
-TODO: add chart here
+![](https://i.imgur.com/iVH26yx.png)
 
 #### Breadth vs. Depth
 * You probably won't have enough time to test every flow - be able to cherry-pick and prioritize which scenarios you want to test.

From dfd7b19704e07a588b3b4d2a0d33a5f43a Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:29:34 -0400
Subject: [PATCH 16/53] cs349: user testing addendum

---
 cs349/5-2.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/cs349/5-2.md b/cs349/5-2.md
index 7f3a26e..4b849de 100644
--- a/cs349/5-2.md
+++ b/cs349/5-2.md
@@ -28,6 +28,8 @@ The original process: bring in users, show them stuff, and iterate.
 
 (citation needed. I can't seem to find any source that actually supports this claim; the closest I could get to was [Andy Hertzfeld's story](http://www.folklore.org/StoryView.py?project=Macintosh&story=Shut_Up.txt) about demoing a Mac prototype to Microsoft.)
 
+(Addendum: This [seems to have been](http://www.folklore.org/StoryView.py?story=Do_It.txt) something developed at Xerox PARC, and then adapted for the Lisa project when Larry Tesler went to Apple.)
+
 ### Principles
 * Understand users' needs.

From 9a32610259a6fdc52495d0eaa5d7ef1ffbc3f1dc Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Mon, 6 Jun 2016 15:31:58 -0400
Subject: [PATCH 17/53] cs349: Add style guide example

---
 cs349/5-1.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/cs349/5-1.md b/cs349/5-1.md
index 513b77c..cca71cf 100644
--- a/cs349/5-1.md
+++ b/cs349/5-1.md
@@ -115,3 +115,7 @@ * This shouldn't be excessive - for example, you shouldn't have a dialog box po
 ## Putting it all together
 * The designers of a system sometimes publish a style guide, which dictates what they want the look and feel of the system to be.
 * If you use a GUI builder, it will often apply the design guidelines for you.
+
+![Inside Macintosh](http://www.folklore.org/images/Macintosh/inside_mac.gif)
+
+* Early example of a design style guide: the [Macintosh User Interface Guidelines](http://www.folklore.org/StoryView.py?project=Macintosh&story=Inside_Macintosh.txt)

From b4536b418dd78af58e90a37f2a8c676a1d55e124 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 8 Jun 2016 12:18:31 -0400
Subject: [PATCH 18/53] cs349: 6-1 add gestalt principles

---
 cs349/6-1.md | 78 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+), 1 deletion(-)

diff --git a/cs349/6-1.md b/cs349/6-1.md
index 6df5d8a..a339233 100644
--- a/cs349/6-1.md
+++ b/cs349/6-1.md
@@ -21,9 +21,85 @@ Elvin Yung
 * *Pre-attentive processing* happens at a lower level than conscious thought. We unconsciously process a lot of what we see.
 * Keep things *simple*!
 * (But not too simple. You want the user to still be able to *do* the things they want to do.)
+* Basically, remember also that *essential* can conflict with *simple* - expert users need specialized interfaces.
 
 ![Ultimate sophistication](https://safr.kingfeatures.com/idn/cnfeed/zone/js/content.php?file=aHR0cDovL3NhZnIua2luZ2ZlYXR1cmVzLmNvbS9SaHltZXNXaXRoT3JhbmdlLzIwMTMvMDUvUmh5bWVzX3dpdGhfT3JhbmdlLjIwMTMwNTIzXzkwMC5naWY=)
 
 (Source: http://rhymeswithorange.com/comics/may-23-2013/)
 
-## Gestalt Principles
+## Organization and Structure: Gestalt Principles
+* Ways that we look at the world and find patterns.
+* Our brains are wired to look for patterns.
+* *Gestalt principles* describe some of the ways we do this in the real world.
+* The idea is that you can build an interface that takes advantage of how our minds group things.
+
+### Proximity
+* We associate things more strongly when they are close to each other.
+* e.g. items that are spaced more closely vertically look like columns; more closely horizontally, like rows.
+
+* Example: sign at Big Bend National Park, Texas
+
+![Bad proximity](https://www.nps.gov/common/uploads/photogallery/imr/park/bibe/60014F41-155D-451F-67C8B8DC3E90D16A/60014F41-155D-451F-67C8B8DC3E90D16A-large.JPG)
+
+### Similarity
+* We group things based on visual characteristics, like **shape**, **size**, **color**, **texture**, **orientation**.
+* e.g. in a group of similar-sized squares and circles, we group by shape. In a group of large and small squares and circles, we group by size. In a group of green and white squares and circles, we group by color. (TODO: add image)
+* (We see size first - it's more obvious to us.)
+* When things look like one another, we tend to think of them as belonging in a common set.
+
+### Good Continuation
+* We have a tendency to find flow in things.
+* Your eyes will track and follow a line or curve.
+* Things arranged in such a pattern tend to get associated with each other.
+* e.g. we tend to follow a menu, in a straight line.
+* Arranging things like this can get people to look at more things, even if they were only looking for one thing.
+
+The last three principles dealt with how we group objects. The next few will deal with how we fill in missing or ambiguous information.
+
+### Closure
+* We like to see a complete figure even if some parts are missing.
+* For example, a dotted circle looks like a circle because it is circular in shape, even if large parts are missing.
+* In UIs, for example, windows overlap, but we infer the fact that there's a window behind the current window.
+
+### Figure/Ground (aka Area)
+* We like to separate our visual field into things that are in the foreground (the *figure*), and things that are in the background (the *ground*).
+* Things in the foreground, or *figure*, are interpreted as the object of interest.
+* *Ground* is everything else.
+
+#### Ambiguity
+* Figure/ground can be ambiguous: the figure has a definite shape, but the ground seems shapeless.
+* Visual cues can help resolve this.
+* (In the absence of a horizon, it's hard to tell.)
+
+### Law of Prägnanz
+* We like to perceive shapes in the simplest possible way.
+* In some cases we use depth to do this - a stack of partially overlapping squares is interpreted as three squares, even though it doesn't really look like it.
+* Symmetry is great as well - we like to parse symmetry.
+
+### Uniform Connectedness
+* The interface can force a grouping on the user, by creating regions or connecting lines.
+* You can define regions to force people to perceive things similarly.
+* This isn't nearly as effective as proximity, but it's an option.
+* For a long time, Microsoft used uniform connectedness in their UIs. It works, but makes their interfaces very cluttered.
+### Alignment (?)
+* Is alignment a Gestalt principle?
+* Basically, we see things as similar when we group them in a line.
+* It's a powerful organizing tool.
+* It's kind of like continuation - continuation tends to imply alignment.
+
+## Pleasing Layouts
+
+## Applying Concepts
+* Avoid haphazard layouts.
+* Align stuff.
+
+## Testing Your Interface
+* Show it to someone else - don't ask if they like it, try to get first impressions.
+* You want to figure out, the *first* time that they see it, if everything is clear and usable.
+* Squint test - when you squint and look at your interface, does it still make sense?
+
+## Summary
+* Strive for simplicity!
+* Know your target.
+* Don't leave your visual design up to chance! Think about your design, and test it out.

From 2f7f0a80db1c138c929f048771c0ad20ba860139 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 8 Jun 2016 19:51:44 -0400
Subject: [PATCH 19/53] cs349: 5-2 add example for design vs technical feasibility

---
 cs349/5-2.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cs349/5-2.md b/cs349/5-2.md
index 4b849de..435a0e6 100644
--- a/cs349/5-2.md
+++ b/cs349/5-2.md
@@ -32,7 +32,7 @@ The original process: bring in users, show them stuff, and iterate.
 
 ### Principles
 * Understand users' needs.
-* Design the UI first - not the architecture. Choose the technology to fit the needs, not [the other way around](http://www.mongodb-is-web-scale.com/). (This isn't exactly a hard and fast rule - sometimes you need to figure out if something is technically feasible.)
+* Design the UI first - not the architecture. Choose the technology to fit the needs, not [the other way around](http://www.mongodb-is-web-scale.com/). (This isn't exactly a hard and fast rule - sometimes you need to figure out [if something is technically feasible](https://en.wikipedia.org/wiki/IPhone_4#Antenna).)
 * Iterate!

From ca16d00863f30b6a50111dd805c0348e85688a8d Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 9 Jun 2016 03:29:52 -0400
Subject: [PATCH 20/53] cs349: add 1-1

---
 cs349/1-1.md | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)
 create mode 100644 cs349/1-1.md

diff --git a/cs349/1-1.md b/cs349/1-1.md
new file mode 100644
index 0000000..1b147ed
--- /dev/null
+++ b/cs349/1-1.md
@@ -0,0 +1,90 @@
+# Course Introduction
+
+CS 349 - User interfaces, LEC 001
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/1.1-introduction.pdf)
+
+## What is a User Interface?
+Some definitions that might work:
+* how humans see the computer
+* where humans and computers meet
+
+A real definition:
+* A *user interface* is where a person can express *intention* to the device, and the device can present *feedback*.
+
+UIs don't just refer to how you interact with computers: microwaves, refrigerators, door bells, hammers, jets...
+
+## A Brief History of Computer UIs
+### Pre 1970s
+* Computers had *batch* interfaces.
+* They were rudimentary and mostly non-interactive.
+* The hot new computer of the day: the ENIAC.
+* Giving a computer instructions involved [punching holes in cards](https://en.wikipedia.org/wiki/Punched_card)...
+
+![The Punch Bowl](https://upload.wikimedia.org/wikipedia/commons/5/58/FortranCardPROJ039.agr.jpg)
+
+### 1970s - early 1980s
+* We got *conversational* interfaces, i.e. command lines.
+* We mainly saw them in two different places: *microcomputers* (the first personal computers), and *terminals* ("dumb" video display clients connected to mainframes).
+
+![Apple II](https://upload.wikimedia.org/wikipedia/commons/8/82/Apple_II_tranparent_800.png)
+
+![IBM 3278](http://www.corestore.org/3278-3.jpg)
+
+* Some kid named Bill Gates bought a [Quick and Dirty Operating System](https://en.wikipedia.org/wiki/DOS) to use in IBM PCs (and IBM clones), for just $50,000...
+
+![IBM PC](https://upload.wikimedia.org/wikipedia/commons/f/f1/Ibm_pc_5150.jpg)
+
+### late 1980s - now?
+* Xerox's Palo Alto Research Center (PARC) developed amazing technologies around this time, things like Ethernet networking and object-oriented programming.
+* The one that got the most attention was the bitmapped display. It let you show completely *graphical* user interfaces in your software, controlled with a device called a *mouse*.
+* Xerox was too focused on their photocopier products, and never really capitalized on their innovations. They made some very expensive workstations based on the GUI and Ethernet, the Alto (1973) and the Star (1981), which never really sold well.
+* In exchange for a small stake in Apple, Xerox let Steve Jobs [visit](https://www.youtube.com/watch?v=2u70CgBr-OI) PARC. ["Xerox grabbed defeat from the greatest victory in the computer industry."](https://www.youtube.com/watch?v=_1rXqD6M614)
+* Apple ended up using the technology in the Lisa (which was a huge failure), and then the [Macintosh](http://www.folklore.org/ProjectView.py?project=Macintosh) (which started off a hit, but ended up also a huge failure).
+
+![It sure is great to get out of that bag.](http://radio-weblogs.com/0102482/images/2005/06/06/hello-mac.jpg)
+
+* Some [other company](https://www.youtube.com/watch?v=sforhbLiwLA) that [has no taste](https://www.youtube.com/watch?v=EJWWtV1w5fw) took the ideas for the GUI and [ran with it](https://en.wikipedia.org/wiki/Windows_1.0). And [bad things](https://en.wikipedia.org/wiki/Apple_Computer,_Inc._v._Microsoft_Corp.) happened.
+
+![They just have no taste.](http://zdnet3.cbsistatic.com/hub/i/r/2015/07/23/db9b07b8-1bd3-4451-9365-7bd336f4d7dd/resize/1170x878/6a5511eafc6e9a454add33945466f8ed/cmwindows1-0jul15a.jpg)
+
+### 1990s -
+* 1989: Tim Berners-Lee came up with a hypertext format to be sent over a network connection. This has made a lot of people very angry and been widely regarded as a bad move. Tim decided to call his neat invention the WorldWideWeb.
+* 1993: Marc Andreessen made a web browser called Mosaic. It added graphics to web pages. This has made a lot of people very angry and been widely regarded as a bad move. Mosaic eventually became Netscape.
+* People started putting `.com` at the end of their company name. It was a weird time.
+
+### now? - future?
+* Touchscreens
+
+![No more Eat Up Martha.](https://i.ytimg.com/vi/e7EfxMOElBE/maxresdefault.jpg)
+
+* Voice
+
+![Echo echo](http://gazettereview.com/wp-content/uploads/2015/12/Amazon-Echo-1.jpg)
+
+### How Has Computing Changed?
+* The introduction of the GUI *fundamentally* changed how we use technology.
+* Computers went from being a specialist tool to being used by everyone, without having to be an expert or build their own software.
+
+## Interactive System Architecture
+* The user has a *mental model* - how they think the device works.
+* The device has a *system model* - how it actually works.
+* Interaction: the user expresses an intention to the device, and the device presents feedback about that intention.
+* An *event*, to the user, is an observable occurrence or phenomenon. To the system, it's a message saying that something happened.
+
+## Interface vs. Interaction
+* *Interface* is how the device presents itself to the user. This includes controls, and visual, physical, and auditory cues.
+* *Interaction* is the user's actions to perform a task, and how the device responds.
+
+## Designing Interactions
+* Designing good interaction is hard because users, and the things they want to do, are all different.
+* Can you anticipate all scenarios?
+* There's no single right way to build an interface - it can always be improved.
+
+## Why Study Interaction?
+* The right computer is a [bicycle for the mind](https://www.youtube.com/watch?v=ob_GX50Za6c&t=25s).
+* A well designed tool with a good interface can radically improve our productivity and let us do things that we couldn't dream of before.
+* New technology becomes widespread not when it becomes more powerful, but when it becomes easy to use.

From b3d24dbcf3fbda56a6a779e40fa6c710f244458a Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 15 Jun 2016 11:30:46 -0400
Subject: [PATCH 21/53] cs349: add 6-2

---
 cs349/6-2.md | 204 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 204 insertions(+)
 create mode 100644 cs349/6-2.md

diff --git a/cs349/6-2.md b/cs349/6-2.md
new file mode 100644
index 0000000..d5a0718
--- /dev/null
+++ b/cs349/6-2.md
@@ -0,0 +1,204 @@
+# Responsiveness
+
+CS 349 - User interfaces, LEC 001
+
+6-10-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/6.2-responsiveness.pdf)
+
+### Elevator story
+* A bunch of people were sick of the waiting times for the elevator in a building.
+* Adding another elevator was unfeasible because it was too expensive.
+* They installed mirrors inside and near the elevators - and the complaints drastically decreased.
+* Basically, it had to do with people's *perception* of the delay, not the delay itself.
+* Now that there were mirrors, people would look at themselves and fix their appearance - it was no longer just pure *waiting* time.
+
+## Responsive Applications
+You can make your software feel faster by doing these things:
+* making the UI fit human *deadline* requirements, and
+* actually making it faster.
+
+## Responsiveness
+* If we know how long an operation takes to be perceived, we can design our UI around that.
+* What affects this?
+
+* *User expectations*: Your expectations change how you think about the software.
+* Example: You expect the ATM to be slow, to process your card, update your account, etc.
+* Another example: We already expect desktop apps to be fast, but no one's surprised when it takes web apps longer to load.
+* The system itself also matters. If something is slow, you still want to give an indication that something is being done.
+* Responsiveness is the single most important factor in satisfying users with your UI.
+
+## How do we make a system responsive?
+* If something is slow, we still want to *provide feedback*.
+* Confirmation feedback: Let the user know that their input is being processed.
+* Progress feedback: Show what's currently being done, and what the progress is.
+* The operation should be *asynchronous*: The user should not be blocked if the operation takes an hour.
+
+A system has poor responsiveness if:
+* There's *delayed* feedback. People can't stand that!
+* It ignores user input: You never, ever, want to do that.
+* The wait-cursor effect: You don't want to completely ignore user input when something is loading/waiting.
+* Jerky animation is bad - animation should be easy to follow.
+
+## Human deadlines
+// TODO: add table
+
+* Interesting: Humans can detect sound changes the fastest. We can detect a gap of silence that's 0.001s long!
+* *Saccade*: When your eye moves, you actually can't see well for a short time.
+
+### 0.14 seconds
+* 0.14 seconds is the most important time. It's the basic perception threshold for our eyes.
+* After 0.14s without displaying feedback to a user action, the interface no longer feels responsive.
+* If an operation takes longer than that, you need a progress indicator.
+
+// TODO: add others from slides
+
+## Responsive UI tricks
+* Busy indicators, e.g. the hourglass on Windows or the beach ball on Macs, are *bad*. They block the entire system!
+* You would rather provide a *progress* indicator: let the user see an estimate of how much time (and probably how much work) is left to perform the task.
+
+### Progress Indicator best practices:
+* Show work remaining.
+* Show *total progress*: Don't say that you're done 90% of step 3, say that you're done 2% of all the updates.
+* Start at 1%, not 0%. People will think you've done nothing if it stays at 0% for extended periods of time.
+* Don't show 100% for a long time - if it's done, it should be *done*.
+* Show smooth, linear progress.
+* Use human-scale precision - use human times, like 4 minutes instead of 240 seconds.
+  * 4 minutes is easier conceptually.
+  * Also: for most operations with progress dialogs, the application isn't actually very good at estimating how long it's going to take; 4 minutes reads as an appropriately rough estimate.
+  * It's hard to estimate the time to copy a file!
+
+![](https://imgs.xkcd.com/comics/estimation.png)
+
+## More tricks
+* Render or display more important information first.
+* For example, PDFs can load and display pages sequentially, rather than all in one burst.
+* Load images *progressively*: Show a lower resolution image first, and then re-render at higher resolution. This is [pretty common](https://jmperezperez.com/medium-image-progressive-loading-placeholder/) on the web.
+
+* Fake heavyweight computations during hand-eye coordination tasks.
+* Example: Scaling something in Photoshop: when the user is dragging the corner, Photoshop only scales out the border, and snaps to the new size when the user is done.
+
+* Working ahead: Use downtime to do things that you know you'll do later.
+* Example: A browser that prefetches pages that are linked from the current page.
+* Weirder example: ML optimizations that guess what you're probably going to do, and then precompute it.
+
+## Responsiveness in a Java App
+### Handling long-running tasks
+* People expect feedback from user interfaces in 10-100 ms, but this isn't always easy to do. How do you manage this?
+* Our goal is to maintain the interactions that we have while the application is still doing some intensive work.
+* While the task is running, users should still be able to work with other elements of the UI.
+* They should also be able to pause or cancel the task.
+
+### Slow things
+Usually, tasks like these are slow:
+
+* Grabbing things from the internet
+* Searching a large data structure
+* Image/video processing
+* Factoring big numbers
+* etc.
+
+Completely uninteractive ("hanging") applications are really bad!
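+
+As a concrete sketch of where the following sections end up - keep the slow work off the UI thread and surface progress as you go - here's a minimal `SwingWorker` example. (This is illustrative, not the course's demo code; the class name and labels are made up.)
+
+```java
+import javax.swing.*;
+import java.util.List;
+
+public class ProgressSketch {
+    public static void main(String[] args) {
+        SwingUtilities.invokeLater(() -> {
+            JFrame frame = new JFrame("Slow task");
+            JProgressBar bar = new JProgressBar(0, 100);
+            frame.add(bar);
+            frame.pack();
+            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
+            frame.setVisible(true);
+
+            new SwingWorker<Void, Integer>() {
+                @Override
+                protected Void doInBackground() throws Exception {
+                    for (int pct = 1; pct <= 100; pct++) { // start at 1%, not 0%
+                        Thread.sleep(50);                  // stand-in for real work
+                        publish(pct);                      // hand progress to the UI thread
+                    }
+                    return null;
+                }
+
+                @Override
+                protected void process(List<Integer> chunks) {
+                    // Runs on the event dispatch thread, so touching Swing here is safe.
+                    bar.setValue(chunks.get(chunks.size() - 1));
+                }
+            }.execute();
+        });
+    }
+}
+```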
+
+### TL;DR for the rest of this section
+* When you have to do intensive work in a Swing app, you basically have two real choices on where to do it: on the Swing event dispatch thread, or on a separate background thread.
+* Putting it on the event thread means that you don't have to deal with synchronization problems, but it could potentially block the Swing event loop unpredictably (making the app choppy), and interfere with the UI.
+* Putting it on a separate thread means that you don't block the Swing event loop, but comes with all the standard drawbacks of using multiple threads (risk of race conditions, deadlocks, etc.)
+
+### Demos
+* Consider a program that calculates all the prime numbers within some interval.
+* The application is implemented using a basic MVC pattern.
+* The UI looks like this:
+
+![](https://i.imgur.com/fNWcJgn.png)
+
+* We'll change the model and view iteratively to progressively improve the program's responsiveness.
+
+#### `Demo1`
+* When the user clicks the button, we calculate all the primes at once.
+* The calculation loop takes multiple seconds to run, and *doesn't update any view* until it's finished.
+* It's completely unoptimized.
+* Clearly, that's not great.
+
+##### What's wrong?
+* The Swing event loop is basically a single *thread*. It runs your code, performs event dispatch, runs your code again, and so on...
+* You can think of it like this: There's a *queue* where each item represents a task to be processed, and a loop that keeps running tasks from the queue. It can only do one thing at a time.
+* It *doesn't* preempt anything you're executing.
+* This means that when you're doing a lot of processing in a single task, it *blocks* the thread and doesn't let anything else happen.
+* The golden rule of event loops: **Don't block the event loop.**
+
+#### Threads
+* We could move calculation into multiple threads. In Java, multiple threads can run concurrently (unlike in CPython or MRI).
+* This seems great because it lets multiple things happen at the same time - you can use more than one core.
+* But it could also get messy, because all the threads share the same addressing space, which means that you might have to deal with things like deadlocks and race conditions.
+* Concurrency is hard!
+
+In Java, there are three "types" of threads:
+* Initial/main thread
+* Event dispatch thread (or UI thread, Swing thread)
+* Worker/background threads
+
+For our prime calculation program, we basically have two strategies:
+* Strategy A: Split up our work into multiple tasks, so that it doesn't have to be run all at once in the event loop.
+* Strategy B: Have worker threads that do the work, and communicate with the UI thread.
+
+#### `Demo2`
+* `Demo2` implements strategy A.
+* We split up calculation into multiple tasks (`Runnable` objects) that get added to the Swing event queue.
+* We periodically add tasks to the event queue, until all the processing's done.
+* Each task is responsible for adding the next task to the event queue, unless we're done or the user cancelled.
+
+* This demo is jittery.
+* The problem is that we're technically still breaking the golden rule. Each task still blocks the event loop while it runs.
+* We can manually try to tune the size of each task, but it's still pretty clunky.
+* Also, it's hard to break up certain tasks, like blocking I/O.
+* (Some event loop implementations, e.g. Node.js or EventMachine, have evented/non-blocking equivalents of all the I/O operations, which lets the event loop handle other tasks while waiting for I/O. Swing doesn't do this, unfortunately.)
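+
+A self-contained sketch of what strategy A looks like in code - each chunk of work re-posts the next chunk with `invokeLater`, so queued input events get a turn in between. (This is a reconstruction for illustration, not the actual `Demo2` source; the class name, chunk size, and label are made up.)
+
+```java
+import javax.swing.*;
+
+public class ChunkedPrimes {
+    static final int LIMIT = 2_000_000, CHUNK = 10_000;
+
+    static boolean isPrime(int n) {
+        if (n < 2) return false;
+        for (int d = 2; (long) d * d <= n; d++) {
+            if (n % d == 0) return false;
+        }
+        return true;
+    }
+
+    // Processes one chunk on the event dispatch thread, then schedules the next one.
+    static void computeChunk(int start, JLabel status) {
+        int end = Math.min(start + CHUNK, LIMIT);
+        for (int n = start; n < end; n++) {
+            isPrime(n); // stand-in for updating the model with found primes
+        }
+        status.setText("checked up to " + end);
+        if (end < LIMIT) {
+            SwingUtilities.invokeLater(() -> computeChunk(end, status));
+        }
+    }
+
+    public static void main(String[] args) {
+        SwingUtilities.invokeLater(() -> {
+            JFrame frame = new JFrame("Demo2-style chunking");
+            JLabel status = new JLabel("starting...");
+            frame.add(status);
+            frame.pack();
+            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
+            frame.setVisible(true);
+            SwingUtilities.invokeLater(() -> computeChunk(2, status));
+        });
+    }
+}
+```
+
+Even here, each chunk still blocks the event loop while it runs - which is exactly the jitter complaint above.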
+
+#### `Demo3`
+* Strategy B - Delegate actual work to background worker threads.
+* We still have a Swing thread that does all the UI updates, but we added a worker thread that does the actual calculations.
+* Using more than one thread lets us scale to more than one core.
+* As expected, `Demo3` is much more responsive than any of the previous methods.
+
+#### `Demo4`
+* Why do we use `SwingUtilities.invokeLater()` to update the UI, instead of updating it directly?
+* `Demo4` has two worker threads: one that uses `invokeLater`, and one that updates the UI directly.
+* The problem is that Swing isn't thread-safe, which means that calling the Swing methods directly from a worker thread can lead to lots of synchronization issues.
+* [Race conditions are bad.](https://en.wikipedia.org/wiki/Therac-25)
+
+#### More stuff
+* `SwingWorker` was added to Java SE 6, and is now the standard way to create worker threads.
+* For Android, there's `AsyncTask`.
+
+* Java 8 added lambda expressions. A lambda can be used as a `Runnable`, which means that we can more concisely run asynchronous tasks, like this:
+
+```java
+SwingUtilities.invokeLater(() -> {
+    // ...
+});
+```
+
+## Responsiveness in a Web App
+### Loading data efficiently
+* The real problem with webpages is that we need to manage web latency.
+
+### A Brief History of Web Architectures
+#### Gen 1
+* [Thin clients.](https://en.wikipedia.org/wiki/Thin_client)
+* Your browser makes a request to the server, which returns a complete HTML page - everything is computed and rendered serverside.
+* Whenever we make an interaction, the webserver sends back a completely new page for the browser to load.
+
+#### Gen 2
+* [Fat clients.](https://en.wikipedia.org/wiki/Fat_client)
+* The server is only responsible for delivering the data. It exposes some API over HTTP, probably in some JSON or XML based format.
+* The browser gets this data [asynchronously](https://en.wikipedia.org/wiki/Ajax_(programming)), and renders it dynamically on the client side.
+* This is called a *single-page app*, where clients are *dynamic*, and can do a lot on the frontend using JavaScript.
+
+#### Gen 3
+* Previously, our backend webserver was a *monolith* - everything was in one process.
+* This is hard to scale, since different features do different things, and can usually be scaled up differently.
+* We can break up our logic into multiple *services*.
+* The webserver that talks to the frontend can transparently grab the data from all the services it needs to.

From 92802c6b03f73cc10b1cf6c26dde697782f808fd Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 23 Jun 2016 07:57:08 -0400
Subject: [PATCH 22/53] cs349: add 7.1 notes

---
 cs349/7-1.md | 129 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 129 insertions(+)
 create mode 100644 cs349/7-1.md

diff --git a/cs349/7-1.md b/cs349/7-1.md
new file mode 100644
index 0000000..19b6214
--- /dev/null
+++ b/cs349/7-1.md
@@ -0,0 +1,129 @@
+# Undo
+
+CS 349 - User interfaces, LEC 001
+
+6-15-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/7.1-undo.pdf)
+
+### Checkpointing
+* Conceptually, undo is similar to something called **checkpointing**.
+* Checkpointing is a way of manually undoing something - like saving and reloading in a video game.
+* Undo/redo is basically an automated way of checkpointing.
+* Version control systems like Git are a way of checkpointing.
+
+### Benefits
+* There's quite a bit of research done in this area, about how people use GUIs.
+* People rely on undo-redo not just to revert mistakes, but also to experiment with the UI, *without fear of commitment*.
+* Fast undo-redo lets you compare two different states very easily.
+* This is actually one of the features that you basically have to include for a usable GUI.
+
+### Design Decisions
+* You have to make a bunch of design decisions:
+
+* *What's undoable?* Not everything should be. For example, if I resize a window, it isn't normally undoable. Usually it's data changes that are undoable.
+* *Destructive* operations are difficult to undo, e.g. quitting an application with unsaved data, emptying the trash.
+  * When users initiate a destructive action, the UI usually does something to make sure the user is sure they want to do it, like showing an "are you sure?" dialog box.
+* Some things can't be undone, e.g. printing.
+
+#### Suggestions
+* Every change to the model should be undoable.
+* Ask for confirmation before destructive actions.
+
+### Representing an Undo
+* How does the UI reflect the undo/redo?
+* e.g. should you highlight the thing that the undo restored?
+
+#### Suggestions
+* It should be highlighted, since before the delete, the text/icon/etc. was selected.
+* It's probably meaningful for the scroll to move to the restored text. e.g. Sublime Text does this.
+
+### Granularity
+* What should be considered as an undoable step?
+* It's pretty obvious in most cases, but there are some ambiguities.
+* For example, should typing a character be considered a discrete action, or should the smallest unit be an entire word?
+* Different text editing software handles this differently.
+* Another example: a paint-like app, where you use the mouse to draw strokes.
+* In this case, a mousedown-mousedrag-mouseup sequence (i.e. drawing an entire line segment) should be an entire action, instead of each mouse drag.
+
+#### Suggestions
+* Ignore intermediate events (e.g. individual mouse drags).
+* An interface event should be one step - e.g. find and replace all. In other words, if it takes one click to make that change, it should be considered a single change.
+* Delimit on user input breaks - like spaces.
+
+### Scope
+* At what level does undo operate - at the system, application, document, or widget level?
+* Obviously, when you do something in Photoshop, and then do something in Word, when you go back to Photoshop and undo something, it shouldn't affect Word.
+* It gets trickier when you involve software that can have multiple documents open at the same time.
+* Usually in this case, you maintain a separate undo history per document.
+* In general: It should be at the application level, unless you can open more than one document in the app. Then it should be document level.
+
+## Implementation
+### Undo Strategies
+#### Option 1: Forward Undo
+* Save the state periodically, and log all the changes (or *diffs*) to the current state since that save.
+* To undo, get the new state by replaying the last n-1 change records from the last saved state.
+
+#### Option 2: Reverse Undo
+* Keep the full current state, and store a reverse change record for each action.
+* To undo, apply the inverse of the most recent change to the current state.
+
+### Change Record Implementation
+#### Option 1: Memento Pattern
+* Save *every* different state in the history.
+* When you undo, you can just go back to the last state.
+* Clearly, this is pretty inefficient in terms of space use.
+
+#### Option 2: Command Pattern
+* Every time you make a change, add a command object that records the change (and how to reverse it).
+* Generally, this is partially implemented with checkpointing, so you don't have to replay everything from the beginning.
+* [Square root decomposition](http://www.infoarena.ro/blog/square-root-trick)
+
+##### Two Stacks
+* Keep track of two stacks: an *undo stack* and a *redo stack*.
+* Every time you do something, you put it on the undo stack.
+* Every time you undo something, you pop from the undo stack, and push it to the redo stack.
+* If you undo something and then do something else, clear the redo stack.
+* Some software might do more complicated stuff like keeping a tree, but this is how it's basically implemented.
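+
+A minimal sketch of the two-stack scheme with the command pattern, as described above (the `Command` interface and class names here are illustrative - Swing's real API follows below):
+
+```java
+import java.util.ArrayDeque;
+import java.util.Deque;
+
+interface Command {
+    void execute();    // apply the change
+    void unexecute();  // reverse the change
+}
+
+class SimpleUndoManager {
+    private final Deque<Command> undoStack = new ArrayDeque<>();
+    private final Deque<Command> redoStack = new ArrayDeque<>();
+
+    void perform(Command c) {
+        c.execute();
+        undoStack.push(c);
+        redoStack.clear(); // doing something new invalidates the redo history
+    }
+
+    void undo() {
+        if (undoStack.isEmpty()) return;
+        Command c = undoStack.pop();
+        c.unexecute();
+        redoStack.push(c);
+    }
+
+    void redo() {
+        if (redoStack.isEmpty()) return;
+        Command c = redoStack.pop();
+        c.execute();
+        undoStack.push(c);
+    }
+}
+```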
+
+### Undo in Swing
+* Swing has an undo manager, provided in `javax.swing.undo.*`.
+* It exposes an `UndoManager` class, which you usually put in your model.
+* When you implement the method that performs some change, you add an `AbstractUndoableEdit` object to the `UndoManager`.
+* The `UndoManager` has `undo` and `redo` methods to do the actual operations, which you typically bind to your accelerator keys.
+
+### "Destructive" commands
+Consider a situation like this:
+
+![](https://i.imgur.com/EIuz2dl.png)
+
+You have a few options for resolving this:
+* Use forward command undo.
+* Use Memento. When it's difficult to reconstruct the state from scratch, you're better off taking a snapshot.
+
+## Transferring Data
+* We consider drag-and-drop as the same thing as cut-paste, since they're both basically removing data from somewhere, and putting it somewhere else.
+* Why do we care about this? It's the primary mechanism to transfer information.
+
+* A clipboard that any application can access is a huge security risk. People copy some pretty sensitive things.
+* We don't address this in this course, but it's something to consider.
+
+* Copy-and-paste is essentially IPC - you need a common format for it to work between different programs.
+* It's straightforward for text, but how do you do it for things like graphics, where some formats might be proprietary, or not universally supported?
+
+### Implementation
+* In OS X, the application indicates what formats it supports.
+* When you copy something, the data is usually handled *lazily*, i.e. it isn't immediately copied onto the clipboard. Usually it only adds a placeholder or a reference.
+* This is better because sometimes copied data might never be pasted.
+
+* Every OS/windowing system/toolkit does it differently.
+* In Java, look at `java.awt.datatransfer`.
+* Java has two kinds of clipboard: a system clipboard, and also a local clipboard, which is a clipboard that only your application can access.
+* Don't use the local clipboard - the main point of copy-and-paste is to be able to transfer data between different programs.
+
+### Drag-and-Drop
+* As we said, drag-and-drop is identical to copy-and-paste.
+* The only difference is that you need listeners on the source and destination widgets to move the data.

From 84f11a1c8267a48717239114643b805fb864b90a Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 23 Jun 2016 07:57:15 -0400
Subject: [PATCH 23/53] cs349: add 7.3 notes

---
 cs349/7-3.md | 92 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 92 insertions(+)
 create mode 100644 cs349/7-3.md

diff --git a/cs349/7-3.md b/cs349/7-3.md
new file mode 100644
index 0000000..32f437b
--- /dev/null
+++ b/cs349/7-3.md
@@ -0,0 +1,92 @@
+# History
+
+CS 349 - User interfaces, LEC 001
+
+6-15-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/7.3-history.pdf)
+
+(The numbering is not a mistake. Module 7.2, Visual Perception, seems to have been removed.)
+
+## A Brief History of Interaction
+
+* Recall the [first lecture](1-1.md) of the course.
+* Early "computers" were literally a bunch of people calculating things like +* There were some early mechanical calculators, like the [Analytical Engine](https://en.wikipedia.org/wiki/Analytical_Engine). +* A company called [International Business Machines](https://en.wikipedia.org/wiki/IBM) made the ASCC, which weighted about 11 tons - and were controlled with hundreds of dials. + +### Batch Interfaces +* - mid 1960s ish +* Feed instructions to computers using *punch cards*. +* No real interaction - the machine provides feedback in a matter of hours or days. +* The cost of getting something wrong was huge - it takes a very long time to iterate. + +### Conversational Interfaces +* 1965 - ~1985 +* User types a command in a prompt, the system evaluates the command, and then provides feedback. +* You basically still had to be an expert to use them. +* e.g. Zork, Bash + +![Zork](https://static1.squarespace.com/static/55182b4ee4b0c6d76a9c1eb3/t/552f3011e4b07f0b392386cf/1429155859834/) + +* Highly flexible +* The interaction is usually well-suited to the machine, but not to the task. +* You had to learn a lot of technical concepts before understanding how to use the system. +* Requires *recall* rather than *recognition* - i.e. the interface isn't intuitive enough for a beginner to be able to figure it out. You literally had to know the command syntax to use it. + +### Visionaries +#### Vannevar Bush +* In 1945, Vannevar Bush authored [As We May Think](http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/). +* In it, he suggested the idea of a device called a [memex](https://en.wikipedia.org/wiki/Memex) - a tool to organize information with *links* between annotated pieces of content. +* [Sound familiar?](https://en.wikipedia.org/wiki/Hyperlink) +* It was a futuristic vision - the technology was definitely nowhere near. + +> Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. + +#### Ivan Sutherland +* Ivan Sutherland came up with Sketchpad, a device controlled with a light pen that let users directly manipulate shapes with a proto-graphical user interface. +* Under the hood, the graphics were manipulated similarly to a constraint solver. +* He was interested in building tools not for experts, but for people like artists and draftsmen. (i.e. task-driven) +* Sketchpad's software was the first to use the concept of a *window*. + +> A display connected to a digital computer gives us a chance to gain familiarity with concepts not realizable in the physical world. It is a looking glass into a mathematical wonderland. + +#### Douglas Engelbart +* Career spanned 50s - 90s ish +* Led a team of researchers at the Stanford Research Institute (SRI) +* His researchers developed the beginnings of some extremely advanced technologies: mouse, hypertext, collaborative software, etc. +* [Mother of All Demos](https://www.youtube.com/watch?v=yJDv-zdhzMY) - hour and a half long demo in 1968 demonstrating those technologies. + +> An advantage of being online is that it keeps track of who you are and what you’re doing all the time. 
+
+![Relevant xkcd](https://imgs.xkcd.com/comics/douglas_engelbart_1925_2013.png)
+
+#### Alan Kay
+* Xerox PARC - worked on the Xerox Star and the Xerox Alto, the earliest personal computers with a GUI and Ethernet
+* Dynabook - conceptual prototype for laptops/tablets
+* Helped develop object-oriented programming (Smalltalk), Ethernet, the graphical user interface ...
+
+> The best way to predict the future is to invent it.
+
+* The Star cost $75k for a basic system, $16k for each additional workstation
+* This is why you haven't heard of it.
+
+(Offhand mention that Alan Kay did an [AMA on Hacker News](https://news.ycombinator.com/item?id=11939851) very recently.)
+
+#### Apple
+* Steve Jobs "[steals](https://www.youtube.com/watch?v=_1rXqD6M614)" Xerox PARC research
+* GUI technology gets used in the Macintosh and the Lisa
+* And the rest is history!
+
+* Better feedback
+* Metaphors
+
+### The future?
+* Touchscreen/[pens](https://www.engadget.com/2010/04/08/jobs-if-you-see-a-stylus-or-a-task-manager-they-blew-it/)
+* Natural language processing
+* Virtual/augmented reality
+* Brain (machine|computer) interface
+
+* [Microsoft: Productivity Future Vision](https://www.youtube.com/watch?v=w-tFdreZB94)

From 84f11a1c8267a48717239114643b805fb864b90a Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 23 Jun 2016 08:00:33 -0400
Subject: [PATCH 24/53] cs349: add 8.1 notes

---
 cs349/8-1.md | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)
 create mode 100644 cs349/8-1.md

diff --git a/cs349/8-1.md b/cs349/8-1.md
new file mode 100644
index 0000000..58b8685
--- /dev/null
+++ b/cs349/8-1.md
@@ -0,0 +1,49 @@
+# Android
+
+CS 349 - User interfaces, LEC 001
+
+6-20-2016
+
+Elvin Yung
+
+* Developing for Android is pretty similar to developing Swing apps for desktop - there are just some architectural differences to understand.
+* Java doesn't tend to have great documentation - Sun wrote some documentation in the 90s and hasn't really touched it since then.
+* Android's the opposite. There's generally pretty great docs.
+
+* Android apps run on the Dalvik virtual machine.
+* Every process runs in its own VM and address space.
+
+## Design Goals
+* Multiple entry points for an app
+* Different "activities" that you need to explicitly pass data between
+* Applications need to be dynamic - need to handle many different types of devices, in different screen sizes and orientations
+
+* Dealing with being a mobile device - limited memory, CPU, battery, etc.
+* The system aggressively constrains processing - e.g. background threads are hard
+* Small screen, multiple orientations, multi-touch
+
+## Activities
+* An **activity** is a screen that basically runs independently and has its own lifecycle, almost like a separate mini-app.
+* Interesting lifecycle model - activities can have the states of *started*, *paused*, and *stopped*.
+* Android pauses an application that's running in the background.
+* If the system starts running out of memory, Android reserves the right to kill it.
+* You're responsible for managing state, and implementing the `onStop`, `onCreate`, `onPause`, etc. callbacks to maintain data integrity.
+
+* Data is passed between activities using an **intent** (see the sketch below).
+* A **fragment** is basically a portion of a UI that has its own state. Activities can contain multiple fragments.
+* Since switching activities has an overhead, fragments were introduced as an alternative.
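+
+A sketch of passing data between two activities with an intent (the activity names and the extra key are hypothetical; the `Intent` calls are the standard Android API):
+
+```java
+// NoteListActivity.java - the "source" activity
+import android.app.Activity;
+import android.content.Intent;
+
+public class NoteListActivity extends Activity {
+    void openNote(long noteId) {
+        Intent intent = new Intent(this, NoteDetailActivity.class);
+        intent.putExtra("noteId", noteId); // data must be passed explicitly
+        startActivity(intent);
+    }
+}
+
+// NoteDetailActivity.java - the "destination" activity
+import android.app.Activity;
+import android.os.Bundle;
+
+public class NoteDetailActivity extends Activity {
+    @Override
+    protected void onCreate(Bundle savedInstanceState) {
+        super.onCreate(savedInstanceState);
+        long noteId = getIntent().getLongExtra("noteId", -1);
+        // ... look up the note and build the UI for it ...
+    }
+}
+```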
+
+## Building UIs
+* `android.view.ViewGroup` - like a `JPanel` with a Layout associated.
+* `android.view.View` - base class for widgets, like `Button`, `ImageView`, etc.
+
+## Managing layout
+* You can write code to do this, but you're better off using XML to describe your layout, and telling the app to dynamically load it.
+* The good thing about this is that you can define views for separate orientations, and Android is smart enough to switch between them automatically.
+
+## Tools
+* In this course, we're standardizing on Android Studio.
+* Keep your SDK and tools updated.
+* An AVD manager is provided to emulate different Android virtual devices. In this course we're standardizing on Nexus 7, on Marshmallow with API 23, ABI x86.
+* (check the slides for the rest of this)
+* Basically tl;dr follow the rules, don't try to use your own configs

From 7863a0481d38c6ac05f645ac270acec7a4fb61e8 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 23 Jun 2016 08:01:09 -0400
Subject: [PATCH 25/53] cs349: Add 8.2 notes

---
 cs349/8-2.md | 115 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)
 create mode 100644 cs349/8-2.md

diff --git a/cs349/8-2.md b/cs349/8-2.md
new file mode 100644
index 0000000..057c2fa
--- /dev/null
+++ b/cs349/8-2.md
@@ -0,0 +1,115 @@
+# Touch Interfaces
+
+CS 349 - User interfaces, LEC 001
+
+6-20-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/8.2-touch_interfaces.pdf)
+
+## Prelude: Interaction Models
+* We can describe interaction in terms of **look** and **feel**.
+* **Look** is how things are presented in the interface - like the layout.
+* **Feel** is how responsive the interface is, how users express intent and get feedback, etc.
+
+* We've talked about the execution-evaluation model, but that's at a different level of abstraction.
+* We're more concerned with the actual interaction with the application.
+
+
+## Instrumental Interaction
+* We're going to think of UIs as things that we physically interact with.
+* A **domain object** is a thing that you're trying to manipulate.
+* The **interaction instruments** are the things that you use to manipulate the domain object.
+* We can use these concepts to decompose a UI.
+* You can think of an interface as a bunch of tools that you can use to manipulate an object.
+
+TODO: insert diagram from slide 7
+
+### Activation
+* The key thing when you're talking about instruments is **activation**, i.e. how do you trigger it?
+* For example, with a scrollbar, how do you activate it? You click on it and drag to cause it to move, or click it in the gutter to get it to move in different directions.
+* Another example: Making a rectangle in Powerpoint is basically a multi-step process: You have to find the widget to open the shape menu, click it, find and click the rectangle, and click-drag it on the canvas.
+
+* You can think of instruments as being activated **spatially** or **temporally**.
+* **Spatial** activation means that you directly cause it, usually by directly manipulating some part of the interface. e.g. the scrollbar
+* **Temporal** activation means that there's a sequence of actions to activate it. e.g. Everything inside the shape menu, like making a rectangle.
+* Most things are spatially activated to an extent, but have different *costs*, i.e. the amount of work it takes to perform that action.
+* Usually, something that involves an object being selected is at least somewhat temporal, since it has the selection as a precondition.
+* Dialogs are, by definition, temporal. You always have to do something to bring up the dialog.
+
+TODO: add slide 10 diagram
+
+#### Costs
+* There are **costs** involved with different activations.
+* For spatial activations, the cost is the space used, and the time it takes to activate it.
+* The cost of activating something temporally is the action time of everything you have to go through to activate it.
+* In general, spatial activation costs less than temporal activation, so (as you might expect) the most frequently used features should be spatially activated.
+
+#### Tradeoffs
+* If you don't want to incur temporal cost, you can always add more spatially-activated components.
+* But that's also bad, because the extra controls clutter up the UI.
+* The best way is to properly make tradeoffs and manage costs appropriately.
+
+### Evaluating Instruments
+#### Indirection
+* The degree of **indirection** is how much you have to do to the instrument to perform the action.
+* For spatial activation, the cost is higher when the instrument is farther away (on screen) from the object it acts on.
+* For temporal activation, the cost is higher the longer the delay between triggering the instrument and seeing its effect.
+
+#### Integration
+* The degree of **integration** is the similarity between the degrees of freedom of your input device and the instrument on the screen.
+* For example, a scrollbar has a DOF of 1, and a mouse has a DOF of 2. The scroll wheel, however, *also* has a DOF of 1, so it is a better fit.
+* Another example: Rotating a shape in 3D involves a DOF of 3, but a mouse still only has a DOF of 2. However, we can combine the mouse *with* a scroll wheel to get a DOF of 3, for a better fit.
+
+#### Compatibility
+* The degree of **compatibility** is how similar the physical action on the instrument is to the response from the interface.
+* For example, dragging has a high degree of compatibility because the feedback directly maps to the motions to move the mouse.
+* Scrolling has medium compatibility.
+* A dialog box generally has low compatibility. When you change a font, you're doing a bunch of things that don't really feel compatible to what you do with your mouse.
+
+## NUI
+* An effort by a bunch of researchers to promote **natural** user interfaces.
+* We're not going to talk about this at all - they're not really novel since they're basically touch interfaces.
+
+## Touch Interfaces
+* Up until now, we've been mostly talking about standard mice/keyboard driven GUIs.
+* With the rise of multitouch-enabled devices like smartphones and tablets, touch-based interfaces are increasingly important.
+
+### History
+* How old is multi-touch?
+* We'll go back on a common theme in technology: It takes a long time to go from a vision to a viable product.
+* Bill Buxton started doing research with multitouch in the mid 1980s, but it was only in 2007 that the iPhone - the first popular consumer multitouch product - was released.
+
+### Technology
+#### Resistive Touchscreens
+* Was literally two closely-pressed conductive layers, when a point is pressed together it registers the point
+* Didn't handle multitouch
+
+#### Capacitive Touchscreens
+* Emitters at the 4 corners of the screen
+* Indirectly measure change in capacitance to figure out where the finger tapped.
+* Handles multitouch!
+* This is the touchscreen technology that's in every device now.
+
+#### Mutual Capacitance
+* Has an array of sensors, and measures the capacitance change at every place.
+* Big screens *might* need this.
+* We don't really talk about this in this class.
+
+#### Direct Touch Technology
+##### Inductive
+* Magnetized stylus induces an electromagnetic field in the sensing layer.
+* Expensive, and fairly rare.
+* We might have active styluses that use bluetooth to talk to the device. + +##### Optical +* Literally have a camera watch the interface +* Flood an entire surface with infrared sensors to detect where people are touching it. +* Not super precise, but like mutual capacitance, cheaper at bigger sizes. + +### Input + +### Interaction + +### Design From d80f79a2577d6debdc594b8f59272003d350064a Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Thu, 23 Jun 2016 08:06:54 -0400 Subject: [PATCH 26/53] cs349: fix 6.1 notes typos --- cs349/6-1.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/cs349/6-1.md b/cs349/6-1.md index a339233..1685b25 100644 --- a/cs349/6-1.md +++ b/cs349/6-1.md @@ -17,7 +17,7 @@ Elvin Yung ![At least it's not phallic.](http://www.piedpiper.com/app/themes/pied-piper/dist/images/interface_large.jpg) ## Objectives -* Your interface should be easy to understand - design with the human's conscious and unconscious capabilities in mind. +* Your interface should be easy to understand - design with the human's conscious and unconscious capabilities in mind. * *Pre-attentive processing* happen at a lower level than conscious thought. We unconsciously process a lot of * Keep things *simple*! * (But not too simple. You want the user to still be able to *do* the things they want to do.) @@ -45,7 +45,7 @@ Elvin Yung * We group things based on visual characteristics, like **shape**, **size**, **color**, **texture**, **orientation**. * e.g. in a group of similar-sized squares and circles, we group by shape. In a group of large and small squares and circles, we group by size. In a group of green and white squares and circles, we group by color. (TODO: add image) * (We see size first - it's more obvious to us.) -* When things look like one another, we tend to think of them as belongingin a common set. +* When things look like one another, we tend to think of them as belonging in a common set. ### Good Continuation * We have a tendency to find flow in things. @@ -54,7 +54,7 @@ Elvin Yung * e.g. we tend to follow a menu, in a straight line. * Arranging things like this can get people to look at more things, even if they were only looking for one thing. -The last three principles dealt with how we group object. The next few will deal with how we fill in missing or ambiguous information. +The last three principles dealt with how we group objects. The next few will deal with how we fill in missing or ambiguous information. ### Closure * We like to see a complete figure even if some parts are missing. @@ -85,7 +85,7 @@ The last three principles dealt with how we group object. The next few will deal ### Alignment (?) * Is alignment a Gestalt principle? * Basically, we see things similarly when we group things in line. -* It's a powerful organizaing tool. +* It's a powerful organizing tool. * It's kind of like continuation - continuation tends to imply alignment. ## Pleasing Layouts From 4118f4e48a61b9f09fe89ec86629dcff40918359 Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Thu, 23 Jun 2016 08:12:20 -0400 Subject: [PATCH 27/53] cs349: Add a table of contents --- cs349/README.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/cs349/README.md b/cs349/README.md index 8feb75b..ad4ee67 100644 --- a/cs349/README.md +++ b/cs349/README.md @@ -1,3 +1,17 @@ # CS 349 - User Interfaces The [slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/schedule.shtml) are pretty good, so these notes mostly serve as supplementary summaries. 
+ +## Table of Contents +* [1.1 - Introduction](1-1.md) - brief history of computing interfaces, *why* this is important. + +// TODO: fill in gap + +* [5.1 - Design Principles](5-1.md) - design principles from everyday things, usefulness vs. usability, mental models, metaphors +* [5.2 - Design Process](5-2.md) - User Centered Design, understanding the user, prototyping protips +* [6.1 - Visual Design](6-1.md) - UI design principles, Gestalt Principles +* [6.2 - Responsiveness](6-2.md) - feedback, dealing with latency in general, Swing, and Web. not the [other](https://en.wikipedia.org/wiki/Responsive_web_design) kind of responsiveness. +* [7.1 - Undo](7-1.md) - design decisions involved, various implementation techniques +* [7.3 - History](7-3.md) - a brief history of interaction, visionaries, speculations on the future +* [8.1 - Android](8-1.md) - intro to Android, architecture, activities, layouting with XML +* [8.2 - Touch Interfaces](8-2.md) - look and feel, interaction instruments, temporal and spatial activation, degrees of indirection, integration, and compatibility From 37a84089c29e7f0a8e56628ca3da38f4939c6e1f Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Thu, 23 Jun 2016 08:19:57 -0400 Subject: [PATCH 28/53] cs349: Reorganize 7.3 notes GUI section --- cs349/7-3.md | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/cs349/7-3.md b/cs349/7-3.md index 32f437b..95a0331 100644 --- a/cs349/7-3.md +++ b/cs349/7-3.md @@ -70,18 +70,23 @@ Elvin Yung > The best way to predict the future is to invent it. -* The Star cost $75k for a basic system, $16k for each additional workstation -* This is why you haven't heard of it. - (Offhand mention that Alan Kay did an [AMA on Hacker News](https://news.ycombinator.com/item?id=11939851) very recently.) -#### Apple -* Steve Jobs "[steals](https://www.youtube.com/watch?v=_1rXqD6M614)" Xerox PARC research +##### Apple +* The Xerox Star cost $75k for a basic system, $16k for each additional workstation +* This is why you haven't heard of it. + +* Steve Jobs "[steals](https://www.youtube.com/watch?v=_1rXqD6M614)" Xerox PARC research in exchange for pre-IPO investment in Apple * GUI technology gets used in the Macintosh and the Lisa * And the rest is history! +![Out of the bag](http://www.folklore.org/images/Macintosh/out_of_the_bag.jpg) + +### Graphical User Interface +* Utilizes *recognition* rather than *recall* * Better feedback -* Metaphors +* Metaphors - interactions are more like the task domain, rather than computerese +* The GUI puts computers not just in the right hands, but in everyone's hands! ### The future? 
* Touchscreen/[pens](https://www.engadget.com/2010/04/08/jobs-if-you-see-a-stylus-or-a-task-manager-they-blew-it/)
* Natural language processing
* Virtual/augmented reality
* Brain (machine|computer) interface

* [Microsoft: Productivity Future Vision](https://www.youtube.com/watch?v=w-tFdreZB94)

From 8fc55c5dd968126ae9849c32c0cb81fc4e3aa6 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Fri, 24 Jun 2016 16:39:54 -0400
Subject: [PATCH 29/53] cs349: Update stuff on touch technology

---
 cs349/8-2.md | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/cs349/8-2.md b/cs349/8-2.md
index 057c2fa..7150615 100644
--- a/cs349/8-2.md
+++ b/cs349/8-2.md
@@ -83,25 +83,28 @@ TODO: add slide 10 diagram

 ### Technology
 #### Resistive Touchscreens
-* Was literally two closely-pressed conductive layers, when a point is pressed together it registers the point
+* Was literally two closely-pressed ("sandwiched" together) conductive layers, when a point is pressed together it registers the point
 * Didn't handle multitouch

 #### Capacitive Touchscreens
 * Emitters at the 4 corners of the screen
 * Indirectly measure change in capacitance to figure out where the finger tapped.
-* Handles multitouch!
-* This is the touchscreen technology that's in every device now.
+* Can also only handle one point.

 #### Mutual Capacitance
 * Has an array of sensors, and measures the capacitance change at every place.
-* Big screens *might* need this.
-* We don't really talk about this in this class.
+* You need some conductive material touching it to actually trigger it.
+* Handles multitouch!
+* This is the touchscreen technology that's in every device now.
+* Generally, modern phones have a ridiculous number of sensors that we don't usually take advantage of completely.

 #### Direct Touch Technology
 ##### Inductive
+* Magnetic layer at the back of the screen.
 * Magnetized stylus induces an electromagnetic field in the sensing layer.
-* Expensive, and fairly rare.
 * We might have active styluses that use bluetooth to talk to the device.
+* Can combine with a capacitive touch screen, so that the device can process (and distinguish) touch and pen input.
+* Expensive, and fairly rare. The magnetic layer is pretty expensive to produce.

 ##### Optical
 * Literally have a camera watch the interface
 * Flood an entire surface with infrared sensors to detect where people are touching it.
 * Not super precise, but like mutual capacitance, cheaper at bigger sizes.

 ### Input

From a0b381d8b796a7586e5865ad400f154972bcf9f Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Fri, 24 Jun 2016 17:21:24 -0400
Subject: [PATCH 30/53] cs349: add 6/24 lecture notes

---
 cs349/8-2.md | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 65 insertions(+)

diff --git a/cs349/8-2.md b/cs349/8-2.md
index 7150615..b3f138a 100644
--- a/cs349/8-2.md
+++ b/cs349/8-2.md
@@ -112,7 +112,72 @@ TODO: add slide 10 diagram
 ### Input
+* How do you actually get input information into the system?
+* Up until now, we've only really talked about **indirect** input devices, e.g. mice, touchpads, joysticks, where you don't *directly* control the display.
+* Now we'll talk about **direct** input devices, where we basically unify the input and output devices. A perfect example: a touchscreen!
+
+#### Stylus vs. Finger
+* Direct touchscreens are generally more common.
+* For some very specific use cases, e.g. writing, it makes sense to have a stylus for input.
+
+![](https://i.imgur.com/u0sa2ae.png)
+
+> I would rather draw with my fingers on the dusted windshield of my car than draw with the phone.
+
+#### Design Considerations
+
+#### Input States
+* One more difference between touch and mouse is that there are fewer input *states*.
+* With a mouse, you have three states: out of range, tracking, and dragging. In other words, the mouse button is a factor.
+* With touchscreens, you only get two states: touching, and not touching.
+* With pens, *maybe* you can have the stylus have two different states (active or passive), which makes it kind of like the mouse.
+
+* Some touch-based inputs incorporate pressure, i.e. how much force the user exerts on the touch input.
+* Apple has force touch, which lets developers do different things based on different pressured touches.
+* On Android, there's no real pressure sensor. Instead, it uses contact area sensing, which uses the size of the contact area to approximate the pressure.
+* Real pressure input is just around the corner!
+
+#### Challenges
+##### The "Fat Finger" Problem
+* The human finger is massive compared to a pen or mouse cursor!
+* If your finger is bigger than the target width, it makes it hard to see what you're doing.
+* In some touchscreen interfaces (e.g. text select/copy/paste on most modern phones), it usually shows a preview of what you're doing above the touch area.
+* Another interesting thing: [LucidTouch](https://www.youtube.com/watch?v=qbMQ7urAvuc) - sense touches from the *back* of the device, and indicate contact on the screen. Pseudo-transparency!
+
+* It also becomes hard to give users meaningful feedback when the finger is covering things.
+* A standard industry solution: make everything huge. Apple recommends 15mm as the minimum width for a button.
+* Ideally, you want to have some part of the widget visible under your finger.
+
+##### Ambiguous Feedback
+* Normally, with a mouse, users feel a "click" when they do something.
+* With a touchscreen, this sort of haptic feedback is harder.
+* When something isn't working with a touchscreen interface, it's hard for the user to tell what happened if there isn't obvious feedback.
+* Usually phones provide some sort of audio or haptic feedback when you tap.
+
+##### Lack of Hover State
+* With a mouse, when the mouse button isn't pressed down, you get a "hover" state, which gives you things like tooltips.
+* With a touchscreen, you don't get this.
+
+##### Multi-touch Capture
+* With a mouse, if you mousedown and mouseup with the cursor in the same widget, you register it as a click.
+* What do you do when you have multiple points of contact, not all of which might be on the widget?
+* Various ways (the first one is sketched below):
+  * Microsoft Surface: Only generate an event when the last contact point is lifted from the screen.
+  * DiamondSpin: Register each contact independently. But then you have to be careful with your touches.
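+
+A guess at how the Surface-style policy might look in code (self-contained and hypothetical - not the actual Surface or Android API):
+
+```java
+import java.util.HashSet;
+import java.util.Set;
+
+// A "tap" only fires once the *last* finger lifts, and only if every
+// contact point in the gesture started inside the widget.
+class TouchCaptureWidget {
+    private final Set<Integer> activePointers = new HashSet<>();
+    private boolean allTouchesStartedInside = true;
+
+    void onTouchDown(int pointerId, boolean insideWidget) {
+        activePointers.add(pointerId);
+        allTouchesStartedInside &= insideWidget;
+    }
+
+    void onTouchUp(int pointerId) {
+        activePointers.remove(pointerId);
+        if (activePointers.isEmpty()) {        // last contact point lifted
+            if (allTouchesStartedInside) fireTap();
+            allTouchesStartedInside = true;    // reset for the next gesture
+        }
+    }
+
+    private void fireTap() { /* notify listeners */ }
+}
+```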
+
+##### Physical Constraints
+* When you're working with a touchscreen, you're generally supposed to be interacting with the object on screen directly.
+* In other words, the touchscreen relies on *direct manipulation*.
+* Google and Apple realized that this alone isn't enough - in a lot of cases, feedback is lacking.
+* They solve this in a few ways.
+* For example, when you scroll to the end and try to scroll further, the screen keeps scrolling, and snaps back elastically when you release your touch.
+
+### Interaction
+#### WIMP
+* We've talked about WIMP (windows, icons, menus and pointing), which is the traditional interaction model for GUI interfaces.
+* We don't do that with touchscreens.
+* Instead, we use something called **direct manipulation** (DM).
+* Not exclusive to touchscreens - for example, drag and drop, which is common in GUIs, is a DM interaction.
+* 
+
+### Design

From 601c11f4c38f25d66781fb1b65190d3e0dc12f0babcf68 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 29 Jun 2016 16:33:15 -0400
Subject: [PATCH 31/53] cs349: Fix 8.1 notes date

---
 cs349/8-1.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cs349/8-1.md b/cs349/8-1.md
index 58b8685..11733b0 100644
--- a/cs349/8-1.md
+++ b/cs349/8-1.md
@@ -2,7 +2,7 @@
 # Android

 CS 349 - User interfaces, LEC 001

-6-20-2016
+6-22-2016

 Elvin Yung

From 2894e4b811309f2c497f777acfe0a81c26571ac9 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 29 Jun 2016 22:31:22 -0400
Subject: [PATCH 32/53] cs349: add 6/27 notes for 8.2

---
 cs349/8-2.md | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 85 insertions(+), 2 deletions(-)

diff --git a/cs349/8-2.md b/cs349/8-2.md
index b3f138a..5c83bcb 100644
--- a/cs349/8-2.md
+++ b/cs349/8-2.md
@@ -2,7 +2,7 @@
 # Touch Interfaces

 CS 349 - User interfaces, LEC 001

-6-20-2016
+6-25-2016

 Elvin Yung
@@ -177,7 +177,90 @@ TODO: add slide 10 diagram
 ### Interaction
 #### WIMP
 * We've talked about WIMP (windows, icons, menus and pointing), which is the traditional interaction model for GUI interfaces.
 * We don't do that with touchscreens.
 * Instead, we use something called **direct manipulation** (DM).
+
+#### Direct Manipulation
 * Not exclusive to touchscreens - for example, drag and drop, which is common in GUIs, is a DM interaction.
-* 
+* Direct manipulation relies on the concept of affordances (which we've already talked about), which is the idea that you should be able to figure out what you can do to something based on how it looks.
+* e.g. drag and drop, changing the size of a shape by dragging the corners, etc.
+
+(A bit of nomenclature here: *direct touch* is short for *direct manipulation on a touch interface*.)
+
+* It's a bit nebulous when you can apply DM and when you can't - it's easy to apply to things like painting, but not so much things like word processing. You need the right task for it to work well.
+
+##### Principles
+* Basically, WYSIWYG.
+
+// TODO: copy points from slides
+
+##### Benefits
+* Interactions should be natural and learnable.
+
+##### Challenges
+* Accessibility is hard
+  * Visually impaired users can't see the graphics
+  * People who use screen readers find it hard to have a linear flow
+  * Gestures are hard for some physically impaired users
+  * etc.
+* DM assumes that there's an obvious suggested way to interact with the system, but this isn't easy to do.
+* Not all interactions are valid! What does it mean to resize the trashcan?
+
+#### Gestures
+* Touchscreens support 10+ points of contact
+* Android supports only touch gestures like touch, long press, pinch open/close, etc.
+* This is actually a pretty limited set of interactions.
+* This is mostly because it's hard to remember lots of gestures, and these are basically the *most* intuitive gestures.
+* Apple patented many more, but they're all pretty complicated.
+
+* Another question: Are gestures *really* natural and intuitive?
+* We think that we do things in the real world that are easily mappable on a touchscreen.
+* For things like copy-and-paste and filling a shape with color, it's hard to figure out the *right* way to do it on a touchscreen.
+* Unless you train someone on how to use a touchscreen, it's not explorable.
+
+##### Designing Gestures
+* So what do we do?
+* Right now, we basically copy whatever Apple and Google do.
+
+* A *guessability* study is a way to help design gestures. Basically, a researcher gives you an application, asks you to perform some task, and sees how you think the interaction should work.
+
+* A famous study was one that Wobbrock et al. did.
+* Overwhelmingly, people prefer single-finger gestures.
+* One-handed gestures got much more consensus than two-handed gestures.
+* Interestingly, we're looking for gestures that are similar to what we did with the mouse.
+
+#### DM on Tabletop
+(not related to D&D...)
+
+* Fat body part problem
+* Content orientation
+* Hard to reach some parts of the display
+
+### Design
+
+#### Desktop vs. Mobile
+* Obviously, you can't just take a desktop application and just scale it down for mobile.
+* So usually what happens is that you provide a more minimal UI.
+* The rest of the features, generally you support them but hide them so that they don't block other things in the UI.
+
+#### Mobile Interaction
+##### Navigation
+* You generally only run a single application at a time on a phone.
+* The display fills up the entire screen.
+
+##### Responsiveness
+* On a computer, you can usually assume that the user is at a desk, with good lighting, paying full attention, etc.
+* With a phone, you can't assume that - the user can be doing lots more things that you need to account for.
+
+##### Minimal help
+* e.g. can't give tooltips, since you have no space.
+* Tutorials are bad - your UI needs to be intuitive.
+
+#### Tradeoffs
+Here are some tradeoffs that mobile developers make:
+* Show different keyboards based on different types of input, e.g. numpad for phone numbers, etc.
+* Predict input - autocomplete, prepopulated lists, etc.
+* Make frequent operations easy to access
+* Make actions obvious - have few large buttons, rather than many small buttons.
+* Affordances - Collapse content and controls together.
+* Make some controls expandable/hideable, etc.
+* Hide things you don't need, like message metadata, delete buttons, etc.

From 56019535d855c2d6cb84ee7cf8251e48cc71587d Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Wed, 29 Jun 2016 22:31:34 -0400
Subject: [PATCH 33/53] cs349: add 6/29 notes for 9.1

---
 cs349/9-1.md | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)
 create mode 100644 cs349/9-1.md

diff --git a/cs349/9-1.md b/cs349/9-1.md
new file mode 100644
index 0000000..d3e1f29
--- /dev/null
+++ b/cs349/9-1.md
@@ -0,0 +1,70 @@
+# Touchless Interfaces
+
+CS 349 - User interfaces, LEC 001
+
+6-27-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/9.1-touchless_interfaces.pdf)
+
+
+* In many ways, computing has changed a lot in the last few years. A major change is that now we put sensors in everything.
+* In the phones in our pockets, there are now not only cameras and microphones, but things like accelerometers, gyroscopes, and magnetometers.
+* There are challenges to working with the data.
+* You need to figure out what the data means - how to calibrate and interpret the data.
+* Data might also be noisy - if you plot something like gyroscope data from someone's phone, you'll probably get a very jittery graph.
+* So you need to somehow normalize the raw data.
+
+## Types of Data
+* This is more of a wishlist.
+* A lot of this stuff *does* currently exist in everyday devices, but we don't do a lot with it. There's a lot of research on it, but it hasn't been applied to real life yet.
+
+![](https://i.imgur.com/35r5iv9.png)
+
+## Using Data
+* We'll be talking about the first two items the most.
+
+![](https://i.imgur.com/jOv98pn.png)
+
+* More on the third point: authentication is important.
+* More on the fifth point: It seems like science fiction, but we can actually do a lot of this already, and there are a lot of real and tangible uses.
+
+## Design Considerations
+* Why don't we do stuff like this all the time?
+  * Because it's expensive.
+  * Things like computer vision are really computationally intensive, and still can't be easily done in real time.
+
+* How do you use all this data?
+
+## Implementing Signal-to-Command Systems
+* We probably already all know this is called **machine learning**.
+* The objective is basically to get your source data into something manageable, and then to build mathematical models around it.
+
+* First, you **preprocess** your data. (compress, smooth, downsample, etc.)
+* Then, you **select features** - extract exactly what you need from the data.
+  * For example, for face detection, I don't actually need the entire picture, just things like the distances between the eyes, etc.
+* Finally, you perform **classification** - figure out which category the specific input belongs to.
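+
+A toy end-to-end version of that pipeline (every number, feature, and class label here is invented just to show the shape of it):
+
+```java
+import java.util.Map;
+
+class GestureClassifier {
+    // Preprocess: moving-average filter to knock down sensor jitter.
+    static double[] smooth(double[] raw, int window) {
+        double[] out = new double[raw.length];
+        for (int i = 0; i < raw.length; i++) {
+            double sum = 0;
+            int n = 0;
+            for (int j = Math.max(0, i - window); j <= Math.min(raw.length - 1, i + window); j++) {
+                sum += raw[j];
+                n++;
+            }
+            out[i] = sum / n;
+        }
+        return out;
+    }
+
+    // Feature selection: reduce the whole signal to just [mean, peak].
+    static double[] features(double[] signal) {
+        double mean = 0, peak = Double.NEGATIVE_INFINITY;
+        for (double v : signal) {
+            mean += v;
+            peak = Math.max(peak, v);
+        }
+        return new double[] { mean / signal.length, peak };
+    }
+
+    // Classification: pick the label whose centroid is closest to the features.
+    static String classify(double[] feat, Map<String, double[]> centroids) {
+        String best = null;
+        double bestDist = Double.POSITIVE_INFINITY;
+        for (Map.Entry<String, double[]> e : centroids.entrySet()) {
+            double d = Math.hypot(feat[0] - e.getValue()[0], feat[1] - e.getValue()[1]);
+            if (d < bestDist) { bestDist = d; best = e.getKey(); }
+        }
+        return best;
+    }
+}
+```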
+
+## Example: Lights
+* Based on [this study](http://research.microsoft.com/en-us/um/redmond/groups/coet/homes/interact2001/paper.pdf) by Brumitt et al.
+* Situation: there are a bunch of lights in the room. What is the best way to control them?
+* Turns out that most people prefer to use voice, but it's too imprecise.
+
+## Challenges in Building Touchless Interfaces
+* Consider **explicit interaction** - explicit commands that the user makes to the system, and expects to get direct feedback.
+* Consider **implicit interaction** - the system is always on, monitors the relevant data, and reacts to changes. Basically, a system that watches you and learns from you.
+
+### Errors
+You need to consider:
+* False positives: when the system interprets something as input when it shouldn't have. Triggered by *high* sensitivity.
+* False negatives: when the user thinks they did something, but the system doesn't detect it. Triggered by *low* sensitivity.
+
+* Users want to feel that they have control. [Do What I Mean](https://en.wikipedia.org/wiki/DWIM)
+* Users are intolerant of errors.
+
+#### Strategies to Deal with Errors
+* Graceful degradation: work with the closest possible interpretation of input
+* Given ambiguity, prompt with "did you mean this?" etc.
+* Let users query the system to figure out what exactly happened when something goes wrong
+* The problem with a lot of this is that since the whole point is the user doesn't interact with the GUI, a lot of this is actually pretty hard to do.

From 9a9768367aa796eef956213b8528221e6ceb8a2b Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Tue, 5 Jul 2016 01:21:12 -0400
Subject: [PATCH 34/53] cs349: add 9.1 notes for 7/4

---
 cs349/9-1.md | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/cs349/9-1.md b/cs349/9-1.md
index d3e1f29..4a63b66 100644
--- a/cs349/9-1.md
+++ b/cs349/9-1.md
@@ -68,3 +68,61 @@ You need to consider:
 * Given ambiguity, prompt with "did you mean this?" etc.
 * Let users query the system to figure out what exactly happened when something goes wrong
 * The problem with a lot of this is that since the whole point is the user doesn't interact with the GUI, a lot of this is actually pretty hard to do.
+
+### The Live Mic Problem
+* [We begin bombing in five minutes](https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minutes)
+* When you have a system that's always monitoring for information, it's *always monitoring for information*.
+* For something like an in-air gesture system, it's easy for the system to misinterpret the user's movements when they don't mean to be input.
+
+#### Solution
+##### Reserved Actions
+* Choose a specific set of actions/gestures that are globally used for navigation or commands.
+* The obvious problem with this? You might trigger the command even if you didn't want to.
+* [Scriboli](http://www.patrickbaudisch.com/projects/scriboli/) - experimental pen input delimiter gesture
+
+* Another example: cheating with an inductive pen input to allow for a hover state ([Hover Widgets](https://www.youtube.com/watch?v=WPbiPn1b1zQ))
+* The problem is that every time you move your cursor in an L-shape when it's not touching the screen, you risk triggering the widget by accident.
+* It's hard for reserved actions not to interfere with other things.
+
+##### Delimiters
+* A way of segregating your commands from ordinary input.
+* i.e. some "trigger" gesture to indicate that you want to start or stop recognizing a gesture
+* It's also called a *clutch* (but don't use that, because it's also an HCI term for a mouse movement)
+* Basically, come up with a completely new gesture that people are unlikely to do in normal situations.
+
+##### Multi-modal Input
+* Why does an iPhone have a button in the front?
+* It's sort of a way of serving as a delimiter for the iPhone.
+* Notice that in the iPhone, other system functions (i.e. volume) are also assigned to hardware buttons.
+* The application completely owns the screen and can assign whatever gestures it wants, without being overridden by the system.
+* This is the concept of a **multi-modal** input - i.e. having multiple sources of input.
+* MIT 1979 - [Put that There](https://www.youtube.com/watch?v=RyBEUyEtxQo)
+
+### Feedback is Useful
+* Feedback is brutally hard to do in a gestural system, but it's also super important, because the user needs feedback about their input.
+* You *need* to provide feedback - it's the only way a user can be productive.
+* [Jump](http://hci.cs.uwaterloo.ca/sites/default/files/jump_GI_2007.pdf) - early AR system that lets users get feedback on the display itself
+* In the HCI Lab (DC 3540) there's a bunch of gesture stuff
+
+### More on Speech Interfaces
+* Terminology: SUI stands for *speech user interface*.
+* Some early research: [SpeechActs](https://www.youtube.com/watch?v=OzNRsGaXyUA) (1995) - figured out that these systems are pretty terrible, even if the recognition rates were high
+
+#### Challenges
+* How to context switch? For example, if you're browsing through messages, how do you open a message, reply to it, and then go back?
+* How to parse natural language well? We can pull out the words, but still can't figure out what every combination of words means.
+* How to navigate a dialogue? Can't just replace dialog boxes with an equivalent voice prompt.
+
+#### Recognition Errors
+* User speaks before the system is ready to listen - e.g. instead of just asking "Siri, what's on my calendar?" in one breath, you have to say "Siri", wait for it to start listening, and then ask "what's on my calendar?"
+* System might pick up background noise, and get garbled input
+* It's hard to have a rich detailed conversation with a SUI.
There are basically two main types of errors in an SUI:
+* **Rejection errors**: the computer doesn't recognize commands
+* **Substitution errors**: the computer misinterprets commands
+
+#### Examples
+Some examples of new, pretty good systems:
+* [Viv](https://www.youtube.com/watch?v=M1ONXea0mXg) - from the creators of Siri
+* [Hound](https://www.youtube.com/watch?v=M1ONXea0mXg)

From 6b7db216b3b0f629c256b1217272953b90d14eaf Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Fri, 8 Jul 2016 16:29:01 -0400
Subject: [PATCH 35/53] cs349: update table of contents

---
 cs349/README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/cs349/README.md b/cs349/README.md
index ad4ee67..99bebf5 100644
--- a/cs349/README.md
+++ b/cs349/README.md
@@ -15,3 +15,6 @@
 * [7.3 - History](7-3.md) - a brief history of interaction, visionaries, speculations on the future
 * [8.1 - Android](8-1.md) - intro to Android, architecture, activities, layouting with XML
 * [8.2 - Touch Interfaces](8-2.md) - look and feel, interaction instruments, temporal and spatial activation, degrees of indirection, integration, and compatibility
+* [9.1 - Touchless Interfaces](9-1.md) - voice, in-air gestures, classifying/interpreting ambiguous command data
+* [10.1 - Wearables](10-1.md) - Smartwatches, ubiquitous computing, augmented reality
+* [10.2 - Input](10-2.md) - Different types of input (text, positional, gestural)

From e9fec049a0ffa0b648020aaa247899faef9e5306 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Fri, 8 Jul 2016 16:29:45 -0400
Subject: [PATCH 36/53] cs349: Add 10.1 notes

---
 cs349/10-1.md | 86 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)
 create mode 100644 cs349/10-1.md

diff --git a/cs349/10-1.md b/cs349/10-1.md
new file mode 100644
index 0000000..30968d1
--- /dev/null
+++ b/cs349/10-1.md
@@ -0,0 +1,86 @@
+# Wearables
+
+CS 349 - User interfaces, LEC 001
+
+7-4-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/10.1-wearable_computing.pdf)
+
+## Smartwatches
+![](https://xkcd.com/1420/)
+
+* We're deliberately not going to talk about things like Fitbits and Pebbles, because they're more specialized.
+* We'll focus on the Apple Watch and Android Wear, which are generalized.
+
+### Design Challenges
+* The first issue is that these things are tiny! You're constrained by the user's hand.
+  * Things like the fat finger problem are much worse.
+  * Physical buttons are important - you need buttons because a touchscreen that small isn't practical on its own.
+* Limited attention
+  * A smartwatch is not intended to be the device of choice for complicated use cases - you're not going to be manipulating spreadsheets
+  * Instead, smartwatches are for quick tasks on the go - things you want to be able to do without having to pull out your phone.
+
+#### Guidelines from Google
+* The watch is mostly an output device, not an input device.
+* Google suggests all computation be done on the phone, with the results sent to the watch - in other words, the watch should just be a dumb terminal.
+* The entire task on the watch should take <5 seconds - if it takes more, a watch is not the right device.
+* The watch is *secondary* - it's only auxiliary to the phone, designed for quick interactions.
+
+#### Guidelines from Apple
+* Apple emphasizes personal communication on the Apple Watch. They emphasize initiating communication, but it's not a very compelling use case.
+* There are dedicated apps (but no one uses them).
+* Interaction mostly via gestures, but there's also force touch, the "crown" dial, and the side buttons.
+* Emphasize coordination with the smartphone - should be able to tap to answer a call from the watch, and then the control is transferred to the phone.
+
+### The Big Question
+*Why doesn't everyone have a smartwatch?*
+
+* No "killer app" or other compelling use cases
+  * Probably not good enough as a proxy for the phone
+  * Fitness tracking isn't sufficient for most people
+  * Healthcare, monitoring blood pressure, heart rate, etc. - maybe?
+  * Identification - Apple Pay, Android Pay, computer authentication etc. - maybe eventually replace passwords
+* Price
+* Battery sucks
+* etc.
+
+### Utilitarian vs. Fashionable Devices
+* Is a smartwatch a piece of jewelry or a utility device?
+
+## Ubiquitous Computing
+* Introduced by Mark Weiser, 1996
+* Basically a very old term for Internet of Things
+* Instead of having discrete devices that you carry, instrument the world around you to do things for you.
+* For Ubicomp to really work, you need:
+  * Computation embedded into the environment
+  * Something that ties the person to the environment - a device that helps identify the person. Can a smartwatch be that device?
+
+## Augmented Reality
+* Examples: Google Glass, Hololens
+
+### Design Principles
+* Don't get in the way of what the user is doing
+* Only give information that's relevant to what the user is currently doing. Don't always put the temperature in the corner!
+* Avoid showing things the user doesn't need at that moment.
+
+### Results
+Google Glass didn't get wide adoption. What happened?
+
+* Technology was not super feasible - 2 hour battery life?
+* It went against the principles of ubicomp above - it got in the way instead of fading into the environment.
+* Google Glass was considered rude or awkward - [Glassholes](https://nypost.com/2014/07/14/is-google-glass-cool-or-just-plain-creepy/)
+  * There were cameras mounted on them, and when someone is walking around with Google Glass on, there's no indication that they're not recording you
+  * Is Glass a fashion device? Google tried to make it like that, but it never really stuck.
+
+AR definitely still has potential, though!
+
+## More Generally for Wearables
+And also other new technology.
+
+* Why do you need a wearable?
+
+* A better mousetrap is not good enough - it needs to solve a real problem, and be a 10x improvement.
+* New technology takes time to mature! Remember old tablets and PDAs?

From 404dfc2bd88ff6353b256568c66fce4ca90563d8 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Fri, 8 Jul 2016 17:21:51 -0400
Subject: [PATCH 37/53] cs349: add 10.2 notes

---
 cs349/10-2.md | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)
 create mode 100644 cs349/10-2.md

diff --git a/cs349/10-2.md b/cs349/10-2.md
new file mode 100644
index 0000000..755d0b9
--- /dev/null
+++ b/cs349/10-2.md
@@ -0,0 +1,99 @@
+# Input
+
+CS 349 - User interfaces, LEC 001
+
+7-6-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/10.2-input.pdf)
+
+* The iPod was the perfect input method for a device where most of the UI elements were list-based.
+* But it's not a good fit for much else - the input device has to match the task.
+
+## Classifying Computer Input
+* Sensing method
+  * Mechanical - switch, potentiometer
+  * Motion - accelerometer, gyroscope
+  * Contact - capacitive touch, pressure sensor
+  * Signal processing
+* Continuous vs discrete
+* There are different input devices for different purposes, but we mostly use the mouse and the keyboard.
+
+## Text Input
+### QWERTY
+* The QWERTY keyboard layout was first introduced in the Remington Model I typewriter in 1873.
+* They were trying to design a keyboard that wouldn't jam, which happened when you pressed two adjacent keys at once.
+* So the intention was to space out the key presses, so that the user would alternate between left and right hands in typing.
+* So of course, when we added keyboards to computers, we stole this layout from typewriters, because that's what people were already used to.
+
+* The optimal way to use a QWERTY keyboard is to keep your hands on the home row, moving individual fingers to reach the other keys.
+* Except it doesn't actually work that well:
+  * Awkward key combinations, like `tr`
+  * Sometimes have to jump over the home row, e.g. `br`
+* Because of letter frequency, most of the typing is actually done with the left hand. Because most people are right-handed, this can slow people down.
+* Statistics on key presses:
+  * 16% on lower row
+  * 52% on top row
+  * 32% on the home row
+
+#### Other layouts
+* Since QWERTY has so many issues, there are a few remapped layouts.
+
+* Example: Dvorak
+  * Letters should be typed by alternating between hands
+  * 70% of letters are on home row
+  * Bias towards right-handed typing, since most people are right-handed
+
+* **Studies are inconclusive on whether there's any actual productivity difference when using a non-QWERTY keyboard layout.**
+* An interesting point: it's really useful to be able to sit down on any computer and be able to type - which is a big reason QWERTY has stuck around.
+
+### Mechanical Keyboards
+* If the keys are downsized (e.g. on a BlackBerry), it interferes with typing.
+
+### Soft Keyboards
+* on touchscreens, etc.
+* You no longer get any sort of tactile feedback. You have to either get really good at touch typing, or hope that autocomplete works well enough.
+* We're basically trading a physical keyboard to get a bigger screen.
+* Soft keyboards are good on devices where you don't have to do a lot of typing. e.g. an iPad can be used mostly as a movie watching device
+
+### Other variants
+* Thumb keyboards - so that you could hold on the device and type reasonably well with just your thumbs
+* Frogpad - one-handed keyboard, only 15 keys, plus some meta keys, and different combinations of meta keys let you type different letters
+* Chording keyboards - Douglas Engelbart proposed this - basically, a keyboard that only has 5 keys, and you type different combinations of keys.
+  * Successor: [the Twiddler](http://twiddler.tekgear.com/)
+
+### Predictive Text Input
+* T9
+* Autocomplete/autocorrect
+
+### Others
+* Palm Pilot's [Graffiti](https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)) - had decent accuracy, but you needed to memorize this entire scheme.
+* Natural handwriting recognition, e.g. on pen-based tablet PCs
+* [ShapeWriter](https://en.wikipedia.org/wiki/ShapeWriter) - original inspiration for Swype, let people type on a touchscreen without lifting their finger
+  * IJQwerty - study that found people were much more productive when i and j were swapped on ShapeWriter
+* [8pen](http://www.8pen.com/) - enter words by drawing loops
+  * Seems like it'd be error prone, but supposedly people get used to it.
+
+## Positional Input
+* Ur-example: Etch-A-Sketch
+
+### Properties
+#### Sensing
+* Force or **isometric**: Input data is in the form of direction and magnitude of force, e.g. joystick
+* Displacement or **isotonic**: Input data is in the form of position difference, e.g. mouse
+
+#### Position vs Rate Control
+* Rate: joystick
+* Position: mouse
+
+#### Absolute vs Relative
+* This describes how the input device is mapped to the display.
+* **Absolute**: where you touch is directly mapped onto the display.
+  * Example: A drawing tablet
+* Normally, however, on a desktop we use **relative** input.
+  * Example: moving the mouse moves the cursor proportionally, but doesn't teleport it to some absolute location.
+
+#### Control-Display Gain
+* The **gain** is the ratio of how fast the pointer moves to how fast the input device moves (`gain = pointer speed / device speed`, so a gain > 1 means the pointer covers more distance than your hand does).

From cd964cdadc3e5bfe65704141696a79290126fde2 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Sat, 6 Aug 2016 20:35:42 -0400
Subject: [PATCH 38/53] cs349: Add 11.1 - input performance

---
 cs349/11-1.md | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 121 insertions(+)
 create mode 100644 cs349/11-1.md

diff --git a/cs349/11-1.md b/cs349/11-1.md
new file mode 100644
index 0000000..a6d7e4f
--- /dev/null
+++ b/cs349/11-1.md
@@ -0,0 +1,121 @@
+# Input Performance
+
+CS 349 - User interfaces, LEC 001
+
+7-11-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/11.1-input_performance.pdf)
+
+## Models
+* It's hard to look at a UI and determine how fast it can be used.
+* The obvious solution: build all the possible solutions, and try all of them!
+* Wait, don't do that. Building a high-fidelity demo from the ground up is costly in terms of time and money.
+* What you want to do is come up with a model that lets you estimate how productive users can be with the interface.
+* It's important to have an intuitive grasp of how long it takes to do something.
+
+* We're going to focus on two things:
+  * Time - how long it takes you to do something with the UI
+  * Error rates - how often you make mistakes using the UI
+
+### Keystroke Level Model (KLM)
+* KLM is a simple model to estimate how long some task on a user interface takes.
+* Break down the task into tiny little steps, each with a well-known atomic cost, so that you can sum them up to estimate the whole task.
+* Actions like keystroke, pointing, and mouse button presses are pretty well-defined in terms of the time it takes the average person to do it.
+* KLM makes the assumption that there is no delay from the UI, which is one of the problems with it.
+* KLM also assumes that the user doesn't make an error.
+
+#### Operators
+
+![](https://i.imgur.com/NMXvOmT.png)
+
+* As you might expect, a key press has a wide range of cost.
+  * The best typists type at 0.08 seconds per character, and the worst typists type at 1.2 seconds per character.
+  * So you need to figure out *who* you're modeling to be accurate.
+* The assumptions are really important - the estimate changes depending on whether the user starts with their hand on the mouse or off the mouse.
+* Expect to see KLM estimation problems in the final.
+
+#### Example
+* Let's say we have three different widgets to enter a date.
+
+* The first widget is just a simple text field where you enter a date in the format of `MM/DD/YYYY`.
+* To enter a date in this text field, the average user would do something like this:
+  1. Move the mouse cursor to the text field (1.1s)
+  2. Click the mouse at the text field (0.1s)
+  3. Move your hand from the mouse to the keyboard (0.4s)
+  4. Type in 10 characters (10 * 0.3s = 3.0s)
+* So in total, this would take about 4.6 seconds.
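+
+A tiny estimator for sequences like this, using just the operator costs quoted in the example above (P = point, B = press button, H = move hand between devices, K = keystroke, M = mental preparation; this isn't the full table from the slides):
+
+```java
+import java.util.Map;
+
+class Klm {
+    static final Map<Character, Double> COSTS =
+        Map.of('P', 1.1, 'B', 0.1, 'H', 0.4, 'K', 0.3, 'M', 1.2);
+
+    static double estimate(String ops) {
+        double total = 0;
+        for (char op : ops.toCharArray()) total += COSTS.get(op);
+        return total;
+    }
+
+    public static void main(String[] args) {
+        // Widget 1: point at the field, click, home to keyboard, type 10 chars.
+        System.out.println(estimate("PBH" + "K".repeat(10))); // 4.6
+    }
+}
+```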
+
+* The second widget is made up of three dropdown menus, respectively for month, day, and year.
+* The naive way to interact with this widget would be to individually select the items in each dropdown menu with your mouse - that takes 7.2 seconds in total.
+* But a lot of users also focus on each dropdown, type in the keys for the item, and tab to the next widget. That takes significantly less time - about 3.3 seconds.
+
+#### Mental Operations
+* The examples we've seen so far don't consider the cost of a **mental operation**, which is basically any time the user has to stop and think.
+* On average, when a user needs to think about what they want to do, it takes about 1.2 seconds according to the KLM model.
+* As they're performing the task, they might need to think about things more.
+* Basically, add an `M` operation whenever the user needs to think.
+* It gets tricky to figure out when the average user would be thinking.
+* In general, there's usually an initial phase before doing anything where the user thinks about the entire operation.
+
+* KLM is simple to model, but it doesn't model pointing very well.
+* It uses a constant 1.1s to estimate pointing, but it doesn't account for what device the user is using.
+* More importantly, not every pointing action is the same! Obviously moving a longer distance takes more time.
+
+### Fitts' Law
+* A predictive model for 2D pointing
+* Basically, it wants to answer this question: how long would it take an average user to go from point A to point B?
+* Take any kind of pointing task in 2D space with any kind of pointing device
+* Originally published in 1954, for modeling physical movement
+* It's now the general go-to for estimating pointing
+
+* Fitts' Law models the performance of some positional input device making an aimed movement.
+* Essentially, the time it takes to move the pointer grows with the distance, and shrinks with the size (width) of the target.
+* So the relationship is modeled as this: `MT ∝ D / W`
+
+* More precisely, the formula is this: `MT = a + b log_2(D/W + 1)`
+* We break the formula into two parts:
+  * The **index of performance** `b`
+  * The **index of difficulty** `log_2(D/W + 1)`
+
+* Observation: If you have a button or widget, people generally target the center of the widget.
+
+* Observation: something on the edge of the viewport is considered as having infinite height/width (depending on the axis), because you essentially don't have to aim for the widget vertically/horizontally (also depending on the axis).
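+
+A quick sketch of plugging numbers into the formula (the constants `a` and `b` here are hypothetical - in practice they're fit empirically per device and per user):
+
+```java
+class Fitts {
+    // MT = a + b * log2(D/W + 1)
+    static double movementTime(double a, double b, double d, double w) {
+        double indexOfDifficulty = Math.log(d / w + 1) / Math.log(2);
+        return a + b * indexOfDifficulty;
+    }
+
+    public static void main(String[] args) {
+        System.out.println(movementTime(0.2, 0.1, 800, 20)); // far, small target: slower
+        System.out.println(movementTime(0.2, 0.1, 800, 40)); // same distance, wider target: faster
+    }
+}
+```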
+
+### Steering Law
+* Fitts' Law is great, but assumes that you always take the straight path from point A to point B.
+* How do we account for a meandering/constrained path?
+* For example, selecting something in a nested context menu.
+
+![](https://i.imgur.com/vvh9EV1.png)
+
+* We model the path as a series of "goals".
+* The total cost is then the sum of moving from every `i`th goal to the `i+1`th goal.
+
+### tl;dr
+* Make things bigger and closer to make it easier for people to get to. From d369b14b85245812b2aa3e184cd06eb5e2720efc Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Sat, 6 Aug 2016 20:36:44 -0400
Subject: [PATCH 39/53] cs349: Add 11.2 - accessibility

---
 cs349/11-2.md | 103 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 103 insertions(+)
 create mode 100644 cs349/11-2.md

diff --git a/cs349/11-2.md b/cs349/11-2.md
new file mode 100644
index 0000000..204215c
--- /dev/null
+++ b/cs349/11-2.md
@@ -0,0 +1,103 @@
+# Accessibility
+
+CS 349 - User interfaces, LEC 001
+
+7-13-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/11.2-accessibility.pdf)
+
+
+
+* Curb cuts: make it easy for people in wheelchairs to get through a curb
+* We make accommodations for people with different abilities in real life.
+* It should also be done in software.
+
+* Accessibility isn't just about being in a wheelchair or being blind. We should accommodate the whole spectrum of abilities.
+* We want to design for the "average" person, but there's no average person.
+* Every time you design something, you're at risk of alienating certain groups of people from your product.
+* We *all* have temporary or situational disabilities.
  * Obvious ones: being sick, being injured, etc.
  * Driving: limited attentional bandwidth
  * Underwater diving: impaired sight, hearing, mobility, etc.
  * Using an ATM in the middle of the night in Kitchener
  * Walking down the street and texting
+
+## Walking + Pointing Performance
+* Experiment to measure performance on a tapping task on a phone in different situations
+* Situations include: sitting, treadmill (different speeds), obstacle course
+* Result: performance seated and walking are fairly similar, but in an obstacle course, the task took more time and had a higher error rate.
+* Obvious in hindsight - the obstacle course is the only situation that demands attention outside the phone
+
+* Takeaway: it's better if you can focus on a single task.
+* This is why texting and driving is bad!
+
+* Another experiment: reading comprehension
+* When walking, people were slower to read, and had higher error rates.
+
+* When you're walking, you're most concerned about the split in attention between the device and your surroundings.
+
+## Designing for Walking
+* Sitting UI: small menu items, small buttons
+* Standing UI: make everything bigger, which reduces cognitive load
+* This is also one of the reasons why simpler mobile UIs are better: on the go, you're going to get a better experience if you have less cognitive load.
+
+## Aging
+* Natural effects of aging:
  * Worse coordination
  * Visual acuity - eyesight starts to fade by the 40s, and people start to need reading glasses by the 50s
  * Hearing impairments
  * Memory loss
+
+* Baby boomers: huge spike of birth rate after WWII
+* They're all getting old now! If you were born in 1951 you are now 65, i.e. retiring.
+* As a designer, it might be an opportunity to build usable interfaces for this demographic.
+
+[Video: MIT AGNES](https://youtu.be/czuww9rp5f4) - a suit for designers to understand the usability challenges with aging
+
+* *We should design technologies to be inclusive. They often end up helping everyone!*
+
+## Statistics on Impairments
+
+TODO: Copy from slides
+
+## OS Support
+* Any recent version of Windows, OSX, etc. has a range of tools for accessibility issues.
+* This is awesome.
+* There are all kinds of things to manage motor/visual/audial issues.
+* It's a decent solution, but not perfect. Users end up having to memorize lots of keyboard shortcuts, be a good touch typist, etc.
+
+## Colorblindness
+* Not being able to distinguish two colors
+* Color-coded UIs are often bad for this
+
+## Motor Impairments
+* Sticky keys
+* Filter keys
+* Repeat rate
+
+### Various tools to help with motor impairments
+* [Integramouse](http://integramouse.com) - straw-like mouse for people with no arm movement
+* Voice dictation/transcription
+* Human-brain interface stuff
  * Would be awesome... if it worked!
+
+* [Angle Mouse](https://depts.washington.edu/aimgroup/proj/angle/)
+
+### Cognitive Impairments
+
+* [Phosphor](http://patrickbaudisch.com/projects/phosphor/index.html) - highlight changes in the UI, for people who have trouble keeping track of where they were in the UI
+
+## The "Curb Cut" Phenomenon
+* An accessibility-minded design that ends up helping everyone
+
+* Example: cassette tapes, developed as an alternative to reel-to-reel tapes for visually impaired people
+* Another example: closed captioning, originally intended for deaf and hard-of-hearing viewers, which ended up being used for many more purposes
+
+## Reasons to Design for Accessibility
+* You're legally motivated to make your software accessible.
+* If you plan on selling software to a US government body, it needs to make accessibility accommodations.
+* [Class action lawsuit against Target](https://en.wikipedia.org/wiki/National_Federation_of_the_Blind_v._Target_Corp.)
+
+* Web accessibility is essential for equal opportunity. From 9a9768367aa796eef956213b8528221e6ceb8a2b Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Sat, 6 Aug 2016 21:00:34 -0400
Subject: [PATCH 40/53] cs349: add 12.1 - visual perception

---
 cs349/12-1.md | 143 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 cs349/12-1.md

diff --git a/cs349/12-1.md b/cs349/12-1.md
new file mode 100644
index 0000000..67b5ad2
--- /dev/null
+++ b/cs349/12-1.md
@@ -0,0 +1,143 @@
+# Visual Perception
+
+CS 349 - User interfaces, LEC 001
+
+7-18-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/12.1-visual_perception.pdf)
+
+
+* We want to talk about the physical limitations of your body, particularly your visual acuity, etc.
+* We care about this the most because most UIs are visual.
+
+## Psychophysics
+* Psychophysics is the idea that you are separate from the world.
+* Seems philosophical, but essentially it means that your perception of the physical world isn't perfect.
+
+* Because your visual/audial perception isn't precise enough, we can do things like lossy compression, which lets us throw away data without noticing it.
+
+## Temporal Resolution
+* aka how you perceive the flickering of light
+* At about 45 Hz, an *intermittent* flickering light starts to seem continuous. This is called the **critical flicker frequency** (CFF).
+* This lets us do animation. Film is 24 fps, NTSC video is ~30 fps (60 interlaced fields per second), etc.
+* The Hobbit films were the first to be shot at 48 fps. Some people complained that it made the special effects look more fake.
+
+## The Human Eye
+* At the edges of the retina, you don't actually have very good resolution.
+* At the *macula* you have the best color depth, etc.
+
+## Spatial Resolution
+### Measuring Acuity
+* 20/20 vision: You can perceive lines separated by 1 arc minute of visual angle
+* Implication: A dot pitch (i.e. pixel size) of 0.23 mm is sufficient for most people.
+
+## Color
+* Objective color: the actual math
+* Subjective color: what color we think something is
+* We agree on *most* subjective colors, but not all.
  * Remember the dress meme?
+
+![](https://www.wired.com/wp-content/uploads/2015/02/Untitled-12-1024x518.jpg)
+
+### The Visible Spectrum
+
+### Additive Color Model
+* We mostly use this
+* Start with "nothing" (i.e. black), combine colored lights to create white
+* Works particularly well for monitors
+* Variants: we use RGB for displays, HSV/HSB for specifying colors, YUV for human perception
  * YUV is biased towards things that the human eye can perceive well
+
+### Subtractive Color Model
+* Colored light is absorbed to create black.
+* Works well with printed pages
+* CMY/CMYK - common in printing
+
+### HSV/HSB
+* HSV stands for hue, saturation, and value/brightness.
+* It is the most common model that we use for colors.
+* Maps to visible spectrum.
+* **Hue** is the actual color.
+* **Saturation** is the richness of the color, i.e. from less hue to more hue.
+* **Value** or **brightness** is how light or dark the color is, independent of the hue.
+* The model is represented spatially with hue going around, saturation extending out from the center, and brightness extending up.
+
+* You've probably seen HSV represented in a color picker, where you pick the hue in a separate widget, and then get a 2D grid of the saturation and brightness.
+
+![](http://learn.shayhowe.com/assets/images/courses/html-css/getting-to-know-css/photoshop-color-picker.png)
+
+### Perceiving Color
+* Two different light sensors in the eye:
  * 6-7 million **cones** that perceive color
  * 120 million **rods** that distinguish light from dark
+* Cones and rods are unevenly distributed.
+* Rods are mostly in the periphery of vision, cones are mostly in the middle (a lot are packed together in the **fovea**).
+* There's a tiny little blind spot in our vision, where our brains fill in the information.
+
+* Humans are trichromatic. There are three main types of cones:
  * Blue cones aren't good at picking up color.
  * Green cones are great, which means that humans can generally perceive a wide spectrum of green colors.
  * Interestingly, our red cones are better at picking out yellow colors.
+* Rods aren't great at picking out color, but they're better than nothing.
+* Remember though that it's not just one cone that's responsible for receiving colors, but rather a combination of lots of cones.
+
+* Color presentation matters.
+* It's hard to distinguish two colors under these conditions:
  * **paleness**: the colors are pale.
+ * **size**: the object is small or thin + * **separation**: the color patches are far apart. + +#### Color-blindness +* Precisely, color-blindness means that you're missing particular types of cones. +* **Monochromacy** is when you're missing 2 or 3 types of cone. This is super rare! +* **Dichromacy**: + * Protanopia: missing red cones (~1% of males) + * Deuteranopia: missing green cones (also ~1% of males) + * Tritanopia: missing blue cones and blue sensitive rods. (super super rare) + +#### Peripheral Vision +* We are essentially blind in the periphery, but it's still good for: + * Guiding the fovea, which helps with eye movement + * Detecting motion + * Helping us see light in the dark, because rods are more light sensitive. + +### Color and UI +To make messages visible: + * Put messages where users are already looking. + * Put the error message near what it refers to. (If this is in dispute with the above point, bias towards the above) + * Use red for errors (but consider color-blindness) + * Heavy artillery: pop-up alerts, sound, blink, motion, etc. + +### Displays +* A pixel on a display is really made up of red, blue, and green components, which we call **sub-pixels**. +* This is why we use RGB for colors: we want to dictate how the pixel is exactly displayed. +* **Bit-depth** or **color-depth** in a display is the number of bits allocated to each pixel to specify the color. + * e.g. 24 bits = 8 bits for each color, 16 bits = 5 red bits, 5 blue bits, 6 green bits (use 1 more bit for green because humans see green better) + * 24 bits is about 16.7 million colors, which is generally enough. + * We mostly do 32 bit displays, which lets us add an 8-bit alpha channel. + +#### CRT +* Phosphor coating on screen +* Shoot at phosphor with an electron gun to make it emit light in different colors +* Since the light is very temporary, we need to shoot at it very frequently. +* Very bulky. + +#### LCD +![](blob:https%3A//imgur.com/552293ce-95ac-4ec0-8f34-0f1631a49587) + +* We gradually shifted from CRT to LCD, because it's more portable. +* Use a layer of liquid crystal +* Rigid crystal membrane in the middle + +##### "LED" +* Basically the same as an LCD display, except with an LED backlight. + +#### OLED +* Like LCD, but throw away the crystal membrane in the middle +* We can build more flexible things with this display. +* Currently super expensive, but getting cheaper! + +## tl;dr +* The physical world is different from our visual perception of it. +* When designing a UI, it's important to consider that difference. 
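+
+As a small code coda to the color-model discussion above, here's a sketch of how an HSV color maps to RGB, using Python's built-in `colorsys` module (the example values are arbitrary):
+
+```python
+import colorsys
+
+# colorsys uses [0, 1] for all components: hue, saturation, value.
+h, s, v = 0.33, 0.8, 0.9  # a fairly saturated, bright green
+r, g, b = colorsys.hsv_to_rgb(h, s, v)
+
+# Scale to the 8-bits-per-channel encoding that a 24-bit display expects.
+print(tuple(round(c * 255) for c in (r, g, b)))
+```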
From 9538825f6f32cffd820dfaac7c79da1f3cb37f9c Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Sat, 6 Aug 2016 21:00:50 -0400
Subject: [PATCH 41/53] cs349: update index

---
 cs349/README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/cs349/README.md b/cs349/README.md
index 99bebf5..3570003 100644
--- a/cs349/README.md
+++ b/cs349/README.md
@@ -18,3 +18,6 @@ The [slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/schedule.shtml) are
 * [9.2 - Touchless Interfaces](9-1.md) - voice, in-air gestures, classifying/interpreting ambiguous command data
 * [10.1 - Wearables](10-1.md) - Smartwatches, ubiquitous computing, augmented reality
 * [10.2 - Input](10-2.md) - Different types of input (text, positional, gestural)
+* [11.1 - Input Performance](11-1.md) - KLM, Fitts' Law, Steering Law, visual space and motor space
+* [11.2 - Accessibility](11-2.md) - different types of ableness, accessibility tools, UI design considerations
+* [12.1 - Visual Perception](12-1.md) - psychophysics, temporal resolution, spatial resolution, color spectrum, color perception and blindness, displays From c0677ffea7d09cfd5c0a76cbf5b72d3dd825113d Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Sun, 7 Aug 2016 05:06:28 -0400
Subject: [PATCH 42/53] cs349: add 12.2 - cognition

---
 cs349/12-2.md | 166 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 166 insertions(+)
 create mode 100644 cs349/12-2.md

diff --git a/cs349/12-2.md b/cs349/12-2.md
new file mode 100644
index 0000000..9ee607b
--- /dev/null
+++ b/cs349/12-2.md
@@ -0,0 +1,166 @@
+# Cognition
+
+CS 349 - User interfaces (whichever section I decide to go to)
+
+7-22-2016
+
+Elvin Yung
+
+[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/12.2-perception_cognition.pdf)
+
+* We've talked about [physical](11-2.md) and [visual](12-1.md) capabilities. Now we'll talk about cognitive capabilities.
+* Ergonomic research generally focuses on the physical side of things, but doesn't really cover anything on the cognitive side.
+
+* Instead, this is covered in a field called *cognetics*.
+* One of the pioneers of this field is [Jef Raskin](http://www.folklore.org/ProjectView.py?project=Macintosh&characters=Jef%20Raskin), the lead of the Macintosh team before he got kicked out by Steve Jobs.
+
+* We want to cover mental processes like attention, memory, learning, and reasoning.
+* [Model Human Processor](http://www2.parc.com/istl/groups/uir/publications/items/UIR-1986-05-Card.pdf) - try to treat humans like machines, to measure processing capabilities, etc.
  * Compelling, but doesn't work!
+
+## Memory
+* We tend to think of memory as short and long term.
+* **Short-term** memory is what you're paying attention to right now, basically the current "working set".
+* **Long-term** memory is information that is/should be retained over a long term.
+
+* It's tempting to think of the human memory model as perceptions getting input into short-term memory, which eventually gets flushed to long-term memory, but it isn't that simple!
+* It's maybe more accurate to think of short-term memory as whatever neural patterns are currently active (more on this below).
+* Also, long-term memory isn't really as concrete as you think - it's really more nebulous than that.
+
+
+A rough model:
+* Your perceptions (i.e. visual, auditory, olfactory, tactile, etc.) map to neurons.
+* As data comes in, different neurons in the brain get activated.
+* Combinations of different sensory input features get combined into higher-level features by neurons deeper in the brain.
+* The more similar two perceptual stimuli are, the more their activation patterns overlap.
+
+* Memory is really a set of patterns that you repeat over and over again.
+* Memory is *formed* essentially when some neural activity pattern is easier to reactivate in the future.
+* **Remembering** is when the same neural activity pattern is reactivated.
+* Somehow the brain can figure out a "new" pattern from a reactivation.
+* **Recognition**: new perceptions that trigger some existing pattern.
+* **Recall**: triggering a pattern without some sort of stimulus.
+* The strength of the memory depends on how often you remember it, how strong the original perception was, and sleep.
+
+### Short-term Memory
+* Short-term memory is *not* just temporary neural activity.
+* Instead, think of it as a combination of perception and attention, kind of like RAM.
+* You choose what you want to look at, and your brain chooses what to focus on.
+* Short-term memory is, then, the currently activated neural pattern.
+
+* Short-term memory is very low capacity and volatile.
+* The general rule is that the average person can remember 7 ± 2 items.
  * More recent research says 6 ± 5!
  * Also, this depends on the similarity of the items, and our ability to cluster items.
  * For example, phone numbers are easier to remember than a bunch of arbitrary numbers.
+* If you're easily distracted, you have a hard time keeping things in short-term memory.
+* Basically, in general it's super super volatile, and it's really easy to lose things.
+* For design, this means that you can't rely on people to remember things.
+
+#### Implications for UI
+* Interfaces should *help* people remember things!
+* Never expect people to remember things across different views/modes/etc.
+* If the user doesn't have to keep state in their head, they can be purely functional :P
+
+* For search results: Keep search results displayed, so people can remember what they're working with.
+* Instructions: Keep the steps visible while they're being followed.
+
+### Long-term Memory
+* You don't have as much storage as you think you do, so your brain is compressing things as time goes on.
+* But eventually things get lost.
+* You might remember bits and pieces, and your brain might fill in the missing parts.
+* Emotions can affect memories.
+* Retroactively alterable
+
+#### Implications for UI
+* Don't burden the long term memory.
+* Example: those annoying security questions
  * People generally forget the questions, and forget the answers.
  * Even if they remember the answer, they probably won't remember the exact case, punctuation, etc.
+
+## Perception
+* Our perception of the world is "wrong".
+* Everyone has their own personal [reality distortion field](https://en.wikipedia.org/wiki/Reality_distortion_field).
+
+### By Experience
+* Really good example in slides, hard to copy here.
+* After interacting with a dialog box a few times, muscle memory sets in, and people stop paying attention.
+
+### By Context
+* e.g. phrases perceived differently depending on context.
+
+> Fold napkins. Polish silverware. Wash dishes.
+> French napkins. Polish silverware. German dishes.
+
+### By Goals
+* When you're looking for something specific, your brain becomes desensitized to unrelated things.
+* Your goal also influences where you look.
+* That [Selective Attention video](https://www.youtube.com/watch?v=vJG698U2Mvo) that everyone's already seen
+
+## Cognition
+* **Cognitive unconscious**: Processes that you're not aware of doing when they happen.
  * Basically, things happening on a "background thread".
  * Generally for repetitive stuff like walking, breathing, etc.
  * The cognitive unconscious is what lets you work in your day-to-day life.
  * Potentially many things being processed in parallel.
+* **Cognitive conscious**: Things that you're aware of when they happen.
  * Generally good at focusing on a very small number of things.
  * Tiny capacity, generally sequential
  * Basically, "foreground" processing - what you generally consider as "thinking".
+
+// TODO: copy table from slides
+
+* According to Jef Raskin, it's essential to recognize these types of cognition when designing a human-machine interface.
+
+### Locus
+* **Locus** is the one thing you're actively focusing on.
+* As much as we'd like to think that we can control what we think about, this isn't completely true.
+* You can sort of guide your locus, but you don't have complete control over it.
+* You have *at most* one locus.
+* Since you only have one, when you're multitasking, you're not really multiprocessing. Just like with preempting threads, you incur overhead.
  * (Limited multiprocessing *is* possible by combining a locus and some automatic unconscious activities.)
+* For apps: don't encourage users to multitask!
+
+* A downside of a single locus is that you might hyperfocus.
+* For example, if you're writing an essay with a minimum word count, you might focus on the word count in the word processing program, and ignore the fact that there are spelling errors.
+* As a designer, you should be careful not to overload one spot with too many things.
+
+* The danger of things like "low battery" dialogs: since they're irrelevant to what you're currently doing, it's easy to ignore them, and it becomes a habit.
+* **Change blindness**: If there's a change of something unrelated to what you're looking at, you tend not to notice it.
  * Magicians exploit this.
+
+### Context Switches
+* **Context switch**: a switch from one locus of attention to another.
+* As a programmer, you've probably experienced being in "[the zone](https://en.wikipedia.org/wiki/Flow_(psychology))", and then being snapped out of it.
+* The cost is about 10 seconds, ridiculously expensive.
+* Lots of HCI research in interruptibility indicates that this is very damaging to productivity.
+* This also means that things like push notifications are really bad.
+
+### Absorption
+* You can be completely absorbed in your locus of attention.
+* Absorption is essential for high productivity - flow/being in the zone
+* As a designer, take great pains not to take people out of this.
+
+* Negative side: it's possible to get so absorbed that you don't pay attention to things that really matter.
+* e.g. [Eastern Airlines Flight 401](https://en.wikipedia.org/wiki/Eastern_Air_Lines_Flight_401)
  * pilots were preoccupied with fixing a burned out but unimportant indicator light
  * put the plane on autopilot, and *turned off alarms that they were descending (i.e. crashing)*
+* It's possible to become hyperfocused and ignore important help messages, etc.
+* Implications for UI: don't give people minutiae to focus on.
+
+### Automatic Actions
+* If you practice something over and over again, eventually it becomes something that you just *do*, without thinking about it.
+* You want as many activities as you can to be automatic, so that you can be more productive. +* e.g. it's hard to be a great programmer when you're spending all the time making sure you're typing correctly. + +* Practice doesn't make perfect, it makes permanent. +* With practice, you get to clump a sequence of actions together into a single action. +* e.g. Tying your shoelaces + +* When actions become automatic, it becomes hard to unlearn. +* e.g. if you're used to driving in the US (on the right), hard to adjust to driving in the UK (on the left) + +* For UIs, this means that you should be consistent. +* For example, in a dialog box, the OK and cancel buttons should be in the same position, so people don't have to think about where it is. +* You can also force people *not* to use their automatic actions + * e.g. Adobe Lightroom, the confirm dialog for deleting photos from the disk is in an unusual place, so that the button you have the most chance of doing automatically is the cancel button. From cf55c85aaba8287424545e13cf47c17831fe98ab Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Sun, 7 Aug 2016 05:06:40 -0400 Subject: [PATCH 43/53] cs349: update index --- cs349/README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/cs349/README.md b/cs349/README.md index 3570003..467dce4 100644 --- a/cs349/README.md +++ b/cs349/README.md @@ -21,3 +21,4 @@ The [slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/schedule.shtml) are * [11.1 - Input Performance](11-1.md) - KLM, Fitts' Law, Steering Law, visual space and motor space * [11.2 - Accessibility](11-2.md) - different types of ableness, accessibility tools, UI design considerations * [12.1 - Visual Perception](12-1.md) - psychophysics, temporal resolution, spatial resolution, color spectrum, color perception and blindness, displays +* [12.2 - Cognition](12-2.md) - memory (short term, long term), perception (by experience, by context, by goals), cognition, locus, context switches, automatic actions From f9a8db3fb8c06660b89eb617ab903bd0753754eb Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Sun, 7 Aug 2016 05:19:09 -0400 Subject: [PATCH 44/53] cs349: add 13.1 - ethics --- cs349/13-1.md | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 66 insertions(+) create mode 100644 cs349/13-1.md diff --git a/cs349/13-1.md b/cs349/13-1.md new file mode 100644 index 0000000..535b7c6 --- /dev/null +++ b/cs349/13-1.md @@ -0,0 +1,66 @@ +# Ethics + +CS 349 - User interfaces (whichever section I decide to go to) + +7-25-2016 + +Elvin Yung + +[Slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/slides/13.1-ethics.pdf) + +**NOTE:** This is *not* covered on the final, but it's still really important to know about if you plan on designing UIs in real life. + +* Benevolent deception, malicious design +* Basically, manipulating the truth can be done for good or evil. +* It's possible to lie to users for their own good. + +* Example: robotic physical therapy system +* Example: electronic switching system, i.e. for phones + * When a switch fails, you could report the error to the user, which might confuse the users and reduce their confidence in the system. + * Or, you could put them through to the wrong person, and they can just think they made the wrong call. + * Arguably, this is *bad* deception. + +* Example: Placebo buttons + * e.g. 
crosswalk buttons, office thermostats, elevator close door buttons + * Give people the illusion of control + +* As designers, we want to balance between end-user expectations and the capabilities of the system. +* We use deception to fill the gap. + +## Benevolent Deception +Some *gaps* that deceptive design is used to fill in: + +### System vs. reality +* Maintain the user experience + * e.g. Netflix recommender will never recommend nothing +* Hide uncertainty + * e.g. Windows file operation time estimate +* Guarantee a level of entertainment + * e.g. tweak game AI such that they give you a challenge without making it impossible +* Maintain consistency/expectations + * e.g. artificial shutter noises from phone cameras + +### Individual vs. Group +* e.g hiding whether the username or password was wrong in a login screen +* e.g. Sandboxing +* e.g. timesharing systems let many people pretend they own the computer + +### Individual vs. Self +* Protect the user from themselves. + * e.g. don't actually delete a file, just move it to the trash can. + * e.g. [fake bus stop for Alzheimer's patients](https://www.fastcompany.com/1598472/uncommon-act-design-fake-bus-stop-helps-alzheimers-patients) + +## Malevolent Deception +* Deception can *definitely* be bad, like: + * Use confusing language (e.g. double negatives) + * Hiding certain functionality (e.g. the unsubscribe button) + * Exploiting user mistakes (e.g. torrent sites that have 10 different download buttons, 9 of which are on ads) + +## Experimentation +* You've probably already encountered the idea of *A/B testing*, where you show different versions of a UI to different users. +* In general, experiments like these are helpful for figuring out the effects of different UI changes in a real-world environment. +* But sometimes the ethics are questionable. +* For example, Facebook manipulated news feed posts in 2013 to figure out how it changes emotions. + +## tl;dr +* Build interfaces that you would let your grandmother use. From 9868d960553c019033baf7f0e7ef1360a72901eb Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Sun, 7 Aug 2016 05:19:20 -0400 Subject: [PATCH 45/53] cs349: update index --- cs349/README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/cs349/README.md b/cs349/README.md index 467dce4..70a3138 100644 --- a/cs349/README.md +++ b/cs349/README.md @@ -22,3 +22,4 @@ The [slides](https://www.student.cs.uwaterloo.ca/~cs349/s16/schedule.shtml) are * [11.2 - Accessibility](11-2.md) - different types of ableness, accessibility tools, UI design considerations * [12.1 - Visual Perception](12-1.md) - psychophysics, temporal resolution, spatial resolution, color spectrum, color perception and blindness, displays * [12.2 - Cognition](12-2.md) - memory (short term, long term), perception (by experience, by context, by goals), cognition, locus, context switches, automatic actions +* [13.1 - Ethics](13-1.md) - benevolent vs. malevolent deception, gaps for deceptive design, experimentation From 2e7e92834dacd0f580a7c25d55e5fa3df9209f79 Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Sun, 7 Aug 2016 05:22:11 -0400 Subject: [PATCH 46/53] cs349: fix photoshop color picker image --- cs349/12-1.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cs349/12-1.md b/cs349/12-1.md index 67b5ad2..9522297 100644 --- a/cs349/12-1.md +++ b/cs349/12-1.md @@ -65,7 +65,7 @@ Elvin Yung * You've probably seen HSV represented in a color picker, where you pick the hue in a separate widget, and then get a 2D grid of the saturation and brightness. 
-![](http://learn.shayhowe.com/assets/images/courses/html-css/getting-to-know-css/photoshop-color-picker.png)
+![](https://pe-images.s3.amazonaws.com/basics/interface/cc/2014/color-panel/photoshop-color-picker.jpg)
 
 ### Perceiving Color
 * Two different light sensors in the eye: From dbb6ec355b27853701cb25b9c0bf8f0b17676145 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 11 Aug 2016 02:41:57 -0400
Subject: [PATCH 47/53] cs350: add scheduling notes

---
 cs350/scheduling.md | 109 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 109 insertions(+)
 create mode 100644 cs350/scheduling.md

diff --git a/cs350/scheduling.md b/cs350/scheduling.md
new file mode 100644
index 0000000..ec6df1b
--- /dev/null
+++ b/cs350/scheduling.md
@@ -0,0 +1,109 @@
+# Scheduling
+
+CS 350 - Operating Systems
+Spring 2016
+Elvin Yung
+
+**Previous:** [Virtual Memory](vm.md)
+**Next:** [I/O](io.md)
+
+## Motivation
+* A **scheduling policy** determines how to assign work (called *tasks* or *jobs*) to workers.
+* In an OS, the jobs are the threads/processes (basically same thing in OS161), and the workers are the processors.
+
+* For now, the main metric that we care about is the average **turnaround time**: the time from the job's arrival to the job's completion.
+* It would also be great if the scheduling was *fair*, i.e. that every job gets a proportionate amount of resources, although we don't talk about that here.
+
+## Assumptions
+A bunch of mostly-unreasonable assumptions that we'll make, and then remove one by one:
+* 1) Each job takes the same amount of time to run.
+* 2) Every job arrives at the same time.
+* 3) When a job runs, it runs to completion.
+* 4) Jobs only use the CPU, and perform no I/O.
+* 5) We know the workload of each job.
+
+## First In, First Out (FIFO)
+* (Assume that jobs don't all arrive at the *exact* same time, i.e. that assumption 2 doesn't strictly hold.)
+* The most naive way to do it: just keep taking the first incomplete job and running it.
+
+### Example
+* Suppose we have 3 jobs, A, B, and C, arriving in that order, and they each take 1 unit of time.
+* Then the turnaround time for A is 1, for B it's 2, and for C it's 3.
+* The average turnaround time is 2.
+
+### Relax Assumption 1
+* Now relax assumption 1, i.e. now assume that jobs can have different workloads.
+* Suppose we have 3 jobs, A, B, and C, arriving in that order. A takes 10 units of time, and B and C each take 1 unit of time.
+* Then the turnaround time for A is 10, for B it's 11, and for C it's 12.
+* Average turnaround time = 11
+* That's terrible!
+
+* Obvious flaw of FIFO: if a long job gets run, many jobs can pile up behind it (**convoy effect**).
+
+## Shortest Job First (SJF)
+* Also pretty simple: just keep picking the job with the smallest workload.
+* Works great *as long as assumption 2 holds*.
+
+* With our previous example, one correct order is this: B, C, A
+* Then the turnaround time for B is 1, for C it's 2, and for A it's 12.
+* Average turnaround time = 5
+* Huge improvement.
+
+### Relax Assumption 2
+* Now suppose jobs can arrive at different times.
+* In particular, suppose that job A arrives at t = 0, and then jobs B and C arrive at t = 1.
+* Then at the beginning the scheduler would run A since that's the only (and thus shortest) job, and then B and C would still be forced to wait behind A.
+* In other words, the convoy effect still occurs.
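+
+The turnaround arithmetic above is easy to script. Here's a toy sketch (non-preemptive, all jobs arriving at t = 0 - so it models FIFO and SJF, but not what comes next) that reproduces the averages from the example:
+
+```python
+def avg_turnaround(order, runtimes):
+    """Run jobs to completion in the given order; all arrive at t = 0."""
+    t, total = 0, 0
+    for job in order:
+        t += runtimes[job]  # this job completes at time t
+        total += t          # turnaround = completion time - arrival time (0)
+    return total / len(order)
+
+runtimes = {"A": 10, "B": 1, "C": 1}
+print(avg_turnaround(["A", "B", "C"], runtimes))  # FIFO order: 11.0
+print(avg_turnaround(["B", "C", "A"], runtimes))  # SJF order: 5.0
+```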
+
+## Shortest Time-to-Completion First (STCF)
+* Now we relax assumption 3.
+* We introduce **preemption**: the scheduler can choose to interrupt a job while it's running, run another job, and go back to it later.
+* STCF works like this: when a new job is enqueued, if the new job has less time left than the current job, it preempts the current job and runs the new job.
+
+* With our previous example, when job B arrives at t = 1, job A still has 9 time units left, whereas job B has only 1 left, so A gets preempted and B gets run.
+* Then at t = 2, job B is done, but job C only has 1 time unit left while A still has 9, so C gets run.
+
+* The turnaround times are as follows:
  * A: 12 - 0 = 12
  * B: 2 - 1 = 1
  * C: 3 - 1 = 2
  * Average turnaround time: 5
+
+### Relax Assumption 4
+* STCF is great, but from the 1960s onward computers started being interactive.
+* Interactivity means that jobs might spend time waiting for I/O (e.g. writing to disk, waiting for stdin, etc.)
+* This means that we care about the **response time**: the time between when the job first arrives, and when it gets run for the first time.
+* For our previous example, A and B each have response times of 0, but C has a response time of 1.
+
+## Round Robin (RR)
+* So it's clear that preempting only when a new job arrives doesn't always work.
+* We already know about timer interrupts - why not use those?
+* In particular, we will run jobs a **slice** (or **quantum**) at a time, i.e. every time we run a job, we only run it until the next timer interrupt.
+* RR works like this:
  * Run a slice of the first job on the queue.
  * If the job isn't complete at the end of the quantum, move it to the back.
  * Repeat.
+
+## Multi-Level Feedback Queue (MLFQ)
+* MLFQ is the most common scheduling algorithm in modern OSes. It's used in OS X, Linux (since 2.4), and Windows (since NT).
+* It essentially gives preference to short I/O bound jobs.
+
+* The basic setup is that there are multiple queues with different priority levels (i.e. the *multi-level* part).
+* Then, the scheduler runs jobs based on these rules:
  * Each job starts off at the end of the highest priority queue.
  * The scheduler processes the jobs in the highest non-empty priority queue, in a round-robin fashion.
  * When a job is run, it's allocated a quantum of time.
    * If it finishes before the end of the quantum, it leaves the system.
    * If it uses up the entire quantum but doesn't finish, it's preempted, and put at the end of the queue below (obviously, unless it's already at the bottom).
    * If it gives up CPU control before the end of the quantum without finishing, it stays at the same level.
  * In other words, the scheduler uses *feedback* to determine which priority a job should be at.
+
+* You'll notice that it's possible that jobs in lower level queues might take a long time to complete if a lot of jobs enter (**starvation**).
+* Some MLFQ implementations work around this by periodically moving all jobs back to the top priority queue.
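+
+Here's a toy sketch of those rules. The level count and quantum are made-up parameters, and it models CPU-only jobs that all arrive at t = 0 (so the "gives up the CPU early" rule never triggers, and there's no periodic priority boost):
+
+```python
+from collections import deque
+
+def mlfq(jobs, levels=3, quantum=2):
+    """jobs: {name: total runtime}. Prints when each job finishes."""
+    queues = [deque() for _ in range(levels)]
+    for name in jobs:
+        queues[0].append(name)  # every job starts at the highest priority
+    remaining = dict(jobs)
+    t = 0
+    while any(queues):
+        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
+        name = queues[level].popleft()
+        run = min(quantum, remaining[name])
+        t += run
+        remaining[name] -= run
+        if remaining[name] == 0:
+            print(f"{name} finished at t = {t}")
+        else:
+            # Used its whole quantum without finishing: demote it
+            # (unless it's already at the bottom).
+            queues[min(level + 1, levels - 1)].append(name)
+
+mlfq({"A": 10, "B": 1, "C": 1})  # the long job A sinks; B and C finish fast
+```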
+
+## Interesting Extra Stuff
+* [What really happened on Mars?](http://research.microsoft.com/en-us/um/people/mbj/Mars_Pathfinder/Mars_Pathfinder.html) - tl;dr how the *Pathfinder* Mars probe got brought down by a crappy scheduling policy.
+* [A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Computer Systems](http://www.cs.wustl.edu/~jain/papers/ftp/fairness.pdf) by R. Jain et al. - A common metric for scheduling fairness.
+* [MIT OCW Queueing Theory course](http://ocw.mit.edu/courses/sloan-school-of-management/15-072j-queues-theory-and-applications-spring-2006/)
+
+**Next:** [I/O](io.md) From 48ab6342464973d37d9b90603dc8cfb70fd46472 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 11 Aug 2016 02:42:06 -0400
Subject: [PATCH 48/53] cs350: add index

---
 cs350/README.md | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 cs350/README.md

diff --git a/cs350/README.md b/cs350/README.md
new file mode 100644
index 0000000..ace30c6
--- /dev/null
+++ b/cs350/README.md
@@ -0,0 +1,34 @@
+# The Hitchhiker's Guide to CS350
+
+CS 350 - Operating Systems
+Spring 2016
+Elvin Yung
+
+CS350 is an awesome course, but it's a lot of material. It can be overwhelming, especially if you're cramming 2 days before the final.
+
+Thankfully, the course is split up into (roughly) 5 sections, which are mostly (but not completely) independent of each other:
+
+* [Synchronization](synch.md)
  * threads
  * synchronization primitives: locks, semaphores, CVs
+* [Kernel](kernel.md)
  * processes
  * system calls
  * context switches
+* [Virtual Memory](vm.md)
  * address spaces
  * TLBs
+* [Scheduling](scheduling.md)
  * first in first out (FIFO)
  * shortest job first (SJF)
  * shortest time-to-completion first (STCF)
  * round robin (RR)
  * multi-level feedback queue (MLFQ)
+* [I/O](io.md) and [Filesystems](fs.md)
  * I/O devices
  * polling, vectored interrupts, DMA
  * device drivers
  * hard disks
  * I/O performance
  * filesystems
  * journaling From d6f547c7a0945d85153eb61bd2bfd1de0d052d24 Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 11 Aug 2016 11:12:00 -0400
Subject: [PATCH 49/53] cs349: undo highlight phrasing

---
 cs349/7-1.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cs349/7-1.md b/cs349/7-1.md
index 19b6214..68107b2 100644
--- a/cs349/7-1.md
+++ b/cs349/7-1.md
@@ -38,7 +38,7 @@ Elvin Yung
 * e.g. should you highlight the thing that the undo restored?
 
 #### Suggestions
-* They should be, since before the delete, the text/icon/etc. was selected.
+* The restored item should be highlighted, since before the delete, it was selected.
 * It's probably meaningful for the scroll to move to the restored text. e.g. Sublime Text does this.
 
 ### Granularity From 7fc7844cc53105a435aa0d450ad05f03e483d38c Mon Sep 17 00:00:00 2001
From: Elvin Yung
Date: Thu, 11 Aug 2016 11:13:51 -0400
Subject: [PATCH 50/53] cs349: responsiveness phrasing fix

---
 cs349/6-2.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/cs349/6-2.md b/cs349/6-2.md
index d5a0718..4398f52 100644
--- a/cs349/6-2.md
+++ b/cs349/6-2.md
@@ -166,7 +166,9 @@ For our prime calculation program, we basically have two strategies:
 #### `Demo4`
 * Why do we use `SwingUtilities.invokeLater()` to update the UI, instead of updating it directly?
 * `Demo4` has two worker threads: One that uses `invokeLater`, and one that updates it directly.
-* The problem is that since Swing isn't thread-safe, which means that calling the Swing methods directly could lead to lots of synchronization issues.
+* As it turns out, the thread that uses `invokeLater` updates much more smoothly.
+* The problem is that since Swing isn't thread-safe, calling the Swing methods directly could lead to lots of synchronization issues.
+* On the other hand, using `invokeLater` means that the UI thread performs synchronization for you.
* [Race conditions are bad.](https://en.wikipedia.org/wiki/Therac-25) #### More stuff From 581c0c0baca565a376a280926e1d8f601586707c Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Thu, 11 Aug 2016 11:35:44 -0400 Subject: [PATCH 51/53] cs349: fix visual perception LCD diagram image --- cs349/12-1.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cs349/12-1.md b/cs349/12-1.md index 9522297..9f932c3 100644 --- a/cs349/12-1.md +++ b/cs349/12-1.md @@ -124,7 +124,7 @@ To make messages visible: * Very bulky. #### LCD -![](blob:https%3A//imgur.com/552293ce-95ac-4ec0-8f34-0f1631a49587) +![](https://i.imgur.com/oZAKkTe.png) * We gradually shifted from CRT to LCD, because it's more portable. * Use a layer of liquid crystal From ed05eeac57fe15e059fcd6671fdb816e8c4e07b3 Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Thu, 11 Aug 2016 11:38:38 -0400 Subject: [PATCH 52/53] cs349: phrasing fixes for 10.1 --- cs349/10-1.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/cs349/10-1.md b/cs349/10-1.md index 30968d1..aeec7c6 100644 --- a/cs349/10-1.md +++ b/cs349/10-1.md @@ -56,7 +56,7 @@ Elvin Yung * Instead of having discrete devices that you carry, instrument the world around you to do things for you. * For Ubicomp to really work, you need: * Computation embedded into the environment - * Something that ties the person to the environment - a device that helps identify the person. Can a smartwatch + * Something that ties the person to the environment - a device that helps identify the person. Can a smartwatch fill this role? Maybe. ## Augmented Reality * Examples: Google Glass, Hololens @@ -82,5 +82,5 @@ And also other new technology. * Why do you need a wearable? -* A better mousetrap is not good enough - it needs to be solving a problem - be a 10x improvement +* A better mousetrap is not good enough - it needs to be solving a problem - 10x not 10%. * New technology takes time to mature! Remember old tablets and PDAs? From a20b2578773387e7e19f14b3fe6216714cada0c8 Mon Sep 17 00:00:00 2001 From: Elvin Yung Date: Thu, 11 Aug 2016 11:40:39 -0400 Subject: [PATCH 53/53] cs349: elaborate on cd gain --- cs349/10-2.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/cs349/10-2.md b/cs349/10-2.md index 755d0b9..0b73905 100644 --- a/cs349/10-2.md +++ b/cs349/10-2.md @@ -97,3 +97,9 @@ Elvin Yung #### Control-Display Gain * The **gain** is the ratio of how fast the pointer moves to how fast the input device moves. + +* If the CD gain is 1, when the input device moves some distance, the pointer moves the same distance. +* If the CD gain is less than 1, the pointer moves more slowly than the device. +* If the CD gain is more than 1, the pointer moves faster than the device. + +* In lots of OSes this is also known as the **sensitivity**, and it's generally tunable in the settings.
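+
+As a tiny illustrative sketch (not how any real OS implements it), CD gain is just a multiplier on relative device movement - and "pointer acceleration" is a gain that varies with how fast the device is moving:
+
+```python
+def pointer_delta(dx, dy, gain=2.0):
+    """Map a relative device movement to a pointer movement."""
+    return dx * gain, dy * gain
+
+print(pointer_delta(10, 5))            # gain 2.0: pointer moves 2x the device
+print(pointer_delta(10, 5, gain=0.5))  # gain < 1: pointer moves slower
+```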