# ux-by-tfjs

Online demo: http://djcordhose.github.io/ux-by-tfjs/dist (watch https://youtu.be/kZ8sXFIJQyg to see how it is used)

Project: https://github.com/DJCordhose/ux-by-tfjs

Notebook for server-side processing: https://colab.research.google.com/github/DJCordhose/ux-by-tfjs/blob/master/notebooks/rnn-model.ipynb

## What does it do?

After training on your personal mouse paths, it can predict which button you are likely to click. You can then highlight that button for easier access, or take any other action that seems useful.
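For example, once the model emits one score per button, the highlighting could be driven by a small helper like this (a sketch; the function name and the confidence threshold are assumptions, not the project's actual code):

```javascript
// Pick the most likely button from the model's per-button scores,
// but only when the model is reasonably confident.
function mostLikelyButton(scores, threshold = 0.7) {
  let best = -1;
  let bestScore = threshold; // require at least this much confidence
  scores.forEach((score, index) => {
    if (score > bestScore) {
      best = index;
      bestScore = score;
    }
  });
  return best; // -1 means: no button is confident enough
}

console.log(mostLikelyButton([0.1, 0.8, 0.1])); // → 1
console.log(mostLikelyButton([0.4, 0.3, 0.3])); // → -1 (below threshold)
```

The returned index could then be used to add a CSS highlight class to the matching button.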

## How does this work?

Have a look at the more complete article in this repository (article.md / article.pdf).

## What makes this hard?

Prediction is specific to how the input device works (it does not work for touch):

- using a trackpad
- Windows / macOS / Linux
- using a mouse
- speed of the input device

And it also depends on personal style:

- beginner / expert
- fast / slow
- direct / curvy

## History

### Experiment 1

- uses fixed padding (the final 15 events before hitting the button are dropped)
- four input values: posX, posY, deltaX, deltaY
- uses a special neutral zone, meaning no button is being navigated to
- buttons in a single line and close to each other
- training data collected by always moving from the neutral zone to a button in a straight line
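The first two input values come straight from the events, the deltas from consecutive positions. A minimal sketch of this preprocessing, assuming plain `{x, y}` position objects (not the project's actual Collector.js code):

```javascript
// Convert a sequence of raw mouse positions into the four input
// values used per time step: posX, posY, deltaX, deltaY.
function toFeatures(positions) {
  return positions.map((p, i) => {
    const prev = i > 0 ? positions[i - 1] : p; // first event has no delta
    return [p.x, p.y, p.x - prev.x, p.y - prev.y];
  });
}

// Fixed padding as in Experiment 1: drop the final events before
// the button is hit (15 is the constant from the experiment).
function dropFinalEvents(features, padding = 15) {
  return features.slice(0, Math.max(0, features.length - padding));
}

console.log(toFeatures([{ x: 10, y: 20 }, { x: 13, y: 24 }]));
// → [[10, 20, 0, 0], [13, 24, 3, 4]]
```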

### Experiment 2

Hypothesis: more context works better, and movements far away from the button could still indicate it

- splits the path to the target into fragments
  - better prediction: each fragment is good for predicting in a different part of the path
  - more training data
- records raw events to give more flexibility
- adds deltaT to the input values
- buttons moved closer together
- training data much more random
- training works by splitting each path into 4 segments, each used to predict the target button
- clearly overfits, but the task is hard
- does not do well in practice
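The segment splitting described above can be sketched as follows (a simplified version, not the project's Trainer.js). Each segment becomes its own training example for the same target button, multiplying the amount of training data:

```javascript
// Split one recorded event path into `count` roughly equal segments.
function splitIntoSegments(events, count = 4) {
  const size = Math.ceil(events.length / count);
  const segments = [];
  for (let start = 0; start < events.length; start += size) {
    segments.push(events.slice(start, start + size));
  }
  return segments;
}

const events = [1, 2, 3, 4, 5, 6, 7, 8];
console.log(splitIntoSegments(events));
// → [[1, 2], [3, 4], [5, 6], [7, 8]]
```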

### Experiment 3

Hypothesis: only movements close to the buttons can be used for prediction

- clips paths to the two closest segments
- introduces a zeroed-out zone for demo purposes (posY < 225)
- better demo:
  - buttons closer together
  - prediction much earlier
- downloads a known-good model from a remote server
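The zeroed-out zone can be implemented by blanking all input values for events above the line. The threshold 225 is from the experiment; the `[posX, posY, deltaX, deltaY]` feature layout is an assumption:

```javascript
// Zero out all events in the "dead zone" (posY < 225) so the model
// learns to make no prediction while the pointer is there.
const ZONE_LIMIT = 225;

function zeroOutZone(features) {
  // each feature vector is assumed to be [posX, posY, deltaX, deltaY]
  return features.map(f => (f[1] < ZONE_LIMIT ? [0, 0, 0, 0] : f));
}

console.log(zeroOutZone([[100, 200, 1, 1], [100, 300, 1, 1]]));
// → [[0, 0, 0, 0], [100, 300, 1, 1]]
```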

### Experiment 4

- added tfjs-vis
- added across-the-board regularization
- changed the layout
- different RNN types:
  - LSTM/GRU: similar style, but LSTM seems to be a bit better in the real world
  - SimpleRNN: generalizes well to proximity, even though there are zero such examples in the training data
- pre-trained server model converted to TensorFlow.js: seems to be somewhat broken (all predictions are button 3)
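The RNN variants compared above can be expressed with the TensorFlow.js layers API. A minimal sketch of such a model; layer sizes, sequence length, and regularization strengths are assumptions, not the project's actual configuration:

```javascript
import * as tf from '@tensorflow/tfjs';

// Sequence classifier over [posX, posY, deltaX, deltaY, deltaT]
// time steps; `rnnType` selects between the variants compared above.
function buildModel(rnnType = 'lstm', timeSteps = 25, numButtons = 3) {
  const rnnLayer = {
    lstm: tf.layers.lstm,
    gru: tf.layers.gru,
    simpleRNN: tf.layers.simpleRNN,
  }[rnnType];

  const model = tf.sequential();
  model.add(rnnLayer({
    units: 32,
    inputShape: [timeSteps, 5],
    // across-the-board regularization, as in Experiment 4
    kernelRegularizer: tf.regularizers.l2({ l2: 1e-3 }),
    recurrentRegularizer: tf.regularizers.l2({ l2: 1e-3 }),
  }));
  model.add(tf.layers.dense({ units: numButtons, activation: 'softmax' }));
  model.compile({
    optimizer: 'adam',
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy'],
  });
  return model;
}
```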

### Experiment 5

- pre-trained server model converted to TensorFlow.js now works, thanks to SimpleRNN
- different RNN types:
  - LSTM/GRU: similar style, but LSTM seems to be a bit better in the real world
  - SimpleRNN: generalizes well to proximity, even though there are zero such examples in the training data

## Possible improvements

- create baselines to understand whether the model is really good:
  - proximity-based
  - interpolation of the path using linear regression (or just dx/dy for a single point)
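A proximity baseline could simply predict whichever button is currently closest to the pointer; a sketch, with button coordinates made up for illustration:

```javascript
// Baseline: predict the button whose center is closest to the
// current mouse position. Any learned model should beat this.
function proximityBaseline(pos, buttons) {
  let best = -1;
  let bestDist = Infinity;
  buttons.forEach((button, index) => {
    const dist = Math.hypot(button.x - pos.x, button.y - pos.y);
    if (dist < bestDist) {
      best = index;
      bestDist = dist;
    }
  });
  return best;
}

// hypothetical button layout for illustration
const buttons = [{ x: 100, y: 300 }, { x: 200, y: 300 }, { x: 300, y: 300 }];
console.log(proximityBaseline({ x: 190, y: 250 }, buttons)); // → 1
```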