This application is being developed for people with learning disabilities. In particular, it is targeted at people learning to read phrases, speak phrases, hear phrases, and see the objects those phrases represent.
The general usage of the application is as follows:
- Type a phrase to be learned into the top bar
- Read the phrase
- Speak the phrase out loud
- Listen to the phrase being spoken
- Right-click any word in the phrase to see a Google Image Search image of that word
- Speak the word associated with that image
- Speak and read the phrase while looking at the image
I welcome any and all pull requests. Features I would like to add are:
- A history of phrases (drop-down list), so that prior phrases can be revisited and reviewed (for memory recall)
- Caching of image results so that images load quickly and API requests are limited. This makes the application more accessible to people who need to use a free-tier SerpAPI account, which is limited to 100 requests per month.
- A bouncing-ball or rolling highlight of the words, to be followed visually (see https://vimeo.com/103302283) [speed would be configurable]
- A custom [large] mouse cursor to make pointing at parts of the image, as well as parts of the text, easier to see
- A "paint bucket" tool that allows aspects of the image to be "filled" with color, as a way of highlighting parts of the image, allowing spatial and associative learning
- Alt-arrow keys to manually move the highlight / bouncing ball forward and backward in the text
- Easy in-app dictionary-lookup of definitions of terms
- Auto-generation of phrases based on Simple / Basic English (https://simple.wikipedia.org/wiki/Simple_English)
- Spidering of concept maps -- e.g., the ability to pull up an auto-generated learning phrase based on the concepts or words present in the current exercise
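For the caching feature, one possible shape is a small lookup wrapper that remembers results per normalized query, so a phrase revisited from history costs no extra API requests. This is only a sketch: the `CachedImageSearch` name and the `fetch` callable (which would wrap the actual SerpAPI call) are hypothetical, and a real implementation would likely persist the cache to disk.

```python
class CachedImageSearch:
    """Cache image-search results per query so repeated lookups
    do not consume additional (rate-limited) API requests."""

    def __init__(self, fetch):
        # fetch: callable(query) -> list of image URLs
        # (in the real app this would wrap a SerpAPI request)
        self._fetch = fetch
        self._cache = {}
        self.api_calls = 0  # how many times we actually hit the API

    def lookup(self, query):
        # Normalize so "Cat " and "cat" share one cache entry
        key = query.strip().lower()
        if key not in self._cache:
            self.api_calls += 1
            self._cache[key] = self._fetch(key)
        return self._cache[key]
```

With this in place, right-clicking the same word twice would trigger only one API request, which matters on the 100-requests-per-month free tier.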