This is a simple iPhone application that uses the WolframAlpha API to display answers to users' queries. There are three different ways to make a query. Users can either:

  1. Type the entry they want to search for,
  2. Use voice detection to provide their search term, or
  3. Choose among some predefined searches.

External Libraries

The external libraries used by this app are:

  • OpenEars
  • Reachability
  • TouchXML
  • asi-http-request

For more information on these libraries, and on how to install and use them in your project, visit the corresponding websites.

Specifically for OpenEars: in addition to following the instructions on its website, to make it work in this project I had to set the Header Search Paths in both the project settings and the target settings.

Usage Notes

  • Even though the voice recognition module can run on the Simulator as well as on the device, it is optimized for the device.
  • It should also be noted that the quality of the voice recognition depends on the dictionary used. In this demo:
    • For the .languagemodel file, I used this file from the install (note: files ending in .DMP can be used as ARPA language model files): [OPENEARS]/CMULibraries/pocketsphinx-0.6.1/model/lm/en_US/hub4.5000.DMP
    • For the .dic file, I used this file from the Pocketsphinx repository.

As stated in the OpenEars documentation: "This will set you up with a matching 5000 word vocabulary for the default acoustic model which you can then tell PocketsphinxController to start with. To the best of my understanding, 5000 words is the maximum size for decent recognition performance for Pocketsphinx and reasonable resource usage on the device. Keep in mind that using such a large model will increase your memory overhead, and reduce recognition speed and recognition accuracy."

As a result, quite a few of the voice searches I tried were not recognized correctly. Some search terms that worked for me in "voice" mode are: "what is the population of China", "what is my name", "what time is it", "ice".
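For reference, starting recognition with that language model and dictionary looks roughly like the sketch below. This assumes the OpenEars 1.x-era `PocketsphinxController` API; the method names and the `hub4.5000.dic` resource name are assumptions and may differ in the OpenEars version bundled with this project.

```objc
// Sketch only: assumes the OpenEars 1.x-era API. Resource names below
// (hub4.5000.DMP / hub4.5000.dic) are assumptions based on the files
// described above, which must be added to the app bundle.
#import <OpenEars/PocketsphinxController.h>

NSString *lmPath  = [[NSBundle mainBundle] pathForResource:@"hub4.5000" ofType:@"DMP"];
NSString *dicPath = [[NSBundle mainBundle] pathForResource:@"hub4.5000" ofType:@"dic"];

PocketsphinxController *pocketsphinxController = [[PocketsphinxController alloc] init];

// languageModelIsJSGF is NO because the .DMP file is an ARPA-style
// statistical language model, not a JSGF grammar.
[pocketsphinxController startListeningWithLanguageModelAtPath:lmPath
                                              dictionaryAtPath:dicPath
                                           languageModelIsJSGF:NO];
```

Recognition results are then delivered through an `OpenEarsEventsObserver` delegate, as described in the OpenEars documentation.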