# Agent - An app to play with Google's Cloud Speech API

This app lets users issue voice commands to save and recall items. Start recording by tapping the Record button, or by pressing the main button with headphones plugged in.

Commands should start with `read`, `watch`, `visit`, or `remember`. For example, `watch The Big Short` will fetch the movie *The Big Short* using the OMDB API and save it to the local Core Data database.
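A transcript like the one above could be routed to a handler based on its command prefix. A minimal sketch of that idea (the function and constant names here are illustrative, not the app's actual API):

```objc
#import <Foundation/Foundation.h>

// Return the recognized command prefix, or nil if the transcript does not
// start with a supported command followed by an argument.
static NSString *CommandForTranscript(NSString *transcript) {
    NSArray<NSString *> *commands = @[ @"read", @"watch", @"visit", @"remember" ];
    NSString *lowered = transcript.lowercaseString;
    for (NSString *command in commands) {
        // Require a trailing space so the command has an argument,
        // e.g. "watch The Big Short".
        if ([lowered hasPrefix:[command stringByAppendingString:@" "]]) {
            return command;
        }
    }
    return nil;
}
```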

This app is based on Google's streaming gRPC sample app. It uses the Cloud Speech API to recognize speech in recorded audio, the Goodreads API for book search, the Yelp API for place search, and the OMDB API for movie search.

## Prerequisites

- An API key for the Cloud Speech API (see [the docs][getting-started] to learn more)
- An OS X machine or emulator
- [Xcode 7][xcode]
- [Cocoapods][cocoapods] version 1.0 or later
- API keys for [Goodreads][goodreads-api] and [Yelp][yelp-api]


1. Clone this repo and `cd` into this directory.
2. Run `pod install` to download and build the Cocoapods dependencies.
3. Open the project by running `open Agent.xcworkspace`.
4. In `Agent/SpeechRecognitionService.m`, replace `YOUR_GOOGLE_API_KEY` with the API key obtained above.
5. In `Agent/SavedTableViewController.m`, replace `YOUR_GOODREADS_API_KEY` with the API key obtained from Goodreads.
6. In `Agent/SavedTableViewController.m`, replace `YOUR_YELP_API_TOKEN` with the API token obtained from Yelp.
7. Build and run the app.
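Steps 1-3 above can be run from a terminal as follows (this assumes Cocoapods is already installed):

```shell
# From the cloned repo's directory:
pod install              # fetch and build the Cocoapods dependencies
open Agent.xcworkspace   # open the workspace (not the .xcodeproj) in Xcode
```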

## Running the app

- As with all Google Cloud APIs, every call to the Speech API must be associated with a project in the Google Cloud Console that has the Speech API enabled. This is described in more detail in the getting started doc, but in brief:
  - Create a project (or use an existing one) in the Cloud Console.
  - Enable billing and the Speech API.
  - Create an API key, and save it for later.

- Clone this repository from GitHub. If you have `git` installed, you can do this by running:

  ```
  $ git clone
  ```

  This will download the repository of samples into the directory `ios-docs-samples`.

- `cd` into this directory in the repository you just cloned, and run `pod install` to prepare all the Cocoapods dependencies.

- Run `open Agent.xcworkspace` to open this project in Xcode. Since we are using Cocoapods, be sure to open the workspace and not `Agent.xcodeproj`.

- In Xcode's Project Navigator, open the `SpeechRecognitionService.m` file within the `Agent` directory.

- Find the line where `GOOGLE_API_KEY` is set. Replace the string value with the API key obtained from the Cloud Console above. This key is the credential used to authenticate all requests to the Speech API. Calls to the API are thus associated with the project you created above, for access and billing purposes.

- You are now ready to build and run the project. In Xcode, you can do this by clicking the Play button in the top left. This will launch the app on the simulator or on the device you've selected. Be sure that the Agent target is selected in the popup near the top left of the Xcode window.

- Tap the Record button. This uses a custom `AudioController` class to capture audio in an in-memory instance of `NSMutableData`. When this data reaches a certain size, it is sent to the `SpeechRecognitionService` class, which streams it to the speech recognition service. Packets are streamed as instances of the `RecognizeRequest` object, and the first `RecognizeRequest` sent also includes configuration information in an instance of `InitialRecognizeRequest`. As it runs, the `AudioController` logs the number of samples and the average sample magnitude for each packet it captures.

- Speak a command that starts with `read`, `watch`, `visit`, or `remember`.
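The buffer-and-flush pattern described in the Record step above can be sketched as follows; the class, method, and threshold names here are hypothetical, not the app's actual code:

```objc
#import <Foundation/Foundation.h>

// Illustrative threshold: once this many bytes accumulate, flush a chunk
// to the streaming recognizer.
static const NSUInteger kChunkSizeBytes = 16384;

@interface AudioBuffer : NSObject
@property (nonatomic, strong) NSMutableData *audioData;
- (void)processSampleData:(NSData *)data
                  flusher:(void (^)(NSData *chunk))flush;
@end

@implementation AudioBuffer

- (instancetype)init {
    if ((self = [super init])) {
        _audioData = [NSMutableData data];
    }
    return self;
}

- (void)processSampleData:(NSData *)data
                  flusher:(void (^)(NSData *chunk))flush {
    // Accumulate incoming samples in memory.
    [self.audioData appendData:data];
    // Once enough audio has accumulated, hand the chunk to the
    // streaming service and start a fresh buffer.
    if (self.audioData.length >= kChunkSizeBytes) {
        flush([self.audioData copy]);
        self.audioData = [NSMutableData data];
    }
}

@end
```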

