Latest commit 6c9051f Jan 18, 2017 Dmitriy Kuragin version 0.7.0

Objective-C (Cocoa) SDK for API.AI



The API.AI Objective-C (Cocoa) SDK makes it easy to integrate speech recognition with the API.AI natural language processing API on Apple devices. API.AI lets your app respond to voice commands and follow the dialog scenarios defined for a particular agent in API.AI.


Running the Demo app

  • Run pod update in the ApiAiDemo project folder.
  • Open ApiAIDemo.xworkspace in Xcode.
  • In ViewController's -viewDidLoad, insert your API key:

    configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN";

    Note: an agent must already exist in API.AI. Keys can be obtained on the agent's settings page.

  • Define sample intents in the agent.

  • Run the app in Xcode. Input is possible via text and voice (experimental).

Integrating into your app

1. Initialize CocoaPods

  • Update your Podfile to include:

    pod 'ApiAI'

  • Run pod install (or pod update if your project already uses CocoaPods).
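
The CocoaPods setup above can be sketched as a minimal Podfile; the target name and platform version here are placeholders, so adjust them to match your project:

```ruby
# Minimal Podfile sketch — 'YourApp' and the iOS version are placeholders.
platform :ios, '7.0'

target 'YourApp' do
  pod 'ApiAI'
end
```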

2. Init the SDK.

In AppDelegate.h, import ApiAI.h and add a property:

  #import <ApiAI/ApiAI.h>

  @property(nonatomic, strong) ApiAI *apiAI;

In AppDelegate.m (for example, in -application:didFinishLaunchingWithOptions:), add:

    self.apiAI = [[ApiAI alloc] init];

    // Define API.AI configuration here.
    id <AIConfiguration> configuration = [[AIDefaultConfiguration alloc] init];
    configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN_HERE";

    self.apiAI.configuration = configuration;

3. Perform request.

  // Request using text (assumes that speech recognition / ASR is done
  // using a third-party library, e.g. AT&T)
  AITextRequest *request = [self.apiAI textRequest];
  request.query = @[@"hello"];
  [request setCompletionBlockSuccess:^(AIRequest *request, id response) {
      // Handle success ...
  } failure:^(AIRequest *request, NSError *error) {
      // Handle error ...
  }];

  [self.apiAI enqueue:request];
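
Inside the success block, the response is a parsed JSON object. A sketch of reading the agent's reply follows; the result/fulfillment/speech key path reflects the standard API.AI response format, so verify it against what your agent actually returns:

```objective-c
// Sketch: extracting the agent's text reply and matched action from an
// API.AI response dictionary. Key names follow the standard API.AI
// response format and should be checked against your agent's output.
[request setCompletionBlockSuccess:^(AIRequest *request, id response) {
    NSDictionary *result = response[@"result"];
    NSString *speech = result[@"fulfillment"][@"speech"];
    NSString *action = result[@"action"];
    NSLog(@"Agent action: %@, reply: %@", action, speech);
} failure:^(AIRequest *request, NSError *error) {
    NSLog(@"API.AI request failed: %@", error.localizedDescription);
}];
```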