DEPRECATED: Objective-C (Cocoa) SDK for api.ai

Deprecated
This Dialogflow client library and Dialogflow API V1 have been deprecated and will be shut down on October 23rd, 2019. Please migrate to Dialogflow API V2.



Overview

The API.AI Objective-C (Cocoa) SDK makes it easy to integrate speech recognition with the API.AI natural language processing API on Apple devices. With it, your app can accept voice commands and drive the dialog scenarios defined for a particular agent in API.AI.

Prerequisites

You will need Xcode, CocoaPods, and an api.ai account with an agent (its client access token is available on the agent's settings page).

Running the Demo app

  • Run pod update in the ApiAIDemo project folder.

  • Open ApiAIDemo.xcworkspace in Xcode.

  • In ViewController's -viewDidLoad, insert your API key (see the sketch after this list):

    configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN";

    Note: you need an existing agent in api.ai. The client access token can be obtained from the agent's settings page.

  • Define sample intents in the agent.

  • Run the app in Xcode. Input can be entered as text or spoken (voice support is experimental).
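
For orientation, here is a minimal sketch of what that -viewDidLoad configuration might look like. The use of the shared ApiAI instance and the surrounding method body are assumptions; only the clientAccessToken line appears in this README:

    - (void)viewDidLoad {
        [super viewDidLoad];

        // Configure the SDK with your agent's client access token
        // (assumed here to be set on the shared ApiAI instance).
        id <AIConfiguration> configuration = [[AIDefaultConfiguration alloc] init];
        configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN";
        [ApiAI sharedApiAI].configuration = configuration;
    }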

Integrating into your app

1. Initialize CocoaPods

  • Update your Podfile to include:

    pod 'ApiAI'

  • Run pod install (or pod update if the project already uses CocoaPods) in your project folder. A minimal Podfile sketch appears after this list.
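
A minimal Podfile might look like the following; the platform version and target name are placeholders, not taken from this README:

    # Placeholder deployment target and app target name; adjust to your project.
    platform :ios, '8.0'

    target 'YourApp' do
      pod 'ApiAI'
    end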

2. Init the SDK.

In AppDelegate.h, import ApiAI.h and declare a property:

#import <ApiAI/ApiAI.h>

@property(nonatomic, strong) ApiAI *apiAI;

In AppDelegate.m, add:

  self.apiAI = [[ApiAI alloc] init];

  // Define API.AI configuration here.
  id <AIConfiguration> configuration = [[AIDefaultConfiguration alloc] init];
  configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN_HERE";

  self.apiAI.configuration = configuration;
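
The README doesn't say where in AppDelegate.m to run this; -application:didFinishLaunchingWithOptions: is the conventional place, so here is a sketch under that assumption:

    // Assumption: initialization happens at app launch; the README does not
    // specify a method, only the statements below.
    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
        self.apiAI = [[ApiAI alloc] init];

        // Point the SDK at your agent via its client access token.
        id <AIConfiguration> configuration = [[AIDefaultConfiguration alloc] init];
        configuration.clientAccessToken = @"YOUR_CLIENT_ACCESS_TOKEN_HERE";
        self.apiAI.configuration = configuration;

        return YES;
    }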

3. Perform a request.

...
// Request using text (assumes that speech recognition / ASR is done using a third-party library, e.g. AT&T)
AITextRequest *request = [_apiAI textRequest];
request.query = @[@"hello"];
[request setCompletionBlockSuccess:^(AIRequest *request, id response) {
    // Handle success ...
} failure:^(AIRequest *request, NSError *error) {
    // Handle error ...
}];

[_apiAI enqueue:request];
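
Going a step further, here is a sketch of reading the agent's reply in the success block. It assumes the default mapping hands back an NSDictionary mirroring the API.AI V1 JSON; the result/fulfillment/speech key path comes from the V1 response format, not this README:

    [request setCompletionBlockSuccess:^(AIRequest *request, id response) {
        // V1 responses nest the agent's text reply under result -> fulfillment -> speech.
        NSDictionary *result = response[@"result"];
        NSString *speech = result[@"fulfillment"][@"speech"];
        NSLog(@"Agent replied: %@", speech);
    } failure:^(AIRequest *request, NSError *error) {
        NSLog(@"Request failed: %@", error.localizedDescription);
    }];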

How to make contributions?

Please read and follow the steps in CONTRIBUTING.md.

License

See LICENSE.

Terms

Your use of this sample is subject to, and by using or downloading the sample files you agree to comply with, the Google APIs Terms of Service.

This is not an official Google product.
