The Watson Developer Cloud iOS SDK makes it easy for mobile developers to build Watson-powered applications. With the iOS SDK you can leverage the power of Watson's advanced artificial intelligence, machine learning, and deep learning techniques to understand unstructured data and engage with mobile users in new ways.
There are many resources to help you build your first cognitive application with the iOS SDK:
- Read the Readme
- Follow the QuickStart Guide
- Review a Sample Application
- Browse the Documentation
- Requirements
- Installation
- Service Instances
- Sample Applications
- Xcode 7 Compatibility
- Objective-C Compatibility
- Contributing
- License
- AlchemyData News
- AlchemyLanguage
- Conversation
- Dialog
- Document Conversion
- Language Translator
- Natural Language Classifier
- Personality Insights
- Retrieve and Rank
- Speech to Text
- Text to Speech
- Tone Analyzer
- Tradeoff Analytics
- Visual Recognition
- iOS 8.0+
- Xcode 8.0+
- Swift 2.3+
The Watson Developer Cloud iOS SDK uses Carthage to manage dependencies and build binary frameworks.
You can install Carthage with Homebrew:
$ brew update
$ brew install carthage
To use the Watson Developer Cloud iOS SDK in your application, specify it in your Cartfile:
github "watson-developer-cloud/ios-sdk"
Then run the following command to build the dependencies and frameworks:
$ carthage update --platform iOS
Finally, drag-and-drop the built frameworks into your Xcode project and import them as desired.
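For example, after adding the built frameworks to your project, you import each service framework you plan to use at the top of a Swift file. The module names below match the service examples later in this document:
// import only the Watson frameworks your application uses
import SpeechToTextV1
import TextToSpeechV1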
App Transport Security was introduced with iOS 9 to enforce secure Internet connections. To securely connect to IBM Watson services, please add the following exception to your application's Info.plist file.
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>watsonplatform.net</key>
        <dict>
            <key>NSTemporaryExceptionRequiresForwardSecrecy</key>
            <false/>
            <key>NSIncludesSubdomains</key>
            <true/>
            <key>NSTemporaryExceptionAllowsInsecureHTTPLoads</key>
            <true/>
            <key>NSTemporaryExceptionMinimumTLSVersion</key>
            <string>TLSv1.0</string>
        </dict>
    </dict>
</dict>
IBM Watson Developer Cloud offers a variety of services for developing cognitive applications. The complete list of Watson Developer Cloud services is available from the services catalog. Services are instantiated using the IBM Bluemix cloud platform.
Follow these steps to create a service instance and obtain its credentials:
- Log in to Bluemix at https://bluemix.net.
- Create a service instance:
    - From the Dashboard, select "Use Services or APIs".
    - Select the service you want to use.
    - Click "Create".
- Copy your service credentials:
    - Click "Service Credentials" on the left side of the page.
    - Copy the service's `username` and `password` (or `api_key` for Alchemy).
You will need to provide these service credentials in your mobile application. For example:
let textToSpeech = TextToSpeech(username: "your-username-here", password: "your-password-here")
Note that service credentials are different from your Bluemix username and password.
See Getting Started for more information on getting started with the Watson Developer Cloud and Bluemix.
As of v0.8.0, the iOS SDK is written in Swift 2.3 using Xcode 8. Unfortunately, Swift 2.3 is not backwards compatible with Xcode 7. We are not committed to maintaining Xcode 7 support but may occasionally publish a v0.7.x release with critical bug fixes.
To continue using the iOS SDK with Xcode 7, we recommend following the v0.7.x release branch with the following change to your Cartfile:
github "watson-developer-cloud/ios-sdk" ~> 0.7.0
Please see this tutorial for more information about consuming the Watson Developer Cloud iOS SDK in an Objective-C application.
We would love any and all help! If you would like to contribute, please read our CONTRIBUTING documentation, which includes information on getting started.
This library is licensed under Apache 2.0. Full license text is available in LICENSE.
This SDK is intended solely for use with an Apple iOS product and intended to be used in conjunction with officially licensed Apple development tools.
AlchemyData News provides news and blog content enriched with natural language processing to allow for highly targeted search and trend analysis. Now you can query the world's news sources and blogs like a database.
The following example demonstrates how to use the AlchemyData News service:
import AlchemyDataNewsV1
let apiKey = "your-apikey-here"
let alchemyDataNews = AlchemyDataNews(apiKey: apiKey)
let start = "now-1d" // yesterday
let end = "now" // today
let query = [
    "q.enriched.url.title": "O[IBM^Apple]",
    "return": "enriched.url.title,enriched.url.entities.entity.text,enriched.url.entities.entity.type"
]
let failure = { (error: NSError) in print(error) }
alchemyDataNews.getNews(start, end: end, query: query, failure: failure) { news in
    print(news)
}
Refine your query by referring to the Count and TimeSlice Queries and API Fields documentation.
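For example, a time-sliced count query can bucket matching articles by interval. The following is a sketch, not a verified call: it assumes the API's `timeSlice` field can be passed through the same query dictionary used above.
// hypothetical sketch: count articles mentioning IBM, bucketed by day
let countQuery = [
    "q.enriched.url.title": "IBM",
    "timeSlice": "1d"
]
alchemyDataNews.getNews("now-7d", end: "now", query: countQuery, failure: failure) { news in
    print(news)
}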
The following links provide more information about the IBM AlchemyData News service:
- IBM AlchemyData News - Service Page
- IBM AlchemyData News - Documentation
- IBM AlchemyData News - Demo
AlchemyLanguage is a collection of text analysis functions that derive semantic information from your content. You can input text, HTML, or a public URL and leverage sophisticated natural language processing techniques to get a quick high-level understanding of your content and obtain detailed insights such as directional sentiment from entity to object.
AlchemyLanguage has a number of features, including:
- Entity Extraction
- Sentiment Analysis
- Keyword Extraction
- Concept Tagging
- Relation Extraction
- Taxonomy Classification
- Author Extraction
- Language Detection
- Text Extraction
- Microformats Parsing
- Feed Detection
The following example demonstrates how to use the AlchemyLanguage service:
import AlchemyLanguageV1
let apiKey = "your-apikey-here"
let alchemyLanguage = AlchemyLanguage(apiKey: apiKey)
let url = "https://github.com/watson-developer-cloud/ios-sdk"
let failure = { (error: NSError) in print(error) }
alchemyLanguage.getTextSentiment(forURL: url, failure: failure) { sentiment in
    print(sentiment)
}
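Other AlchemyLanguage features follow the same pattern. For instance, here is a sketch of entity extraction; it assumes a `getEntities(forURL:)` method analogous to `getTextSentiment(forURL:)`, so check the SDK documentation for the exact signature.
// hypothetical sketch: extract named entities from a public URL
alchemyLanguage.getEntities(forURL: url, failure: failure) { entities in
    print(entities)
}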
The following links provide more information about the IBM AlchemyLanguage service:
- IBM AlchemyLanguage - Service Page
- IBM AlchemyLanguage - Documentation
- IBM AlchemyLanguage - Demo
With the IBM Watson Conversation service you can create cognitive agents--virtual agents that combine machine learning, natural language understanding, and integrated dialog scripting tools to provide outstanding customer engagements.
The following example shows how to start a conversation with the Conversation service:
import ConversationV1
let username = "your-username-here"
let password = "your-password-here"
let version = "YYYY-MM-DD" // use today's date for the most recent version
let conversation = Conversation(username: username, password: password, version: version)
let workspaceID = "your-workspace-id-here"
let failure = { (error: NSError) in print(error) }
var context: Context? // save context to continue conversation
conversation.message(workspaceID, failure: failure) { response in
    print(response.output.text)
    context = response.context
}
The following example shows how to continue an existing conversation with the Conversation service:
let text = "Turn on the radio."
let failure = { (error: NSError) in print(error) }
conversation.message(workspaceID, text: text, context: context, failure: failure) { response in
    print(response.output.text)
    context = response.context
}
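The response also carries the intents and entities that the service detected, which your application can use to drive its behavior. The property names in this sketch (`intents`, `intent`, `confidence`) are assumptions based on the service's JSON response fields:
// hypothetical sketch: inspect the intents detected for a message
conversation.message(workspaceID, text: text, context: context, failure: failure) { response in
    for intent in response.intents {
        print("\(intent.intent): \(intent.confidence)")
    }
}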
The following links provide more information about the IBM Conversation service:
- IBM Watson Conversation - Service Page
- IBM Watson Conversation - Documentation
- IBM Watson Conversation - Demo
The IBM Watson Dialog service provides a comprehensive and robust platform for managing conversations between virtual agents and users through an application programming interface (API). Developers automate branching conversations that use natural language to automatically respond to user questions, cross-sell and up-sell, walk users through processes or applications, or even hand-hold users through difficult tasks.
To use the Dialog service, developers script conversations as they would happen in the real world, upload them to a Dialog application, and enable back-and-forth conversations with a user.
The following example demonstrates how to instantiate a `Dialog` object:
import DialogV1
let username = "your-username-here"
let password = "your-password-here"
let dialog = Dialog(username: username, password: password)
The following example demonstrates how to create a dialog application:
// store dialog id to access application
var dialogID: DialogID?
// load dialog file
guard let fileURL = NSBundle.mainBundle().URLForResource("your-dialog-filename", withExtension: "xml") else {
    print("Failed to locate dialog file.")
    return
}
// create dialog application
let name = "your-dialog-name"
let failure = { (error: NSError) in print(error) }
dialog.createDialog(name, fileURL: fileURL, failure: failure) { dialogID in
    self.dialogID = dialogID
    print(dialogID)
}
The following example demonstrates how to start a conversation with a dialog application:
// store ids to continue conversation
var conversationID: Int?
var clientID: Int?
let failure = { (error: NSError) in print(error) }
dialog.converse(dialogID!, failure: failure) { response in
    self.conversationID = response.conversationID
    self.clientID = response.clientID
    print(response.response)
}
The following example demonstrates how to continue a conversation with a dialog application:
let input = "your-text-here"
let failure = { (error: NSError) in print(error) }
dialog.converse(dialogID!, conversationID: conversationID!, clientID: clientID!, input: input, failure: failure) { response in
    print(response.response)
}
The following links provide more information about the IBM Watson Dialog service:
- IBM Watson Dialog - Service Page
- IBM Watson Dialog - Documentation
- IBM Watson Dialog - Demo
The IBM Watson Document Conversion Service converts a single HTML, PDF, or Microsoft Word™ document. The input document is transformed into normalized HTML, plain text, or a set of JSON-formatted Answer units that can be used with other Watson services, like the Watson Retrieve and Rank Service.
The following example demonstrates how to convert a document with the Document Conversion service:
import DocumentConversionV1
let username = "your-username-here"
let password = "your-password-here"
let version = "2015-12-15"
let documentConversion = DocumentConversion(username: username, password: password, version: version)
// load document
guard let document = NSBundle.mainBundle().URLForResource("your-document-filename", withExtension: "html") else {
    print("Failed to locate document.")
    return
}
// convert document
let config = documentConversion.writeConfig(ReturnType.Text)
let failure = { (error: NSError) in print(error) }
documentConversion.convertDocument(config, document: document, failure: failure) { text in
    print(text)
}
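To produce JSON-formatted Answer units instead of plain text, the same call can be reconfigured. This sketch assumes the `ReturnType` enumeration offers an `AnswerUnits` case alongside `Text`:
// hypothetical sketch: convert the document to answer units
let answerUnitsConfig = documentConversion.writeConfig(ReturnType.AnswerUnits)
documentConversion.convertDocument(answerUnitsConfig, document: document, failure: failure) { answerUnits in
    print(answerUnits)
}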
The following links provide more information about the IBM Document Conversion service:
- IBM Watson Document Conversion - Service Page
- IBM Watson Document Conversion - Documentation
- IBM Watson Document Conversion - Demo
The IBM Watson Language Translator service lets you select a translation domain, customize it, identify or select the language of text, and then translate the text from one supported language to another.
The following example demonstrates how to use the Language Translator service:
import LanguageTranslatorV2
let username = "your-username-here"
let password = "your-password-here"
let languageTranslator = LanguageTranslator(username: username, password: password)
let failure = { (error: NSError) in print(error) }
languageTranslator.translate("Hello", source: "en", target: "es", failure: failure) { translation in
    print(translation)
}
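The service can also identify the language of input text before you translate it. This sketch assumes an `identify` method on `LanguageTranslator`; consult the SDK documentation for the exact signature.
// hypothetical sketch: identify the language of a string
languageTranslator.identify("Hola", failure: failure) { languages in
    print(languages)
}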
The following links provide more information about the IBM Watson Language Translator service:
- IBM Watson Language Translator - Service Page
- IBM Watson Language Translator - Documentation
- IBM Watson Language Translator - Demo
The IBM Watson Natural Language Classifier service enables developers without a background in machine learning or statistical algorithms to create natural language interfaces for their applications. The service interprets the intent behind text and returns a corresponding classification with associated confidence levels. The return value can then be used to trigger a corresponding action, such as redirecting the request or answering a question.
The following example demonstrates how to use the Natural Language Classifier service:
import NaturalLanguageClassifierV1
let username = "your-username-here"
let password = "your-password-here"
let naturalLanguageClassifier = NaturalLanguageClassifier(username: username, password: password)
let classifierID = "your-trained-classifier-id"
let text = "your-text-here"
let failure = { (error: NSError) in print(error) }
naturalLanguageClassifier.classify(classifierID, text: text, failure: failure) { classification in
    print(classification)
}
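Each classification reports the matched classes and their confidence levels, which you can use to trigger an action. The property names in this sketch (`topClass`, `classes`, `className`, `confidence`) are assumptions based on the service's JSON response fields:
// hypothetical sketch: act on the top class and inspect confidences
naturalLanguageClassifier.classify(classifierID, text: text, failure: failure) { classification in
    print("top class: \(classification.topClass)")
    for result in classification.classes {
        print("\(result.className): \(result.confidence)")
    }
}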
The following links provide more information about the Natural Language Classifier service:
- IBM Watson Natural Language Classifier - Service Page
- IBM Watson Natural Language Classifier - Documentation
- IBM Watson Natural Language Classifier - Demo
The IBM Watson Personality Insights service enables applications to derive insights from social media, enterprise data, or other digital communications. The service uses linguistic analytics to infer personality and social characteristics, including Big Five, Needs, and Values, from text.
The following example demonstrates how to use the Personality Insights service:
import PersonalityInsightsV2
let username = "your-username-here"
let password = "your-password-here"
let personalityInsights = PersonalityInsights(username: username, password: password)
let text = "your-input-text"
let failure = { (error: NSError) in print(error) }
personalityInsights.getProfile(text: text, failure: failure) { profile in
    print(profile)
}
The following links provide more information about the Personality Insights service:
- IBM Watson Personality Insights - Service Page
- IBM Watson Personality Insights - Documentation
- IBM Watson Personality Insights - Demo
The IBM Watson Retrieve and Rank service combines Apache Solr and a machine learning algorithm, two information retrieval components, into a single service in order to provide users with the most relevant search information.
The following example demonstrates how to instantiate a `RetrieveAndRank` object:
import RetrieveAndRankV1
let username = "your-username-here"
let password = "your-password-here"
let retrieveAndRank = RetrieveAndRank(username: username, password: password)
The following example demonstrates how to create a Solr Cluster, configuration, and collection.
let failure = { (error: NSError) in print(error) }
// Create and store the Solr Cluster so you can access it later.
var cluster: SolrCluster?
retrieveAndRank.createSolrCluster("your-cluster-name-here", failure: failure) { solrCluster in
    cluster = solrCluster
}
// Load the configuration file.
guard let configFile = NSBundle.mainBundle().URLForResource("your-config-filename", withExtension: "zip") else {
    print("Failed to locate configuration file.")
    return
}
let configurationName = "your-config-name-here"
// Create the configuration. Make sure the Solr Cluster status is READY first.
// (The force unwraps below assume the cluster was successfully created above.)
retrieveAndRank.uploadSolrConfiguration(
    cluster!.solrClusterID,
    configName: configurationName,
    zipFile: configFile,
    failure: failure)
// Create and store your Solr collection name.
let collectionName = "your-collection-name-here"
retrieveAndRank.createSolrCollection(
    cluster!.solrClusterID,
    name: collectionName,
    configName: configurationName,
    failure: failure)
// Load the documents you want to add to your collection.
guard let collectionFile = NSBundle.mainBundle().URLForResource("your-collection-filename", withExtension: "json") else {
    print("Failed to locate collection file.")
    return
}
// Upload the documents to your collection.
retrieveAndRank.updateSolrCollection(
    cluster!.solrClusterID,
    collectionName: collectionName,
    contentType: "application/json",
    contentFile: collectionFile,
    failure: failure)
The following example demonstrates how to use the Retrieve and Rank service to retrieve answers without ranking them.
retrieveAndRank.search(
    cluster!.solrClusterID,
    collectionName: collectionName,
    query: "your-query-here",
    returnFields: "your-return-fields-here",
    failure: failure) { response in
    print(response)
}
The following example demonstrates how to create and train a Ranker.
// Load the ranker training data file.
guard let rankerTrainingFile = NSBundle.mainBundle().URLForResource("your-ranker-training-data-filename", withExtension: "json") else {
    print("Failed to locate ranker training data file.")
    return
}
// Create and store the ranker.
var ranker: RankerDetails?
retrieveAndRank.createRanker(
    rankerTrainingFile,
    name: "your-ranker-name-here",
    failure: failure) { rankerDetails in
    ranker = rankerDetails
}
The following example demonstrates how to use the service to retrieve and rank the results.
retrieveAndRank.searchAndRank(
    cluster!.solrClusterID,
    collectionName: collectionName,
    rankerID: ranker!.rankerID,
    query: "your-query-here",
    returnFields: "your-return-fields-here",
    failure: failure) { response in
    print(response)
}
The following links provide more information about the Retrieve and Rank service:
- IBM Watson Retrieve and Rank - Service Page
- IBM Watson Retrieve and Rank - Documentation
- IBM Watson Retrieve and Rank - Demo
The IBM Watson Speech to Text service enables you to add speech transcription capabilities to your application. It uses machine intelligence to combine information about grammar and language structure to generate an accurate transcription. Transcriptions are supported for various audio formats and languages.
The `SpeechToText` class is the SDK's primary interface for performing speech recognition requests. It supports the transcription of audio files, audio data, and streaming microphone data. Advanced users, however, may instead wish to use the `SpeechToTextSession` class, which exposes more control over the WebSockets session.
The `RecognitionSettings` class is used to define the audio format and behavior of a recognition request. These settings are transmitted to the service when initiating a request.
The following example demonstrates how to define a recognition request that transcribes WAV audio data with interim results until the stream terminates:
var settings = RecognitionSettings(contentType: .WAV)
settings.interimResults = true
settings.continuous = true
See the class documentation or service documentation for more information about the available settings.
The Speech to Text framework makes it easy to perform speech recognition with microphone audio. The framework internally manages the microphone, starting and stopping it with various function calls (such as `recognizeMicrophone(settings:model:learningOptOut:compress:failure:success)` and `stopRecognizeMicrophone()`, or `startMicrophone(compress:)` and `stopMicrophone()`).
Knowing when to stop the microphone depends upon the recognition request's `continuous` setting:
- If `false`, then the service ends the recognition request at the first end-of-speech incident (denoted by a half-second of non-speech or when the stream terminates). This will coincide with a `final` transcription result. So the `success` or `onResults` callback should be configured to stop the microphone when a final transcription result is received.
- If `true`, then the microphone will typically be stopped by user feedback. For example, your application may have a button to start/stop the request, or you may stream the microphone for the duration of a long press on a UI element.
To reduce latency and bandwidth, the microphone audio is compressed to Opus format by default. To disable compression, set the `compress` parameter to `false`.
It's important to specify the correct audio format for recognition requests that use the microphone:
// compressed microphone audio uses the Opus format
let settings = RecognitionSettings(contentType: .Opus)
// uncompressed microphone audio uses a 16-bit mono PCM format at 16 kHz
let settings = RecognitionSettings(contentType: .L16(rate: 16000, channels: 1))
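Putting these together, the following sketch streams uncompressed microphone audio by setting the `compress` parameter from the signature above to `false`. It assumes a `speechToText` instance and `failure` closure like those in the examples below:
// sketch: disable Opus compression, so the audio format must be L16
var uncompressedSettings = RecognitionSettings(contentType: .L16(rate: 16000, channels: 1))
uncompressedSettings.interimResults = true
speechToText.recognizeMicrophone(uncompressedSettings, compress: false, failure: failure) { results in
    print(results.bestTranscript)
}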
The following example demonstrates how to use the Speech to Text service to transcribe a WAV audio file.
import SpeechToTextV1
let username = "your-username-here"
let password = "your-password-here"
let speechToText = SpeechToText(username: username, password: password)
let audio = NSBundle.mainBundle().URLForResource("filename", withExtension: "wav")!
var settings = RecognitionSettings(contentType: .WAV)
settings.interimResults = true
let failure = { (error: NSError) in print(error) }
speechToText.recognize(audio, settings: settings, failure: failure) { results in
    print(results.bestTranscript)
}
Audio can be streamed from the microphone to the Speech to Text service for real-time transcriptions. The following example demonstrates how to use the Speech to Text service to transcribe microphone audio:
import SpeechToTextV1
let username = "your-username-here"
let password = "your-password-here"
let speechToText = SpeechToText(username: username, password: password)
func startStreaming() {
    var settings = RecognitionSettings(contentType: .Opus)
    settings.continuous = true
    settings.interimResults = true
    let failure = { (error: NSError) in print(error) }
    speechToText.recognizeMicrophone(settings, failure: failure) { results in
        print(results.bestTranscript)
    }
}

func stopStreaming() {
    speechToText.stopRecognizeMicrophone()
}
Advanced users may want more customizability than provided by the `SpeechToText` class. The `SpeechToTextSession` class exposes more control over the WebSockets connection and also includes several advanced features for accessing the microphone. Before using `SpeechToTextSession`, it's helpful to be familiar with the Speech to Text WebSocket interface.
The following steps describe how to execute a recognition request with `SpeechToTextSession`:
- Connect: Invoke `connect()` to connect to the service.
- Start Recognition Request: Invoke `startRequest(settings:)` to start a recognition request.
- Send Audio: Invoke `recognize(audio:)` or `startMicrophone(compress:)`/`stopMicrophone()` to send audio to the service.
- Stop Recognition Request: Invoke `stopRequest()` to end the recognition request. The service will automatically stop the request if the `continuous` setting is not set to `true`. If the recognition request is already stopped, then sending a stop message will have no effect.
- Disconnect: Invoke `disconnect()` to wait for any remaining results to be received and then disconnect from the service.
All text and data messages sent by `SpeechToTextSession` are queued, with the exception of `connect()`, which immediately connects to the server. The queue ensures that messages are sent in order and also buffers messages while waiting for a connection to be established. This behavior is generally transparent.
A `SpeechToTextSession` also provides several (optional) callbacks. The callbacks can be used to learn about the state of the session or access microphone data.
- `onConnect`: Invoked when the session connects to the Speech to Text service.
- `onMicrophoneData`: Invoked with microphone audio when a recording audio queue buffer has been filled. If microphone audio is being compressed, then the audio data is in Opus format. If uncompressed, then the audio data is in 16-bit PCM format at 16 kHz.
- `onPowerData`: Invoked every 0.025s when recording with the average dB power of the microphone.
- `onResults`: Invoked when transcription results are received for a recognition request.
- `onError`: Invoked when an error or warning occurs.
- `onDisconnect`: Invoked when the session disconnects from the Speech to Text service.
The following example demonstrates how to use `SpeechToTextSession` to transcribe microphone audio:
import SpeechToTextV1
let username = "your-username-here"
let password = "your-password-here"
let speechToTextSession = SpeechToTextSession(username: username, password: password)
func startStreaming() {
    // define callbacks
    speechToTextSession.onConnect = { print("connected") }
    speechToTextSession.onDisconnect = { print("disconnected") }
    speechToTextSession.onError = { error in print(error) }
    speechToTextSession.onPowerData = { decibels in print(decibels) }
    speechToTextSession.onMicrophoneData = { data in print("received data") }
    speechToTextSession.onResults = { results in print(results.bestTranscript) }
    // define recognition request settings
    var settings = RecognitionSettings(contentType: .Opus)
    settings.interimResults = true
    settings.continuous = true
    // start streaming microphone audio for transcription
    speechToTextSession.connect()
    speechToTextSession.startRequest(settings)
    speechToTextSession.startMicrophone()
}

func stopStreaming() {
    speechToTextSession.stopMicrophone()
    speechToTextSession.stopRequest()
    speechToTextSession.disconnect()
}
The following links provide more information about the IBM Speech to Text service:
- IBM Watson Speech to Text - Service Page
- IBM Watson Speech to Text - Documentation
- IBM Watson Speech to Text - Demo
The IBM Watson Text to Speech service synthesizes natural-sounding speech from input text in a variety of languages and voices that speak with appropriate cadence and intonation.
The following example demonstrates how to use the Text to Speech service:
import TextToSpeechV1
import AVFoundation
let username = "your-username-here"
let password = "your-password-here"
let textToSpeech = TextToSpeech(username: username, password: password)
let text = "your-text-here"
let failure = { (error: NSError) in print(error) }
textToSpeech.synthesize(text, failure: failure) { data in
    // try! is used for brevity; handle errors appropriately in production
    let audioPlayer = try! AVAudioPlayer(data: data)
    audioPlayer.prepareToPlay()
    audioPlayer.play()
}
The Text to Speech service supports a number of voices for different genders, languages, and dialects. The following example demonstrates how to use the Text to Speech service with a particular voice:
import TextToSpeechV1
import AVFoundation
let username = "your-username-here"
let password = "your-password-here"
let textToSpeech = TextToSpeech(username: username, password: password)
let text = "your-text-here"
let failure = { (error: NSError) in print(error) }
textToSpeech.synthesize(text, voice: SynthesisVoice.GB_Kate, failure: failure) { data in
    // try! is used for brevity; handle errors appropriately in production
    let audioPlayer = try! AVAudioPlayer(data: data)
    audioPlayer.prepareToPlay()
    audioPlayer.play()
}
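Note that an `AVAudioPlayer` stops playback when it is deallocated. Because the player above is a local constant inside the completion handler, it may be released before the audio finishes. One way to keep it alive, assuming the code runs inside a view controller or another long-lived object, is to store it in a property:
// sketch: retain the audio player for the duration of playback
var audioPlayer: AVAudioPlayer?
textToSpeech.synthesize(text, failure: failure) { data in
    self.audioPlayer = try! AVAudioPlayer(data: data)
    self.audioPlayer?.prepareToPlay()
    self.audioPlayer?.play()
}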
The following links provide more information about the IBM Text To Speech service:
- IBM Watson Text To Speech - Service Page
- IBM Watson Text To Speech - Documentation
- IBM Watson Text To Speech - Demo
The IBM Watson Tone Analyzer service can be used to discover, understand, and revise the language tones in text. The service uses linguistic analysis to detect three types of tones from written text: emotions, social tendencies, and writing style.
Emotions identified include things like anger, fear, joy, sadness, and disgust. Identified social tendencies include things from the Big Five personality traits used by some psychologists. These include openness, conscientiousness, extraversion, agreeableness, and emotional range. Identified writing styles include confident, analytical, and tentative.
The following example demonstrates how to use the Tone Analyzer service:
import ToneAnalyzerV3
let username = "your-username-here"
let password = "your-password-here"
let version = "YYYY-MM-DD" // use today's date for the most recent version
let toneAnalyzer = ToneAnalyzer(username: username, password: password, version: version)
let text = "your-input-text"
let failure = { (error: NSError) in print(error) }
toneAnalyzer.getTone(text, failure: failure) { tones in
    print(tones)
}
The following links provide more information about the IBM Watson Tone Analyzer service:
- IBM Watson Tone Analyzer - Service Page
- IBM Watson Tone Analyzer - Documentation
- IBM Watson Tone Analyzer - Demo
The IBM Watson Tradeoff Analytics service helps people make better choices when faced with multiple, often conflicting, goals and alternatives. By using mathematical filtering techniques to identify the best candidate options based on different criteria, the service can help users explore the tradeoffs between options to make complex decisions. The service combines smart visualization and analytical recommendations for easy and intuitive exploration of tradeoffs.
The following example demonstrates how to use the Tradeoff Analytics service:
import TradeoffAnalyticsV1
let username = "your-username-here"
let password = "your-password-here"
let tradeoffAnalytics = TradeoffAnalytics(username: username, password: password)
// define columns
let price = Column(
    key: "price",
    type: .Numeric,
    goal: .Minimize,
    isObjective: true
)
let ram = Column(
    key: "ram",
    type: .Numeric,
    goal: .Maximize,
    isObjective: true
)
let screen = Column(
    key: "screen",
    type: .Numeric,
    goal: .Maximize,
    isObjective: true
)
let os = Column(
    key: "os",
    type: .Categorical,
    isObjective: true,
    range: Range.CategoricalRange(categories: ["android", "windows-phone", "blackberry", "ios"]),
    preference: ["android", "ios"]
)
// define options
let galaxy = Option(
    key: "galaxy",
    values: ["price": .Int(50), "ram": .Int(45), "screen": .Int(5), "os": .String("android")],
    name: "Galaxy S4"
)
let iphone = Option(
    key: "iphone",
    values: ["price": .Int(99), "ram": .Int(40), "screen": .Int(4), "os": .String("ios")],
    name: "iPhone 5"
)
let optimus = Option(
    key: "optimus",
    values: ["price": .Int(10), "ram": .Int(300), "screen": .Int(5), "os": .String("android")],
    name: "LG Optimus G"
)
// define problem
let problem = Problem(
    columns: [price, ram, screen, os],
    options: [galaxy, iphone, optimus],
    subject: "Phone"
)
// define failure function
let failure = { (error: NSError) in print(error) }
// identify optimal options
tradeoffAnalytics.getDilemma(problem, failure: failure) { dilemma in
    print(dilemma.solutions)
}
The following links provide more information about the IBM Watson Tradeoff Analytics service:
- IBM Watson Tradeoff Analytics - Service Page
- IBM Watson Tradeoff Analytics - Documentation
- IBM Watson Tradeoff Analytics - Demo
The IBM Watson Visual Recognition service uses deep learning algorithms to analyze images (.jpg or .png) for scenes, objects, faces, text, and other content, and return keywords that provide information about that content. The service comes with a set of built-in classes so that you can analyze images with high accuracy right out of the box. You can also train custom classifiers to create specialized classes.
The following example demonstrates how to use the Visual Recognition service to classify an image:
import VisualRecognitionV3
let apiKey = "your-apikey-here"
let version = "YYYY-MM-DD" // use today's date for the most recent version
let visualRecognition = VisualRecognition(apiKey: apiKey, version: version)
let url = "your-image-url"
let failure = { (error: NSError) in print(error) }
visualRecognition.classify(url, failure: failure) { classifiedImages in
    print(classifiedImages)
}
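The service can also detect faces in an image. This sketch assumes a `detectFaces` method that mirrors `classify`; check the SDK documentation for the exact signature.
// hypothetical sketch: detect faces in an image at a public URL
visualRecognition.detectFaces(url, failure: failure) { faceImages in
    print(faceImages)
}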
The following links provide more information about the IBM Watson Visual Recognition service:
- IBM Watson Visual Recognition - Service Page
- IBM Watson Visual Recognition - Documentation
- IBM Watson Visual Recognition - Demo