@@ -27,25 +27,20 @@
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.DetectFacesOptions;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.DetectedFaces;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.GetClassifierOptions;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.GetCoreMlModelOptions;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.ListClassifiersOptions;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.UpdateClassifierOptions;
import okhttp3.MultipartBody;
import okhttp3.RequestBody;

import java.io.File;
import java.io.InputStream;

/**
* **Important:** As of September 8, 2017, the beta period for Similarity Search is closed. For more information, see
* [Visual Recognition API – Similarity Search
* Update](https://www.ibm.com/blogs/bluemix/2017/08/visual-recognition-api-similarity-search-update).
*
* The IBM Watson Visual Recognition service uses deep learning algorithms to identify scenes, objects, and faces in
* images you upload to the service. You can create and train a custom classifier to identify subjects that suit your
* needs.
*
* **Tip:** To test calls to the **Custom classifiers** methods with the API explorer, provide your `api_key` from your
* IBM® Cloud service instance.
*
* @version v3
* @see <a href="http://www.ibm.com/watson/developercloud/visual-recognition.html">Visual Recognition</a>
*/
@@ -169,9 +164,13 @@ public ServiceCall<ClassifiedImages> classify() {
/**
* Detect faces in images.
*
* Analyze and get data about faces in images. Responses can include estimated age and gender, and the service can
* identify celebrities. This feature uses a built-in classifier, so you do not train it on custom classifiers. The
* Detect faces method does not support general biometric facial recognition.
* **Important:** On April 2, 2018, the identity information in the response to calls to the Face model will be
* removed. The identity information refers to the `name` of the person, `score`, and `type_hierarchy` knowledge
* graph. For details about the enhanced Face model, see the [Release
* notes](https://console.bluemix.net/docs/services/visual-recognition/release-notes.html#23february2018). Analyze and
* get data about faces in images. Responses can include estimated age and gender, and the service can identify
* celebrities. This feature uses a built-in classifier, so you do not train it on custom classifiers. The Detect
* faces method does not support general biometric facial recognition.
*
* @param detectFacesOptions the {@link DetectFacesOptions} containing the options for the call
* @return a {@link ServiceCall} with a response type of {@link DetectedFaces}
@@ -205,9 +204,13 @@ public ServiceCall<DetectedFaces> detectFaces(DetectFacesOptions detectFacesOpti
/**
* Detect faces in images.
*
* Analyze and get data about faces in images. Responses can include estimated age and gender, and the service can
* identify celebrities. This feature uses a built-in classifier, so you do not train it on custom classifiers. The
* Detect faces method does not support general biometric facial recognition.
* **Important:** On April 2, 2018, the identity information in the response to calls to the Face model will be
* removed. The identity information refers to the `name` of the person, `score`, and `type_hierarchy` knowledge
* graph. For details about the enhanced Face model, see the [Release
* notes](https://console.bluemix.net/docs/services/visual-recognition/release-notes.html#23february2018). Analyze and
* get data about faces in images. Responses can include estimated age and gender, and the service can identify
* celebrities. This feature uses a built-in classifier, so you do not train it on custom classifiers. The Detect
* faces method does not support general biometric facial recognition.
*
* @return a {@link ServiceCall} with a response type of {@link DetectedFaces}
*/
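As an illustration of the detect-faces methods documented above, here is a minimal usage sketch. It assumes the SDK's usual generated `Builder` on `DetectFacesOptions` with a `url(...)` setter, plus a single-argument `VisualRecognition(versionDate)` constructor and the base class's `setApiKey(...)`; the version date, credential, and image URL are placeholders.

```java
import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.DetectFacesOptions;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.DetectedFaces;

public class DetectFacesSketch {
  public static void main(String[] args) {
    // Placeholder version date and credential.
    VisualRecognition service = new VisualRecognition("2018-03-19");
    service.setApiKey("{api_key}");

    // Assumed builder setter for analyzing an image by URL.
    DetectFacesOptions options = new DetectFacesOptions.Builder()
        .url("https://example.com/portrait.jpg")
        .build();

    DetectedFaces faces = service.detectFaces(options).execute();
    System.out.println(faces);
  }
}
```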
@@ -287,7 +290,7 @@ public ServiceCall<Classifier> getClassifier(GetClassifierOptions getClassifierO
}

/**
* Retrieve a list of custom classifiers.
* Retrieve a list of classifiers.
*
* @param listClassifiersOptions the {@link ListClassifiersOptions} containing the options for the call
* @return a {@link ServiceCall} with a response type of {@link Classifiers}
@@ -305,7 +308,7 @@ public ServiceCall<Classifiers> listClassifiers(ListClassifiersOptions listClass
}

/**
* Retrieve a list of custom classifiers.
* Retrieve a list of classifiers.
*
* @return a {@link ServiceCall} with a response type of {@link Classifiers}
*/
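A small sketch of the list-classifiers calls above, requesting verbose details (which include the new `core_ml_enabled` and `updated` fields). The `verbose(...)` builder setter is an assumption, and the client is assumed to be configured as in the earlier sketch.

```java
import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.Classifiers;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.ListClassifiersOptions;

public class ListClassifiersSketch {
  // Assumes an already-configured VisualRecognition client.
  static Classifiers listVerbose(VisualRecognition service) {
    ListClassifiersOptions options = new ListClassifiersOptions.Builder()
        .verbose(true) // assumed setter for the verbose query parameter
        .build();
    return service.listClassifiers(options).execute();
  }
}
```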
@@ -323,7 +326,7 @@ public ServiceCall<Classifiers> listClassifiers() {
* (https://console.bluemix.net/docs/services/visual-recognition/customizing.html#updating-custom-classifiers).
* Encode all names in UTF-8 if they contain non-ASCII characters (.zip and image file names, and classifier and class
* names). The service assumes UTF-8 encoding if it encounters non-ASCII characters. **Important:** You can't update a
* custom classifier with an API key for a Lite plan. To update a custom classifer on a Lite plan, create another
* custom classifier with an API key for a Lite plan. To update a custom classifier on a Lite plan, create another
* service instance on a Standard plan and re-create your custom classifier. **Tip:** Don't make retraining calls on a
* classifier until the status is ready. When you submit retraining requests in parallel, the last request overwrites
* the previous requests. The retrained property shows the last time the classifier retraining finished.
@@ -360,4 +363,23 @@ public ServiceCall<Classifier> updateClassifier(UpdateClassifierOptions updateCl
return createServiceCall(builder.build(), ResponseConverterUtils.getObject(Classifier.class));
}
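To ground the retraining notes above, a sketch of a single retraining call. The `classifierId(...)` and `addClass(...)` builder methods on `UpdateClassifierOptions` are assumptions based on the SDK's generated-builder conventions, and the classifier ID and .zip file name are placeholders.

```java
import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.Classifier;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.UpdateClassifierOptions;

import java.io.File;
import java.io.FileNotFoundException;

public class UpdateClassifierSketch {
  // Assumes an already-configured VisualRecognition client; wait for status "ready" before retraining.
  static Classifier retrain(VisualRecognition service) throws FileNotFoundException {
    UpdateClassifierOptions options = new UpdateClassifierOptions.Builder()
        .classifierId("dogs_1477088859") // hypothetical classifier ID
        .addClass("husky", new File("more-husky-examples.zip")) // assumed builder method
        .build();
    return service.updateClassifier(options).execute();
  }
}
```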

/**
* Retrieve a Core ML model of a classifier.
*
* Download a Core ML model file (.mlmodel) of a custom classifier that returns <tt>\"core_ml_enabled\": true</tt> in
* the classifier details.
*
* @param getCoreMlModelOptions the {@link GetCoreMlModelOptions} containing the options for the call
* @return a {@link ServiceCall} with a response type of {@link InputStream}
*/
public ServiceCall<InputStream> getCoreMlModel(GetCoreMlModelOptions getCoreMlModelOptions) {
Validator.notNull(getCoreMlModelOptions, "getCoreMlModelOptions cannot be null");
String[] pathSegments = { "v3/classifiers", "core_ml_model" };
String[] pathParameters = { getCoreMlModelOptions.classifierId() };
RequestBuilder builder = RequestBuilder.get(RequestBuilder.constructHttpUrl(getEndPoint(), pathSegments,
pathParameters));
builder.query(VERSION, versionDate);
return createServiceCall(builder.build(), ResponseConverterUtils.getInputStream());
}

}
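For the new `getCoreMlModel` method above, a sketch of downloading and saving the model file: the response is an `InputStream` that the caller copies to disk and closes. The `GetCoreMlModelOptions.Builder` usage is assumed from the SDK's generated-builder conventions, and the classifier ID and output path are placeholders.

```java
import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.GetCoreMlModelOptions;

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CoreMlDownloadSketch {
  // Assumes an already-configured VisualRecognition client and a classifier with "core_ml_enabled": true.
  static void download(VisualRecognition service) throws IOException {
    GetCoreMlModelOptions options = new GetCoreMlModelOptions.Builder()
        .classifierId("dogs_1477088859") // hypothetical classifier ID
        .build();

    // The service streams the .mlmodel file; copy it to a local file and close the stream.
    try (InputStream model = service.getCoreMlModel(options).execute()) {
      Files.copy(model, Paths.get("dogs_1477088859.mlmodel"), StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
```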
@@ -42,10 +42,13 @@ public interface Status {
private String name;
private String owner;
private String status;
@SerializedName("core_ml_enabled")
private Boolean coreMlEnabled;
private String explanation;
private Date created;
private List<Class> classes;
private Date retrained;
private Date updated;

/**
* Gets the classifierId.
@@ -92,6 +95,17 @@ public String getStatus() {
return status;
}

/**
* Gets the coreMlEnabled.
*
* Whether the classifier can be downloaded as a Core ML model after the training status is `ready`.
*
* @return the coreMlEnabled
*/
public Boolean isCoreMlEnabled() {
return coreMlEnabled;
}

/**
* Gets the explanation.
*
@@ -106,7 +120,7 @@ public String getExplanation() {
/**
* Gets the created.
*
* Date and time in Coordinated Universal Time that the classifier was created.
* Date and time in Coordinated Universal Time (UTC) that the classifier was created.
*
* @return the created
*/
@@ -128,12 +142,24 @@ public List<Class> getClasses() {
/**
* Gets the retrained.
*
* Date and time in Coordinated Universal Time that the classifier was updated. Returned when verbose=`true`. Might
* not be returned by some requests.
* Date and time in Coordinated Universal Time (UTC) that the classifier was updated. Returned when verbose=`true`.
* Might not be returned by some requests. Identical to `updated` and retained for backward compatibility.
*
* @return the retrained
*/
public Date getRetrained() {
return retrained;
}

/**
* Gets the updated.
*
* Date and time in Coordinated Universal Time (UTC) that the classifier was most recently updated. The field matches
* either `retrained` or `created`. Returned when verbose=`true`. Might not be returned by some requests.
*
* @return the updated
*/
public Date getUpdated() {
return updated;
}
}
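A brief sketch of how the new `coreMlEnabled` and `updated` fields on `Classifier` might be read after fetching classifier details. The `GetClassifierOptions.Builder` usage is assumed, the client is assumed to be configured as in the earlier sketch, and the classifier ID is a placeholder.

```java
import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.Classifier;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.GetClassifierOptions;

public class ClassifierDetailsSketch {
  // Assumes an already-configured VisualRecognition client.
  static void printCoreMlStatus(VisualRecognition service) {
    GetClassifierOptions options = new GetClassifierOptions.Builder()
        .classifierId("dogs_1477088859") // hypothetical classifier ID
        .build();

    Classifier classifier = service.getClassifier(options).execute();

    // New fields introduced in this change.
    if (Boolean.TRUE.equals(classifier.isCoreMlEnabled())) {
      System.out.println("Core ML model available; last updated " + classifier.getUpdated());
    } else {
      System.out.println("Not Core ML enabled; status is " + classifier.getStatus());
    }
  }
}
```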
@@ -17,7 +17,7 @@
import com.ibm.watson.developer_cloud.service.model.GenericModel;

/**
* Verbose list of classifiers retrieved in the GET v2/classifiers call.
* List of classifiers.
*/
public class Classifiers extends GenericModel {

@@ -12,27 +12,25 @@
*/
package com.ibm.watson.developer_cloud.visual_recognition.v3.model;

import com.ibm.watson.developer_cloud.service.model.GenericModel;
import com.ibm.watson.developer_cloud.util.Validator;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

import com.ibm.watson.developer_cloud.service.model.GenericModel;
import com.ibm.watson.developer_cloud.util.Validator;

/**
* The classify options.
*/
public class ClassifyOptions extends GenericModel {

/**
* Specifies the language of the output class names. Can be `en` (English), `ar` (Arabic), `de` (German), `es`
* (Spanish), `it` (Italian), `ja` (Japanese), or `ko` (Korean). Classes for which no translation is available are
* omitted. The response might not be in the specified language under these conditions: - English is returned when the
* requested language is not supported. - Classes are not returned when there is no translation for them. - Custom
* classifiers returned with this method return tags in the language of the custom classifier.
* The language of the output class names. The full set of languages is supported only for the built-in `default`
* classifier ID. The class names of custom classifiers are not translated. The response might not be in the specified
* language when the requested language is not supported or when there is no translation for the class name.
*/
public interface AcceptLanguage {
/** en. */
@@ -43,6 +41,8 @@ public interface AcceptLanguage {
String DE = "de";
/** es. */
String ES = "es";
/** fr. */
String FR = "fr";
/** it. */
String IT = "it";
/** ja. */
@@ -277,8 +277,8 @@ public Builder newBuilder() {
*
* An image file (.jpg, .png) or .zip file with images. Maximum image size is 10 MB. Include no more than 20 images
* and limit the .zip file to 100 MB. Encode the image and .zip file names in UTF-8 if they contain non-ASCII
* characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters. You can also include images
* with the `url` property in the **parameters** object.
* characters. The service assumes UTF-8 encoding if it encounters non-ASCII characters. You can also include an image
* with the **url** parameter.
*
* @return the imagesFile
*/
@@ -300,11 +300,9 @@ public String imagesFilename() {
/**
* Gets the acceptLanguage.
*
* Specifies the language of the output class names. Can be `en` (English), `ar` (Arabic), `de` (German), `es`
* (Spanish), `it` (Italian), `ja` (Japanese), or `ko` (Korean). Classes for which no translation is available are
* omitted. The response might not be in the specified language under these conditions: - English is returned when the
* requested language is not supported. - Classes are not returned when there is no translation for them. - Custom
* classifiers returned with this method return tags in the language of the custom classifier.
* The language of the output class names. The full set of languages is supported only for the built-in `default`
* classifier ID. The class names of custom classifiers are not translated. The response might not be in the specified
* language when the requested language is not supported or when there is no translation for the class name.
*
* @return the acceptLanguage
*/
@@ -315,8 +313,8 @@ public String acceptLanguage() {
/**
* Gets the url.
*
* A string with the image URL to analyze. Must be in .jpg, or .png format. The minimum recommended pixel density is
* 32X32 pixels per inch, and the maximum image size is 10 MB. You can also include images in the **images_file**
* The URL of an image to analyze. Must be in .jpg, or .png format. The minimum recommended pixel density is 32X32
* pixels per inch, and the maximum image size is 10 MB. You can also include images with the **images_file**
* parameter.
*
* @return the url
@@ -328,8 +326,7 @@ public String url() {
/**
* Gets the threshold.
*
* A floating point value that specifies the minimum score a class must have to be displayed in the response. The
* default threshold for returning scores from a classifier is `0.5`. Set the threshold to `0.0` to ignore the
* The minimum score a class must have to be displayed in the response. Set the threshold to `0.0` to ignore the
* classification score and return all values.
*
* @return the threshold
@@ -341,11 +338,11 @@ public Float threshold() {
/**
* Gets the owners.
*
* An array of the categories of classifiers to apply. Use `IBM` to classify against the `default` general classifier,
* and use `me` to classify against your custom classifiers. To analyze the image against both classifier categories,
* set the value to both `IBM` and `me`. The built-in `default` classifier is used if both **classifier_ids** and
* **owners** parameters are empty. The **classifier_ids** parameter overrides **owners**, so make sure that
* **classifier_ids** is empty.
* The categories of classifiers to apply. Use `IBM` to classify against the `default` general classifier, and use
* `me` to classify against your custom classifiers. To analyze the image against both classifier categories, set the
* value to both `IBM` and `me`. The built-in `default` classifier is used if both **classifier_ids** and **owners**
* parameters are empty. The **classifier_ids** parameter overrides **owners**, so make sure that **classifier_ids**
* is empty.
*
* @return the owners
*/
@@ -356,13 +353,11 @@ public List<String> owners() {
/**
* Gets the classifierIds.
*
* The **classifier_ids** parameter overrides **owners**, so make sure that **classifier_ids** is empty. -
* **classifier_ids**: Specifies which classifiers to apply and overrides the **owners** parameter. You can specify
* both custom and built-in classifiers. The built-in `default` classifier is used if both **classifier_ids** and
* **owners** parameters are empty. The following built-in classifier IDs require no training: - `default`: Returns
* classes from thousands of general tags. - `food`: (Beta) Enhances specificity and accuracy for images of food
* items. - `explicit`: (Beta) Evaluates whether the image might be pornographic. Example:
* `"classifier_ids="CarsvsTrucks_1479118188","explicit"`.
* Which classifiers to apply. Overrides the **owners** parameter. You can specify both custom and built-in
* classifiers. The built-in `default` classifier is used if both **classifier_ids** and **owners** parameters are
* empty. The following built-in classifier IDs require no training: - `default`: Returns classes from thousands of
* general tags. - `food`: (Beta) Enhances specificity and accuracy for images of food items. - `explicit`: (Beta)
* Evaluates whether the image might be pornographic.
*
* @return the classifierIds
*/
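To connect the revised `ClassifyOptions` documentation (including the newly added `fr` accept-language constant) to an actual call, here is a sketch. The `url(...)`, `threshold(...)`, `acceptLanguage(...)`, and `classifierIds(...)` builder setters are assumptions from the SDK's generated builders, and the image URL and classifier IDs are placeholders.

```java
import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.ClassifiedImages;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.ClassifyOptions;

import java.util.Arrays;

public class ClassifySketch {
  // Assumes an already-configured VisualRecognition client.
  static ClassifiedImages classifyByUrl(VisualRecognition service) {
    ClassifyOptions options = new ClassifyOptions.Builder()
        .url("https://example.com/fruitbowl.jpg")            // placeholder image URL
        .threshold(0.6f)                                      // return only classes scoring at least 0.6
        .acceptLanguage(ClassifyOptions.AcceptLanguage.FR)    // newly added constant
        // classifier_ids overrides owners, so owners is left unset here
        .classifierIds(Arrays.asList("default", "food"))
        .build();
    return service.classify(options).execute();
  }
}
```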
@@ -172,11 +172,10 @@ public String name() {
* Gets the class names.
*
* A .zip file of images that depict the visual subject of a class in the new classifier. You can include more than
* one positive example file in a call. Append `_positive_examples` to the form name. The prefix is used as the class
* one positive example file in a call. Specify the parameter name by appending `_positive_examples` to the class
* name. For example, `goldenretriever_positive_examples` creates the class **goldenretriever**. Include at least 10
* images in .jpg or .png format. The minimum recommended image resolution is 32X32 pixels. The maximum number of
* images is 10,000 images or 100 MB per .zip file. Encode special characters in the file name in UTF-8. The API
* explorer limits you to training only one class. To train more classes, use the API functionality.
* images is 10,000 images or 100 MB per .zip file. Encode special characters in the file name in UTF-8.
*
* @return the classNames
*/
@@ -197,8 +196,8 @@ public File positiveExamplesByClassName(String className) {
/**
* Gets the negativeExamples.
*
* A compressed (.zip) file of images that do not depict the visual subject of any of the classes of the new
* classifier. Must contain a minimum of 10 images. Encode special characters in the file name in UTF-8.
* A .zip file of images that do not depict the visual subject of any of the classes of the new classifier. Must
* contain a minimum of 10 images. Encode special characters in the file name in UTF-8.
*
* @return the negativeExamples
*/
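Finally, a sketch of the `_positive_examples` naming convention described above as it would look when creating a classifier. This assumes the last hunk belongs to `CreateClassifierOptions` and that its `Builder` exposes `name(...)`, `addClass(className, zipFile)`, and `negativeExamples(...)`; the class names and .zip paths are placeholders.

```java
import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.Classifier;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.CreateClassifierOptions;

import java.io.File;
import java.io.FileNotFoundException;

public class CreateClassifierSketch {
  // Assumes an already-configured VisualRecognition client.
  static Classifier createDogsClassifier(VisualRecognition service) throws FileNotFoundException {
    CreateClassifierOptions options = new CreateClassifierOptions.Builder()
        .name("dogs")
        // addClass is an assumed builder method; it maps a class name to its
        // <classname>_positive_examples .zip of at least 10 images.
        .addClass("goldenretriever", new File("goldenretriever.zip"))
        .addClass("husky", new File("husky.zip"))
        // At least 10 images that depict none of the classes.
        .negativeExamples(new File("cats.zip"))
        .build();
    return service.createClassifier(options).execute();
  }
}
```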