10 changes: 5 additions & 5 deletions README.md
@@ -88,10 +88,10 @@ Watson services are migrating to token-based Identity and Access Management (IAM
### Getting credentials
To find out which type of authentication to use, view the service credentials. You find the service credentials the same way for all Watson services:

1. Go to the IBM Cloud **[Dashboard][watson-dashboard]** page.
1. Either click an existing Watson service instance or click **Create**.
1. Click **Show** to view your service credentials.
1. Copy the `url` and either `apikey` or `username` and `password`.
1. Go to the IBM Cloud [Dashboard](https://console.bluemix.net/dashboard/apps?category=ai) page.
1. Either click an existing Watson service instance or click [**Create resource > AI**](https://console.bluemix.net/catalog/?category=ai) and create a service instance.
1. Click **Show** to view your service credentials.
1. Copy the `url` and either `apikey` or `username` and `password`.

### IAM

@@ -283,7 +283,7 @@ function (err, token) {

Use the [Assistant][conversation] service to determine the intent of a message.

Note: you must first create a workspace via Bluemix. See [the documentation](https://console.bluemix.net/docs/services/conversation/index.html#about) for details.
Note: You must first create a workspace via IBM Cloud. See [the documentation](https://console.bluemix.net/docs/services/conversation/index.html#about) for details.

```js
var AssistantV1 = require('watson-developer-cloud/assistant/v1');

var assistant = new AssistantV1({
  username: '<username>', // placeholder service credentials
  password: '<password>',
  version: '2018-02-16' // placeholder version date
});

assistant.message({
  workspace_id: '<workspace_id>', // placeholder workspace ID
  input: { text: 'Hello' }
}, function(err, response) {
  if (err) {
    console.error(err);
  } else {
    console.log(JSON.stringify(response, null, 2));
  }
});
```
582 changes: 569 additions & 13 deletions discovery/v1-generated.ts

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions natural-language-classifier/v1-generated.ts
@@ -62,7 +62,7 @@ class NaturalLanguageClassifierV1 extends BaseService {
*
* @param {Object} params - The parameters to send to the service.
* @param {string} params.classifier_id - Classifier ID to use.
* @param {string} params.text - The submitted phrase.
* @param {string} params.text - The submitted phrase. The maximum length is 2048 characters.
* @param {Object} [params.headers] - Custom request headers
* @param {Function} [callback] - The callback that handles the response.
* @returns {NodeJS.ReadableStream|void}
@@ -342,7 +342,7 @@ namespace NaturalLanguageClassifierV1 {
export interface ClassifyParams {
/** Classifier ID to use. */
classifier_id: string;
/** The submitted phrase. */
/** The submitted phrase. The maximum length is 2048 characters. */
text: string;
headers?: Object;
}
@@ -446,13 +446,13 @@ namespace NaturalLanguageClassifierV1 {

/** Request payload to classify. */
export interface ClassifyInput {
/** The submitted phrase. */
/** The submitted phrase. The maximum length is 2048 characters. */
text: string;
}

/** Response from the classifier for a phrase in a collection. */
export interface CollectionItem {
/** The submitted phrase. */
/** The submitted phrase. The maximum length is 2048 characters. */
text?: string;
/** The class with the highest confidence. */
top_class?: string;
2 changes: 1 addition & 1 deletion speech-to-text/v1-generated.ts
@@ -21,7 +21,7 @@ import { getMissingParams } from '../lib/helper';
import { FileObject } from '../lib/helper';

/**
* The IBM&reg; Speech to Text service provides an API that uses IBM's speech-recognition capabilities to produce transcripts of spoken audio. The service can transcribe speech from various languages and audio formats. In addition to basic transcription, the service can produce detailed information about many aspects of the audio. For most languages, the service supports two sampling rates, broadband and narrowband. It returns all JSON response content in the UTF-8 character set. For more information about the service, see the [IBM&reg; Cloud documentation](https://console.bluemix.net/docs/services/speech-to-text/index.html). ### API usage guidelines * **Audio formats:** The service accepts audio in many formats (MIME types). See [Audio formats](https://console.bluemix.net/docs/services/speech-to-text/audio-formats.html). * **HTTP interfaces:** The service provides three HTTP interfaces for speech recognition. The sessionless interface includes a single synchronous method. The session-based interface includes multiple synchronous methods for maintaining a long, multi-turn exchange with the service. And the asynchronous interface provides multiple methods that use registered callbacks and polling for non-blocking recognition. See [The HTTP REST interface](https://console.bluemix.net/docs/services/speech-to-text/http.html) and [The asynchronous HTTP interface](https://console.bluemix.net/docs/services/speech-to-text/async.html). * **WebSocket interface:** The service also offers a WebSocket interface for speech recognition. The WebSocket interface provides a full-duplex, low-latency communication channel. Clients send requests and audio to the service and receive results over a single connection in an asynchronous fashion. See [The WebSocket interface](https://console.bluemix.net/docs/services/speech-to-text/websockets.html). * **Customization:** Use language model customization to expand the vocabulary of a base model with domain-specific terminology. Use acoustic model customization to adapt a base model for the acoustic characteristics of your audio. Language model customization is generally available for production use by most supported languages; acoustic model customization is beta functionality that is available for all supported languages. See [The customization interface](https://console.bluemix.net/docs/services/speech-to-text/custom.html). * **Customization IDs:** Many methods accept a customization ID to identify a custom language or custom acoustic model. Customization IDs are Globally Unique Identifiers (GUIDs). They are hexadecimal strings that have the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. * **`X-Watson-Learning-Opt-Out`:** By default, all Watson services log requests and their results. Logging is done only to improve the services for future users. The logged data is not shared or made public. To prevent IBM from accessing your data for general service improvements, set the `X-Watson-Learning-Opt-Out` request header to `true` for all requests. You must set the header on each request that you do not want IBM to access for general service improvements. Methods of the customization interface do not log corpora, words, and audio resources that you use to build custom models. Your training data is never used to improve the service's base models. However, the service does log such data when a custom model is used with a recognition request. You must set the `X-Watson-Learning-Opt-Out` request header to `true` to prevent IBM from accessing the data to improve the service. * **`X-Watson-Metadata`**: This header allows you to associate a customer ID with data that is passed with a request. If necessary, you can use the **Delete labeled data** method to delete the data for a customer ID. See [Information security](https://console.bluemix.net/docs/services/speech-to-text/information-security.html).
* The IBM&reg; Speech to Text service provides an API that uses IBM's speech-recognition capabilities to produce transcripts of spoken audio. The service can transcribe speech from various languages and audio formats. In addition to basic transcription, the service can produce detailed information about many aspects of the audio. For most languages, the service supports two sampling rates, broadband and narrowband. It returns all JSON response content in the UTF-8 character set. For more information about the service, see the [IBM&reg; Cloud documentation](https://console.bluemix.net/docs/services/speech-to-text/index.html). ### API usage guidelines * **Audio formats:** The service accepts audio in many formats (MIME types). See [Audio formats](https://console.bluemix.net/docs/services/speech-to-text/audio-formats.html). * **HTTP interfaces:** The service provides three HTTP interfaces for speech recognition. The sessionless interface includes a single synchronous method. The session-based interface includes multiple synchronous methods for maintaining a long, multi-turn exchange with the service. And the asynchronous interface provides multiple methods that use registered callbacks and polling for non-blocking recognition. See [The HTTP REST interface](https://console.bluemix.net/docs/services/speech-to-text/http.html) and [The asynchronous HTTP interface](https://console.bluemix.net/docs/services/speech-to-text/async.html). **Important:** The session-based interface is deprecated as of August 8, 2018, and will be removed from service on September 7, 2018. Use the sessionless, asynchronous, or WebSocket interface instead. For more information, see the August 8 service update in the [Release notes](https://console.bluemix.net/docs/services/speech-to-text/release-notes.html#August2018). * **WebSocket interface:** The service also offers a WebSocket interface for speech recognition. The WebSocket interface provides a full-duplex, low-latency communication channel. Clients send requests and audio to the service and receive results over a single connection in an asynchronous fashion. See [The WebSocket interface](https://console.bluemix.net/docs/services/speech-to-text/websockets.html). * **Customization:** Use language model customization to expand the vocabulary of a base model with domain-specific terminology. Use acoustic model customization to adapt a base model for the acoustic characteristics of your audio. Language model customization is generally available for production use by most supported languages; acoustic model customization is beta functionality that is available for all supported languages. See [The customization interface](https://console.bluemix.net/docs/services/speech-to-text/custom.html). * **Customization IDs:** Many methods accept a customization ID to identify a custom language or custom acoustic model. Customization IDs are Globally Unique Identifiers (GUIDs). They are hexadecimal strings that have the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. * **`X-Watson-Learning-Opt-Out`:** By default, all Watson services log requests and their results. Logging is done only to improve the services for future users. The logged data is not shared or made public. To prevent IBM from accessing your data for general service improvements, set the `X-Watson-Learning-Opt-Out` request header to `true` for all requests. You must set the header on each request that you do not want IBM to access for general service improvements. Methods of the customization interface do not log corpora, words, and audio resources that you use to build custom models. Your training data is never used to improve the service's base models. However, the service does log such data when a custom model is used with a recognition request. You must set the `X-Watson-Learning-Opt-Out` request header to `true` to prevent IBM from accessing the data to improve the service. * **`X-Watson-Metadata`**: This header allows you to associate a customer ID with data that is passed with a request. If necessary, you can use the **Delete labeled data** method to delete the data for a customer ID. See [Information security](https://console.bluemix.net/docs/services/speech-to-text/information-security.html).
*/

class SpeechToTextV1 extends BaseService {
170 changes: 170 additions & 0 deletions test/integration/test.discovery.js
@@ -22,6 +22,7 @@ describe('discovery_integration', function() {
let configuration_id;
let collection_id;
let collection_id2;
let document_id;

before(function() {
environment_id = auth.discovery.environment_id;
@@ -158,6 +159,7 @@ describe('discovery_integration', function() {
discovery.addDocument(document_obj, function(err, response) {
assert.ifError(err);
assert(response.document_id);
document_id = response.document_id;
done(err);
});
});
@@ -325,4 +327,172 @@ );
);
});
});

describe('events tests', function() {
let document_id;
let session_token;

before(function(done) {
const addDocParams = {
environment_id,
collection_id,
file: fs.createReadStream('./test/resources/sampleWord.docx'),
};

discovery.addDocument(addDocParams, function(error, response) {
document_id = response.document_id;

const queryParams = {
environment_id,
collection_id,
natural_language_query: 'jeopardy',
};

discovery.query(queryParams, function(err, res) {
session_token = res.session_token;
done();
});
});
});

it('should create event', function(done) {
const type = 'click';
const createEventParams = {
type,
data: {
environment_id,
session_token,
collection_id,
document_id,
},
};
discovery.createEvent(createEventParams, function(err, res) {
assert.ifError(err);
assert.equal(res.type, type);
assert.equal(res.data.environment_id, environment_id);
assert.equal(res.data.collection_id, collection_id);
assert.equal(res.data.document_id, document_id);
assert.equal(res.data.session_token, session_token);
assert(res.data.result_type);
assert(res.data.query_id);
done();
});
});

after(function(done) {
const params = {
environment_id,
collection_id,
document_id,
};
discovery.deleteDocument(params, function(err, res) {
done();
});
});
});

describe('metrics tests', function() {
const start_time = '2018-08-07T00:00:00Z';
const end_time = '2018-08-08T00:00:00Z';

it('should get metrics event rate', function(done) {
const params = {
start_time,
end_time,
// result_type can be either 'document' or 'passage',
// but neither returns results for this test data.
};
discovery.getMetricsEventRate(params, function(err, res) {
assert.ifError(err);
assert(res.aggregations);
assert(Array.isArray(res.aggregations));
assert(res.aggregations.length);
assert(res.aggregations[0].results);
assert(Array.isArray(res.aggregations[0].results));
assert(res.aggregations[0].results.length);
assert.notEqual(res.aggregations[0].results[0].event_rate, undefined);
done();
});
});
it('should get metrics query', function(done) {
const params = {
start_time,
end_time,
};
discovery.getMetricsQuery(params, function(err, res) {
assert.ifError(err);
assert(res.aggregations);
assert(Array.isArray(res.aggregations));
assert(res.aggregations.length);
assert(res.aggregations[0].results);
assert(Array.isArray(res.aggregations[0].results));
assert(res.aggregations[0].results.length);
assert.notEqual(res.aggregations[0].results[0].matching_results, undefined);
done();
});
});
it('should get metrics query event', function(done) {
discovery.getMetricsQueryEvent(function(err, res) {
assert.ifError(err);
assert(res.aggregations);
assert(Array.isArray(res.aggregations));
assert(res.aggregations.length);
assert(res.aggregations[0].results);
assert(Array.isArray(res.aggregations[0].results));
assert(res.aggregations[0].results.length);
assert.notEqual(res.aggregations[0].results[0].matching_results, undefined);
done();
});
});
it('should get metrics query no results', function(done) {
discovery.getMetricsQueryNoResults(function(err, res) {
assert.ifError(err);
assert(res.aggregations);
assert(Array.isArray(res.aggregations));
assert(res.aggregations.length);
assert(res.aggregations[0].results);
assert(Array.isArray(res.aggregations[0].results));
assert(res.aggregations[0].results.length);
assert.notEqual(res.aggregations[0].results[0].matching_results, undefined);
done();
});
});
it('should get metrics query token event', function(done) {
const count = 2;
const params = { count };
discovery.getMetricsQueryTokenEvent(params, function(err, res) {
assert.ifError(err);
assert(res.aggregations);
assert(Array.isArray(res.aggregations));
assert(res.aggregations.length);
assert(res.aggregations[0].results);
assert(Array.isArray(res.aggregations[0].results));
assert.equal(res.aggregations[0].results.length, count);
assert.notEqual(res.aggregations[0].results[0].event_rate, undefined);
done();
});
});
});

describe('logs tests', function() {
it('should query log', function(done) {
const count = 2;
const filter = 'stuff';
const params = {
count,
offset: 1,
filter,
sort: ['created_timestamp'],
};
discovery.queryLog(params, function(err, res) {
assert.ifError(err);
assert(res.matching_results);
assert(res.results);
assert(Array.isArray(res.results));
assert.equal(res.results.length, count);
assert.notEqual(res.results[0].natural_language_query.indexOf(filter), -1);
done();
});
});
});
});