feat: support apiEndpoint override #500

Merged (2 commits) on Jun 21, 2019
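The change lets callers of the generated v2 client override the API host via a new `apiEndpoint` option, falling back to `servicePath` and the class default when it is not set (see the `src/v2/bigtable_client.js` diff below). A minimal sketch of the override, assuming the package's `v2` export; the endpoint value is a placeholder:

```js
const {v2} = require('@google-cloud/bigtable');

// Point the GAPIC client at a custom host instead of the default
// bigtable.googleapis.com. The value below is a placeholder.
const client = new v2.BigtableClient({
  apiEndpoint: 'bigtable.example.com',
});
```

With no override, the client keeps targeting `bigtable.googleapis.com` via the `servicePath`/`apiEndpoint` statics added below.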
22 changes: 20 additions & 2 deletions .kokoro/continuous/node10/test.cfg
@@ -8,12 +8,30 @@ before_action {
}
}

# token used by release-please to keep an up-to-date release PR.
# tokens used by release-please to keep an up-to-date release PR.
before_action {
fetch_keystore {
keystore_resource {
keystore_config_id: 73713
keyname: "yoshi-automation-github-key"
keyname: "github-magic-proxy-key-release-please"
}
}
}

before_action {
fetch_keystore {
keystore_resource {
keystore_config_id: 73713
keyname: "github-magic-proxy-token-release-please"
}
}
}

before_action {
fetch_keystore {
keystore_resource {
keystore_config_id: 73713
keyname: "github-magic-proxy-url-release-please"
}
}
}
11 changes: 6 additions & 5 deletions .kokoro/test.sh
@@ -36,10 +36,11 @@ else
echo "coverage is only reported for Node $COVERAGE_NODE"
fi

# if the GITHUB_TOKEN is set, we kick off a task to update the release-PR.
GITHUB_TOKEN=$(cat $KOKORO_KEYSTORE_DIR/73713_yoshi-automation-github-key) || true
if [ "$GITHUB_TOKEN" ]; then
npx release-please release-pr --token=$GITHUB_TOKEN \
# if release-please keys set, we kick off a task to update the release-PR.
if [ -f ${KOKORO_KEYSTORE_DIR}/73713_github-magic-proxy-url-release-please ]; then
npx release-please release-pr --token=${KOKORO_KEYSTORE_DIR}/73713_github-magic-proxy-token-release-please \
--repo-url=googleapis/nodejs-bigtable \
--package-name=@google-cloud/bigtable
--package-name=@google-cloud/bigtable \
--api-url=${KOKORO_KEYSTORE_DIR}/73713_github-magic-proxy-url-release-please \
--proxy-key=${KOKORO_KEYSTORE_DIR}/73713_github-magic-proxy-key-release-please
fi
1 change: 1 addition & 0 deletions README.md
@@ -40,6 +40,7 @@ Google APIs Client Libraries, in [Client Libraries Explained][explained].
### Before you begin

1. [Select or create a Cloud Platform project][projects].
1. [Enable billing for your project][billing].
1. [Enable the Cloud Bigtable API][enable_api].
1. [Set up authentication with a service account][auth] so you can access the
API from your local workstation.
72 changes: 72 additions & 0 deletions samples/README.md
@@ -15,6 +15,10 @@
* [Instances](#instances)
* [Quickstart](#quickstart)
* [Tableadmin](#tableadmin)
* [Write Batch](#write-batch)
* [Write Conditionally](#write-conditionally)
* [Write Increment](#write-increment)
* [Write Simple](#write-simple)

## Before you begin

@@ -71,6 +75,74 @@ __Usage:__
`node tableadmin.js`


-----




### Write Batch

View the [source code](https://github.com/googleapis/nodejs-bigtable/blob/master/samples/writeBatch.js).

[![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-bigtable&page=editor&open_in_editor=samples/writeBatch.js,samples/README.md)

__Usage:__


`node writeBatch.js`


-----




### Write Conditionally

View the [source code](https://github.com/googleapis/nodejs-bigtable/blob/master/samples/writeConditionally.js).

[![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-bigtable&page=editor&open_in_editor=samples/writeConditionally.js,samples/README.md)

__Usage:__


`node writeConditionally.js`


-----




### Write Increment

View the [source code](https://github.com/googleapis/nodejs-bigtable/blob/master/samples/writeIncrement.js).

[![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-bigtable&page=editor&open_in_editor=samples/writeIncrement.js,samples/README.md)

__Usage:__


`node writeIncrement.js`


-----




### Write Simple

View the [source code](https://github.com/googleapis/nodejs-bigtable/blob/master/samples/writeSimple.js).

[![Open in Cloud Shell][shell_img]](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/googleapis/nodejs-bigtable&page=editor&open_in_editor=samples/writeSimple.js,samples/README.md)

__Usage:__


`node writeSimple.js`





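The four write samples above share the same structure; a minimal sketch of a simple write in the spirit of `writeSimple.js`, assuming the high-level `@google-cloud/bigtable` API (the instance id, table id, row key, and cell values are placeholders):

```js
const {Bigtable} = require('@google-cloud/bigtable');

async function writeSimple() {
  const bigtable = new Bigtable();
  const table = bigtable.instance('my-instance').table('mobile-time-series');

  // Insert one row keyed by device and date.
  await table.insert({
    key: 'phone#4c410523#20190501',
    data: {
      stats_summary: {
        connected_cell: 1,
        os_build: 'PQ2A.190405.003',
      },
    },
  });
  console.log('Successfully wrote row.');
}

writeSimple().catch(console.error);
```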
48 changes: 28 additions & 20 deletions src/v2/bigtable_client.js
@@ -16,7 +16,6 @@

const gapicConfig = require('./bigtable_client_config.json');
const gax = require('google-gax');
const merge = require('lodash.merge');
const path = require('path');

const VERSION = require('../../../package.json').version;
@@ -56,14 +55,18 @@ class BigtableClient {
* API remote host.
*/
constructor(opts) {
opts = opts || {};
this._descriptors = {};

const servicePath =
opts.servicePath || opts.apiEndpoint || this.constructor.servicePath;

// Ensure that options include the service address and port.
opts = Object.assign(
{
clientConfig: {},
port: this.constructor.port,
servicePath: this.constructor.servicePath,
servicePath,
},
opts
);
@@ -88,12 +91,9 @@ }
}

// Load the applicable protos.
const protos = merge(
{},
gaxGrpc.loadProto(
path.join(__dirname, '..', '..', 'protos'),
'google/bigtable/v2/bigtable.proto'
)
const protos = gaxGrpc.loadProto(
path.join(__dirname, '..', '..', 'protos'),
['google/bigtable/v2/bigtable.proto']
);

// This API contains "path templates"; forward-slash-separated
@@ -169,6 +169,14 @@
return 'bigtable.googleapis.com';
}

/**
* The DNS address for this API service - same as servicePath(),
* exists for compatibility reasons.
*/
static get apiEndpoint() {
return 'bigtable.googleapis.com';
}

/**
* The port for this API service.
*/
@@ -234,7 +242,7 @@
* default (zero) is to return all results.
* @param {Object} [options]
* Optional parameters. You can override the default settings for this call, e.g, timeout,
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/global.html#CallOptions} for the details.
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/interfaces/CallOptions.html} for the details.
* @returns {Stream}
* An object stream which emits [ReadRowsResponse]{@link google.bigtable.v2.ReadRowsResponse} on 'data' event.
*
@@ -281,7 +289,7 @@
* "default" application profile will be used.
* @param {Object} [options]
* Optional parameters. You can override the default settings for this call, e.g, timeout,
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/global.html#CallOptions} for the details.
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/interfaces/CallOptions.html} for the details.
* @returns {Stream}
* An object stream which emits [SampleRowKeysResponse]{@link google.bigtable.v2.SampleRowKeysResponse} on 'data' event.
*
@@ -321,7 +329,7 @@
* The unique name of the table to which the mutation should be applied.
* Values are of the form
* `projects/<project>/instances/<instance>/tables/<table>`.
* @param {string} request.rowKey
* @param {Buffer} request.rowKey
* The key of the row to which the mutation should be applied.
* @param {Object[]} request.mutations
* Changes to be atomically applied to the specified row. Entries are applied
@@ -334,7 +342,7 @@
* "default" application profile will be used.
* @param {Object} [options]
* Optional parameters. You can override the default settings for this call, e.g, timeout,
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/global.html#CallOptions} for the details.
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/interfaces/CallOptions.html} for the details.
* @param {function(?Error, ?Object)} [callback]
* The function which will be called with the result of the API call.
*
@@ -352,7 +360,7 @@
* });
*
* const formattedTableName = client.tablePath('[PROJECT]', '[INSTANCE]', '[TABLE]');
* const rowKey = '';
* const rowKey = Buffer.from('');
* const mutations = [];
* const request = {
* tableName: formattedTableName,
@@ -407,7 +415,7 @@
* "default" application profile will be used.
* @param {Object} [options]
* Optional parameters. You can override the default settings for this call, e.g, timeout,
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/global.html#CallOptions} for the details.
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/interfaces/CallOptions.html} for the details.
* @returns {Stream}
* An object stream which emits [MutateRowsResponse]{@link google.bigtable.v2.MutateRowsResponse} on 'data' event.
*
@@ -452,7 +460,7 @@
* applied.
* Values are of the form
* `projects/<project>/instances/<instance>/tables/<table>`.
* @param {string} request.rowKey
* @param {Buffer} request.rowKey
* The key of the row to which the conditional mutation should be applied.
* @param {string} [request.appProfileId]
* This value specifies routing for replication. If not specified, the
@@ -482,7 +490,7 @@
* This object should have the same structure as [Mutation]{@link google.bigtable.v2.Mutation}
* @param {Object} [options]
* Optional parameters. You can override the default settings for this call, e.g, timeout,
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/global.html#CallOptions} for the details.
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/interfaces/CallOptions.html} for the details.
* @param {function(?Error, ?Object)} [callback]
* The function which will be called with the result of the API call.
*
@@ -500,7 +508,7 @@
* });
*
* const formattedTableName = client.tablePath('[PROJECT]', '[INSTANCE]', '[TABLE]');
* const rowKey = '';
* const rowKey = Buffer.from('');
* const request = {
* tableName: formattedTableName,
* rowKey: rowKey,
@@ -545,7 +553,7 @@
* applied.
* Values are of the form
* `projects/<project>/instances/<instance>/tables/<table>`.
* @param {string} request.rowKey
* @param {Buffer} request.rowKey
* The key of the row to which the read/modify/write rules should be applied.
* @param {Object[]} request.rules
* Rules specifying how the specified row's contents are to be transformed
@@ -558,7 +566,7 @@
* "default" application profile will be used.
* @param {Object} [options]
* Optional parameters. You can override the default settings for this call, e.g, timeout,
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/global.html#CallOptions} for the details.
* retries, paginations, etc. See [gax.CallOptions]{@link https://googleapis.github.io/gax-nodejs/interfaces/CallOptions.html} for the details.
* @param {function(?Error, ?Object)} [callback]
* The function which will be called with the result of the API call.
*
@@ -576,7 +584,7 @@
* });
*
* const formattedTableName = client.tablePath('[PROJECT]', '[INSTANCE]', '[TABLE]');
* const rowKey = '';
* const rowKey = Buffer.from('');
* const rules = [];
* const request = {
* tableName: formattedTableName,
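The `rowKey` parameters documented above now travel as `Buffer`s rather than strings; a hedged sketch of a `mutateRow` call on the low-level client with `Buffer` keys and values (project, instance, table, family, and qualifier names are placeholders):

```js
const {v2} = require('@google-cloud/bigtable');

async function setCell() {
  const client = new v2.BigtableClient();
  const tableName = client.tablePath('[PROJECT]', '[INSTANCE]', '[TABLE]');

  const [response] = await client.mutateRow({
    tableName,
    rowKey: Buffer.from('greeting0'),
    mutations: [
      {
        setCell: {
          familyName: 'cf1',
          columnQualifier: Buffer.from('greeting'),
          timestampMicros: Date.now() * 1000, // millisecond-aligned micros
          value: Buffer.from('Hello, Bigtable!'),
        },
      },
    ],
  });
  console.log(response);
}

setCell().catch(console.error);
```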
16 changes: 8 additions & 8 deletions src/v2/doc/google/bigtable/v2/doc_bigtable.js
@@ -56,7 +56,7 @@ const ReadRowsRequest = {
* @property {Object[]} chunks
* This object should have the same structure as [CellChunk]{@link google.bigtable.v2.CellChunk}
*
* @property {string} lastScannedRowKey
* @property {Buffer} lastScannedRowKey
* Optionally the server might return the row key of the last row it
* has scanned. The client can use this to construct a more
* efficient retry request if needed: any row keys or portions of
@@ -76,7 +76,7 @@
* Specifies a piece of a row's contents returned as part of the read
* response stream.
*
* @property {string} rowKey
* @property {Buffer} rowKey
* The row key for this chunk of data. If the row key is empty,
* this CellChunk is a continuation of the same row as the previous
* CellChunk in the response stream, even if that CellChunk was in a
@@ -116,7 +116,7 @@ const ReadRowsResponse = {
* RowFilter. Labels are only set
* on the first CellChunk per cell.
*
* @property {string} value
* @property {Buffer} value
* The value stored in the cell. Cell values can be split across
* multiple CellChunks. In that case only the value field will be
* set in CellChunks after the first: the timestamp and labels
@@ -169,7 +169,7 @@ const SampleRowKeysRequest = {
/**
* Response message for Bigtable.SampleRowKeys.
*
* @property {string} rowKey
* @property {Buffer} rowKey
* Sorted streamed sequence of sample row keys in the table. The table might
* have contents before the first row key in the list and after the last one,
* but a key containing the empty string indicates "end of table" and will be
@@ -204,7 +204,7 @@ const SampleRowKeysResponse = {
* This value specifies routing for replication. If not specified, the
* "default" application profile will be used.
*
* @property {string} rowKey
* @property {Buffer} rowKey
* The key of the row to which the mutation should be applied.
*
* @property {Object[]} mutations
@@ -259,7 +259,7 @@ const MutateRowsRequest = {
// This is for documentation. Actual contents will be loaded by gRPC.

/**
* @property {string} rowKey
* @property {Buffer} rowKey
* The key of the row to which the `mutations` should be applied.
*
* @property {Object[]} mutations
@@ -329,7 +329,7 @@ const MutateRowsResponse = {
* This value specifies routing for replication. If not specified, the
* "default" application profile will be used.
*
* @property {string} rowKey
* @property {Buffer} rowKey
* The key of the row to which the conditional mutation should be applied.
*
* @property {Object} predicateFilter
@@ -394,7 +394,7 @@ const CheckAndMutateRowResponse = {
* This value specifies routing for replication. If not specified, the
* "default" application profile will be used.
*
* @property {string} rowKey
* @property {Buffer} rowKey
* The key of the row to which the read/modify/write rules should be applied.
*
* @property {Object[]} rules
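Likewise, `ReadRowsResponse` chunks document their keys and values as bytes; a short sketch of consuming the `readRows` stream on the v2 client, with placeholder resource names:

```js
const {v2} = require('@google-cloud/bigtable');
const client = new v2.BigtableClient();

const tableName = client.tablePath('[PROJECT]', '[INSTANCE]', '[TABLE]');
client
  .readRows({tableName, rowsLimit: 10})
  .on('data', response => {
    for (const chunk of response.chunks) {
      // rowKey is only populated on the first chunk of each row.
      if (chunk.rowKey && chunk.rowKey.length) {
        console.log('row:', chunk.rowKey.toString());
      }
    }
  })
  .on('error', console.error)
  .on('end', () => console.log('Done reading.'));
```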