
Bigtable: Move admin api into its own artifact. #3494

Merged

Conversation

@igorbernstein2 commented on Jul 23, 2018

The target use cases are different enough that the clients should be split.
Splitting them also avoids the confusion associated with duplicate static names.

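For illustration, a rough sketch of the two surfaces ending up in separate artifacts (using the present-day shape of the veneer clients; the project, instance, and table names are placeholders, and the exact factory signatures postdate this PR):

```java
import com.google.cloud.bigtable.admin.v2.BigtableTableAdminClient;
import com.google.cloud.bigtable.data.v2.BigtableDataClient;

public class SplitClientsSketch {
  public static void main(String[] args) throws Exception {
    // Each surface now lives in its own artifact, so the similarly named static
    // factories no longer sit side by side in a single client artifact.
    try (BigtableDataClient dataClient = BigtableDataClient.create("my-project", "my-instance");
        BigtableTableAdminClient adminClient =
            BigtableTableAdminClient.create("my-project", "my-instance")) {
      System.out.println(adminClient.listTables()); // admin surface: table/schema management
      System.out.println(dataClient.readRow("my-table", "row-key")); // data surface: reads and writes
    }
  }
}
```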
@googlebot added the cla: yes label (This human has signed the Contributor License Agreement.) on Jul 23, 2018
@igorbernstein2 changed the title from "Bigtable: Move admin api into its own artifact." to "Bigtable: Move admin api into its own artifact. WIP" on Jul 23, 2018
@igorbernstein2 (Author)

This is ready for review

@igorbernstein2 changed the title from "Bigtable: Move admin api into its own artifact. WIP" to "Bigtable: Move admin api into its own artifact." on Jul 24, 2018
@igorbernstein2 (Author)

@pongad Can you take a look? I would like to merge this sooner rather than later to prevent it from diverging in the next client regen.


## Quickstart

[//]: # ({x-version-update-start:google-cloud-bigtable:released})


[cloud-platform]: https://cloud.google.com/
[cloud-bigtable]: https://cloud.google.com/bigtable/
[bigtable-product-docs]: https://cloud.google.com/bigtable/docs/
[bigtable-client-lib-docs]: https://googlecloudplatform.github.io/google-cloud-java/google-cloud-clients/apidocs/index.html?com/google/cloud/bigtable/package-summary.html


<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>google-cloud-bigtable-admin</artifactId>
<version>0.55.2-alpha-SNAPSHOT</version><!-- {x-version-update:google-cloud-bigtable:current} -->


@@ -414,6 +414,7 @@
<modules>
<module>google-cloud-automl</module>
<module>google-cloud-bigtable</module>
<module>google-cloud-bigtable-admin</module>


@@ -30,7 +30,6 @@


dir_overrides = {
'bigtable-admin': 'google-cloud-bigtable',
'error-reporting': 'google-cloud-errorreporting',


@@ -0,0 +1,134 @@
# Google Cloud Java Client for Bigtable Admin


@igorbernstein2 (Author)

PTAL

@garrettjonesgoogle (Member) left a comment:

LGTM

@garrettjonesgoogle merged commit 89433fc into googleapis:master on Jul 25, 2018
@igorbernstein2
Copy link
Author

Thanks!

@igorbernstein2 deleted the bigtable-split-admin branch on July 26, 2018 16:19
snehashah16 added a commit that referenced this pull request Sep 18, 2018
* bigquery: let users construct TableResult for testing (#3242)

Fixes #3228.

* remove dead assignment (#3247)

* bigquery: unbox Boolean in examples (#3248)

* Move grpc and proto artifacts to google-cloud-java from api-client-staging (second part) (#3251)

1) Radically change the structure of the repo. Now the root `pom.xml` file is not deployed to maven and is not a parent pom for any of the other modules, but is simply an aggregator pom, which aggregates the other, mostly mutually independent modules.
2) Update proto and grpc artifacts to the latest generated versions as of the time of this PR.
3) Add `cloudiot-v1` and `websecurityscanner-v1alpha` proto and grpc artifacts (not released yet even in api-client-staging). Note, gapic clients for these APIs are not added yet.
4) Rename the `google-cloud-pom` parent artifact (for manual and gapic clients) to `google-cloud-clients`.
5) Move all manual and gapic clients from the root directory to the `google-cloud-clients` subdirectory.
6) Make `google-cloud-bom` no longer a child of `google-cloud-clients` (the former `google-cloud-pom`); keep it at the root level so it becomes a sibling of `google-cloud-clients` (the module which used to be its parent).
7) Similarly, make `google-cloud-examples`, `google-cloud-testing` and `google-cloud-util` not children of `google-cloud-clients` and keep them at the root level. Also exclude these three modules from maven deployment (they will not be published to maven anymore).

After this PR is done, additional work is required to fix individual circleci IT test runs (should be trivial). Also, deployment and documentation scripts must be modified accordingly (will be done right after pushing this PR).

* BigQuery: add missing query statistics.

* translate: document concurrent use (#3243)

Fixes #3191.

* all: fix integration CI (#3222)

Proto- and grpc- packages have moved to this repository.
When we run integration on CI, we must build them, otherwise the
tasks will fail from not being able to find dependencies.

* versions: fix typos in versions and docs (#3261)

* move resource names (#3259)

* fix integration tests, seriously for real this time (#3262)

We need to update the script since the clients moved.

* remove resource name types; to be moved to api-client-staging (#3264)

* Add new clients to README.md (#3266)

- BigQuery Data Transfer
- Cloud Redis

* properly link Redis docs (#3272)

Fixes #3270.

* fix README API link (#3273)

Fixes #3260.

* Adjust documentation creation to new repo structure (#3274)

1) Add `utilities/stage_sites.py` (probably a temporary solution).
2) Remove the `utilities/create_site.sh` and `utilities/stage_release.sh` scripts. Other `.sh` releasing scripts will be removed soon. `RELEASING.md` was not updated to reflect the changes. The readme will be updated after the first successful release with the new structure (the only way to have an accurate releasing readme is to make the release first and record the steps).
3) Remove the `.settings` folder. It is an eclipse-specific folder which hasn't been updated for 2 years. Nobody on our team uses eclipse, plus we should keep the repo IDE-independent.
4) Move documentation look & feel files (.css, .html and .js files) to the proper location (to `google-cloud-clients` from the root).
5) Remove the `google-cloud` maven dependency sample from the landing page (since the `google-cloud` metapackage was removed).
6) Several minor documentation-related fixes.

* PubSub: Update region tags to standard

* Switch StatementType to a StringEnumType, more complete testing.

* remove unused imports

* push setEstimatedBytesProcessed in toPb()

* Update pubsub sample links (#3285)

* Regenerate gapic clients, add IoT and Web Security Scanner clients (#3282)

Also remove beta packages for dlp, as they are removed from googleapis. IoT packages were moved from `cloudiot` to just `iot`.

Also regenerate clients with @BetaApi annotation for LRO-specific code.

* Bump version to 0.48.0 release (#3288)

* Update storage api client library version

* Post-release cleanup and fixes (specific to recent repository restructure) (#3290)

1) Fix the poms' deployment config.
2) Fix documentation links in readmes.
3) Update the main README (remove the `google-cloud` metapackage reference).
4) Update `RELEASING.md` to reflect the changes. The instructions try to avoid having mysterious scripts running in the release process and also ensure that all disruptive operations (the actual release/push) are done explicitly and not somewhere in the middle of a mysterious script.
5) Remove the `deploy.sh` and `finalize_release.sh` scripts.
6) Fix a few previously broken links (bigquerydatatransfer and compute apidocs links).

* Bump version to 0.48.1-SNAPSHOT for development (#3294)

* Update CreateTopicAndPublishMessages.java (#3249)

Add the publisher error handler sample. As requested, the error handling part in the publishing quickstart sample is now removed.

* Remove String instantiation

* Pubsub: adds missing region tags

* Update README.md (#3278)

Updating the links for Cloud Tools for IntelliJ and Cloud Tools for Eclipse to include campaign tags

* Add support for BigQuery's NUMERIC type (#3110)

* storage: fix integration (#3297)

Do not pollute main test bucket with default kms key

* spanner: use method getters (#3299)

instead of deprecated fields.

Original PR: #2989

* BigQuery: correct ITBigQueryTest (#3303)

Test asserts consistent values for all rows within a table, but the
sample data used for generating tables was inconsistent between rows.

* pubsub: declare GA (#3298)

The version bump will be picked up by the next release.
We'll hold off on the README update until then.

* refresh proto, grpc and gapic clients (#3306)

* Regenerate proto, grpc and gapic clients.

* revert pubsub version changes - which will be picked up by version bump script during release

* revert more version changes

* release 0.49.0 (#3310)

* bump version for development (#3311)

* Bump Pub/Sub from Beta list to GA list

* Remove note about client surface changing

* Adding Timestamp.toDate() (#3313)

* make MetadataConfig.getAttribute() public (#3307)

* fix run-on sentence (#3318)

* fix storage auth example in README (#3322)

* Add port to storage upload url (#3324)

* Add TextToSpeech v1 pom.xml and java files (#3327)

This concludes weekly batch refresh.

* fix example links (#3328)

* Update gax-java, api-common dependencies to latest (#3335)

* update gax deps to latest

* remove ResourceNameType, update api-common to 1.6, update jsr to 3.0.2

* regenerate grpc/proto packages using new protoc-gapic-p:

* Release 0.50.0 (#3337)

* Also add texttospeech v1 to versions.txt

* Bump version for development (#3338)

* add newBuilder() to logging.SourceLocation (#3339)

* add newBuilder() to logging.SourceLocation

* fix code review

* GCS w/ KMS Samples (#3323)

* Add samples for 'storage_upload_with_kms_key' and 'storage_set_bucket_default_kms_key'.

* Remove accidental newline.

* Remove use of deprecated infoStream create.

* Address feedback.

* Fix test method name.

* Additional feedback.

* Fix testBlobAcl test.

* AssertEquals instead of assertTrue.

* Update formatting.

* make storage batch request honor options.getHost() (#3340)

* Update README.md (#3341)

Fix pubsub README links

* Add a spanId field to the google-cloud-logging LogEntry class. (#3332)

The new field is consistent with the span_id field in the Stackdriver LogEntry
protocol buffer message.  Fixes #3325.

* bigtable: delete unnecessary test class (#3342)

The file was copied from gax to help test BigTable.
However, the deleted class isn't actually used anywhere,
and will fail to compile when we update gax to stabilize bidi-streaming
API.

This commit simply deletes the problematic class. If we need it later to
test BigTable, we can just copy it again.
In this way, we don't need to go through the complicated breaking-change
release process.

* Logging: set Trace in trace instead of label (#3301)

* Add link to cpp doc page in google-cloud-clients/src/site/index.html (#3347)

* Return Dates as com.google.cloud.Timestamps (#3317)

* Return Dates as com.google.cloud.Timestamps

* Address feedback

* Adding queryWithMicrosecondPrecision test

* Updating log message to use .get()

* Fix ITComputeTest (#3348)

@ignore failing Compute integration tests and close #3312

* bigquery: allow user to null partition expiration (#3353)

* regenerate clients (#3354)

* Add support for PARQUET format in BigQuery load jobs. (#3357)

* Add support for PARQUET format in BigQuery load jobs.

Also adds a code sample demonstrating / testing parquet loads. Modelled
after the Python sample at https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-parquet

* s/remoteLoadJob/loadJob/g

All jobs are remote jobs.
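As a rough illustration of the new Parquet support described above (bucket, dataset, and table names are placeholders), a load job might be configured like this:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class ParquetLoadSketch {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    TableId destination = TableId.of("my_dataset", "my_table"); // placeholder names
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(destination, "gs://my-bucket/data.parquet")
            .setFormatOptions(FormatOptions.parquet()) // the newly added format option
            .build();
    Job job = bigquery.create(JobInfo.of(config)).waitFor(); // blocks until the load finishes
    System.out.println("Load state: " + job.getStatus().getState());
  }
}
```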

* Release 0.51.0 (#3358)

* Exposing TransportChannelProvider & CredentialsProvider (#3320)

* bump version for development (#3360)

* Add samples for managing tables. (#3361)

Add missing Java samples for managing BigQuery tables documentation at
https://cloud.google.com/bigquery/docs/managing-tables

* Batch sample now reflects Go and Python versions (#3359)

It doesn't make sense to wait synchronously for a batch query to
complete because they could take a long time to get scheduled. Instead,
demonstrate how to poll for the job state (possibly from a different
machine).
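A small sketch of the polling pattern described above (the SQL and the sleep interval are placeholders): submit the query at BATCH priority, record the job id, and check the job state later instead of blocking:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.JobStatus;
import com.google.cloud.bigquery.QueryJobConfiguration;
import java.util.UUID;

public class BatchQuerySketch {
  public static void main(String[] args) throws InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    QueryJobConfiguration config =
        QueryJobConfiguration.newBuilder("SELECT 17") // placeholder query
            .setPriority(QueryJobConfiguration.Priority.BATCH)
            .build();
    JobId jobId = JobId.of(UUID.randomUUID().toString()); // remember this id; poll later, possibly elsewhere
    bigquery.create(JobInfo.newBuilder(config).setJobId(jobId).build());

    // Later (even from a different machine): look the job up by id and check its state.
    Job job = bigquery.getJob(jobId);
    while (job != null && job.getStatus().getState() != JobStatus.State.DONE) {
      Thread.sleep(10_000); // batch jobs may wait a while to be scheduled
      job = bigquery.getJob(jobId);
    }
  }
}
```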

* update websecurityscanner readme (#3352)

* Bigtable: enable JWT tokens (#3351)

This will remove the periodic stop-the-world OAuth token refresh.
JWT tokens will be used instead of OAuth tokens when the user uses
a service account and the default scopes.

* spanner: add support for struct-typed parameters. (#3287)

* Add GAPIC Compute java library (#3100)

* Removes old sample (#3364)

* Removes old sample

bigquery_query_standard has been removed because it is no longer being used in the docs (a combination of bigquery_query and bigquery_query_legacy is used instead).

* Removes bigquery_query_standard test

* Fix zipslip vulnerability (#3366)

Thanks to the Snyk security team for bringing this to our attention.

* Bumping monitoring-v3 to GA (#3365)

* bump grpc version (#3374)

* Bumping gax to 1.28.0 (bidi streaming updates) (#3375)

* Release 0.52.0 (#3376)

* Fixing nexus-staging:release from root (#3379)

* Bumping to snapshot versions (#3380)

* Fixing releasing instructions [ci skip] (#3382)

* Creating generate_api.py (#3389)

* Moving Monitoring to GA section [ci skip] (#3392)

* BigQuery: Add ORC format support for load jobs, missing bigtable support. (#3391)

* BigQuery: Add ORC format support for load jobs.

Additionally, plumb in the (missing) Bigtable format support for
federated tables.

* add overrides, unit testing

* Wire bigtable up into formatoptions

* add copyright headers.

* Convert BigtableColumn and BigtableColumnFamily to autovalue generation.

* excise unused imports, address codacy kvetching about declaration order.

* Address reviewer comments: formatting/whitespace, serializable, asserts

* unused imports (asserts)

* regenerating libraries for release (#3398)

* regenerate libraries

* Release 0.53.0 (#3399)

* Bump to next snapshot version (#3401)

* bigtable: Fluent DSL TableAdmin client  (#3395)

* Bumping Speech from alpha to beta (#3404)

* BigQuery: Document the behavior that streaming inserts are not present in destination output (#3407)

* Point "ALPN not configured properly" in TROUBLESHOOTING.md to the compatibility checker (#3408)

* Fixing versioning comments for GA clients (#3411)

* Set project id from credentials (#3413)

* Add test for setting project id in service options from credentials

* Set projectId from credentials if available

* Only set the projectId if not explicitly set
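For illustration only (this is not the actual ServiceOptions code, just a sketch of the fallback described above):

```java
import com.google.auth.Credentials;
import com.google.auth.oauth2.ServiceAccountCredentials;

final class ProjectIdFallbackSketch {
  /** Returns the explicit project id if given, otherwise the one bound to the credentials. */
  static String resolveProjectId(String explicitProjectId, Credentials credentials) {
    if (explicitProjectId != null) {
      return explicitProjectId; // an explicitly set project id always wins
    }
    if (credentials instanceof ServiceAccountCredentials) {
      return ((ServiceAccountCredentials) credentials).getProjectId(); // may also be null
    }
    return null; // fall through to the usual default-project lookup
  }
}
```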

* readme: add region tags (#3421)

Adding region tags to include in Java Cloud Client Library docs on cloud.google.com

* Removing the region tags (#3422)

The text of the tags are displaying on the page.
Need to find a different way to grab this text snippet.

* Requester-Pays bucket support. (#3406)

* Requester-Pays bucket support.

Code and integration test.

To use this feature, set the "userProject" setting in the
CloudStorageConfiguration.

Optionally, set autoDetectRequesterPays to automatically unset userProject when the bucket isn't requester-pays.

* linter fixes

* minor linter fixes

* reviewer comments

* apply all codacy recommendations

* Put defaults back, remove unused import.
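A minimal usage sketch based on this description; it assumes the `userProject` and `autoDetectRequesterPays` builder setters behave as described above (bucket and project names are placeholders):

```java
import com.google.cloud.storage.contrib.nio.CloudStorageConfiguration;
import com.google.cloud.storage.contrib.nio.CloudStorageFileSystem;
import java.nio.file.Files;
import java.nio.file.Path;

public class RequesterPaysNioSketch {
  public static void main(String[] args) throws Exception {
    CloudStorageConfiguration config =
        CloudStorageConfiguration.builder()
            .userProject("my-billed-project") // project billed for requester-pays access
            .autoDetectRequesterPays(true)    // drop userProject if the bucket isn't requester-pays
            .build();
    try (CloudStorageFileSystem fs = CloudStorageFileSystem.forBucket("my-bucket", config)) {
      Path path = fs.getPath("some/object.txt");
      System.out.println(Files.readAllLines(path));
    }
  }
}
```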

* new approach for snippet injection (#2838)

The current snippet injector does not work properly
with google-java-format, because GJF formats short javadoc comments
on one line, eg "/** comment */".
However, the injector script looks for "/**" on a line by itself.
The script will also not work if/when we move to Java 8, due to lack
of parser support.

This PR takes a different approach of not caring about Java syntax and
copy-pasting everything in the SNIPPET block. While less powerful, it is
more robust.

As written, the script is also easier to use. There's no need to tell it
which file contains snippets and where to copy the snippets to. The
script recursively scans the given directories.

Updates #2413.

* license

* Add test case for getSnip

* Add support for cloud region tags to snippet.go

* Release gapics (#3423)

Generated with googleapis/gapic-generator#2131 to fix error in LoggingClient (see fix at andreamlin/google-cloud-java@386c987)

Update google-common-protos to v 1.12.0.

Add google-cloud-tasks for the first time.

Remove LoggingSmokeTest.java, because the Logging smoke config has been removed from googleapis/googleapis, and the existing LoggingSmokeTest.java is incompatible with the current google-cloud-logging/src/main.

* Creating batch_generate_apis.py (#3428)

* Bump maven-source-plugin to 3.0.1 (#3435)

* Regenerate Compute client (#3438)

* Update READMEs for Compute (#3388)

Update READMEs and URL links.
Add ComputeExample.java which contains working client code for the Compute client.

* Fix the error when calling Timestamp.of(Date date) when the date is pre-epoch (#3434)

* Bump gax versions to 1.29.0/0.46.0 (#3439)

* Fix appveyor CI by using TLS 1.2 for Java 7 (#3440)

* Add GCE and discogapic to batch gen script (#3441)

* Add IoT, KMS and Tasks to batch generation (#3445)

* Regenerating proto/grpc/gapic code (#3444)

* Ignore deprecated Compute integration test (#3446)

* bump grpc version (#3447)

* Retry "IOException: Error writing request body to server" to fix the intermittent failure when uploading to gcs (#3433)

* ignore nio tests related to requester pays bucket (#3452)

* Adding kms-v1 (#3450)

* Cleanup: generation instructions, version ordering (#3451)

* Bump protobuf java to 3.6.0 (#3449)

Artman uses protoc v3.6.0 (see Dockerfile) to generate the proto and grpc libraries.

* Generation refresh - Cloud Tasks (#3453)

* Fix indentation in code example (#3456)

* Fixing BigTable javadoc errors (#3459)

* Release 0.54.0 (#3457)

* Bumping to snapshot versions (#3463)

* Updating RELEASING.md [skip ci] (#3464)

* Cleaning up client lists and API titles [skip ci] (#3465)

* storage: Fix rewrite operation to support predefinedAcl on a copy (#3467)

* Add core NIO contributors to credits (#3468)

* GCS NIO: fix one test to work with the other unit tests (#3454)

* bigtable: fix hardcoded admin test integration target (#3471)

* bigtable: fix integration test (#3473)

* firestore: use custom credential and channel provider by default (#3472)

* Avoid listing table data for destination of CREATE VIEW DDL queries. (#3469)

* Avoid listing table data for destination of CREATE VIEW DDL queries.

CREATE VIEW DDL queries set a "destination table" field, but that table
is actually a view. If you attempt to list rows on the view table, it
results in an error.

The fix is to avoid listing rows altogether if getQueryResults() says
that a query has completed but the number of rows in the result set is
undefined.

* Applied google-java-format

* Correct EmptyTableResult javadoc

* Fix BigQueryImpl unit tests.

* Fix JobTest unit tests.

* Remove unused TableResult.

* Add new API versions (#3477)

* update GAPIC clients (#3483)

* add vision/v1p3beta1 and automl clients (#3484)

* Release 0.55.0 (#3485)

* bump version for development (#3486)

* Release 0.55.1 (#3489)

* bump version for development (#3490)

* actually set speech to beta (#3487)

* Update batch_generate_apis.py

* update opencensus (#3481)

* Bigtable: remove regen scripts in favor of utilities/generate_api.py (#3495)

* ci: don't JAR javadoc (#3493)

Creating javadoc is sufficient to make sure the docs are free of errors.
JARing it just wastes time.

* Fix pom urls (#3499)

* bump auth version (#3498)

* pubsub: minor doc fix for Publisher (#3501)

* Regenerating with protoc 3.6.0 (#3506)

* pubsub: document auto message extension (#3491)

* Bigtable: Move admin api into its own artifact. (#3494)

The target usecases are different enough that the clients should be split.
Also, it avoids confusion associated with duplicate static names.

* Using --aspect CODE from artman; supporting java_proto (#3507)

* BigQuery: Add Clustering support (#3415)

* BigQuery: Add Clustering support to library.

Initial changes for table.  Next: plumb in configurations for
load/query destination.

* Plumb configuration options in.

* add missing license header, remove unused import

* Address reviewer comments: list immutability

* fixed Tasks client library link [skip ci] (#3516)

* Regenerating proto/client classes (#3519)

* Adding dataproc-v1beta2 (#3520)

* Release 0.56.0 (#3521)

* Fixing javadoc for release (#3522)

* Bumping to snapshot versions (#3523)

* Remove duplicate ">"s (#3528)

* Upgrading dependencies (#3530)

* Add model table type for the new BigQuery ML models (#3529)

Fixes listTables on datasets containing models.

* Bigtable: Decouple TableAdminClient from BigtableTableAdminSettings. (#3512)

This is in preparation for renaming the auto generated GAPIC clients &
settings to be prefixed with 'Base'. This will make it easier to
understand the layout of the code: the GAPIC generated client/settings
will be called BaseBigtableTableAdmin{Client,Settings}, while the
handwritten overlay will be called BigtableTableAdmin{Client,Settings}.
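A rough sketch of how the handwritten overlay pairing is meant to be used (present-day setter names, which may differ slightly from this PR's snapshot; ids are placeholders):

```java
import com.google.cloud.bigtable.admin.v2.BigtableTableAdminClient;
import com.google.cloud.bigtable.admin.v2.BigtableTableAdminSettings;

public class TableAdminSettingsSketch {
  public static void main(String[] args) throws Exception {
    // The handwritten settings wrap the generated Base* settings underneath.
    BigtableTableAdminSettings settings =
        BigtableTableAdminSettings.newBuilder()
            .setProjectId("my-project")   // placeholder
            .setInstanceId("my-instance") // placeholder
            .build();
    try (BigtableTableAdminClient adminClient = BigtableTableAdminClient.create(settings)) {
      adminClient.listTables().forEach(System.out::println);
    }
  }
}
```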

* Move Translate snippets to test file, use new snippets.go for inclusion. (#3527)

Updates snippets.go utility to Deindent and Indent by one space.

Updates to samples which are included in the docs to meet our
cdpe-snippets-rubric (show imports, show client construction, show how
to use the resulting object). I will follow-up with a PR to
java-docs-samples to remove any redundant snippets from that repo.

Why move to a test file?

We are considering moving all snippets-style samples to test files to
avoid redundant work involved with keeping snippets separate from test
files. The important thing for snippets is that they are runnable and
copy-pastable.

Ran snippets utility to include snippets in JavaDoc.

    ./utilities/snippets \
        ./google-cloud-examples/src/test/java/com/google/cloud/examples/translate/snippets/ITTranslateSnippets.java \
        ./google-cloud-clients/google-cloud-translate/src/main/java/com/google/cloud/translate/Translate.java

* Bigtable: prefix gapic generated clients with `Base`, prefix overlay clients with `Bigtable` (#3538)

* compute engine credentials and project ids go before service account (#3539)

* batch_generate_apis.py comments (#3544)

* set userProject from static default in FileSystemProvider (#3504)

* Regenerate all clients (#3547)

* Bump versions to 1.39.0/0.57.0 (#3548)

* Hacky Credentials Fix (#3541)

* Fix TableInfo javadoc (#3550)

* generate-api: print component versions (#3552)

* Bump to snapshot (#3553)

* Modified RetrySettings (#3549)

* Bigtable: Implement query sharding by generalizing ReadRows resume request builder. (#3103)

The generalized sharding can be used by map reduce style frameworks like beam.

* Add container analysis v1beta to batch generation script (#3558)

* logging: update tags for samples (#3560)

* Add Array Features to Firestore Java (#3561)

* Elevate access level to support mocking (#3562)

* Regenerate Clients, add Container Analysis client (#3563)

Also remove 'build' from ignored packages in .gitignore (required by io.grafeas.v1beta1.build package added in this PR).

* Release 0.58.0 and 1.40.0 (#3565)

* Bump version to 0.58.1-SNAPSHOT and 1.40.1-SNAPSHOT for development (#3566)

* Bigtable Admin: Promote models to top level classes (#3513)

* Bigtable: start working on BigtableInstanceAdmin (#3564)

* Bigtable: start working BigtableInstanceAdmin

This is the beginning of importing work done by spollapally in:
https://github.com/spollapally/google-cloud-java/tree/instance-admin-client/google-cloud-clients/google-cloud-bigtable

I will be importing & finishing it piecemeal. This first PR establishes the scaffolding for the client & settings

* Properly close snippet pubsub_subscriber_custom_credentials (#3575)

snippets.go fails and exits due to `snippet pubsub_subscriber_custom_credentials` not being closed.

```
$ go run utilities/snippets.go .
snippet: [START
google-cloud-examples/src/main/java/com/google/cloud/examples/pubsub/snippets/SubscriberSnippets.java]:178 snippet "pubsub_subscriber_custom_credentials" not closed
exit status 1
```

Rename the 2nd `START` to `END` to correct the typo.

* Bigtable: cleanup of futures + extras (#3571)

* spanner: add snippets for InstanceAdminClient (#3578)

also fix snippet `pubsub_subscriber_custom_credentials` not being closed.
```
$ go run utilities/snippets.go .
snippet: [START
google-cloud-examples/src/main/java/com/google/cloud/examples/pubsub/snippets/SubscriberSnippets.java]:178 snippet "pubsub_subscriber_custom_credentials" not closed
exit status 1
```
Rename the 2nd `START` to `END` to correct the typo.

* bump checkstyle version to build on Java 9 (#3577)

The version of checkstyle we currently use uses tools.jar
which is removed from Java 9 and above as part of Project Jigsaw.
This commit uses a newer version of checkstyle that does not
use tools.jar.

Running `mvn checkstyle:checkstyle` succeeded.

* nio: narrower shading (#3568)

* narrower shading

* use include for a shorter pom.xml

* remove _stuff prefix from shadedPattern

* update google-cloud-nio-examples README. It works.

* remove duplicated artifacts in bigtable pom file (#3584)

* Add Cloud Asset API (#3588)

* add Asset client (#3591)

* refresh clients (#3596)

* release 0.59.0 (#3598)

* bump version for development (#3599)

* Fix documentation for setParallelPullCount (#3542)

#3147 changed the default without updating the documentation.
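For reference, a minimal sketch of where this setting lives (project and subscription names are placeholders; the value 2 is arbitrary):

```java
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;

public class ParallelPullSketch {
  public static void main(String[] args) {
    String subscription = ProjectSubscriptionName.format("my-project", "my-subscription");
    MessageReceiver receiver =
        (message, consumer) -> {
          System.out.println("got " + message.getMessageId());
          consumer.ack();
        };
    Subscriber subscriber =
        Subscriber.newBuilder(subscription, receiver)
            .setParallelPullCount(2) // number of concurrent pull streams; see the javadoc for the current default
            .build();
    subscriber.startAsync().awaitRunning();
  }
}
```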

* removing word "natural" from product name (#3610)

* upgrade auth version (#3606)

* upgrade auth version

* jdk5 no longer comes in with oauth

* [Storage] Make StorageIT easier to setup with new projects. (#3608)

* batch generation before release (#3622)

* Release 0.60.0 (#3623)

* bump to snapshot version (#3625)

* fix logging unit tests (#3630)

The failure in #3615 happens when running on GCE: GCE overrides the DEFAULT_RESOURCE, which causes the tests to fail. This commit fixes the bug.

* Bigtable: add CRUD for instances (#3569)

* Bigtable: clean up consistency token (#3570)

* Bigtable: add CRUD for clusters (#3612)

* Bigtable: add CRUD for AppProfiles (#3619)

* spanner: Add snippets for Spanner, BatchClient and BatchReadOnlyTransaction (#3611)

* google-cloud-nio: retry on 502 errors, and increase max depth when doing channel reopens (#3557)

We've frequently encountered transient 502 errors in the wild when using
google-cloud-nio (see broadinstitute/gatk#4888),
implying that this error should be added to the list of retryable errors.

I also increased the maximum depth when inspecting nested exceptions looking for
reopenable errors from 10 to 20, as we've seen chains of exceptions that come very
close to the current limit of 10.

* Update signUrl documentation (#3546)

* Removed ComputeCredentials from examples of credentials that cannot
sign URLs.
* Added a note to look at the implementations' documentation for
additional setup steps needed.

* Bigtable: add resource level IAM (#3624)

* make DatastoreBatchWriter public (#3387)

It would be useful for `DatastoreBatchWriter` to be public for instrumentation purposes. Its sibling interfaces `DatastoreWriter`, `DatastoreReaderWriter`, `DatastoreReader` etc are all public so I assume that it not being public is not intentional.

* pubsub: clean up after extension gives up (#3633)

* [Storage] Bucket lock (#3574)

* Regenerate compute (#3642)

* Revert "[Storage] Bucket lock (#3574)" (#3644)

This reverts commit 9f1a96b.

* Fix logging integration test failure when running on GCE (#3641)

The monitoredResource for logging contains information about the GCE instance when running on GCE. Change the hard-coded assertions to make the test pass when running on GCE as well as in a local environment. Fixes #3632.

* Bigtable: table model improvements (#3640)

* Flatten cluster
* expose column families as list only
* use relative ids for tables & families

* Bigtable: cosmetic cleanup of table admin (#3638)

* Bigtable: cosmetic cleanup of table admin

* improve sample code
* improve javadoc
* silence warnings
* inline compose helpers
* copy instance admin's future unwrap code

* address feedback

* Refresh all clients (#3647)

* Bigtable: improve list tables spooler (#3639)

* Bigtable: improve list tables spooler

Avoid blocking the event loop. Previously the first page would be fetched
asynchronously, but all of the other pages would be fetched synchronously,
which would block the grpc event loop. The new implementation uses future
chaining.

* update async test as well

* reformat
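Not the actual implementation, but a generic sketch of the future-chaining pattern described above, using gax's `ApiFutures`; the `Page` and `PageFetcher` shapes are hypothetical stand-ins for the generated paging types. Each page fetch schedules the next one asynchronously instead of blocking the gRPC event loop:

```java
import com.google.api.core.ApiFuture;
import com.google.api.core.ApiFutures;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.ArrayList;
import java.util.List;

public class FutureChainingSketch {
  /** Hypothetical page shape: a batch of table names plus the token for the next page. */
  interface Page {
    List<String> tableNames();
    String nextPageToken(); // empty when there are no more pages
  }

  /** Hypothetical async page fetch, e.g. a wrapper around the generated listTables callable. */
  interface PageFetcher {
    ApiFuture<Page> fetchPage(String pageToken);
  }

  /** Collects all pages by chaining futures; nothing here blocks the calling thread. */
  static ApiFuture<List<String>> listAllTables(PageFetcher fetcher) {
    return fetchFrom(fetcher, "", new ArrayList<>());
  }

  private static ApiFuture<List<String>> fetchFrom(
      PageFetcher fetcher, String pageToken, List<String> acc) {
    return ApiFutures.transformAsync(
        fetcher.fetchPage(pageToken),
        page -> {
          acc.addAll(page.tableNames());
          return page.nextPageToken().isEmpty()
              ? ApiFutures.immediateFuture(acc)               // done: resolve with everything collected
              : fetchFrom(fetcher, page.nextPageToken(), acc); // otherwise chain the next fetch
        },
        MoreExecutors.directExecutor());
  }
}
```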

* tell JVM to use less memory when testing (#3650)

* don't install when testing

We already installed in build step.

* Javadoc fixes for Bigtable client (#3652)

* Release 0.61.0/1.43.0 (#3653)

* Bump to snapshot version for development (#3656)

* bigtable: RowMutation should allow passing of a Mutation (#3643)

Fixes #3637

* Add handwritten integration test for Compute GAPIC (#3660)

* Releasing.md instructions to uncomment nexus-staging-maven-plugin (#3654)

* Fixes for ITComputeTest (#3667)

- add scopes

* Add redis-v1 and video-intelligence-v1p2beta1 to batch (#3670)

* Regenerate proto/grpc files with protoc 3.6.0 (#3672)

* Adding redis-v1 and video-intelligence-v1p2beta1 (#3669)

* update gax to 1.31/0.48 (#3675)

* Weekly proto refresh (#3674)

* Release 1.44.0/0.62.0 (#3677)

* Bump to next snapshot versions (#3679)

* Add Kokoro CI config (#3664)

* Add presubmit test configs

Add windows test config and add credentials for integration tests

Invert the env var check

Use fastconfigpush for faster keystore propagation

Fix missing ;; in build script

Allow LoggingAppender default options test to pass locally and on GCE

Temporarily comment out the resource test

Also grab surefire reports

Fix java8-win bat path

credentials path debug

Set GCLOUD_PROJECT environment variable

Add Java 11 test config

try uploading surefire results as sponge_log.xml

Fix BigTable IT args

Temporarily test Java 11

Upload integration test output as sponge_log.xml too

Revert "Temporarily comment out the resource test"

This reverts commit f01bdbd.

Revert "Allow LoggingAppender default options test to pass locally and on GCE"

This reverts commit 90e28af.

* Temporarily comment out the LoggingAppender default resource test.

The default depends on the execution environment (GCE vs. locally) and
Kokoro tests run on GCE.

* Clean up debug output

* Add continuous build configs

* Fix the java10 build images to use Java 10, not 11

* Verify protoc version for batch-generation (#3676)

Fail fast when using utilities/batch_generate_apis.py if the local protoc version doesn't match the protobuf-java version defined in the pom.xml.

* spanner: Add snippets for ReadContext (#3662)

spanner: Add snippets for ReadContext

* link to google-cloud-logging from README (#3681)

* storage: include information on a bucket prefix (#3671)

* Add downloadFile sample and reformat storage snippets (#3689)

* pubsub: add Publisher.awaitTermination (#3688)

[Newer gRPC versions](https://github.com/grpc/grpc-java/releases/tag/v1.12.0) seem to check that we call this method.
Currently shutdown waits for all messages to publish and return before shutting
anything down, so awaitTermination likely won't do anything meaningful.

In the future, we should make shutdown return promptly and use
awaitTermination to wait for messages.
I reported this at #3687.

Fixes #3648.
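A minimal sketch of the intended shutdown sequence (the topic name and timeout are placeholders):

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import java.util.concurrent.TimeUnit;

public class PublisherShutdownSketch {
  public static void main(String[] args) throws Exception {
    Publisher publisher =
        Publisher.newBuilder("projects/my-project/topics/my-topic").build(); // placeholder topic
    publisher.publish(
        PubsubMessage.newBuilder().setData(ByteString.copyFromUtf8("hello")).build());
    publisher.shutdown();                            // waits for pending messages to publish
    publisher.awaitTermination(1, TimeUnit.MINUTES); // the new call: wait for background resources to be released
  }
}
```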

* spanner: expand test coverage for getDatabaseClient() (#3686)

This change also adds the jacoco coverage plugin in the spanner pom.xml
and sets jacoco.skip to true to disable it by default. It can be enabled
by passing -Djacoco.skip=false to the mvn command.

* Kokoro additions (#3685)

* Add Kokoro CI badge

* Set integration test timeout at 10 minutes

* Link to the devrel public bucket

* Add java 8 on osx tests

* empty commit to force ci

* Bigtable: add enhanced stub for bigtable table admin client (#3691)

This will be used in #3658 to add new callables that can't be autogenerated.

* Bigtable: wrap proto enums (#3659)

* Bigtable: add await replication (#3658)

This replaces the raw calls to generate/check a consistency token with a
polling wrapper that waits until all clusters are consistent.
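For illustration, the wrapper is used roughly like this (project, instance, and table ids are placeholders):

```java
import com.google.cloud.bigtable.admin.v2.BigtableTableAdminClient;

public class AwaitReplicationSketch {
  public static void main(String[] args) throws Exception {
    try (BigtableTableAdminClient adminClient =
        BigtableTableAdminClient.create("my-project", "my-instance")) { // placeholders
      // Internally generates a consistency token and polls checkConsistency
      // until every cluster has caught up with writes to the table.
      adminClient.awaitReplication("my-table");
    }
  }
}
```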

* bigquery: properly fail when setting TableId's project twice (#3694)

Fixes #3283