Shouldn't Cluster.hdd_bytes and Cluster.ssd_bytes be in a oneof block? #366

Closed
dhermes opened this issue Jul 21, 2015 · 7 comments

@dhermes

dhermes commented Jul 21, 2015

I'm implementing the Python library for the API and noticed that both hdd_bytes and ssd_bytes can be set on a CreateCluster call (and they persist when calling GetCluster).

Shouldn't the values (in the Cluster message class) be in a oneof block:

  // The maximum HDD storage usage allowed in this cluster, in bytes.
  int64 hdd_bytes = 6;

  // The maximum SSD storage usage allowed in this cluster, in bytes.
  int64 ssd_bytes = 7;
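
For reference, a minimal sketch of what the proposed oneof could look like; the field numbers are kept from the snippet above, the oneof name is only illustrative, and the rest of the Cluster message is omitted:

  // At most one storage limit would apply to a cluster.
  oneof storage_limit {
    // The maximum HDD storage usage allowed in this cluster, in bytes.
    int64 hdd_bytes = 6;

    // The maximum SSD storage usage allowed in this cluster, in bytes.
    int64 ssd_bytes = 7;
  }
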
@dhermes
Author

dhermes commented Jul 21, 2015

/cc @tseaver @jgeewax

@jgeewax
Contributor

jgeewax commented Jul 21, 2015

/cc @coryoconnor @carterpage @maxluebbe -- Can a Bigtable cluster have both hdd_bytes and ssd_bytes? I'd think no, and that these belong in a one-of (union) like @dhermes is saying.

@mgarolera
Contributor

The cluster API is not currently enabled, and there will be some iteration on it. These fields are unlikely to be used any time soon; prefer using only default_storage_type instead.

Right now clusters are only created via the UI and a cluster can only be SSD. In the future we plan to enable HDD clusters, and maybe one day mixed HDD and SSD in the same cluster, which would make these fields meaningful.
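
As a rough sketch of that suggestion (the enum name, values, and field number here are illustrative, not the authoritative proto), the cluster definition could expose a single storage-type field instead of per-medium byte limits:

  // Illustrative storage type enum.
  enum StorageType {
    STORAGE_UNSPECIFIED = 0;
    STORAGE_SSD = 1;
    STORAGE_HDD = 2;
  }

  // The default storage medium for tables served from this cluster.
  StorageType default_storage_type = 8;
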

@jgeewax
Contributor

jgeewax commented Jul 21, 2015

OK.

This leads to a second question. If we were to send a pull request to modify these protos, would it be worthwhile? Or is this "read-only" open source that will have changes blasted out during some sync job?

@dhermes
Author

dhermes commented Jul 21, 2015

Thanks for the reply @mgarolera!

Regarding

The cluster api is not currently enabled.

I have been able to use it (with a user account, not with a service account) when enabling the "Cloud Bigtable Table Admin API" in the APIs console. See googleapis/google-cloud-python#872 for discussion.


@jgeewax It doesn't seem we'd want to make a PR for this issue.

@mgarolera
Contributor

@jgeewax: Yes, these protos are not yet the authoritative source, and a change would be overwritten.
@dhermes: The API shouldn't be enabled for external users. If you are writing tooling to use it, don't use hdd_bytes and ssd_bytes. HDD clusters are coming soon, so it would be useful to be able to control the default_storage_type field.

@sduskis closed this as completed Sep 2, 2015
@dhermes
Author

dhermes commented Sep 2, 2015

Thanks @sduskis!
