
Query : quota-backend-bytes max value #7045

Closed
vimalk78 opened this issue Dec 21, 2016 · 4 comments

@vimalk78
Contributor

Hi, is there any reason for the 8 GB limit on the keyspace memory size? Is it a boltdb limitation?

```go
const (
	// DefaultQuotaBytes is the number of bytes the backend Size may
	// consume before exceeding the space quota.
	DefaultQuotaBytes = int64(2 * 1024 * 1024 * 1024) // 2GB
	// MaxQuotaBytes is the maximum number of bytes suggested for a backend
	// quota. A larger quota may lead to degraded performance.
	MaxQuotaBytes = int64(8 * 1024 * 1024 * 1024) // 8GB
)
```

--quota-backend-bytes could keep a default value of 2GB, disable the quota entirely when set to a negative value, and otherwise let the user define the limit by setting it to some value.
Then there is no need for separate Default and Max quotas; a single quota value is enough.
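A minimal sketch of that suggested semantics (resolveQuota is a hypothetical helper for illustration, not etcd's actual implementation; it assumes the DefaultQuotaBytes constant quoted above):

```go
// resolveQuota illustrates the suggested flag semantics:
// 0 falls back to the default, a negative value disables the
// quota, and any positive value is taken as the user's limit.
func resolveQuota(flagValue int64) (quota int64, enforced bool) {
	switch {
	case flagValue == 0:
		return DefaultQuotaBytes, true // use the 2GB default
	case flagValue < 0:
		return 0, false // quota disabled
	default:
		return flagValue, true // user-defined limit
	}
}
```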

@gyuho
Contributor

gyuho commented Dec 21, 2016

See this thread: https://groups.google.com/d/msg/etcd-dev/vCeSLBKC_M8/2pPBXpc9BwAJ

The main reason for the size limit is MTTR (mean time to recovery).

etcd is designed to be highly available storage. It replicates all data across all nodes. If you lose an etcd member, simply adding a new member should bring the cluster back to full health within tens of seconds, with little impact on overall performance.

Typically, we can recover 2GB of data within 20 seconds on good hardware. We cannot do that for 1TB of data due to today's hardware limitations. If you do not care about MTTR, you can in theory store 1TB in etcd.
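To make the arithmetic concrete: 2GB in 20 seconds implies a sustained recovery rate of roughly 100 MB/s. Scaling that linearly (the rate is inferred from the figures above, not a benchmark):

```go
package main

import "fmt"

func main() {
	// ~100 MB/s is inferred from "2GB in ~20s" above; it is an
	// assumption about good hardware, not a measured number.
	const rateMBps = 100.0
	for _, c := range []struct {
		name   string
		sizeMB float64
	}{
		{"2GB", 2 * 1024},
		{"1TB", 1024 * 1024},
	} {
		fmt.Printf("%s -> ~%.0f seconds to recover\n", c.name, c.sizeMB/rateMBps)
	}
}
```

At that rate 1TB works out to roughly 10,500 seconds, i.e. about three hours of degraded cluster health instead of tens of seconds.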

There are people combining multiple consistent kv groups into a logically unified kv space: see TiKV or CockroachDB's kv layer. Basically, the physically consistent storage layer is well partitioned, with a logical proxy layer that makes the kv space appear unified. But that all comes at a cost, and a pretty expensive one for cross-kv-group consistency.

etcd's main use case is storing metadata. We want to ensure there is no additional cost for that use case. In the future we might make the etcd proxy able to talk to multiple actual etcd clusters, with some API limitations, so you get the same feeling that etcd is horizontally scalable.

@xiang90
Contributor

xiang90 commented Dec 21, 2016

@gyuho I will make this into an FAQ entry soon.

@xiang90 xiang90 closed this as completed Dec 21, 2016
@gyuho
Contributor

gyuho commented Dec 21, 2016

Yeah, let's put this into the FAQ.

@vimalk78
Contributor Author

Thanks @gyuho @xiang90 👍

jonsyu1 added a commit to jonsyu1/etcd that referenced this issue Mar 27, 2017
The recovery case is very infrequent, and we are okay with a longer mean time to recovery on a per-configuration basis, as a means to break glass and store more data in etcd.

See original issue for more details: etcd-io#7045