Set deserialization cache size based on target memory usage #34000
```diff
@@ -145,6 +145,28 @@ func Run(s *options.APIServer) error {
 		glog.Fatalf("Failed to start kubelet client: %v", err)
 	}

+	if s.StorageConfig.DeserializationCacheSize == 0 {
+		// When the cache size is not explicitly set, estimate it based on
+		// target memory usage.
+		glog.V(2).Infof("Initializing deserialization cache size based on %dMB limit", s.TargetRAMMB)
+
+		// This heuristic tries to infer the maximum number of nodes in the
+		// cluster from memory capacity, and sets cache sizes based on that
+		// value.
+		// Our documentation officially recommends 120GB machines for
+		// 2000 nodes, so we assume ~60MB of capacity per node.
+		// TODO: We may consider deciding that some percentage of memory will
+		// be used for the deserialization cache and dividing it by the max
+		// object size to compute its size. We may even go further and measure
+		// the collective sizes of the objects in the cache.
+		clusterSize := s.TargetRAMMB / 60
+		s.StorageConfig.DeserializationCacheSize = 25 * clusterSize
+		if s.StorageConfig.DeserializationCacheSize < 1000 {
+			s.StorageConfig.DeserializationCacheSize = 1000
+		}
+	}

 	storageGroupsToEncodingVersion, err := s.StorageGroupsToEncodingVersion()
 	if err != nil {
 		glog.Fatalf("error generating storage version map: %s", err)
```

Review discussion on the heuristic:

- Comment: Our max object size is 900kb, IIRC. I would suggest an alternate heuristic: decide that e.g. 1/3 of this memory will be used for the deserialization cache, and divide by the max object size to see how big the cache should be. (It would of course be better to actually measure the collective sizes of the objects in the cache, but that's much, much harder. A TODO mentioning this might be good.)
- Reply: I also considered that (when thinking about the sizes of the caches in "Cacher"), but decided it's not the best option for now. First, the target memory is not very accurate yet (it is effectively the target memory for all components on the master machine). Second, it would result in far larger memory usage for big clusters, which is also undesirable in my opinion. So I would prefer to leave it as is (especially since we want to cherry-pick this) and solve it better later (for 1.5, maybe?). [Though I extended the TODO so that it covers what you basically wrote.]
- Reply: OK, I think we should probably reconsider in the future (it would be good to explicitly set how much memory is allowed for this, so admins get predictable usage), but I guess this is still a big improvement, so I won't block it over this.
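For context, here is a standalone sketch contrasting the two heuristics discussed above: the per-node estimate that was merged, and the reviewer's memory-fraction alternative. The 60MB-per-node, 25-entries-per-node, 1/3-of-memory, and ~900KB max-object figures come from the diff and the review thread; the function names and the memory-fraction arithmetic are illustrative assumptions, not code from the PR.

```go
package main

import "fmt"

// mergedHeuristic mirrors the logic merged in this PR: infer the cluster
// size from target RAM (~60MB per node, per the 120GB-for-2000-nodes
// guidance) and allow 25 cache entries per inferred node, floored at 1000.
func mergedHeuristic(targetRAMMB int) int {
	clusterSize := targetRAMMB / 60
	size := 25 * clusterSize
	if size < 1000 {
		size = 1000
	}
	return size
}

// alternateHeuristic sketches the reviewer's suggestion: dedicate a fixed
// fraction (here 1/3) of target RAM to the cache and divide by the max
// object size (~900KB, i.e. ~0.9MB) to get an entry count.
func alternateHeuristic(targetRAMMB int) int {
	budgetMB := targetRAMMB / 3
	return budgetMB * 10 / 9 // budgetMB / 0.9MB per object
}

func main() {
	for _, ram := range []int{1000, 60000, 120000} {
		fmt.Printf("RAM=%dMB merged=%d alternate=%d\n",
			ram, mergedHeuristic(ram), alternateHeuristic(ram))
	}
}
```

Note how the alternate sizing assumes worst-case object sizes: at 120GB of target RAM it reserves room for ~44k maximum-size objects (~40GB), which illustrates the author's concern about much larger memory usage on big clusters.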
```diff
@@ -82,6 +82,10 @@ func Run(s *options.ServerRunOptions) error {
 	// TODO: register cluster federation resources here.
 	resourceConfig := genericapiserver.NewResourceConfig()

+	if s.StorageConfig.DeserializationCacheSize == 0 {
+		// When the cache size is not explicitly set, default it to 50000.
+		s.StorageConfig.DeserializationCacheSize = 50000
+	}
 	storageGroupsToEncodingVersion, err := s.StorageGroupsToEncodingVersion()
 	if err != nil {
 		glog.Fatalf("error generating storage version map: %s", err)
```

Review comment: @quinton-hoole @nikhiljindal how much memory do you expect the federation apiserver to consume?
```diff
@@ -37,8 +37,6 @@ import (
 )

 const (
-	DefaultDeserializationCacheSize = 50000
-
 	// TODO: This can be tightened up. It still matches objects named watch or proxy.
 	defaultLongRunningRequestRE = "(/|^)((watch|proxy)(/|$)|(logs?|portforward|exec|attach)/?$)"
 )

@@ -158,7 +156,9 @@ func NewServerRunOptions() *ServerRunOptions {
 func (o *ServerRunOptions) WithEtcdOptions() *ServerRunOptions {
 	o.StorageConfig = storagebackend.Config{
 		Prefix: DefaultEtcdPathPrefix,
-		DeserializationCacheSize: DefaultDeserializationCacheSize,
+		// Default the cache size to 0 - if unset, its size will be set based
+		// on target memory usage.
+		DeserializationCacheSize: 0,
 	}
 	return o
 }
```

Review discussion:

- Comment: This will break the federation apiserver if it uses this cache?
- Reply: Fixed.
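The hunk above replaces a hard-coded shared default with a zero sentinel that each binary resolves for itself, which is what let the federation apiserver keep a fixed 50000 default while kube-apiserver auto-sizes from target RAM. A minimal sketch of that pattern follows; the struct and function names are hypothetical stand-ins, not the PR's actual types.

```go
package main

import "fmt"

// StorageConfig mimics the relevant field: zero means "not explicitly set".
type StorageConfig struct {
	DeserializationCacheSize int
}

// resolveCacheSize applies a binary-specific default only when the operator
// did not set the size explicitly, so an explicit flag value always wins.
func resolveCacheSize(cfg *StorageConfig, defaultSize int) {
	if cfg.DeserializationCacheSize == 0 {
		cfg.DeserializationCacheSize = defaultSize
	}
}

func main() {
	// Unset: the binary-specific default is applied.
	cfg := StorageConfig{}
	resolveCacheSize(&cfg, 50000)
	fmt.Println(cfg.DeserializationCacheSize)

	// Explicitly set: the operator's value is preserved.
	explicit := StorageConfig{DeserializationCacheSize: 2000}
	resolveCacheSize(&explicit, 50000)
	fmt.Println(explicit.DeserializationCacheSize)
}
```

One caveat of zero-as-sentinel, visible in the review thread, is that every consumer of the config must remember to resolve it; a forgotten call-site (like the federation apiserver, initially) silently gets a zero-sized cache.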
Review comment: FYI - this is pretty much a copy of:
https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/cachesize/cachesize.go#L75