Bug report
Reported in #313. The root cause there is that the namespace being deployed to has over 4000 pods, most of which are old evicted pods. Since we are unconditionally using the batched fetching strategy, our sync cycles fetch and initialize all 4000+ pods even though in the example given only a single one was needed.
Expected behavior: Polling is instant for small resource groups.
Actual behavior: Polling can be extremely slow for small resource groups if the cluster contains a large number of resources of that type.
Version(s) affected: Latest.
Proposed solution
Wrap https://github.com/Shopify/kubernetes-deploy/blob/master/lib/kubernetes-deploy/sync_mediator.rb#L32-L37 in something along the lines of
if resources.length < LARGE_BATCH_THRESHOLD
and add/adjust tests.

cc @dwradcliffe @csfrancis (likely affects overall core deploy speed too) @Shopify/cloudx
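A minimal sketch of the proposed guard. The constant name LARGE_BATCH_THRESHOLD, the class name, and the fetch methods here are illustrative stand-ins, not the actual kubernetes-deploy internals; the point is only to show the strategy switch: small resource groups fetch each object individually, while large groups fall back to a single batched list.

```ruby
# Hypothetical illustration of conditional fetch strategy; names are
# assumptions, not the real kubernetes-deploy API.
LARGE_BATCH_THRESHOLD = 50

class SyncMediatorSketch
  def initialize(cluster_resources)
    # Hash of name => resource, simulating everything of one type in the cluster
    @cluster = cluster_resources
  end

  # Syncs the given resource group and returns which strategy was used.
  def sync(resources)
    if resources.length < LARGE_BATCH_THRESHOLD
      # Small group: fetch each resource individually so we never pull in
      # thousands of unrelated objects (e.g. old evicted pods).
      resources.each { |name| fetch_by_name(name) }
      :individual
    else
      # Large group: one batched list call is cheaper than many single GETs.
      fetch_all
      :batched
    end
  end

  private

  def fetch_by_name(name)
    @cluster[name] # simulated single GET
  end

  def fetch_all
    @cluster.values # simulated batched LIST
  end
end
```

With this shape, a namespace holding 4000+ pods no longer penalizes a deploy that only needs to poll one of them: the single-pod group takes the individual-fetch path, and only genuinely large groups pay the cost of the full list.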