
Apply all resources in a single kubectl call #872

Merged
merged 19 commits into master from kubectl-megapply on Jan 2, 2018

Conversation

@sambooo (Contributor) commented Dec 17, 2017

In an attempt to make full-syncs as fast as possible, this change combines all yamels into a big multidoc and attempts to apply that with a single kubectl call.

In the event that the mega-apply fails, we fall back to applying yamels one at a time.

Obligatory random cleanups included at no extra cost.
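For illustration, the multidoc approach in miniature (a sketch only; the package and function names here are hypothetical, not the actual flux code):

```go
package apply

import (
	"bytes"
	"os/exec"
)

// applyAll joins every manifest into one YAML multidoc and pipes it to a
// single `kubectl apply -f -` invocation, instead of one exec per file.
func applyAll(manifests [][]byte) error {
	multidoc := bytes.Join(manifests, []byte("\n---\n"))
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = bytes.NewReader(multidoc)
	return cmd.Run()
}
```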

@sambooo force-pushed the kubectl-megapply branch from 3bc99d5 to 4cae92f Dec 17, 2017

@rade (Contributor) commented Dec 17, 2017

Does this make flux less tolerant of

  • syntax errors in individual yamls
  • failures in applying a particular yaml?

@sambooo (Contributor, Author) commented Dec 17, 2017

Yes (to both); the fallback is yet to be implemented.

I'm trying to figure out if there's some way to get the best of both worlds (single kubectl apply that doesn't stop at the first failure), but if that doesn't work we can always just attempt the mega-apply and go back to applying yamels one at a time if that fails.
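The fallback described here can be pictured like this (a sketch building on the hypothetical applyAll above, not the literal implementation):

```go
// applyWithFallback first tries the mega-apply; if that fails, it retries
// each manifest individually, so one bad yaml doesn't block the rest.
func applyWithFallback(manifests [][]byte) []error {
	if err := applyAll(manifests); err == nil {
		return nil
	}
	var errs []error
	for _, m := range manifests {
		if err := applyAll([][]byte{m}); err != nil {
			errs = append(errs, err) // collect every per-manifest failure
		}
	}
	return errs
}
```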

@sambooo force-pushed the kubectl-megapply branch from 569fc1d to 6bbad23 Dec 17, 2017

@sambooo requested a review from squaremo Dec 17, 2017

@sambooo changed the title from "[WIP] Apply all resources in a single kubectl call" to "Apply all resources in a single kubectl call" Dec 17, 2017

@squaremo (Member) commented Dec 18, 2017

Shout-out for the great commit messages ⭐️

@squaremo (Member) commented Dec 18, 2017

Does this make syncs appreciably faster? It's one exec vs ~100, of course, but I would expect most of the latency to be from the network round-trips kubectl does on each apply.

I'm also worried that there's much more chance of running into the limitation that caused #826 (which will be built into kubectl). It'd be quite easy to accumulate 64k of manifests. Granted, it will fall back to doing things one by one.

@rade (Contributor) commented Dec 18, 2017

Does this make syncs appreciably faster?

Some measurements would be nice.

@squaremo (Member) commented Dec 18, 2017

Probably the best place to get measurements is by running in dev :) We can do that by ignoring the fluxd deployment, applying a config to use an image built from this branch, and looking at the metrics.

Sam Broughton added 5 commits Dec 14, 2017
Sam Broughton
Remove namespace flag from kubectl apply/delete calls
Since the namespace must be specified in the parsed manifest for us to retrieve
it, we don't need to set this flag. Kubectl figures it out on its own.
Sam Broughton
Remove Stdout/err fields from Kubectl struct
These values are never actually used. They look like they are, but their only
usage is immediately reassigned.
Sam Broughton
Modify Applier to stage and execute rather than execute directly
This change to the interface facilitates creating an Applier that combines all
yamels into a multidoc to be applied in one call to kubectl.

Note that the deleted test was testing behaviour that exists nowhere in flux.
All references to `SyncAction` in the code contain only a single change, so the
test was unnecessary.
Sam Broughton
Accept []byte instead of *apiObject in Applier methods
Since the namespace flag is no longer needed for kubectl, and since the tests
don't actually need the object name, we can pass a byte slice and simplify a
bit.

This change guides us towards doing the single mega-apply without smushing bytes
into an apiObject in a kludgey way.

@sambooo force-pushed the kubectl-megapply branch from 46d37b2 to f2b6463 Dec 21, 2017

@squaremo (Member) left a comment

Please rebut or adapt at your discretion.

return "default"
}
return obj.Metadata.Namespace
func (o *apiObject) hasDefaultNamespace() bool {

@squaremo (Member) Dec 21, 2017

I thought the problem was with manifests that did not give a namespace. If "default" is supplied, that should be fine, as with any other namespace. In other words, what you really care about here is whether it's supplied or not.

@sambooo (Author) Dec 22, 2017

Correctamundo. Addressed in an unpushed commit.
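The distinction squaremo is drawing could look like this (a hypothetical sketch; the field layout is assumed from the snippet above):

```go
type apiObject struct {
	Metadata struct {
		Namespace string
	}
}

// hasNamespace reports whether a namespace was supplied at all; an omitted
// namespace parses as the empty string, while an explicit "default" is fine.
func (o *apiObject) hasNamespace() bool {
	return o.Metadata.Namespace != ""
}
```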

```go
	version    string // string response for the version command.
	logger     log.Logger
	sshKeyRing ssh.KeyRing

	sync.Mutex
```

@squaremo (Member) Dec 21, 2017

If this isn't to be used outside the methods of Cluster, it should be given an unexported name.

@sambooo (Author) Dec 22, 2017

Fixed.

```go
	cmd.Stderr = c.stderr
	return cmd
}

func (c *Kubectl) execute(logger log.Logger, errs cluster.SyncError) {
	defer c.changeSet.clear()
```

@squaremo (Member) Dec 21, 2017

This implicit statefulness makes me feel queasy -- it feels like you would really want to lock the applier while you staged things, then execute; or, you'd construct a set of staged actions, and supply that to execute. I guess it doesn't matter too much if the methods aren't exported.

@sambooo (Author) Dec 28, 2017

Yep you're totally right here. Fixed. Why did I stash this data in the Kubectl struct again? 🤷‍♂️
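The stateless shape squaremo suggests might look like the following sketch (hypothetical names, continuing the snippets above; the real change is in the commits below):

```go
// A changeSet is built per sync and handed to execute, rather than being
// accumulated inside the Kubectl struct and cleared afterwards.
type changeSet struct {
	objs [][]byte
}

func (cs *changeSet) stage(obj []byte) {
	cs.objs = append(cs.objs, obj)
}

// Kubectl as in the snippets above; applyAll as in the earlier sketch.
func (c *Kubectl) execute(cs *changeSet) error {
	// cs is local to the caller: nothing to lock, nothing to clear.
	return applyAll(cs.objs)
}
```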

```go
	}
}

type executeSet struct {
```

@squaremo (Member) Dec 21, 2017

Is this layered exec -> stage -> exec structure necessary? Can you not just keep two lists when staging, and do two execs here?

@sambooo (Author) Dec 22, 2017

Removed. The sifting is now done in changeSet.stage(...)

```diff
@@ -146,7 +146,7 @@ func main() {
 	var clusterVersion string
 	var sshKeyRing ssh.KeyRing
 	var k8s cluster.Cluster
-	var image_creds func() registry.ImageCreds
+	var imageCreds func() registry.ImageCreds
```

@squaremo (Member)

@sambooo (Author) Dec 22, 2017

😄

```go
		return
	}
	// Attempt to apply everything in s at once, else fall back to applying one at a time.
	if err := c.doCommand(logger, s.rw, s.cmd...); err != nil {
```

@squaremo (Member) Dec 21, 2017

I anticipate that it'd be useful to keep count of syncs that have to fall back -- either a separate counter, or as a label of the sync counter.
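One possible shape for that metric, using a label on the sync counter (a sketch with client_golang; the metric name here is made up, and flux's actual metrics plumbing may differ):

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// syncCount distinguishes full multidoc applies from syncs that fell back.
var syncCount = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "flux_sync_total", // hypothetical name
		Help: "Count of syncs, labelled by whether the apply fell back.",
	},
	[]string{"fallback"},
)

func init() { prometheus.MustRegister(syncCount) }

// after a successful mega-apply:  syncCount.WithLabelValues("false").Inc()
// after falling back:             syncCount.WithLabelValues("true").Inc()
```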

```go
		set.stage(obj)
	}

	c.exec(logger, defaultSet, cmd, errs)
```

@squaremo (Member) Dec 21, 2017

If you always apply things without namespaces first, maybe you can get rid of the sorting of namespace resources to the front, in sync/sync.go?

@squaremo (Member) Dec 21, 2017

(in fact I would prefer that, because it is particular to Kubernetes, which that package is not supposed to be ..)

@sambooo (Author) Dec 28, 2017

Done.

Sam Broughton added 9 commits Dec 17, 2017
Sam Broughton
Replace `actionc` with a mutex
As much as I love finding places to put channels, in this case it is entirely
unnecessary. The only use of actionc can be replaced with a mutex, and any
future methods can just compete for the lock instead of competing for a channel
write.
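The change in miniature (illustrative only, not the actual flux code; the snippet earlier in the review shows the embedded sync.Mutex):

```go
package cluster

import "sync"

type Cluster struct {
	sync.Mutex // replaces the old actionc channel
}

func (c *Cluster) Sync( /* ... */ ) {
	c.Lock() // concurrent callers queue on the lock instead of a channel write
	defer c.Unlock()
	// ... apply the change set ...
}
```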
Sam Broughton
Store staged objects in a slice rather than a map
This brings back the previous ordering without doing a sort. Since we no longer
apply objects one at a time, we don't need an id.
Sam Broughton
Remove command order test
This test was simply testing the mock.
Sam Broughton
Remove mockClientSet noise
We have this enormous block of nothingness, all for satisfying the requirements
of a constructor we don't even need to use.

@sambooo force-pushed the kubectl-megapply branch from f2b6463 to 9884a57 Dec 22, 2017

Sam Broughton
Apply namespaceless objects to default namespace
In some not-so-well-defined cases, applying an object to the default namespace
can fail when its yaml doesn't explicitly specify the namespace. The simplest
way to overcome this is to separate them out and pass the namespace flag to
kubectl.
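A sketch of that separation (hypothetical names, reusing apiObject and hasNamespace from the earlier sketch):

```go
// sift splits staged objects by whether their yaml supplied a namespace.
func sift(objs []*apiObject) (defaulted, namespaced []*apiObject) {
	for _, o := range objs {
		if o.hasNamespace() {
			namespaced = append(namespaced, o)
		} else {
			defaulted = append(defaulted, o)
		}
	}
	return defaulted, namespaced
}

// defaulted objects are applied with an explicit flag:
//   kubectl apply --namespace=default -f -
// everything else goes through the plain multidoc apply:
//   kubectl apply -f -
```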
Sam Broughton

@sambooo force-pushed the kubectl-megapply branch from 9884a57 to 68db839 Dec 22, 2017

Sam Broughton added 2 commits Dec 28, 2017
Sam Broughton
Remove namespace filtering logic from sync package
The ordering of application/deletion of namespaces before/after namespaced
resources is specific to kubernetes and so does not belong in the generic sync
package. This ordering is now implicitly handled by the order in which commands
are run in the kubernetes package.
Sam Broughton
Remove state from kubectl struct
It occurred to me that there's no good reason this state should be stored within
the struct that happens to implement the Applier interface. The state can be
built in the Sync method and then garbage collected at the end.
@squaremo (Member) left a comment

LGTM, thanks for persevering, Sam.

```diff
 	}
-	return <-errc
+	c.applier.apply(logger, cs, errs)
```

@squaremo (Member) Jan 2, 2018

You don't need the lock until here, I believe

@sambooo (Author) Jan 2, 2018

Right you are

@sambooo merged commit 0972ee9 into master Jan 2, 2018

1 check passed

ci/circleci: build Your tests passed on CircleCI!

@sambooo deleted the kubectl-megapply branch Jan 2, 2018
