
tagging scheme to enable multiple use cases (see discussion for details) #139

Closed
rhs opened this Issue Jan 24, 2018 · 1 comment


rhs commented Jan 24, 2018

Tristan Pemble @tristanpemble 16:04
thoughts on manifests deletion? I feel like Forge would have to start putting metadata into manifests to track which resources are managed by it
if I remove an ingress from the repository, for example, it'll just stick around forever until I remove it manually

Rafael Schloming @rhs 16:05
funny you should ask

Tristan Pemble @tristanpemble 16:05
I think it would be as simple as putting metadata tags with the forge service name/release sha or something

Rafael Schloming @rhs 16:06
yeah, we've actually been hitting that issue also

Tristan Pemble @tristanpemble 16:06
that's also tricky though.. say you rename a deployment
and service pointing to deployment etc
you would have to be careful

Rafael Schloming @rhs 16:07
yeah, so as far as I can tell, the "right" way to do this is to have a tagging scheme and then use the --purge option with apply
where the tagging scheme might be something along the lines of add a label to every kubernetes resource that identifies what forge service and/or profile it belongs to
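A rough sketch of the label-plus-prune flow being described (the label key `forge.service` is invented here purely for illustration, and note that kubectl's actual flag for this is spelled `--prune`):

```shell
# Hypothetical: every resource a service generates carries a label, e.g.
#
#   metadata:
#     labels:
#       forge.service: my-service
#
# Apply with pruning so that resources deleted from the repository are
# also deleted from the cluster. kubectl requires a label selector here
# precisely to bound what may be pruned.
kubectl apply --prune -l forge.service=my-service -f manifests/
```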

Tristan Pemble @tristanpemble 16:09
would be helpful for identifying which resources are managed by forge anyway
but this is getting into helm territory
/shrug

Rafael Schloming @rhs 16:10
I was thinking of just adding a label that points back to the git repo.

Tristan Pemble @tristanpemble 16:11
what about monolithic repos
I guess wouldn't change anything

Rafael Schloming @rhs 16:12
we recently had an outage for a service that has motivated us to optimize the path from URL -> k8s resources -> source code
so that's part of the labeling motivation for us

Tristan Pemble @tristanpemble 16:12
ah ok
for me I am just changing so much right now and other devs are starting to get this running, that it's easier for me to tell them to wipe their machine and start over than try to debug what is out of sync

Rafael Schloming @rhs 16:13
it also happens to solve the deletion problem
is the main source of out-of-sync-ness renaming of k8s resources?

Tristan Pemble @tristanpemble 16:14
that and just removing/moving stuff

Rafael Schloming @rhs 16:18
ok, so suppose we had a tagging scheme that:
a) let me quickly navigate to the source code for a given service (maybe just a pointer to service.yaml within a given repo, so monorepos work),
b) changed apply to use purge with an appropriate selector that limits the scope of the purge to the given service, and
c) maybe added a fancy query command you could run that would show you the diff between k8s resources and your source code.
would that address both your issue and our issue?
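
On a generated resource, the metadata being proposed might look something like this (every key below is hypothetical, chosen only to illustrate points a) and b); forge's real scheme is whatever it actually ships):

```yaml
# Illustrative only: these label/annotation keys are not forge's
# actual metadata scheme.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    forge.service: my-service    # b) selector used to scope the purge
    forge.profile: default
  annotations:
    # a) pointer back to the source, path-qualified so monorepos work
    forge.repo: https://github.com/example/monorepo
    forge.descriptor: services/my-service/service.yaml
```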

Tristan Pemble @tristanpemble 16:19
yes, I think so
and maybe longer term a way to "undeploy" an entire service

Rafael Schloming @rhs 16:20
good point
that should be easy
pass the same selector to kubectl delete or whatever it's called
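
A minimal sketch of that "undeploy" step, assuming the same kind of hypothetical per-service label:

```shell
# Delete every resource carrying the service's label
# (label key "forge.service" is illustrative).
kubectl delete all -l forge.service=my-service
# Caveat: "all" does not cover every resource kind (e.g. Ingress,
# ConfigMap); a real implementation would enumerate the kinds it manages.
```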

rhs commented Feb 8, 2018

I've implemented this as discussed in forge 0.4.0, see https://forge.sh/docs/reference/managing-services and https://forge.sh/docs/reference/metadata for the docs.

@rhs rhs closed this Feb 8, 2018
