
Tanka should explicitly warn when deleting a Namespace object #311

Closed
Duologic opened this issue Jul 1, 2020 · 14 comments · Fixed by #531
Labels
component/kubernetes Working with Kubernetes clusters keepalive kind/feature Something new should be added
Comments

Duologic (Member) commented Jul 1, 2020:

A library may have something like this, pretty innocent:

local k = import 'k.libsonnet';
{
  namespace:  k.core.v1.namespace.new('default'),
}

If Tanka uses the prune labels, it will happily label the Namespace object as part of the environment, even though the object already exists. tk prune will then pick up that object just like any other and offer it to kubectl for deletion.

I'd like to propose a BIG FAT warning right before confirmation for any Namespace deletions, as deleting them can turn out really disastrous.
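The prune behavior described above can be sketched in pure Python (hypothetical data and label key, not Tanka's actual implementation): every live object carrying the environment label but absent from the desired state becomes a prune candidate, and a re-declared Namespace is no exception.

```python
# Sketch of label-based pruning. ENV_LABEL is an assumed key/value pair;
# a Namespace the environment merely re-declares is selected for deletion
# exactly like any other labeled object.

ENV_LABEL = ("tanka.dev/environment", "default/demo")  # assumed label

live_objects = [
    {"kind": "Namespace", "name": "default",
     "labels": {"tanka.dev/environment": "default/demo"}},
    {"kind": "Deployment", "name": "app",
     "labels": {"tanka.dev/environment": "default/demo"}},
]

# The Namespace was removed from Jsonnet, so only the Deployment is desired.
desired = {("Deployment", "app")}

def prune_candidates(live, desired, label):
    """Objects labeled as ours but no longer in the desired state."""
    key, value = label
    return [o for o in live
            if o["labels"].get(key) == value
            and (o["kind"], o["name"]) not in desired]
```

With this data, `prune_candidates` returns the Namespace, which is exactly the hazard the issue describes.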


@issue-label-bot issue-label-bot bot added the kind/feature Something new should be added label Jul 1, 2020
sh0rez (Member) commented Jul 1, 2020:

What about aborting the deletion if the namespace contains objects we don't own?

Duologic (Member, Author) commented Jul 1, 2020:

Sounds like a good combination: both a warning (to make sure the action was intended) and an abort (because of conflicting ownership). The abort check might come with a performance penalty; I don't know the implementation details.

@sh0rez sh0rez changed the title tk prune should explicitly warn when deleting a Namespace object Tanka should explicitly warn when deleting a Namespace object Jul 6, 2020
sh0rez (Member) commented Jul 6, 2020:

Now that we also have tk delete, Tanka should generally not delete namespaces.

I see three ways here:

  1. Never doing it. Users need to clean up namespaces themselves.
  2. Having a flag to allow it; the flag name would include 'dangerous' to make the risk explicit.
  3. Trying to be clever: delete the namespace if we own all resources in it, otherwise abort.

I favor option 2. The delete message should also print a warning that we are deleting a namespace, along with how many objects are still in it.
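Option 2 could be sketched like this (the flag name and message wording are hypothetical, not Tanka's actual CLI):

```python
# Sketch of option 2: namespace deletions are skipped unless explicitly
# allowed, and a warning reports how many objects are still inside.

def plan_deletions(objects, allow_namespaces=False):
    """Split a deletion plan into safe deletions and gated Namespace ones."""
    namespaces = [o for o in objects if o["kind"] == "Namespace"]
    others = [o for o in objects if o["kind"] != "Namespace"]
    warnings = []
    if namespaces and not allow_namespaces:
        for ns in namespaces:
            warnings.append(
                f"refusing to delete Namespace/{ns['name']} "
                f"({ns.get('contained', 0)} objects still inside); "
                "pass --dangerous-allow-namespace-deletion to override")
        namespaces = []  # gated: not deleted this run
    return others + namespaces, warnings

deletions, warns = plan_deletions([
    {"kind": "Namespace", "name": "metrictank", "contained": 12},
    {"kind": "Deployment", "name": "app"},
])
```

Without the flag, only the Deployment is deleted and the Namespace produces a warning; passing `allow_namespaces=True` includes it in the plan.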

@sh0rez sh0rez added the component/kubernetes Working with Kubernetes clusters label Jul 6, 2020
Duologic (Member, Author) commented Jul 8, 2020:

I don't see why tk delete can't delete namespaces. I can see many valid use cases.

Do you mean a warning before confirmation? As in, tk prune or tk delete shows a diff result and at the end we say:

<diff here>

If you continue, this will also delete these namespaces:
 * metrictank
 * default
 * kube-system
Are you sure? [yes/No]

You could go one step further and say 'Type namespace default to delete:'.
A flag such as --delete-namespace=metrictank could exist to answer that prompt in automation.

Personally I'm not a fan of the 'dangerous' keyword; most commands I know use --force to express that.
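The confirmation flow described above can be sketched as follows (the prompt wording and the pre-approval flag are assumptions, not Tanka's actual interface):

```python
# Sketch of "type the namespace name to confirm": a mistyped name aborts,
# and a flag such as --delete-namespace=<name> can pre-approve a namespace
# for automation.

def confirm_namespace_deletion(namespace, read=input, preapproved=()):
    """Return True only if the user types the exact namespace name,
    or the name was pre-approved via a hypothetical flag."""
    if namespace in preapproved:
        return True
    answer = read(f"Type namespace '{namespace}' to delete: ")
    return answer.strip() == namespace
```

Making the destructive path require an exact retype keeps the default behavior safe while still allowing non-interactive use.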

sh0rez (Member) commented Jul 8, 2020:

Actually, thinking about this: if we deleted everything else before namespaces (it happens in this order anyway), we could then check whether the namespace is empty. If it is, no worries, delete it.

If it is not, abort and let the user know. No need for warnings or more manual approval. wdyt?
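This empty-check idea can be sketched in pure logic (no cluster access; object names and the error wording are illustrative only):

```python
# Sketch: delete the environment's owned objects first, then delete the
# Namespace only if nothing foreign remains inside it.

def delete_environment(objects_in_ns, owned, ns_name):
    """Return (deleted, error). `objects_in_ns` is what the cluster reports
    inside the namespace; `owned` is what this environment created."""
    deleted = [o for o in objects_in_ns if o in owned]
    leftovers = [o for o in objects_in_ns if o not in owned]
    if leftovers:
        # Abort the namespace deletion and tell the user why.
        return deleted, (f"not deleting Namespace/{ns_name}: "
                         f"{len(leftovers)} object(s) not owned by this environment")
    return deleted + [f"Namespace/{ns_name}"], None
```

Owned objects are always removed; the Namespace itself goes only when it ends up empty, mirroring the rmdir analogy below.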

malcolmholmes (Contributor) commented:
Yes, that nails it, I'd say.

Duologic (Member, Author) commented Jul 8, 2020:

Sounds very Unix-y: rmdir doesn't continue either if the directory isn't empty.

davidovich (Contributor) commented:
In our multi-tenant environment, we have an out-of-band process to create and delete namespaces (to allow setting credentials for the namespace's default ServiceAccount, and other policies).

I would not want tanka to delete the namespace even if empty because it is hard for the tenant to recreate it.

sh0rez (Member) commented Jul 8, 2020:

If the namespace was not created using Tanka, Tanka doesn't consider it owned (no label) and it is not in Jsonnet, so it will never delete it.

This is only about namespaces created as part of the environment.

You need to make sure not to actually include the namespace as an object in your Jsonnet, though.

ghostsquad commented:
I'd definitely vote for an option to include/exclude the namespace in the deletion, and then also require --force for the non-empty namespace scenario.

@sh0rez sh0rez added this to the 0.12 milestone Jul 24, 2020
@sh0rez sh0rez modified the milestones: 0.12, 0.13 Sep 14, 2020
Duologic (Member, Author) commented Mar 3, 2021:

I tried to address not deleting namespaces in #477, but I think we would have to jump through way too many hoops, and on top of that there are probably a bunch of caveats we don't yet see.

How do we feel about just warning before the apply? Then this issue becomes low-hanging fruit with little impact on how Tanka works.

Applying to namespace 'default' of cluster 'cluster_name' at 'https://127.0.0.1' using context 'cluster_name'.
WARNING: This will delete these namespaces and all resources in them:
- <namespace>
- <namespace>
Please type 'yes' to confirm:
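Rendering that warning block is straightforward; a minimal sketch following the format shown above (function name is hypothetical):

```python
# Sketch: build the pre-apply warning listing namespaces about to be deleted.

def render_namespace_warning(namespaces):
    """Format the warning shown before a destructive apply."""
    lines = ["WARNING: This will delete these namespaces and all resources in them:"]
    lines += [f"- {ns}" for ns in namespaces]
    lines.append("Please type 'yes' to confirm:")
    return "\n".join(lines)
```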

Duologic (Member, Author) commented Mar 3, 2021:

Ha, seems like I already proposed that structure above. ^^ 🤦
