
Update pods using podman play kube file.yml #4478

Closed
yangm97 opened this issue Nov 8, 2019 · 23 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. stale-issue

Comments

@yangm97
Contributor

yangm97 commented Nov 8, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

Currently, you can create pods using podman play kube app-pod.yml, but if you try running the same command again you get the following message:

Error: error adding pod to state: name app is in use: pod already exists

kubectl handles updating bare pods, even though it is recommended that you use deployments instead (but that's the subject of another issue):

$ kubectl apply -f app-pod.yml 
pod/app created
# change the app-pod.yml file
$ kubectl apply -f app-pod.yml 
pod/app configured

Steps to reproduce the issue:

  1. podman play kube app-pod.yml

  2. podman play kube app-pod.yml

Describe the results you received:

Error: error adding pod to state: name app is in use: pod already exists

Describe the results you expected:

Updating pods generated from a kubernetes yaml file.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

(paste your output here)

Output of podman info --debug:

(paste your output here)

Package info (e.g. output of rpm -q podman or apt list podman):

(paste your output here)

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci-robot openshift-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Nov 8, 2019
@rhatdan
Member

rhatdan commented Nov 8, 2019

@yangm97 So the behaviour you would expect would be to replace the existing pod with a new pod, correct?

@mheon
Member

mheon commented Nov 8, 2019

If we're replacing, I would think we'd want a flag to make it explicit: --force or --replace to allow replacing existing pods with the same name?
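
For illustration only, a minimal sketch of what the proposed flag could look like (at the time of this comment, --replace is a proposal, not an existing option):

# Hypothetical: tear down and recreate the pod named in app-pod.yml if it already exists.
podman play kube --replace app-pod.yml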

@rhatdan
Member

rhatdan commented Nov 8, 2019

SGTM

@yangm97
Contributor Author

yangm97 commented Nov 9, 2019

@rhatdan I think it would be nice to replicate the kubernetes behavior, if possible, which I believe is updating a pod when applying a kind: Pod spec and recreating the pod(s) when using a kind: Deployment.

@baude
Member

baude commented Nov 17, 2019

I'm a bit torn on this. It is a departure from the regular behavior. The error message seems pretty clear; is deleting it too big of a leap?

@yegle

yegle commented Nov 26, 2019

I was looking for a docker-compose up replacement and found this FR.

In docker-compose up, Docker will only restart containers that are updated, without touching other containers. It looks like so far the discussion here is to recreate the pod (with every container in it), which is very different from what docker-compose up does.

While I think it's important to replicate k8s behaviors, I also think it's worth replicating some of the Docker behaviors and, by extension, the docker-compose behaviors.

I'm aware of podman-compose. Ultimately I'm looking to manage my container configuration in a version-controlled way and would be happy to replace docker-compose.yaml with a k8s Pod yaml file. But if the only way to update a container in a Pod is to recreate the whole pod, then podman is probably not the right tool for a homelab setup...

@yangm97
Contributor Author

yangm97 commented Nov 27, 2019

@yegle I think you got me wrong. I'm advocating for updating the pod to follow the k8s convention when applying a spec of kind: Pod.
The only time a pod should be recreated, when following the k8s convention, is when you apply a kind: Deployment.

On a side note, if you're new to Kubernetes I recommend "forgetting" the "docker/compose/swarm" way of doing things for a moment, to ease your learning experience.
k8s takes a whole different approach to container orchestration, and I had issues myself until I stopped trying to map my "docker" knowledge onto k8s.

@yegle

yegle commented Dec 2, 2019

This is a bit contradictory: podman is presented to newcomers as a drop-in replacement for the Docker CLI (this is evident from the number of issues reporting incompatibilities between Docker and Podman), but in many places the tool is trying to align with k8s, as in this issue.

Perhaps the libpod project should clearly state the vision of podman and the relationship between podman, docker, and k8s. I'm probably not the only podman user who just assumes they can use podman with all of their knowledge from Docker.

@mheon
Member

mheon commented Dec 2, 2019

We don't view podman play kube as a direct replacement for docker compose and as such we don't feel that it needs to align to docker compose semantics.

We have other plans for supporting docker compose - more details on those at a later date.

@rhatdan
Member

rhatdan commented Dec 3, 2019

Right "Docker compose" != Docker CLI. Our goal is to mostly replace the Docker CLI for as much as we can, with the exception of SWARM. We plan on supporting Kubernetes because this is what the Container communities have chosen.

There are some other features of Docker that we have either chosen not to implement (links) or can't implement because of a lack of a Daemon, or we fail it is a bug in Docker. Another difference can pop up when using Rootless, since some things we simply are not allowed to do because of kernel security features that require full root.

@github-actions

github-actions bot commented Jan 3, 2020

This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.

@hikhvar
Contributor

hikhvar commented Feb 2, 2020

Currently I miss a "recreate-if-changed" semantic in podman play kube pod.yaml. I want to use podman to deploy some containers to a low single-digit number of hosts. The setup is so small that Kubernetes would be a big overhead. However, it would be great if I could upload the PodSpec to a host, run podman play kube, and have podman replace the pod if the spec has changed. Currently I have to emulate this by checking myself whether the new pod.yaml differs from the old pod.yaml; if so, I remove and recreate the pod. I also have to handle all the race conditions myself, for example when a pod.yaml exists but the pod was deleted.

To be clear: I don't want to recreate the pod every time I run podman play kube pod.yaml, only when the pod.yaml has changed.
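
For reference, a rough sketch of that emulation as a wrapper script, assuming the pod is named app (matching metadata.name in pod.yaml) and a copy of the last applied spec is kept at a path of your choosing; this is not a podman feature, just the workaround described above:

#!/usr/bin/env bash
# Sketch: recreate the pod only when pod.yaml changed or the pod is missing.
set -euo pipefail

NEW_SPEC="pod.yaml"
OLD_SPEC="/var/lib/app/last-applied-pod.yaml"   # copy of the last applied spec (assumed path)
POD_NAME="app"                                  # must match metadata.name in pod.yaml

if ! podman pod exists "$POD_NAME" || ! [ -f "$OLD_SPEC" ] || ! cmp -s "$NEW_SPEC" "$OLD_SPEC"; then
    podman pod rm -f "$POD_NAME" 2>/dev/null || true
    podman play kube "$NEW_SPEC"
    cp "$NEW_SPEC" "$OLD_SPEC"
fi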

@rhatdan
Member

rhatdan commented Feb 18, 2020

That would mean we would have to squirrel away the yaml file, which does not feel like something podman should do, whereas it would be fairly easy for a script to do.

@rhatdan
Member

rhatdan commented Jun 9, 2020

I am going to close this since I think this should be scripted. Reopen if you disagree.

@rhatdan rhatdan closed this as completed Jun 9, 2020
@yangm97
Contributor Author

yangm97 commented Jun 19, 2020

Ok, I think this got a little off track. The purpose of this issue was to provide a way to replicate the k8s behaviour of mutating (not recreating) pods when given a bare pod spec (kind: Pod). This doesn't imply a need to store the original yaml spec somewhere, afaict.
I also don't see how this could be replicated with a wrapper script, seeing as podman doesn't provide a way to mutate a pod once it has been created.

Since using bare pods isn't considered good practice and podman 2.0 is going to support kind: Deployment specs, maybe it's best to keep this issue closed as wontfix and make podman generate kube output spec files that use kind: Deployment instead of kind: Pod?

@haircommander
Collaborator

I am actually not sure how k8s "mutates" a pod, but under the hood I am fairly certain the pod is just recreated. There is no way to, for instance, change the immutable config.json for a container once the container has been created in the runtime. I think all the apiserver is doing (possibly with help from the kubelet) is checking whether the spec has changed and, if so, creating a new pod with the same name and removing the old one.

That's not an argument for whether it should be handled by podman or by a script, but CRI-O certainly doesn't have the capability to mutate a running container, which means the CRI doesn't need to support it, the kubelet shouldn't expect it, and the kube-apiserver can't rely on it. Thus, it must be worked around somewhere in kube, and would need to be similarly worked around here.

@xordspar0
Contributor

> In docker-compose up, Docker will only restart containers that are updated

> We don't view podman play kube as a direct replacement for docker compose and as such we don't feel that it needs to align to docker compose semantics.

> Ok, I think this got a little off track. The purpose of this issue was to provide a way to replicate the k8s behaviour

I agree, this issue is about podman play kube having parity with kubectl apply, not about docker-compose, even though docker-compose has a similar feature.

> I am actually not sure how k8s "mutates" a pod, but under the hood I am fairly certain the pod is just recreated.

That's right: when the pod spec of a deployment changes, Kubernetes does a rolling redeploy of the deployment. The old pods go down one by one and get replaced by new pods from the new deployment.

> I think all the apiserver is doing (possibly with help from the kubelet) is checking whether the spec has changed and, if so, creating a new pod with the same name and removing the old one.

The new pods don't even have the same name as the old ones. The deployment is able to keep track of them by labels. That's why you define a metadata.labels.app (or whatever labels you prefer) in the pod spec and a selector.matchLabels in the deployment.

> Thus, it must be worked around somewhere in kube, and would need to be similarly worked around here

Right, this is implemented entirely by Kubernetes, not the container runtime. Kubernetes keeps a copy of the currently deployed deployment spec and diffs it against the updated one at the time of the apply.

That said, it is a feature of Kubernetes that you can run kubectl apply -f deployment.yaml and it will update your deployment in place. Is it within the scope of podman to do something similar for podman play kube? If not, what is a good workaround? podman is certainly not Kubernetes, and multinode rolling deployments might be out of scope, but to script around this issue I would need at minimum a way to find all the pods that podman made and delete them before podman play kube deployment.yaml will work at all. Currently playing the same deployment twice just fails.

@xordspar0
Contributor

As for a way to find pods associated with a deployment, I opened a PR that adds labels to pods if they are defined in the deployment spec: #7648

@xordspar0
Contributor

Now that #7648 and #7759 are merged, it's easy to find all pods from a deployment and stop them. For any visitors from the future, this is how I do it.

podman pod ps --filter label=app=myapp -q | xargs podman pod rm -f
podman play kube deployment.yaml

It may even be possible to write a script that does a rolling deployment, but I haven't needed that yet.
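
If you want a single command for that, a minimal wrapper sketch (the label value myapp and the spec file name are placeholders to adjust):

#!/usr/bin/env bash
# Sketch: remove every pod created from this deployment, then replay the spec.
set -euo pipefail

APP_LABEL="${1:-myapp}"         # value of the app label defined in the deployment spec
SPEC="${2:-deployment.yaml}"

podman pod ps --filter "label=app=${APP_LABEL}" -q | xargs -r podman pod rm -f
podman play kube "$SPEC"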

@cheng6563

Is it possible to add a parameter similar to --recreate? It is a bit difficult for a shell script to read the yaml file.

@ahmadalli

> Now that #7648 and #7759 are merged, it's easy to find all pods from a deployment and stop them. For any visitors from the future, this is how I do it.
>
> podman pod ps --filter label=app=myapp -q | xargs podman pod rm -f
> podman play kube deployment.yaml
>
> It may even be possible to write a script that does a rolling deployment, but I haven't needed that yet.

Doesn't recreating pods cause issues if the user wants to use systemd units for pods and containers, since container IDs change at each recreation?
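
One possible way to live with that (a sketch, not an endorsed workflow) is to regenerate the unit files after each recreation and use name-based units so container IDs are not baked in; the pod name mypod is a placeholder, and the generated file names depend on your pod and container names:

# After recreating the pod, regenerate systemd units that reference names
# rather than container IDs, then reload and restart them.
podman generate systemd --new --files --name mypod
cp pod-mypod.service container-mypod-*.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user restart pod-mypod.service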

@MartinX3

It's a needed feature.
It's painful to always run podman stop, podman pod stop, podman pod rm, and podman play kube just to apply the updated yaml.

@magikmw

magikmw commented Sep 4, 2021

> It's a needed feature.
> It's painful to always run podman stop, podman pod stop, podman pod rm, and podman play kube just to apply the updated yaml.

I agree, especially since podman is being positioned as a "dev's tool" to play with pods and deployments that can be moved directly over to k8s. It's honestly easier to just spin up minikube and run kubectl apply -f [...].yaml, but I don't want a whole cluster running in the background when podman can do the same thing.

Also, this man page from podman's website suggests there's a --down switch that would do what we want, but Fedora 34's version 3.3.1 doesn't have this switch (yet?).
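
For future readers: on a podman build that does ship the --down option described in that man page, the update cycle reduces to tearing down and replaying the same file, e.g.:

podman play kube --down app-pod.yml
podman play kube app-pod.yml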

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023