
Exemplify production image build #155

Open · wants to merge 4 commits into master
Conversation

@solsson (Contributor) commented Feb 18, 2018

Triggered by #154, I realized that for those who, like us, have come to depend on this setup in production, it's only implied that you build your own image; there's no example. Upgrades are something you do bother with in production, and there you really want the functionality of `kubectl set image --record=true` and `kubectl rollout undo`. With a separate ConfigMap you get the chance to easily tweak the init script and properties to your needs, for example host name resolution for #78, but you don't get a rollback feature.

Switching to a production image should be a matter of, whenever you're done tweaking:

  • Build the image
  • Set the image
  • Upon upgrade completion, delete the ConfigMap

This means you can still set up a production cluster using the flexible approach. You can also reverse the above to switch back to experimentation.
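The three steps above could be sketched roughly as follows (the image name and tag, and the ConfigMap name, are assumptions for illustration; adjust them to your own fork and cluster):

```shell
# Sketch only; requires registry and cluster access. Image name/tag
# and ConfigMap name are hypothetical.

# 1. Build the image, with your tweaked init script and properties
#    baked in, and push it to a registry
docker build -t yourrepo/kafka-kubernetes:v1 .
docker push yourrepo/kafka-kubernetes:v1

# 2. Set the image; --record lets `kubectl rollout undo` revert the change
kubectl --namespace kafka set image --record=true statefulset kafka \
  broker=yourrepo/kafka-kubernetes:v1

# 3. Upon upgrade completion, delete the now-unused ConfigMap
kubectl --namespace kafka delete configmap broker-config
```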

@solsson added this to the 4.0 milestone Feb 18, 2018
@solsson commented Feb 18, 2018

We should also recommend RollingUpdate for images that don't depend on the configmap. A sample kubectl patch command could be useful.
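Something along these lines, perhaps (the statefulset name and namespace are assumed to match the commands elsewhere in this thread):

```shell
# Sketch: switch the statefulset's update strategy to RollingUpdate so that
# a subsequent `kubectl set image` rolls the brokers automatically.
kubectl --namespace kafka patch statefulset kafka \
  --patch '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
```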

solsson added a commit to StreamingMicroservicesPlatform/docker-kafka that referenced this pull request Feb 19, 2018
@solsson commented Feb 19, 2018

Maybe we'll have to maintain an example production manifest. My hopes for a simple patch command were a bit optimistic, as the init container and the volume mount need to be removed. That will make patch dangerous for forks that have modified those lists in the original manifest.

For now I've:

  • Created an Automated Build at Docker Hub, as anyone can do from their fork: https://hub.docker.com/r/solsson/kafka-kubernetes
  • `kubectl --namespace kafka set image --record=true statefulset kafka broker=solsson/kafka-kubernetes@sha256:ff399d1a8f42f55d5fcfbb781f2b49f6672579bf34b725b6efa47c0b684b8fbf`
    • This is safe even with RollingUpdate, because the ConfigMap and init script will be used as before.
  • `kubectl --namespace kafka edit statefulset kafka` and removed the initContainers and the volumeMounts with /etc/kafka
  • `kubectl --namespace kafka patch statefulset kafka --patch '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'`

... in a testing environment, of course :)

An alternative would be to instead build a dedicated init container image.

> Now we have to make sure the environment matches from init, where we tweaked the script, to the broker pod.
@solsson commented Apr 6, 2018

I'm abandoning this initiative in favor of #167.

@solsson commented Apr 8, 2018

> I'm abandoning this initiative in favor of #167.

Actually, it's still quite interesting for Zookeeper. We could probably move anything that varies (the statefulset scale) to labels/annotations, so that most people never need to edit the init script.

@solsson mentioned this pull request Apr 17, 2018