This seems like a perfectly valid way to deploy this seeing as it seems a bit inconsistent documentation wise #106

Closed
Leopere opened this issue Jul 6, 2023 · 2 comments

Comments


Leopere commented Jul 6, 2023

```yaml
version: "3.9"
services:
  shepherd:
    image: mazzolino/shepherd
    environment:
      SLEEP_TIME: "1d"
      TZ: "US/Eastern"
      VERBOSE: "true"
      IMAGE_AUTOCLEAN_LIMIT: "5"
      WITH_REGISTRY_AUTH: "true"
      WITH_INSECURE_REGISTRY: "true"
      WITH_NO_RESOLVE_IMAGE: "true"
      IGNORELIST_SERVICES: "label=shepherd.autodeploy=false"
      FILTER_SERVICES: "label=shepherd.autodeploy=true"
      ROLLBACK_ON_FAILURE: "true"
      UPDATE_OPTIONS: "--update-delay=30s"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/root/.docker/config.json:/root/.docker/config.json:ro"
    deploy:
      placement:
        constraints:
          - node.role == manager
      labels:
        - shepherd.enable=true
        - shepherd.autodeploy=false
```

In this example, all you must do for any additional service in the swarm is add the label `shepherd.autodeploy=true` and you will have an auto-updating service. You can also omit the label entirely and Shepherd will just ignore that service, but if you wear a tin foil hat like me and don't like broken databases, you can explicitly add `shepherd.autodeploy=false`.
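As a minimal sketch, opting an extra service in would look something like this (the `whoami` service name and image are placeholders, not part of the example above):

```yaml
version: "3.9"
services:
  whoami:
    image: containous/whoami
    deploy:
      # For swarm services the labels must live under deploy.labels,
      # which is what Shepherd's FILTER_SERVICES label filter matches.
      labels:
        - shepherd.autodeploy=true
```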

This example also includes a 1-day cooldown (`SLEEP_TIME: "1d"`), because I don't think I want this thing checking for container upgrades every 5 minutes and leaving me not enough time to react to bad upgrades. You could probably slow it down further to one week (`1w`) if you're really lazy and really don't care whether your system breaks.

I think this is a perfectly reasonable way to deploy smaller, more resilient containers. Ultimately it has major drawbacks, though, and would likely benefit from being deployed alongside a monitoring system such as Uptime-Kuma, so you're alerted proactively when something goes wrong, rather than having that one webapp that gets ignored 99% of the time be unknowingly killed by improper label/version pinning.

@moschlar
Collaborator

I don't quite get what you're getting at. Am I understanding it right that you propose we add the part with the labels to the documentation/examples?

@Leopere
Author

Leopere commented Aug 18, 2023

Eh, kinda. I was mostly just recording, in the GitHub issues, a valid way that seemed to work recently, for anyone looking to use this going forward. If you find it useful, deem it so and add it to the docs; you're also encouraged to tell me to take a hike <3 Whatever makes ya happy.
