
Automatic upgrade #3197

Closed

deviantony opened this issue Sep 25, 2019 · 8 comments
Labels
kind/enhancement Applied to Feature Requests

Comments

@deviantony
Member

Related to #1649

Now that an in-app notification is available to tell the user that an update is available, we should figure out a way to automate the upgrade of Portainer.

@mustanggb

This sounds like a lovely feature to have. Now that there is the ability to detect available updates (as shown by the notification), the next step would seem to be creating the mechanism by which Portainer can "recreate" itself.

I chime in here because I noted in #1649 a mention that, since the HTTP API is not available during an upgrade, a potential solution could be to spin up a new portainer_self_updater image.

I have fallen into a similar trap of trying to recreate a container and not being able to access the HTTP API during that time; however, in my use-case I was recreating the proxy container through which Portainer was being accessed, namely jwilder/nginx-proxy.

So it strikes me that if done mindfully there could be the opportunity to kill two birds with one stone here. My suggestion is that rather than creating a highly specialised/specific portainer_self_updater image, it could be done with a more generic image that allows any container to enable enhanced/remote recreation, force enabled for Portainer, but optionally enabled for any other container.

This would solve both the self upgrade and proxy update use-cases, both of which encounter a disappearing HTTP API during the normal recreation process.

Granted, the proxy use-case could alternatively make use of the recreate happening purely server side within the Portainer container, which a self upgrade could not, but rather than having three different ways to do a recreate, for complexity's sake it probably makes sense to stick with just two.

I assume there is a good reason recreates rely on client-server communication, and refactoring them to be completely server side just for the proxy use-case seems like a lot of effort, but tacking it onto a potentially new recreate image could be an easy win.

Please let me know your thoughts.

@urda

urda commented Nov 7, 2019

From a DevOps point of view: make sure this is opt-out. I love software that updates itself, but I like to make that choice for when I'm ready.

@deviantony
Member Author

deviantony commented Nov 7, 2019

Thanks for the input guys.

We started the research on this but then rebalanced our resources onto other topics.

Here is a summary of that past research, though:

  • Portainer can be deployed as a container and as a binary. The automatic upgrade process might differ depending on how we approach the automatic update (replacing the binary inside the image would be compatible with both deployments), although our preference is to only support the Docker image.

Ultimately, this feature could also be enabled when Portainer is deployed as a container (by updating the images to include a specific file inside the container, such as /etc/portainer, that we can use to detect whether Portainer is running inside a container or not).

  • This mechanism should be enabled by default, with a setting available in the Portainer settings to disable it.

  • Portainer must be able to identify the container to update. To do so, we can add the concept of "managed service" inside Portainer by always adding the "io.portainer.managed=portainer" label to the Portainer container during deployment. This would allow Portainer to easily figure out which container is actually running Portainer (see the sketch after this list).

  • In order to support re-deployment of a new Portainer container, the socket/named pipe bind mount will now be mandatory as Portainer MUST be able to access the environment where it is currently running.

  • Some thought must be given to the differences between the upgrade process of a Portainer instance running as a container and one running as a service (standalone vs Swarm, different Swarm cluster topologies...)
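
To make the detection ideas above a little more concrete, here is a minimal sketch using the Docker SDK for Go: checking a marker file such as /etc/portainer to decide whether Portainer runs inside a container, and listing containers by the proposed io.portainer.managed=portainer label over the bind-mounted socket. The file path, label value and function names are only illustrative; none of this is existing Portainer code.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// runningInContainer checks for the marker file suggested above; the path
// /etc/portainer is the proposal's example, not an existing convention.
func runningInContainer() bool {
	_, err := os.Stat("/etc/portainer")
	return err == nil
}

// findManagedContainer looks up the container carrying the proposed
// "io.portainer.managed=portainer" label through the Docker socket.
func findManagedContainer(ctx context.Context, cli *client.Client) (string, error) {
	f := filters.NewArgs()
	f.Add("label", "io.portainer.managed=portainer")

	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{Filters: f})
	if err != nil {
		return "", err
	}
	if len(containers) == 0 {
		return "", fmt.Errorf("no container with label io.portainer.managed=portainer")
	}
	return containers[0].ID, nil
}

func main() {
	if !runningInContainer() {
		fmt.Println("not running inside a container, skipping self-upgrade")
		return
	}

	// FromEnv falls back to the default socket/named pipe, which is why the
	// bind mount mentioned above would have to be mandatory.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	id, err := findManagedContainer(context.Background(), cli)
	if err != nil {
		panic(err)
	}
	fmt.Println("Portainer container to upgrade:", id)
}
```

Both lookups only work when the socket/named pipe bind mount mentioned above is present.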

All thoughts and ideas welcome!

@ghost

ghost commented Dec 9, 2019

Comment from @STaRDoGG on a related container-updating feature request:
Currently I use a WatchTower container to keep my containers up to date. It's good, but also a little bit quirky to use, e.g. for manually updating a single container.

It would be great if Portainer basically included the WatchTower functions right within it (auto-check for any container updates and, if found, update using the same arguments as the original), and also included the ability to manually check for/update a container just by clicking an "update check" icon in the "Quick Actions" list of icons, plus a button near the top that does the same for several containers selected via their checkboxes.

@STaRDoGG

STaRDoGG commented Dec 9, 2019

For the record, this is the WatchTower container's repo: https://github.com/containrrr/watchtower

@STaRDoGG

STaRDoGG commented Dec 9, 2019

On a related note, if you're also looking for how to self-update Portainer itself, the Organizr container does it very well. https://github.com/causefx/Organizr

I haven't looked, but the code might be right there in the repo to make things a lot easier to add. Also, the dev, CauseFX, is super cool and easy to work with, so he might be willing to help ya along if ya run into any speedbumps. =)

Personally speaking, I do like containers that keep themselves updated, but I also understand wanting the option to opt-out as well.

@PathToLife

PathToLife commented Feb 11, 2021

I'm thinking perhaps Portainer could create a second container that it uses to update itself?

Mind you, https://github.com/containrrr/watchtower already does it.

For example (a rough code sketch of this flow follows the list):

  • [Env] A container running Portainer v1.5
  • [UI] User clicks the update button in the Portainer v1.5 UI
  • [Portainer 1.5] Creates a Portainer-Updater container with /var/docker.sock:/docker.sock etc. access
  • [Portainer-Updater] Pulls the latest Portainer image, v1.6
  • [Portainer-Updater] Runs some checks
  • [Portainer-Updater] docker rename current_portainer some_backup_portainer
  • [Portainer-Updater] Stops the backup from auto-starting
  • [Portainer-Updater] docker create the new Portainer
  • [Portainer-Updater] docker logs newportainer - look for a startup-ok flag
  • [UI] New Portainer should be up and running
  • [UI] Prompt to finish the update / always finish automatically
  • [Portainer 1.6] Deletes [Portainer-Updater]
  • User must manually delete some_backup_portainer
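
Here is a rough sketch of those steps against the Docker SDK for Go (moby client, v20.10-era signatures); the container names and image tag come straight from the list above and are illustrative only, not an existing Portainer-Updater implementation:

```go
package main

import (
	"context"
	"fmt"
	"io"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// upgrade walks through the Portainer-Updater steps from the list above.
// All names here are hypothetical placeholders.
func upgrade(ctx context.Context, cli *client.Client) error {
	const (
		oldName  = "current_portainer"
		backup   = "some_backup_portainer"
		newName  = "portainer"
		newImage = "portainer/portainer-ce:latest"
	)

	// Pull the latest Portainer image; the reader must be drained for the
	// pull to actually complete.
	rc, err := cli.ImagePull(ctx, newImage, types.ImagePullOptions{})
	if err != nil {
		return err
	}
	io.Copy(io.Discard, rc)
	rc.Close()

	// Inspect the running container so the new one can reuse its config.
	old, err := cli.ContainerInspect(ctx, oldName)
	if err != nil {
		return err
	}

	// docker rename current_portainer some_backup_portainer
	if err := cli.ContainerRename(ctx, old.ID, backup); err != nil {
		return err
	}

	// Stop the backup from auto-starting again.
	if _, err := cli.ContainerUpdate(ctx, old.ID, container.UpdateConfig{
		RestartPolicy: container.RestartPolicy{Name: "no"},
	}); err != nil {
		return err
	}

	// Stop the old instance so the new one can take over its published
	// ports (not in the list above, but needed in practice).
	if err := cli.ContainerStop(ctx, old.ID, nil); err != nil {
		return err
	}

	// docker create the new Portainer with the old config but the new image.
	cfg := old.Config
	cfg.Image = newImage
	created, err := cli.ContainerCreate(ctx, cfg, old.HostConfig, nil, nil, newName)
	if err != nil {
		return err
	}
	if err := cli.ContainerStart(ctx, created.ID, types.ContainerStartOptions{}); err != nil {
		return err
	}

	// docker logs newportainer - a real updater would scan these for a
	// "startup ok" marker before declaring the upgrade finished.
	logs, err := cli.ContainerLogs(ctx, created.ID, types.ContainerLogsOptions{ShowStdout: true, ShowStderr: true})
	if err != nil {
		return err
	}
	defer logs.Close()
	io.Copy(io.Discard, logs)

	fmt.Println("new Portainer container started:", created.ID)
	// some_backup_portainer is intentionally left behind for manual deletion.
	return nil
}
```

Keeping the renamed backup around means a failed upgrade can be rolled back by simply renaming and restarting some_backup_portainer, which is why the updater never deletes it.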

@huib-portainer
Contributor

This actually applies to more than just the Portainer instance:

  • Provide the ability for an administrator user to upgrade Portainer to the latest version from within Portainer
  • Show a notification in Portainer that one or more agents can be updated
  • Allow an administrator user to update an agent to the latest version from within Portainer
  • Allow an administrator user to directly upgrade a Portainer CE instance to Portainer BE

@portainer portainer locked and limited conversation to collaborators Jul 27, 2023
@jamescarppe jamescarppe converted this issue into discussion #9543 Jul 27, 2023

