0.19.4/0.19.5 upgrade - looking for testers #91

Closed
ubergeek77 opened this issue Jun 7, 2024 · 41 comments
Labels
help wanted Extra attention is needed

Comments

@ubergeek77
Owner

ubergeek77 commented Jun 7, 2024

I have been testing the 0.19.4/0.19.5 upgrade on a small, brand new 0.19.3 deployment. Everything worked fine. Huge thanks to @pallebone for pointing out pgautoupgrade to me, I really had no idea it existed, and it's a huge reason this is able to be so seamless.

However, I still have some concerns about how this auto upgrade will work on large instances:

  • pgautoupgrade is great, but how stable is it on very large databases? And how long does it take?
  • pictrs was bumped to v0.5, which will also perform its own internal database migration. How stable is it, and how long will it take?

In addition, the official Lemmy deployment now recommends this custom Postgres config:

https://github.com/LemmyNet/lemmy-ansible/blob/main/examples/customPostgresql.conf

So far, my own single-user instance has worked fine without any custom configuration, even with a remarkably small 64MB SHM size, which is still the default.

I am tempted to ship this as the default here too, but I know a good handful of people use my project because I support 32-bit ARM deployments, and many of those SBCs have very low system RAM. As the above custom Postgres config wants a 2GB SHM size, I don't think those ARM users will be very happy. Feedback on this is welcome, though I will probably end up keeping the defaults untouched, and linking users to this config for extra performance.


TL;DR - I would like some feedback before I make this an official update. I will give instructions on how to perform this test. Here is what I am looking for:

  • How large was your Postgres volume before and after the upgrade?
  • How large was your Pictrs volume before and after the upgrade?
    • Both can be calculated by running docker system df -v (see the example just below this list)
  • If you add the above two volume sizes together, did you have that much disk space free before the upgrade?
    • I want to test what happens if someone with low disk space attempts an upgrade
  • If possible, please give a rough time of how long the Postgres migration took.
  • I have turned off the time limit for the Lemmy-Easy-Deploy health checks.
  • Any issues? Does everything still work, like federation?
  • Any other comments/concerns/thoughts are more than welcome!
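
For reference, here is a quick way to pull out just those numbers. This is only a convenience sketch; the volume names assume the default lemmy-easy-deploy project prefix:

# Show only the Lemmy-Easy-Deploy volume sizes:
docker system df -v | grep -E 'lemmy-easy-deploy_(postgres|pictrs)_data'

# Rough free-space check on the Docker data directory before upgrading:
df -h /var/lib/docker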

For the love of everything, PLEASE make a backup that you are 1,000% certain you can easily and quickly restore.

I have only tested pgautoupgrade on a nearly empty Lemmy deployment, I do not know what to expect from real-world servers. Please, make a backup and run through a restore process before trying this.
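
If you are not sure how to make one, something along these lines works for Docker volumes (a sketch based on the approach in the Docker docs; stop the stack first, and adjust the volume name if your project prefix differs):

# Stop the stack so the database files are not being written to:
docker compose -p "lemmy-easy-deploy" -f ./live/docker-compose.yml down

# Archive the Postgres volume to the current directory (same idea works for pictrs_data):
docker run --rm \
  -v lemmy-easy-deploy_postgres_data:/source:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/postgres_data_backup.tar.gz -C /source .

# To restore, extract the archive back into the volume:
# docker run --rm -v lemmy-easy-deploy_postgres_data:/target -v "$(pwd)":/backup \
#   alpine tar xzf /backup/postgres_data_backup.tar.gz -C /target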

How to test:

  • Navigate to your Lemmy-Easy-Deploy folder
  • Run: git pull && git checkout 0.19.5-migrationcheck
  • Run deploy.sh like you normally would

Everything should be automatic from there, but I have turned off the time limit for the deployment health checks - for all I know, someone's Postgres migration might take 2 hours or something crazy.

I will do my own testing as well, and if I am satisfied with my own testing and/or the responses here, we will be good to go!

@ubergeek77 ubergeek77 pinned this issue Jun 7, 2024
@ubergeek77
Owner Author

ubergeek77 commented Jun 7, 2024

Unfortunately, 0.19.4 is an enormous update that requires manual user intervention to upgrade from Postgres 15 to 16, and some minor oversight when migrating from Pictrs 0.4 to 0.5:

https://github.com/LemmyNet/lemmy-ansible/blob/main/UPGRADING.md

A database upgrade is not something I'm comfortable with automating for the user, and the upgrade script they provided is currently not working on my own server (they didn't really add any error checks... at all).

I don't want to be overwhelmed with reports of people breaking their Lemmy-Easy-Deploy systems, so I need time to think about how to approach this properly.

I've already spent 3 hours today fixing my container builds for 0.19.4, and working on other 0.19.4 changes and unfortunately I don't have any more time to give to this today.

Feel free to suggest things in here, but otherwise I'm going to need more time to sort out this huge breaking change.

I just released LED 1.3.4 which adds a safeguard that prevents users from upgrading to 0.19.4 for the time being.

In the meantime, I'll be rolling back my own deployment to 0.19.3.

@ubergeek77
Owner Author

ubergeek77 commented Jun 7, 2024

Currently, the upgrade script provided by Lemmy is not compatible with Lemmy Easy Deploy, as it does not take into account the Compose project name. It might not be too bad, but I will probably have to make my own. I'm thinking about automating it in LED with the disclaimer that you MUST have made your own backups, but that seems dangerous to me.
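
For context, LED always deploys with an explicit Compose project name, so container and volume names carry a prefix the upstream script does not expect. Roughly speaking (sketch only):

# LED's deployment is namespaced under the "lemmy-easy-deploy" project,
# which is why volumes end up named e.g. lemmy-easy-deploy_postgres_data:
docker compose -p "lemmy-easy-deploy" -f ./live/docker-compose.yml up -d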

If anyone has any free time and expertise, any help with migrating is greatly appreciated. I can do all of this myself at some point when I have free time, but I have very little of that this weekend.

Sorry for the inconvenience!

@ubergeek77 ubergeek77 added the help wanted Extra attention is needed label Jun 7, 2024
@BlueEther

If you need any testers just give me a yell here or @blueether@no.lastname.nz

I can have a quick look at the db script today but no promises

@pallebone

I can test also

@pallebone

pallebone commented Jun 13, 2024

Please look at this: https://github.com/pgautoupgrade/docker-pgautoupgrade

It has worked in a test database.

@ubergeek77
Owner Author

That's probably going to save me a lot of time, I will very likely be using this. Thanks @pallebone !

@ubergeek77 ubergeek77 changed the title Working on 0.19.4 upgrade, please do not file new issues about it 0.19.4 upgrade - looking for testers Jun 19, 2024
@ubergeek77
Owner Author

Hi everyone, I have been testing some changes for the 0.19.4 migration. I've updated the original post with instructions on how to test, and some requests for feedback.

If pgautoupgrade really is as magical as it seems, and it holds up even for the largest Lemmy databases, then we are in very good shape. I'm actually wondering why Lemmy doesn't ship something like this by default, it is very nice.

@pallebone

pallebone commented Jun 19, 2024

I have some comments to add to this, but will only be able to mention them tomorrow (too late for me now), especially regarding Postgres and 'I am tempted to ship this as the default here too'.
This sounds like a bad idea, as it requires specific settings that you won't know in advance, depending on the compute and memory size of the host running the containers. Let me check my setup and see how I have it vs what the devs recommend.

@pallebone

pallebone commented Jun 19, 2024

"However, I still have some concerns about how this auto upgrade will work on large instances:

pgautoupgrade is great, but how stable is it on very large databases. And, how long does it take?"

Will take a while, but no way around it. My DB is 25GB so I can tell you how long it will take on that when you have a working script. I expect about 1 minute per GB so approx 25-35 minutes.

" pictrs was bumped to v0.5, which will also perform its own internal database migration. How stable is it, and how long will it take?"

It is imperative that the update script allows us to pause and/or continue after each step. I.e.: first it must upgrade Postgres, then wait. Then, after we can check that everything is working, we must be able to continue with the Lemmy update, then stop, and finally do the pictrs update. I don't recommend doing all 3 in one step, as it will make troubleshooting very difficult and obtuse.

"In addition, the official Lemmy deployment now recommends this custom Postgres config:"

I don't recommend setting "defaults", but rather only suggesting values. My customPostgresql.conf looks like this:

max_connections = 256				# max_connections * work_mem = max memory used by connections eg: 256*8=2GB RAM for connections
shared_buffers = 2GB				# set to 25% of physical RAM
effective_cache_size = 4GB			# set to size of disk swap (zram does not count)
maintenance_work_mem = 1GB			# set to 1/8 of server RAM
checkpoint_completion_target = 0.9		# suggested value
checkpoint_timeout = 61min			# suggested value
wal_buffers = 128MB				# set to approx 5% of shared buffers, power of 2
default_statistics_target = 250			# suggested value either 100, 250 or 500
random_page_cost = 1.1				# suggested value
effective_io_concurrency = 200			# suggested value
work_mem = 8MB					# see max_connections. Larger value = faster SQL queries if they require large amount of memory, small value = less RAM usage
min_wal_size = 2GB				# suggested value, larger not normally required
max_wal_size = 8GB				# suggested value, larger not normally required
max_worker_processes = 6			# normally 6/4 of CPU threads
max_parallel_workers_per_gather = 3		# normally 3/4 of CPU threads
max_parallel_workers = 6			# normally 6/3 of CPU threads
max_parallel_maintenance_workers = 0		# for docker = 0, not docker = 3/4 CPU threads
synchronous_commit = off			# use this value unless troubleshooting
huge_pages = on					# requires system configuration; postgres will fail to start with this on unless huge pages are set up. Leave off unless you have configured hugepages.
autovacuum_vacuum_scale_factor = 0.05		# more aggressive vacuum
autovacuum_vacuum_insert_scale_factor = 0.05	# more aggressive vacuum
idle_in_transaction_session_timeout = 10860000	# timeout idle transactions after 181 minutes
idle_session_timeout = 10920000			# also timeout idle sessions at 182 minutes
temp_file_limit = 10GB				# suggested value
#statement_timeout = 1440000			# timeout statements that take longer than 24 minutes
statement_timeout = 14400000
lock_timeout = 1340000				# timeout statements that try to lock a table, index or row etc for more than 11 minutes

Some differences include:
max_connections (the devs give no reasoning for why 200 works better; my config shows the calculation).
shared_buffers (same as above)
effective_cache_size (they set an arbitrary value)
maintenance_work_mem (again, no explanation of the value)
wal_buffers (their value is not correct)
work_mem (their value will result in higher CPU usage)
max_worker_processes (depends on the number of CPU cores)
max_parallel_workers_per_gather (as above)
max_parallel_workers (as above)
max_parallel_maintenance_workers (their vacuum will never run with their value, since it's a docker container, so no auto vacuum will ever be completed in their DB).

Since their values seem mostly arbitrary (probably guessed from trial and error rather than a deep understanding of the settings), I don't recommend them. I include my config above with how to derive the values.
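
For example, on a host with 8GB RAM (which is what the config above assumes), the derivations work out roughly like this (illustrative arithmetic, not a one-size-fits-all recommendation):

max_connections * work_mem   = 256 * 8MB   ~ 2GB    # RAM ceiling for connections
shared_buffers  = 25% of RAM = 8GB * 0.25  = 2GB
maintenance_work_mem = RAM/8 = 8GB / 8     = 1GB
wal_buffers ~ 5% of shared_buffers, rounded to a power of 2 = 128MB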

"If you add the above two volume sizes together, did you have that much disk space free before the upgrade? "
Most likely you will need free space x2 of the postgres DB or in my case approx 50-60GB.
The sled DB is very small (300-400mb) so this is not a large db to upgrade.

Most of the update will revolve around postgres imho.

Pete

@ubergeek77
Owner Author

Great feedback, thank you!

It is imperative that the update script allows us to pause and/or continue after each step. I.e.: first it must upgrade Postgres, then wait. Then, after we can check that everything is working, we must be able to continue with the Lemmy update, then stop, and finally do the pictrs update. I don't recommend doing all 3 in one step, as it will make troubleshooting very difficult and obtuse.

Currently, there is no special upgrade script; it is just a normal Compose deployment that starts all services at once, like it always has. The only difference here is that I've changed the Postgres container to pgautoupgrade. The Pictrs container already performs a self-migration.

I could try to start each container one at a time in an intelligent order, then wait for the container to appear healthy before moving on. I'm just not sure how to detect when a migration is in progress or complete. My current deployment was written to expect instant container crashes, so it considers a container that has been running consistently to be healthy.

I'll try to come up with something more reliable. But even in the current state, your deployment should work fine, Lemmy will just report errors in the frontend until the migrations are complete and the backend containers are operational.

Side note, Lemmy just released 0.19.5, and it doesn't require any compose changes. This means you can use the branch I listed above, and it will automatically grab 0.19.5 instead.

@pallebone

Alright what are the steps to try an upgrade? I can maybe try this tomorrow.

@ubergeek77
Owner Author

They're in the first message of this issue, near the bottom (preceded by a very large and obnoxious warning 😄 )

@ubergeek77
Owner Author

ubergeek77 commented Jun 19, 2024

I'm writing something in a different branch that will properly check Postgres via pg_isready before moving on.

I can do something similar with pictrs via their /healthz endpoint.

I'll look into doing proper checks for the Lemmy backend as well. I'll make sure this is all working properly before I make this an official update.
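
For anyone curious, the checks will look something like this (a rough sketch only, not the final deploy.sh code; it assumes the default "lemmy-easy-deploy" project name, the default "lemmy" database user, pict-rs listening on its default port 8080, and busybox wget being available in the proxy image):

# Wait for Postgres to accept connections (pg_isready ships in the postgres image):
until docker exec lemmy-easy-deploy-postgres-1 pg_isready -U lemmy -d lemmy >/dev/null 2>&1; do
    echo "Waiting for Postgres (a pgautoupgrade migration may still be running)..."
    sleep 15
done

# Wait for pict-rs to answer on /healthz; the check runs from inside the proxy
# container, since the pictrs image itself may not ship curl/wget:
until docker exec lemmy-easy-deploy-proxy-1 wget -q -O /dev/null http://pictrs:8080/healthz; do
    echo "Waiting for pict-rs to finish its internal migration..."
    sleep 15
done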

@ubergeek77
Owner Author

ubergeek77 commented Jun 19, 2024

I have made some significant changes to how Lemmy-Easy-Deploy does health checks based on @pallebone 's feedback.

Now, every service has a unique health check that it must pass before it is considered deployed. Each service deploys one at a time, provides log snippets every 15 seconds, and provides the opportunity to abort the deployment with CTRL+C without completely killing the containers.

Because of this change, the time limit on deployments has been disabled. If there is a fatal error, it will be up to the user to recognize that from the logs and press CTRL+C to abort. Otherwise, the failing service will continue to crash loop indefinitely until the user intervenes.

All of this will allow you to carefully monitor the progress of any migrations, and step in if needed.

This change is now available on the 0.19.5-migrationcheck branch. I have updated the commands in the first post to use this branch instead for testing.
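
For the curious, the wait loop has roughly this shape (a simplified sketch, not the exact deploy.sh code; "service_ready" is a stand-in for whatever per-service check is used, e.g. pg_isready for Postgres):

# CTRL+C aborts the script but leaves the containers running for inspection:
trap 'echo "Deployment aborted; containers are left running."; exit 1' INT

SERVICE=postgres   # example; each service is brought up and waited on one at a time
until service_ready "$SERVICE"; do
    docker logs --tail 5 "lemmy-easy-deploy-${SERVICE}-1" 2>&1 | sed "s/^/[$SERVICE] /"
    sleep 15
done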

@ubergeek77 ubergeek77 changed the title 0.19.4 upgrade - looking for testers 0.19.4/0.19.5 upgrade - looking for testers Jun 19, 2024
@pallebone

Ok cool I will take a look at this tomorrow.

@airjer

airjer commented Jun 20, 2024

Before

lemmy-easy-deploy_pictrs_data    247.4GB
lemmy-easy-deploy_postgres_data  39.18GB

After

lemmy-easy-deploy_pictrs_data    250.1GB
lemmy-easy-deploy_postgres_data  38.56GB

I had 50GB free.

~15 minutes for migrations. Everything seems good.

@ubergeek77
Owner Author

Woah, only 15 minutes for that much data? And with not a lot of disk space left?

That's pretty incredible, I'm very happy to hear it went smoothly!

Thanks for testing!!

@airjer

airjer commented Jun 20, 2024

Thank you for the awesome work!

@pallebone

My experience:

16 min past the hour - backup
19 min past the hour - started deploy.sh
22 min past the hour - finished downloading containers, began updating.
Postgres seemed almost instant; pictrs then started at about 23 min past the hour.
29 min past the hour:
pictrs crashed:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ SPANTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
lemmy-easy-deploy-pictrs-1 |
lemmy-easy-deploy-pictrs-1 |
lemmy-easy-deploy-pictrs-1 | 0: pict_rs::repo::migrate::do_migrate_hash_04
lemmy-easy-deploy-pictrs-1 | at src/repo/migrate.rs:289
lemmy-easy-deploy-pictrs-1 | 1: tokio::task::runtime.spawn with kind=local task.name= task.id=1566444 loc.file="/workspace/asonix/pict-rs/src/repo/migrate.rs" loc.line=112 loc.col=17
lemmy-easy-deploy-pictrs-1 | at /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.38.0/src/util/trace.rs:17
lemmy-easy-deploy-pictrs-1 |
lemmy-easy-deploy-pictrs-1 |
lemmy-easy-deploy-pictrs-1 | Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
lemmy-easy-deploy-pictrs-1 | Run with RUST_BACKTRACE=full to include source snippets.

This issue was resolved in pictrs 0.5.15, but the deployed version was 0.5.

ctrl-c

Manually edited the docker compose file to upgrade pictrs to 0.5.15

ran this command:
docker-compose -f /lemmy/Lemmy-Easy-Deploy/live/docker-compose.yml -p "lemmy-easy-deploy" up -d && docker-compose -f /lemmy/Lemmy-Easy-Deploy/live/docker-compose.yml -p "lemmy-easy-deploy" logs -f | grep -C7 -e ERROR -e WARN

waited for pictrs to recover... took some time.
(Lemmy was up and running at this time, but pictrs continued to "fix" itself and migrate.)
41 min past the hour - everything completed.

@pallebone

Also changed the postgres image in the docker compose file to postgres:16-alpine

@ubergeek77
Owner Author

Thanks for finding and reporting that Pictrs issue!

I have been trying to keep parity with the "official" Lemmy deployment, and I noticed they reverted from Pictrs 0.5.4 back to 0.5 right before the release of 0.19.4:

LemmyNet/lemmy-ansible@c872f2d?diff=unified&w=1#diff-d3d33979648a9836685e314d1864481561d5130a294cce04be84b03c29abfb08

I'm not sure why this was, but if you ran into an issue like that, I'm definitely going to make pictrs 0.5.15 the default instead.

Appreciate you catching that!

@pallebone

pallebone commented Jun 20, 2024

I checked Docker Hub, and something else must have been wrong, as 0.5 is a tag that pulls the latest 0.5 release, so it would have pulled 0.5.16. Unsure why I had an issue in this case.
https://hub.docker.com/r/asonix/pictrs/tags

This might explain why they altered the tag to 0.5 - so it auto updates along the 0.5 branch.

@ubergeek77
Owner Author

Hmmmm

You said the issue was resolved in 0.5.15, is there a pictrs issue you were keeping an eye on? I want to read through any similar reports

@pallebone

pallebone commented Jun 20, 2024

Hmmmm

You said the issue was resolved in 0.5.15, is there a pictrs issue you were keeping an eye on? I want to read through any similar reports

Unfortunately this is on the Matrix chat, where you can ask the developer for help (Tavi is the dev):

[Screenshot from 2024-06-20 15-10-18]

I changed back to 0.5 now that the upgrade is done, and it seems to be working without error using tag 0.5.
Here is my full compose file at the moment:

root@lemmy01:/var/lib/docker# cat /lemmy/Lemmy-Easy-Deploy/live/docker-compose.yml
x-logging:
  &default-logging
  options:
    max-size: '500m'
  driver: json-file

services:

  proxy:
    build: ./caddy
    mem_limit: 1024m
    mem_reservation: 768m
    memswap_limit: 1536m
    env_file:
      - ./caddy.env
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    ports:
      - 80:80
      - 443:443
    depends_on:
      - pictrs
      - lemmy-ui
    restart: always
    logging: *default-logging

  lemmy:
    image: ghcr.io/ubergeek77/lemmy:0.19.5
    mem_limit: 1024m
    mem_reservation: 768m
    memswap_limit: 1536m
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
      - pictrs
    restart: always
    logging: *default-logging

  lemmy-ui:
    image: ghcr.io/ubergeek77/lemmy-ui:0.19.5
    mem_limit: 1024m
    mem_reservation: 768m
    memswap_limit: 1536m
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=localhost:1236
    volumes:
      - ./lemmy-ui-themes:/app/extra_themes
    depends_on:
      - lemmy
    restart: always
    logging: *default-logging

  pictrs:
    image: asonix/pictrs:0.5
    mem_limit: 1024m
    mem_reservation: 768m
    memswap_limit: 1536m
    user: 991:991
    environment:
      - PICTRS__MEDIA__ANIMATION__MAX_WIDTH=256
      - PICTRS__MEDIA__ANIMATION__MAX_HEIGHT=256
      - PICTRS__MEDIA__ANIMATION__MAX_AREA=65536
      - PICTRS__MEDIA__ANIMATION__MAX_FRAME_COUNT=400
      - PICTRS__MEDIA__VIDEO__ENABLE=true
      - PICTRS__MEDIA__VIDEO__MAX_FILE_SIZE=20
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
    env_file:
      - ./pictrs.env
      - ./customPictrs.env
    volumes:
      - pictrs_data:/mnt
    restart: always
    logging: *default-logging

  postgres:
    image: postgres:16-alpine
    mem_limit: 1024m
    mem_reservation: 768m
    memswap_limit: 1536m
    shm_size: 4g
    environment:
      - POSTGRES_USER=lemmy
      - POSTGRES_DB=lemmy
    env_file:
      - ./postgres.env
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./customPostgresql.conf:/etc/postgresql.conf
    restart: always
    logging: *default-logging

volumes:
  caddy_data:
  caddy_config:
  pictrs_data:
  postgres_data:

@ubergeek77
Owner Author

Thanks! That makes sense. It looks like it's only an issue with the migration code.

Is it possible you had a pre-downloaded pictrs:0.5 tag that was actually <=0.5.14, which was used instead?

That would explain why 0.5 is working for you now, and why 0.5.15 worked for your migration. I could be more aggressive with the image pulls to help this a little bit.
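
If you want to check after the fact, something like this shows what the locally cached tag actually is, and forces a refresh (illustrative commands only):

# Show the digest and build date of the locally cached 0.5 tag:
docker image inspect asonix/pictrs:0.5 --format '{{.RepoDigests}} created {{.Created}}'

# Force a fresh pull of the tag:
docker pull asonix/pictrs:0.5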

@pallebone

pallebone commented Jun 20, 2024

It's technically possible, I suppose, but I am unclear how to check this now that I have fiddled around and so on. Either way, the big stuff (Postgres and Lemmy) updated fine. pictrs can actually be completely broken and Lemmy still works; pictures just don't work on the site, so it's not as big an issue. The site is still up, and you can try to get pictrs working while the site is functional.

@ubergeek77
Owner Author

Lemmy-Easy-Deploy 1.4.0 has been released! 🥳

Huge thanks to everyone here for helping me test and providing me with valuable insight.

If you helped test, please do not forget to switch back to the main branch:

git checkout main
git pull

@ubergeek77 ubergeek77 unpinned this issue Jun 21, 2024
@pallebone

Thanks for all your help also :)

@ubergeek77
Owner Author

ubergeek77 commented Jun 21, 2024

Quick FYI - I forgot to change a variable name in 1.4.0, so please update to 1.4.1 or else your Pictrs might be a little broken (presumably anything requiring an API key)

If you just run ./deploy.sh, it should give you a prompt to update itself. Then if you run ./deploy.sh -f, it should re-deploy with the correct Pictrs API key in lemmy.hjson. The downtime should only be about a minute or two.

https://github.com/ubergeek77/Lemmy-Easy-Deploy/releases/tag/1.4.1

@pallebone

How can I make this change manually for now?

@ubergeek77
Owner Author

ubergeek77 commented Jun 21, 2024

You can edit ./live/lemmy.hjson to have your correct pictrs API key. You can find the key in ./live/pictrs.env

But I'm not sure if Lemmy will pick up changes to that file automatically. The backend service may need to be restarted.

Also, I couldn't quite figure out what exactly requires authentication on the pictrs API, so if you aren't having issues with thumbnail generation or user submitted images, this can probably just wait.
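
Roughly, the manual fix looks like this (a sketch; the env var name is pict-rs' server API key setting, and the exact hjson layout may differ slightly):

# Find the key LED generated for pict-rs:
grep PICTRS__SERVER__API_KEY ./live/pictrs.env

# Make sure the pictrs block in lemmy.hjson uses the same value:
grep -A 3 'pictrs' ./live/lemmy.hjson

# Restart the backend so it picks up the change:
docker compose -p "lemmy-easy-deploy" -f ./live/docker-compose.yml restart lemmy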

@airjer

airjer commented Jun 21, 2024

With the hot fix, you forgot to update the version number, so it endlessly updates.

@ubergeek77
Owner Author

Thanks, sigh...

@pallebone

Thanks I will take a look into this tomorrow as its late now.

@ubergeek77
Owner Author

I changed the version number and re-tagged it as 1.4.1, if you hit "yes" on the update prompt it will sort itself out.

Thanks again for all the testing and reports everyone!

@pallebone

Just thought I would test the script again, and I get this when I run it:
Lemmy-Easy-Deploy by ubergeek77 (v1.4.1)

Detected runtime: docker (Docker version 24.0.5, build ced0996)
Detected compose: docker compose (Docker Compose version v2.20.2)
Runtime state: OK

Current Backend Version: 0.19.1
Latest Backend Version: 0.19.5

Current Frontend Version: 0.19.1
Latest Frontend Version: 0.19.5

A Backend update is available!
BE: 0.19.1 --> 0.19.5

A Frontend update is available!
FE: 0.19.1 --> 0.19.5

--------------------------------------------------------------------
!!! WARNING !!! WARNING !!! WARNING !!! WARNING !!! WARNING !!!

Updates to the Lemmy Backend perform a database migration!

This is generally safe, but Lemmy bugs may cause issues.

It is generally recommended to wait a day or two after a major
Lemmy update, to allow time for major bugs to be reported.

If you update, and run into a new bug/issue in Lemmy, you will
NOT be able to roll back, unless you restore a database backup!

LEMMY BACKEND UPDATES ARE ONE-WAY ONLY

THIS IS YOUR ONLY OPPORTUNITY TO MAKE A BACKUP OF YOUR LEMMY DATA

Lemmy data is stored in Docker Volumes, NOT the ./live folder

Please consult the Docker docs for commands on making a backup:
https://docs.docker.com/storage/volumes/#back-up-a-volume

The most important Volume to back up is named:
lemmy-easy-deploy_postgres_data

!!! WARNING !!! WARNING !!! WARNING !!! WARNING !!! WARNING !!!
--------------------------------------------------------------------

Would you like to proceed with this deployment? [Y/n] n

Why does it detect 0.19.1?

@pallebone

I went ahead and updated anyway since you are busy. After the upgrade, I edited the docker compose file, as it leaves this tag for postgres:
pgautoupgrade/pgautoupgrade:16-alpine

I changed it to postgres:16-alpine

I don't think it should leave the upgrade tag as the running database image, as that will continually check and loop to see if there is an upgrade to do.

@ubergeek77
Owner Author

The version check is done by reading ./live/version, which is written at the very end of a successful deployment. You had to interrupt your deployment to fix a pictrs issue and then manually redeployed, so the updated version string never got written.

As for pgautoupgrade, it doesn't continually do a version check to my knowledge. After a migration, it just runs Postgres at the very end of its entrypoint script, and should be a drop-in replacement for Postgres:

https://github.com/pgautoupgrade/docker-pgautoupgrade/blob/main/docker-entrypoint.sh

I intend to keep this as the Postgres image moving forward, just for future-proofing.

@pallebone

Interesting. I don't agree with this change, as it means we are no longer tracking the official postgres image, and we are trusting that no issues crop up with a third-party project when that isn't required for any meaningful benefit once the upgrade is done. However, since we can alter the tags ourselves, that is acceptable, but you might want to make people aware so they can decide for themselves.

@ubergeek77
Owner Author

I would be willing to accept a PR that does an intelligent check to see if using this image is necessary, but this is the most frictionless option I have right now.

I know it's a third-party image, but it's open source and functionally identical to the real postgres image. There should be no compatibility issues. If you're worried about supply-chain attacks, I could consider locking it to an image fingerprint depending on the host architecture.
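
For reference, that would just use Docker's standard digest-pinning mechanism, something like this in docker-compose.yml (the digest below is a placeholder, not a real value):

image: pgautoupgrade/pgautoupgrade:16-alpine@sha256:<digest-for-your-architecture>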

@pallebone

It's ok, I'm happy to accept your decision, as it's your project and ultimately your reputation. It's also easy for me to change it myself, so it doesn't cause me a personal issue. I assumed it was an error, but if you are aware and happy, I am also happy.
