Database inconsistency between worker and api image for 2024.5.1 release? #40
I may try deploying against the latest rolling docker tag to see if that unblocks me. But it seems the release process is potentially flawed in allowing things to go out without being in lockstep on critical shared code, so I wanted to raise the question directly.
I am also seeing this behavior on my end. @trevjonez, what exactly did you end up pinning to?
I used the rolling tag from the May 8th build. Pulled and retagged to suit my needs.
Interesting. If I use
Did you verify all your migrations got run? I've had to do it manually from the api container every time I update. This round it had issues, and I had to edit the migration table and run it again to get it to go through, then finally undo the edit. There was a ton of trial and error over about an hour, so I can't say exactly which steps I took. I believe it was some error about an out-of-order dependent migration.
I do see some odd messages popping from the
I am a little hesitant to start messing with migrations - 🤞 someone from the team can chime in 😬.
Yeah, that was the one I hit as well. I believe what I did was rename the migration record to `0001_initial_temp`, then ran the migrations again to get a new error (something like it can't modify the user-measurements table because the edit had actually already run). Then I put it back as `0001_initial` and ran again.
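The rename trick described above amounts to editing Django's `django_migrations` bookkeeping table directly. Here is a minimal sketch of that edit against a throwaway SQLite database; the table layout matches Django's migration-history table, but the app name and timestamps are illustrative stand-ins, not the real codecov schema.

```python
# Sketch (assumptions: illustrative app/migration names) of the
# django_migrations table edit described above, using SQLite as a stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE django_migrations ("
    "id INTEGER PRIMARY KEY, app TEXT, name TEXT, applied TEXT)"
)
conn.execute(
    "INSERT INTO django_migrations (app, name, applied) "
    "VALUES ('rollouts', '0001_initial', '2024-05-08')"
)

# Step 1: rename the recorded migration so Django no longer sees it as applied
conn.execute(
    "UPDATE django_migrations SET name = '0001_initial_temp' "
    "WHERE app = 'rollouts' AND name = '0001_initial'"
)
# (here you would re-run `python manage.py migrate` and note the new error)

# Step 2: put the name back so the recorded history is consistent again
conn.execute(
    "UPDATE django_migrations SET name = '0001_initial' "
    "WHERE app = 'rollouts' AND name = '0001_initial_temp'"
)

row = conn.execute("SELECT app, name FROM django_migrations").fetchone()
print(row)  # ('rollouts', '0001_initial')
```

Editing this table only changes what Django *believes* has been applied; the actual schema changes stay in place, which is why the intermediate run errors when it tries to redo work that already happened.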
What worked for me:
The worker is crashing when it runs an SQL query with a field that does not exist, even though the migration (with the same name) for it has run. I'm assuming the migration with the same name was run by self-hosted-api, but with the correct migration content.

Looking at the worker image, I notice the field name in the bundled migration:

```
$ docker run -it --entrypoint cat docker.io/codecov/self-hosted-worker:24.5.1 /usr/local/lib/python3.12/site-packages/shared/django_apps/rollouts/migrations/0005_featureflag_is_active_featureflag_platform_and_more.py
```

`requirements.in` contains a reference to the right commit for the shared lib though:

```
$ docker run -it --entrypoint cat docker.io/codecov/self-hosted-worker:24.5.1 /worker/requirements.in
https://github.com/codecov/shared/archive/148b7ae3a6d4cdfc554ba9ca8b911c13e82d77b8.tar.gz#egg=shared
<snip>
```

Is this a packaging issue in your image build?
Pinning at
After updating my self-hosted instance and struggling through getting migrations to run, I am left with the following errors:
After digging around for a bit I found this commit that renamed the column:
codecov/shared@d86d466#diff-552087a68d49f71285c998a00138656ec63de61039c6bbac698e935c4d9d40d1
Then, looking at my own DB tables, I can see the migration was run.
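The mismatch here is between what `django_migrations` records and what the live schema actually contains. A quick sanity check is to look at both, which the sketch below does against a throwaway SQLite stand-in for the Postgres database; the table name `rollouts_featureflag` and the column `is_active` are illustrative guesses based on the migration filename, not the confirmed codecov schema.

```python
# Sketch (assumptions: illustrative table/column names) of checking that a
# migration is both recorded as applied AND that the column it should have
# created actually exists, using SQLite in place of Postgres.
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for the table touched by the rename commit
conn.execute("CREATE TABLE rollouts_featureflag (name TEXT, is_active INTEGER)")
conn.execute("CREATE TABLE django_migrations (app TEXT, name TEXT, applied TEXT)")
conn.execute(
    "INSERT INTO django_migrations VALUES "
    "('rollouts', '0005_featureflag_is_active_featureflag_platform_and_more', "
    "'2024-05-08')"
)

# 1. Is the migration recorded as applied?
applied = conn.execute(
    "SELECT 1 FROM django_migrations WHERE name LIKE '0005_%'"
).fetchone() is not None

# 2. Does the column the worker queries actually exist?
columns = [r[1] for r in conn.execute("PRAGMA table_info(rollouts_featureflag)")]

print(applied, "is_active" in columns)  # True True
```

If the first check passes but the second fails, the migration record and the schema have diverged, which matches the "field does not exist, but migration has run" symptom described in this thread (on Postgres you would query `information_schema.columns` instead of `PRAGMA table_info`).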

But it seems that possibly the 2024.5.1 docker images for worker and api were built using different versions of `shared`? (The worker repo isn't git tagged, so I can really only guess what commit the image was built from.)
https://github.com/codecov/worker/blob/5c7c8927010514c2e53f1b14728a57de9be3d102/requirements.txt#L375
https://github.com/codecov/codecov-api/blob/self-hosted-24.5.1/requirements.txt#L405
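Since both repos pin `shared` via an archive-URL line in their requirements files, a quick way to spot drift between the two images is to extract and compare the pinned commit hashes. A minimal sketch: the worker hash below is the one quoted earlier in this thread, while the api hash is a deliberately fake placeholder standing in for whatever the api image actually pins.

```python
# Sketch (assumption: the `a…a` api hash is a placeholder) for extracting and
# comparing the codecov/shared commit pinned in two requirements files.
import re

PIN_RE = re.compile(r"codecov/shared/archive/([0-9a-f]{40})\.tar\.gz#egg=shared")

def shared_commit(requirements_text: str):
    """Return the pinned codecov/shared commit hash, or None if not found."""
    match = PIN_RE.search(requirements_text)
    return match.group(1) if match else None

worker_reqs = (
    "https://github.com/codecov/shared/archive/"
    "148b7ae3a6d4cdfc554ba9ca8b911c13e82d77b8.tar.gz#egg=shared\n"
)
api_reqs = (
    "https://github.com/codecov/shared/archive/"
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.tar.gz#egg=shared\n"
)

worker_pin = shared_commit(worker_reqs)
api_pin = shared_commit(api_reqs)
print(worker_pin == api_pin)  # False when the two images drifted apart
```

Run against the real `requirements.txt` from each image (e.g. extracted with the `docker run --entrypoint cat` command shown earlier), a mismatch here would confirm the two images were built from different `shared` versions.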