CAAS Charm Upgrade #11395
Conversation
!!build!!
Testing showed an error in show-status-log. Need to investigate.
Running a local `upgrade-charm` also failed. From the model logs:
Ahhhhh, does the mariadb-k8s charm have a memory limit, or is your system at its memory limit?
I have lots of memory free; no memory limits were explicitly set when deploying. I can do some more digging.
After sleeping on it, I know what this is now. The main process of the container is exiting before the signalling process, so the container runtime sends a SIGKILL to the process we are watching. I'll have a fix for this soon.
I'm pretty sure we'll need a follow-up PR to get the new local state attributes saved in the controller.
- Moved CAAS unit init into uniter and added resolver op. Removed caasunitinit worker.
- Removed noop upgrade operation.
- Uniter deploys own charm bundle to fix race conditions with caasoperator.
- Block action running on outdated charm.
- Support charm upgrade on CAAS workloads.
- WatchContainerStart can now watch containers with a regex pattern.
- Fix unit init process exit race condition for k8s.
Please provide the following details to expedite Pull Request review:
Checklist
Description of change
QA steps
Documentation changes
N/A
Bug reference
https://bugs.launchpad.net/juju/+bug/1866856