3.2 into 3.3 #15881
Merged
Conversation
These versions of Ubuntu are no longer supported in Juju 3. This check is handled by seriesSelector; ensure that it is always used. A side effect is that this brings the bundle deployer in line with deploying charms via 'juju deploy'.
The following fixes an issue where, when rebinding the address while a worker is starting, dbApp can be nil. To fix the problem we check whether the underlying catacomb is dying; if so, we return that error. If it is not dying, we force the worker to try again with a new sentinel error. As all errors were previously fatal, we add a check so that if the error is ErrTryAgain, the runner will come around and attempt to start the worker again. On the next attempt, if the catacomb is dying it will correctly die; if dbApp is still nil it will keep retrying; otherwise it will create the worker. We're leaning into the way the runner works here to handle the case of a nil dbApp. Note that worker creation cannot be moved outside the start-worker function, as that opens up the possibility of a worker being created twice with only one of them being used.
The following just updates the error message to be more helpful when an error happens.
As a drive-by, make the const error a real const.
juju#15845 Backports juju#15780 to fix data race in dbaccessor.
juju#15852 Previous error message was ambiguous, especially with large bundles that have many applications/charmhub charms deployed. ## QA steps Unit tests and/or refresh bundle that downgrades a charmhub charm. ## Documentation changes N/A ## Bug reference https://bugs.launchpad.net/juju/+bug/1999700
juju#15854 A previous PR disabled the caasapplicationprovisioner to prevent it from dealing with controller concerns (since the controller app is a sidecar of the juju controller pods). This rolls that back partially to allow the caasapplicationprovisioner to work in a readonly mode, only updating status as it sees changes. ## QA steps Bootstrap k8s, switch to controller model, check status of juju controller app and unit is nice. ## Documentation changes N/A ## Bug reference N/A
juju#15849 This PR ensures: - the secret backend endpoint is read-only; - changing the backend name is disallowed if the backend is in use by any model; ## Checklist - [x] Code style: imports ordered, good names, simple structure, etc - [x] Comments saying why design decisions were made - [x] Go unit tests, with comments saying what you're testing - [ ] ~[Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing~ - [ ] ~[doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages~ ## QA steps ```sh juju add-secret-backend myvault vault --config ./vault.yaml juju model-config secret-backend=myvault juju deploy snappass-test juju exec --unit snappass-test/0 -- secret-add owned-by=easyrsa-app secret://025ca458-bb1c-4640-8ba4-15ba62fa7559/cih7fueffbas7akd0lng juju update-secret-backend myvault name=myvault2 ERROR cannot rename a secret backend that is in use juju update-secret-backend myvault endpoint=http://10.180.97.1:8201 ERROR invalid config for provider "vault": cannot change immutable field "endpoint" ``` ## Documentation changes No ## Bug reference No
juju#15855 A couple of secret CI fixes. Drive-by: we should ignore vault non-reachable network error during model teardown.
Occasionally we fail to remove the secondary controller in the cross_controller cmr test. Explicitly removing the SAAS before destroying helps to resolve this.
…_in_cmr_test juju#15857 Occasionally we fail to remove the secondary controller in the cross_controller cmr test. Explicitly removing the SAAS before destroying helps to resolve this. https://jenkins.juju.canonical.com/job/test-cmr-test-offer-consume-lxd/731/ ## Checklist - ~[ ] Code style: imports ordered, good names, simple structure, etc~ - [x] Comments saying why design decisions were made - ~[ ] Go unit tests, with comments saying what you're testing~ - [x] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing - ~[ ] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages~ ## QA steps ```sh ./main.sh -v -c lxd -p lxd cmr ```
juju#15861 Merges: - juju#15852 - juju#15857 No conflicts
The following adds some timeout + full status whilst it attempts to upgrade. We may need to crack open lxd to see the full server logs to see what's actually happening. For now, see if this helps.
…o-help-diagnose-issue juju#15862 The following adds some timeout + full status whilst it attempts to upgrade. We may need to crack open lxd to see the full server logs to see what's actually happening. For now, see if this helps. ## Checklist - [x] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing ## QA steps ```sh $ cd tests && ./main.sh -v upgrade test_upgrade_simplestream ``` I've run this multiple times and the output is the same. ``` ==> Checking for dependencies ==> Using Juju located at /home/simon/go/bin/juju ==> Running subtest test_upgrade_simplestream for upgrade suite ==> TEST BEGIN: upgrade (tmp.H1T) ==> Checking for dependencies ===> [ ] Running: simplestream metadata last stable ===> Using jujud version 3.2.2-ubuntu-amd64 ===> Testing against stable version 3.2.0 Finding agent binaries in /home/simon/go/src/github.com/juju/juju/tests/suites/upgrade/streams for stream released. | 17:19:30 INFO juju.cmd supercommand.go:56 running juju [3.2.2 1f8427f gc go1.20.3] | 17:19:30 INFO cmd cloudcredential.go:47 updating credential store | 17:19:31 INFO cmd authkeys.go:113 Adding contents of "/home/simon/.local/share/juju/ssh/juju_id_rsa.pub" to authorized-keys | 17:19:31 INFO cmd authkeys.go:113 Adding contents of "/home/simon/.ssh/id_ed25519.pub" to authorized-keys | Creating Juju controller "test-upgrade-stable-stream" on lxd/default | 17:19:31 INFO juju.cmd.juju.commands bootstrap.go:1026 combined bootstrap constraints: | 17:19:31 INFO cmd bootstrap.go:403 Loading image metadata | Looking for packaged Juju agent version 3.2.0 for amd64 | 17:19:31 INFO juju.environs.bootstrap tools.go:78 looking for bootstrap agent binaries: version=3.2.0 | 17:19:31 INFO juju.environs.bootstrap tools.go:80 found 1 packaged agent binaries | Located Juju agent version 3.2.0-ubuntu-amd64 at https://streams.canonical.com/juju/tools/agent/3.2.0/juju-3.2.0-linux-amd64.tgz | 17:19:31 INFO cmd bootstrap.go:580 Starting new 
instance for initial controller | To configure your system to better support LXD containers, please see: https://linuxcontainers.org/lxd/docs/master/explanation/performance_tuning/ | Launching controller instance(s) on lxd/default... 17:19:31 INFO juju.container.lxd container.go:294 starting new container "juju-1a89a5-0" (image "") 17:19:34 INFO juju.provider.lxd environ_broker.go:48 started instance "juju-1a89a5-0" | - juju-1a89a5-0 (arch=amd64) | 17:19:34 INFO juju.environs.bootstrap bootstrap.go:1005 newest version: 3.2.0 | 17:19:34 INFO juju.environs.bootstrap bootstrap.go:1012 failed to find 3.2.2 agent binaries, will attempt to use 3.2.0 | 17:19:34 INFO juju.environs.bootstrap bootstrap.go:1020 picked bootstrap agent binary version: 3.2.0 | Installing Juju agent on bootstrap instance | Waiting for address | Attempting to connect to 240.60.0.112:22 | Connected to 240.60.0.112 | 17:19:50 INFO juju.cloudconfig userdatacfg_unix.go:615 Fetching agent: curl -sSf --retry 10 -o $bin/tools.tar.gz <[https://streams.canonical.com/juju/tools/agent/3.2.0/juju-3.2.0-linux-amd64.tgz]> | Running machine configuration script... | Bootstrap agent now started | 17:20:32 INFO juju.juju api.go:340 API endpoints changed from [] to [240.60.0.112:17070] | Contacting Juju controller at 240.60.0.112 to verify accessibility... | 17:20:32 INFO juju.juju api.go:86 connecting to API addresses: [240.60.0.112:17070] | 17:20:36 INFO juju.api apiclient.go:702 connection established to "wss://240.60.0.112:17070/model/8ad7547d-3d62-46a0-8f02-c74da21a89a5/api" | Bootstrap complete, controller "test-upgrade-stable-stream" is now available | Controller machines are in the "controller" model | Now you can run | juju add-model <model-name> | to create a new model to deploy workloads. 
| 17:20:36 INFO cmd supercommand.go:555 command finished Added 'test-upgrade-stable' model on lxd/default with credential 'lxd' for user 'admin' Located charm "jameinel-ubuntu-lite" in charm-hub, revision 10 Deploying "jameinel-ubuntu-lite" from charm-hub charm "jameinel-ubuntu-lite", revision 10 in channel stable on ubuntu@20.04/stable [+] (attempt 0) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:20:40+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 1) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:20:45+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 2) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | 
.current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:20:50+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 3) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:20:56+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 4) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:01+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public 
address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 5) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:06+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 6) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:11+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 7) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version 
SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:17+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 8) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:22+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 9) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:27+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 10) 
polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:32+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 11) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:37+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 12) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:43+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | 
ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 13) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:48+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 14) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:53+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 [+] (attempt 15) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | 
.["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:21:58+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending pending ubuntu@20.04 Creating container [+] (attempt 16) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:22:04+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending juju-9d0eac-0 ubuntu@20.04 Container started [+] (attempt 17) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:22:09+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address 
Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending juju-9d0eac-0 ubuntu@20.04 Container started [+] (attempt 18) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:22:14+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending juju-9d0eac-0 ubuntu@20.04 Container started [+] (attempt 19) polling status for .applications | select((.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["juju-status"] | .current == "idle") and (.["ubuntu-lite"] | .units | .["ubuntu-lite/0"] | .["workload-status"] | .current != "error")) | keys[0] => ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:22:19+01:00 | | App Version Status Scale Charm Channel Rev Exposed Message | ubuntu-lite waiting 0/1 jameinel-ubuntu-lite stable 10 no waiting for machine | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0 waiting allocating 0 240.60.0.61 waiting for machine | | Machine State Address Inst id Base AZ Message | 0 pending 240.60.0.61 juju-9d0eac-0 ubuntu@20.04 Running [+] Completed polling status for ubuntu-lite | Model Controller Cloud/Region Version SLA Timestamp | test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.0 unsupported 17:22:25+01:00 | | App Version Status Scale Charm 
Channel Rev Exposed Message | ubuntu-lite 20.04 active 1 jameinel-ubuntu-lite stable 10 no ready | | Unit Workload Agent Machine Public address Ports Message | ubuntu-lite/0* active idle 0 240.60.0.61 ready | | Machine State Address Inst id Base AZ Message | 0 started 240.60.0.61 juju-9d0eac-0 ubuntu@20.04 Running ==> Current juju version 3.2.0 best version: 3.2.2 started upgrade to 3.2.2 [+] (attempt 0) polling machines Model Controller Cloud/Region Version SLA Timestamp controller test-upgrade-stable-stream lxd/default 3.2.2 unsupported 17:22:30+01:00 App Version Status Scale Charm Channel Rev Exposed Message controller active 1 juju-controller 3.2/stable 14 no Unit Workload Agent Machine Public address Ports Message controller/0* active idle 0 240.60.0.112 Machine State Address Inst id Base AZ Message 0 started 240.60.0.112 juju-1a89a5-0 ubuntu@22.04 Running test-upgrade-stable-stream:admin/test-upgrade-stable (no change) best version: 3.2.2 started upgrade to 3.2.2 [+] (attempt 1) polling machines Model Controller Cloud/Region Version SLA Timestamp test-upgrade-stable test-upgrade-stable-stream lxd/default 3.2.2 unsupported 17:22:51+01:00 App Version Status Scale Charm Channel Rev Exposed Message ubuntu-lite 20.04 active 1 jameinel-ubuntu-lite stable 10 no ready Unit Workload Agent Machine Public address Ports Message ubuntu-lite/0* active idle 0 240.60.0.61 ready Machine State Address Inst id Base AZ Message 0 started 240.60.0.61 juju-9d0eac-0 ubuntu@20.04 Running ===> [ ✔ ] Success: simplestream metadata last stable (233s) suites/upgrade/streams.sh: line 1: 2941430 Killed tail -f "${TEST_DIR}/${TEST_CURRENT}.log" 2> /dev/null ==> SKIP: Asked to skip stream (previous) tests ==> TEST DONE: upgrade (233s) ==> Cleaning up ====> Cleaning up jujus ====> Introspection gathering ====> Introspection gathered ====> Removing offers ====> Removed offers ====> Destroying juju (test-upgrade-stable-stream) | WARNING This command will destroy the "test-upgrade-stable-stream" 
controller and all its resources | Destroying controller | Waiting for model resources to be reclaimed | Waiting for 1 model, 1 machine, 2 applications | Waiting for 1 model, 1 machine | Waiting for 1 model, 1 machine | Waiting for 1 model, 1 machine | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | Waiting for 1 model | All models reclaimed, cleaning up controller machines ====> Destroyed juju (test-upgrade-stable-stream) ====> Completed cleaning up jujus ====> Running clean up func: remove_upgrade_tools ==> Removing tools ==> Removed tools ====> Finished cleaning up func: remove_upgrade_tools ====> Running clean up func: remove_upgrade_metadata ==> Removing metadata ==> Removed metadata ====> Finished cleaning up func: remove_upgrade_metadata ====> Running clean up func: kill_server ==> Killing server ==> Killed server (PID is 2941572) ====> Finished cleaning up func: kill_server ==> Test result: success ==> Tests Removed: /home/simon/go/src/github.com/juju/juju/tests/tmp.H1T ==> TEST COMPLETE ```
juju#15860 This PR makes the role binding update faster if no update needs to be done.
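A minimal sketch of that fast path, assuming a DeepEqual-style comparison of the existing and desired bindings (the `roleBinding` type and `ensureRoleBinding` name here are hypothetical, not the k8s provider's actual code):

```go
package main

import (
	"fmt"
	"reflect"
)

// roleBinding is a simplified stand-in for a Kubernetes RoleBinding.
type roleBinding struct {
	Subjects []string
	RoleRef  string
}

// ensureRoleBinding skips the API update when the existing binding already
// matches the desired one, avoiding a needless round-trip.
func ensureRoleBinding(existing, desired roleBinding) (updated bool) {
	if reflect.DeepEqual(existing, desired) {
		return false // nothing to do
	}
	// issue the update call here
	return true
}

func main() {
	a := roleBinding{Subjects: []string{"sa:controller"}, RoleRef: "admin"}
	fmt.Println(ensureRoleBinding(a, a)) // no-op: skipped
	b := a
	b.RoleRef = "view"
	fmt.Println(ensureRoleBinding(a, b)) // changed: updated
}
```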
juju#15864 juju#15328 made a misstep in making the upgrader only run when the model is alive, since many things hang off the upgraded flag, including the cleaner worker, which needs to run during model destruction. ## QA steps `./main.sh -v -s '"test_block_commands,test_display_clouds,test_model_config,test_model_defaults,test_unregister"' cli test_local_charms` ## Documentation changes N/A ## Bug reference https://jenkins.juju.canonical.com/job/test-cli-test-local-charms-lxd/1336/consoleText
…ted_series_in_bundles juju#15825 These versions of Ubuntu are no longer supported in Juju 3. This check is handled by seriesSelector; ensure that it is always used. A side effect is that this brings the bundle deployer in line with deploying charms via 'juju deploy'. ## Checklist - [x] Code style: imports ordered, good names, simple structure, etc - ~[ ] Comments saying why design decisions were made~ - [x] Go unit tests, with comments saying what you're testing - ~[ ] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing~ - ~[ ] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages~ ## QA steps Verify we can no longer deploy bundles with series bionic ``` $ cat bundle1.yaml name: tiny applications: ubuntu-plus: charm: ubuntu scale: 1 series: bionic $ juju deploy ./bundle1.yaml Located charm "ubuntu" in charm-hub, channel stable Executing changes: - upload charm ubuntu from charm-hub for series bionic with architecture=amd64 ERROR cannot deploy bundle: failed to upload charm "ubuntu": series: bionic not supported ``` Verify we can still properly deploy complex bundles ```sh juju deploy kubeflow ``` ## Bug reference https://bugs.launchpad.net/juju/+bug/2025163
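The validation described above can be sketched as follows. `supportedSeries` and `validateBundleSeries` are illustrative names, not Juju's actual seriesSelector API; the point is that every application's series is checked up front so an unsupported series fails the whole bundle deploy.

```go
package main

import "fmt"

// supportedSeries is an illustrative stand-in for what a series selector
// would consult in Juju 3; bionic and earlier are deliberately absent.
var supportedSeries = map[string]bool{"focal": true, "jammy": true}

// validateBundleSeries rejects any bundle application whose series is
// no longer supported, before any charm is uploaded.
func validateBundleSeries(appSeries map[string]string) error {
	for app, s := range appSeries {
		if !supportedSeries[s] {
			return fmt.Errorf("cannot deploy bundle: application %q: series: %s not supported", app, s)
		}
	}
	return nil
}

func main() {
	err := validateBundleSeries(map[string]string{"ubuntu-plus": "bionic"})
	fmt.Println(err != nil) // bionic is rejected
	fmt.Println(validateBundleSeries(map[string]string{"ubuntu-plus": "jammy"}) == nil)
}
```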
juju#15866 Merges: - juju#15852 - juju#15854 - juju#15849 - juju#15855 - juju#15857 - juju#15861 - juju#15825 One conflict in `cmd/juju/application/deployer/bundlehandler.go`
juju#15863 Merge 2.9 -> 3.1: - juju#15860 Conflicts: - caas/kubernetes/provider/operator_test.go - caas/kubernetes/provider/package_test.go - caas/kubernetes/provider/rbac_test.go - caas/kubernetes/provider/k8s_test.go
…n-instance-poller Fix TestManageModelRunsInstancePoller by extending timeout.
AKS cleanup appears to now work with destroy-controller.
juju#15872 If you use `juju ssh` too fast after creating a machine, it may have addresses, but not yet have reported its host keys back to the controller. This allows the ssh command to wait a reasonable time for those keys. ## QA steps - Bootstrap LXD - Create model config yaml file ``` $ cat wrench.yaml cloudinit-userdata: | write_files: - content: | delay path: /var/lib/juju/wrench/hostkeyreporter ``` - `$ juju model-config wrench.yaml` - `$ juju add-machine --series jammy` - `$ juju ssh 0` - ssh should take up to a minute to resolve, but should connect successfully or return `ERROR retrieving SSH host keys for "0": keys not found` ## Documentation changes N/A ## Bug reference https://jenkins.juju.canonical.com/job/test-controller-test-metrics-lxd/2/consoleText
juju#15873 Extend TestEnvironDestroyTimeoutForce timeout as 1 millisecond is too short, as the code will consistently skip Destroy and go straight to force destruction. ## QA steps - install golang.org/x/tools/cmd/stress - `cd worker/undertaker` - `go test -race -c` - `stress ./undertaker.test -check.f=TestEnvironDestroyTimeoutForce` ## Documentation changes N/A ## Bug reference https://jenkins.juju.canonical.com/job/unit-tests-race-amd64/1346/consoleText
Conflicts: - cmd/juju/commands/main.go - cmd/juju/ssh/debugcode.go - cmd/juju/ssh/debugcode_test.go - cmd/juju/ssh/debughooks.go - cmd/juju/ssh/debughooks_test.go - cmd/juju/ssh/scp.go - cmd/juju/ssh/scp_unix_test.go - cmd/juju/ssh/ssh_container.go - cmd/juju/ssh/ssh_machine.go - cmd/juju/ssh/ssh_machine_test.go - cmd/juju/ssh/ssh_unix_test.go
juju#15874 Forward ports:
- juju#15864
- juju#15870
- juju#15871
- juju#15872
- juju#15873

Conflicts:
- cmd/juju/commands/main.go
- cmd/juju/ssh/debugcode.go
- cmd/juju/ssh/debugcode_test.go
- cmd/juju/ssh/debughooks.go
- cmd/juju/ssh/debughooks_test.go
- cmd/juju/ssh/scp.go
- cmd/juju/ssh/scp_unix_test.go
- cmd/juju/ssh/ssh_container.go
- cmd/juju/ssh/ssh_machine.go
- cmd/juju/ssh/ssh_machine_test.go
- cmd/juju/ssh/ssh_unix_test.go
juju#15875 Forward ports:
- juju#15860
- juju#15864
- juju#15863
- juju#15870
- juju#15871
- juju#15872
- juju#15873
- juju#15874

No conflicts.
When merging forward a bug fix from 3.1 which disallows bionic from being deployed in bundles, this led to many unit tests failing, since they were attempting to deploy with bionic. Fix this by patching WorkloadBases to return jammy -> xenial as supported bases. An equivalent patch was already done to WorkloadSeries, but that is no longer used. This fixed most of the tests, except for a few which would now default to picking bionic instead of focal.
This allows us to patch in our own workload bases for tests. The core package should be as generic as possible, so injecting dependencies like this should be done as much as possible.
SimonRichardson approved these changes Jul 7, 2023
hmlanigan approved these changes Jul 7, 2023
/merge
Merged
jujubot added a commit that referenced this pull request Jul 11, 2023
#15896 Forward ports:
- #15845
- #15725
- #15852
- #15854
- #15849
- #15855
- #15857
- #15861
- #15862
- #15860
- #15864
- #15825
- #15866
- #15863
- #15870
- #15871
- #15872
- #15873
- #15874
- #15876
- #15875
- #15881
- #15727
- #15883
- #15884
- #15880
- #15879
- #15886
- #15887
- #15877
- #15888
- #15893
- #15894

Conflicts:
- cmd/juju/ssh/debugcode_test.go
- cmd/juju/ssh/debughooks_test.go
- cmd/juju/ssh/scp_unix_test.go
- cmd/juju/ssh/ssh_machine.go
- cmd/juju/ssh/ssh_machine_test.go
- cmd/juju/ssh/ssh_unix_test.go
- worker/dbaccessor/worker.go
- caas/kubernetes/provider/k8s_test.go
- caas/kubernetes/provider/operator_test.go
- caas/kubernetes/provider/package_test.go
- caas/kubernetes/provider/rbac.go
- cmd/juju/application/deployer/bundlehandler.go
- cmd/juju/application/deployer/bundlehandler_test.go
- cmd/jujud/agent/model/manifolds.go
- core/bundle/changes/changes.go
- worker/caasapplicationprovisioner/application.go
- worker/caasapplicationprovisioner/application_test.go
- worker/caasapplicationprovisioner/mock_test.go
Merges:

One substantial conflict in `cmd/juju/application/deployer/bundlehandler.go` and `cmd/juju/application/deployer/bundlehandler_test.go`, resulting in failing unit tests. The conflict relates to this PR: these were fixed by the two top commits.
## QA Steps

- Ensure all unit tests pass
- Ensure we can deploy a complex bundle