Merge 3.2 #15719

Merged 121 commits into juju:3.3 on Jun 9, 2023
Conversation

tlm and others added 30 commits March 31, 2023 10:20
With Juju moving to sidecar charms running under Pebble, it is
necessary to make sure that when units for an application are being
added or removed, their teardown hooks run to completion.

Once all units needing to be removed have become dead, Juju will issue the scale-down command to Kubernetes.

This commit does not deal with units that can become dead through direct control of Kubernetes scale, bypassing Juju.

It won't stop new units from coming up before the desired scale has been reached. This will be rectified in a separate commit.
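
Below is a minimal sketch of the ordering this commit enforces: the scale-down is only issued to Kubernetes once every unit marked for removal has become dead. The types and function names are illustrative, not the actual Juju worker API.

```go
// Hypothetical sketch, not the real caasapplicationprovisioner code.
package main

import "fmt"

type Life string

const Dead Life = "dead"

type Unit struct {
	Name string
	Life Life
}

// allDead reports whether every unit pending removal has finished its
// teardown hooks and transitioned to dead.
func allDead(units []Unit) bool {
	for _, u := range units {
		if u.Life != Dead {
			return false
		}
	}
	return true
}

func maybeScaleDown(pending []Unit, desiredScale int) {
	if !allDead(pending) {
		// Teardown hooks are still running; do not touch the StatefulSet yet.
		return
	}
	// Only now ask Kubernetes to scale down to the desired replica count.
	fmt.Printf("scaling StatefulSet to %d replicas\n", desiredScale)
}

func main() {
	maybeScaleDown([]Unit{{Name: "app/1", Life: Dead}}, 1)
}
```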

Bug: https://bugs.launchpad.net/juju/+bug/1951415
Signed-off-by: Max Asnaashari <max.asnaashari@canonical.com>
run_refresh_channel_no_new_revision is intended to test charms are
correctly refreshed to a new channel even if the revision doesn't
change.

Use a test charm purpose-built for this, as we cannot guarantee this
will be the case for any other charm.
…tu_with_juju_qa_fixed_rev

juju#15636

run_refresh_channel_no_new_revision is intended to test charms are
correctly refreshed to a new channel even if the revision doesn't
change.

Use a test charm purpose built for this, as we cannot guarantee this
will be the case for any other charm

## Checklist

- [x] Code style: imports ordered, good names, simple structure, etc
- ~[ ] Comments saying why design decisions were made~
- ~[ ] Go unit tests, with comments saying what you're testing~
- [x] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing
- ~[ ] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages~

## QA steps

```sh
./main.sh -v -c aws -p ec2 refresh test_basic
```
This is because the new psql charm does not have TLS built in. The fix is simple: integrate psql with a certificates charm.
However, Mattermost should not crash its container; it should go to the blocked state with a clear error message to the user until TLS is available.
Thanks to https://bugs.launchpad.net/charm-k8s-mattermost/+bug/1997540

Fixes static-analysis (formatting).
These charms are managed by the Juju team, and are a better fit for CI goals in the long run.

Removes unused dummy-sink-k8s charm.
jujubot and others added 24 commits June 2, 2023 07:44
juju#15654

This PR fixes secret backend refCount issues for:
- model import (increment on the target controller) / export (decrement on the source controller) during model migration (sketched below);
- model removal;
- model migration abort;
- secret drain;

drive-by:
- added upgrade steps to create at least one refCount for each secret backend;
- fixed a secret backend model config issue for model migration (the backend ID is now included in the model data so the import process can handle a potential backend renaming in the target controller);
- fixed a potential data race in the secret drain worker that could happen if the secret backend changed too frequently in a short time period.
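
A toy, hypothetical illustration of the import/export refCount bookkeeping described above; the real implementation lives in Juju's state layer and is backed by the mongo globalRefcounts documents shown in the QA steps, so these types and names are placeholders:

```go
// Toy sketch, not Juju's state code.
package main

import "fmt"

type backendRefCounts map[string]int

// modelExported runs on the source controller once the model has been exported.
func (r backendRefCounts) modelExported(backendID string, revisions int) {
	r[backendID] -= revisions
}

// modelImported runs on the target controller once the model has been imported.
func (r backendRefCounts) modelImported(backendID string, revisions int) {
	r[backendID] += revisions
}

func main() {
	source := backendRefCounts{"vault-backend": 8}
	target := backendRefCounts{"vault-backend": 0}

	// Migrating a model that holds 8 secret revisions in the backend.
	source.modelExported("vault-backend", 8)
	target.modelImported("vault-backend", 8)

	fmt.Println("source refcount:", source["vault-backend"]) // 0
	fmt.Println("target refcount:", target["vault-backend"]) // 8
}
```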

## Checklist

- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing
- [ ] ~[Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing~
- [ ] ~[doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages~

## QA steps

```sh
juju add-secret-backend myvault vault --config ./vault.yaml

juju deploy snappass-test

for i in {1..5}; do juju exec --unit snappass-test/0 -- secret-add --owner unit owned-by=easyrsa/0; done

for i in {1..5}; do juju exec --unit snappass-test/0 -- secret-add owned-by=easyrsa; done

juju model-config secret-backend=myvault

juju model-config secret-backend
myvault

export backend_id=$(juju show-secret-backend myvault | yq .myvault.id)

juju bootstrap microk8s k2

juju add-secret-backend myvault1 vault --config ./vault.yaml --import-id $backend_id

juju switch k1:t1

juju migrate k1:t1 k2

juju switch k2:t1

juju model-config secret-backend
myvault1

# check refCount in mongo on the source controller;
juju:PRIMARY> db.globalRefcounts.find({_id: "secretbackend#revisions#64782241fe5c050027770143"}).pretty()
{
 "_id" : "secretbackend#revisions#64782241fe5c050027770143",
 "refcount" : 0,
 "txn-revno" : NumberLong(11)
}

# check refCount in mongo on the target controller;
juju:PRIMARY> db.globalRefcounts.find({_id: "secretbackend#revisions#64782241fe5c050027770143"}).pretty()
{
 "_id" : "secretbackend#revisions#64782241fe5c050027770143",
 "refcount" : 8,
 "txn-revno" : NumberLong(3)
}

juju remove-secret-backend myvault1
ERROR backend "myvault1" still contains secret content

juju destroy-model -y --debug --destroy-storage k2:t1

# check refCount in mongo on target controller again;
juju:PRIMARY> db.globalRefcounts.find({_id: "secretbackend#revisions#64782241fe5c050027770143"}).pretty()
{
 "_id" : "secretbackend#revisions#64782241fe5c050027770143",
 "refcount" : 0,
 "txn-revno" : NumberLong(3)
}

juju remove-secret-backend myvault1

```

## Documentation changes

No

## Bug reference

No
Conflicts:
- cmd/containeragent/unit/manifolds.go
- cmd/containeragent/unit/manifolds_test.go
- tests/suites/machine/machine.sh
- tests/suites/refresh/refresh.sh
juju#15692

The consumer secrets watcher used in the remote relations worker for cmr is broken due to fixes made to the offer-side remote application proxy. The offer uuid is no longer stored on the proxy, as it must be determined from the relation. To that end, the relation tag is now passed when setting up the watcher. A small cleanup of the auth context is also done, requiring that the source model and offer uuids be passed into the auth calls.

To allow for backwards compatibility with older controllers in a multi-controller cmr scenario, we extract the offer uuid from the macaroon.
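
The lookup order described above, sketched with placeholder types (Relation, Macaroon and the function are illustrative, not Juju's actual cross-model API):

```go
// Hypothetical sketch of the fallback to the macaroon for older controllers.
package main

import (
	"errors"
	"fmt"
)

type Relation struct {
	Tag       string
	OfferUUID string
}

type Macaroon struct {
	DeclaredOfferUUID string
}

// offerUUIDForRelation prefers the relation-derived offer uuid and falls back
// to the value declared in the macaroon when talking to an older controller.
func offerUUIDForRelation(rel *Relation, mac *Macaroon) (string, error) {
	if rel != nil && rel.OfferUUID != "" {
		return rel.OfferUUID, nil
	}
	if mac != nil && mac.DeclaredOfferUUID != "" {
		return mac.DeclaredOfferUUID, nil
	}
	return "", errors.New("offer uuid not found on relation or macaroon")
}

func main() {
	uuid, _ := offerUUIDForRelation(&Relation{Tag: "relation-0"}, &Macaroon{DeclaredOfferUUID: "deadbeef"})
	fmt.Println(uuid)
}
```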

## QA steps

create a model for an offer

```
juju deploy juju-qa-dummy-source
juju offer dummy-source:sink
juju config dummy-source token=foo
```

create a model for the consumer

```
juju deploy juju-qa-dummy-sink
juju relate dummy-sink controller.dummy-source
```

check that the apps are idle and active

create a secret in the offering model

```
juju exec --unit dummy-source/0 -- secret-add foo=bar
juju exec --unit dummy-source/0 -- secret-grant -r 0 secret://dcbb8270-42ff-4d15-8e42-843a6a8e49d8/chsn02ip43ljshc2j7j0
```

check that it can be read

```
juju exec --unit dummy-sink/0 -- secret-get secret://dcbb8270-42ff-4d15-8e42-843a6a8e49d8/chsn02ip43ljshc2j7j0
```

update the secret

```
juju exec --unit dummy-source/0 -- secret-set secret://dcbb8270-42ff-4d15-8e42-843a6a8e49d8/chsn02ip43ljshc2j7j0 foo=bar2
```

check the consuming charm has run the secret-changed hook and can see the new value

```
juju show-status-log dummy-sink/0
juju exec --unit dummy-sink/0 -- secret-get secret://dcbb8270-42ff-4d15-8e42-843a6a8e49d8/chsn02ip43ljshc2j7j0 --peek
```


Also deploy a 3.1.2 controller, add a consuming app, and relate it to the offer.

## Bug reference

https://bugs.launchpad.net/bugs/2021969
juju#15693

Forward ports:
- juju#15636
- juju#15650
- juju#15652
- juju#15653
- juju#15633
- juju#15657
- juju#15660
- juju#15661
- juju#15662
- juju#15663
- juju#15313
- juju#15542
- juju#15674

Conflicts:
- cmd/containeragent/unit/manifolds.go
- cmd/containeragent/unit/manifolds_test.go
- tests/suites/machine/machine.sh
- tests/suites/refresh/refresh.sh
juju#15709

When creating the hook content for secret hooks, we were only setting the revision if the hook was secret-remove. However, it also needs to be set for secret-expired.

The logic was a little wrong: it has been changed so that if the hook info has a revision set, it will always be used for any secret hook. We were also incorrectly setting the revision for the secret-changed hook, so this has been fixed.
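
A minimal sketch of the corrected rule, using placeholder types rather than the real uniter hook-context code; the companion fix is simply that secret-changed hook info no longer carries a revision:

```go
// Placeholder types, not the actual uniter code.
package main

import (
	"fmt"
	"strconv"
)

type secretHookInfo struct {
	Kind     string // "secret-changed", "secret-remove", "secret-expired", ...
	URI      string
	Revision int // 0 means "not set"
}

// hookEnv builds the secret-related environment for a hook run. If the queued
// hook info carries a revision, it is used for any secret hook; previously
// only secret-remove set JUJU_SECRET_REVISION, so secret-expired missed out.
func hookEnv(info secretHookInfo) map[string]string {
	env := map[string]string{"JUJU_SECRET_URI": info.URI}
	if info.Revision > 0 {
		env["JUJU_SECRET_REVISION"] = strconv.Itoa(info.Revision)
	}
	return env
}

func main() {
	expired := secretHookInfo{Kind: "secret-expired", URI: "secret:abc", Revision: 2}
	fmt.Println(hookEnv(expired))
}
```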

## QA steps

Deploy a charm and create a secret which expires in a minute.
Echo the JUJU_SECRET_REVISION env var in the secret-expired hook and see that it is set correctly.

## Bug reference

https://bugs.launchpad.net/juju/+bug/2023120
juju#15701

Upgrading from podspec charms to sidecar charms can be problematic when the unit numbers don't match the ordinal pod numbers. This one-time change forces users to scale the application to 0 before performing this particular charm upgrade. This has the side benefit of surfacing the previous behaviour where the application would be unavailable during the charm upgrade from podspec to sidecar; this is now obvious because the admin must first scale the application down.
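
A rough sketch of the guard this change introduces, with hypothetical names (the real check lives in the refresh code path on the apiserver side):

```go
// Hypothetical sketch of the podspec-to-sidecar refresh guard.
package main

import (
	"errors"
	"fmt"
)

type charmKind int

const (
	podSpecCharm charmKind = iota
	sidecarCharm
)

// checkPodSpecToSidecarRefresh rejects the refresh until the application has
// been scaled to zero and all its units have gone away.
func checkPodSpecToSidecarRefresh(current, requested charmKind, unitCount int) error {
	if current == podSpecCharm && requested == sidecarCharm && unitCount > 0 {
		return errors.New("upgrading from a PodSpec style charm to a Sidecar charm requires the application to be scaled down to 0 units first")
	}
	return nil
}

func main() {
	fmt.Println(checkPodSpecToSidecarRefresh(podSpecCharm, sidecarCharm, 1)) // error
	fmt.Println(checkPodSpecToSidecarRefresh(podSpecCharm, sidecarCharm, 0)) // <nil>
}
```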

## QA steps

```
$ juju bootstrap microk8s
$ juju add-model a
$ juju deploy oidc-gatekeeper --revision 176 --channel ckf-1.7/stable
$ # wait for oidc-gatekeeper to settle down
$ juju remove-application oidc-gatekeeper
$ # wait for oidc-gatekeeper to disappear
$ juju deploy oidc-gatekeeper --revision 176 --channel ckf-1.7/stable
$ # wait for oidc-gatekeeper to settle down
$ juju refresh oidc-gatekeeper --channel latest/edge
Added charm-hub charm "oidc-gatekeeper", revision 213 in channel latest/edge, to the model
ERROR Upgrading from an older PodSpec style charm to a newer Sidecar charm requires that
the application be scaled down to 0 units.

Before refreshing the application again, you must scale it to 0 units and wait for
all those units to disappear before continuing.

 juju scale-application oidc-gatekeeper 0
$ juju scale-application oidc-gatekeeper 0
$ # wait for oidc-gatekeeper units to disappear
$ juju refresh oidc-gatekeeper --channel latest/edge
$ juju scale-application oidc-gatekeeper 1
$ # app should be upgraded
```

## Documentation changes

Document that when upgrading from a podspec to a sidecar charm, the application needs to be scaled to 0 first.

## Bug reference

https://bugs.launchpad.net/juju/+bug/2023117
…est-deploy-bundles-aws-in-3-1

juju#15681

This PR updates the test environment by replacing the usage of third-party charms with Juju-team-managed charms.

## Checklist


- [x] Code style: imports ordered, good names, simple structure, etc
- [ ] Comments saying why design decisions were made

## QA steps

```sh
cd tests
./main.sh -v -p ec2 deploy run_deploy_cmr_bundle
```
…delFirewaller_openstack

juju#15573

This work has already been done for aws. Add OpenStack to the list of
providers which support this.
juju#15329

As with aws, model ssh ingress is now managed by the `ssh-allow` model config item.
Also, api-port is now only opened on the controller model.

This was mostly trivial, and involved removing a lot of responsibilities
from the OpenStack provider.

This has involved updating the go-goose dependency.

Also, as a flyby, fix a bug in config: ssh-allow was defaulting to the
wrong value if the entry was absent. A test has been written for this case as well.
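
A small sketch of the flyby fix, assuming a plain map-based config accessor; the key name and the default of "0.0.0.0/0,::/0" come from this PR's QA output, everything else is illustrative rather than Juju's config code:

```go
// Illustrative only: fall back to the documented default when ssh-allow is absent.
package main

import "fmt"

const sshAllowDefault = "0.0.0.0/0,::/0"

func sshAllow(cfg map[string]string) string {
	if v, ok := cfg["ssh-allow"]; ok && v != "" {
		return v
	}
	// Entry absent: use the default rather than returning an empty (wrong) value.
	return sshAllowDefault
}

func main() {
	fmt.Println(sshAllow(map[string]string{}))                              // default
	fmt.Println(sshAllow(map[string]string{"ssh-allow": "192.168.0.0/24"})) // configured
}
```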

## Checklist

- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing
- [ ] [Integration tests](https://github.com/juju/juju/tree/develop/tests), with comments saying what you're testing
- [x] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages

## QA steps

### Verify ssh-allow

Deploy a controller to openstack, add a model and add a machine
```
$ juju bootstrap openstack stack
$ juju add-model m
$ juju add-machine
```

Then find the model-level security group name, put it in an envvar. For me:
```
$ export MODEL_GROUP=juju-83a82f66-bfd4-4bb0-812a-e3edfb67d64d-57d8be23-4f99-4de0-8946-bb9e89d21f78
```

And now verify ssh-allow default settings
```
$ juju model-config ssh-allow
0.0.0.0/0,::/0
$ openstack security group show $MODEL_GROUP -f json | jq -r '.rules[] | select(.port_range_min == 22) | .remote_ip_prefix'
0.0.0.0/0
::/0
```

Now configure ssh-allow
```
$ juju model-config ssh-allow="192.168.0.0/24"
$ openstack security group show $MODEL_GROUP -f json | jq -r '.rules[] | select(.port_range_min == 22) | .remote_ip_prefix'
192.168.0.0/24
```

Add a machine to verify this doesn't reset things
```
$ juju model-config ssh-allow="192.168.0.0/24"
$ openstack security group show $MODEL_GROUP -f json | jq -r '.rules[] | select(.port_range_min == 22) | .remote_ip_prefix'
192.168.0.0/24
```

Add a new model pre-configured with ssh-allow to check the config comes into effect even if no machines are present when it's set. (You will need to reset `MODEL_GROUP`.)
```
$ juju add-model m2 --config ssh-allow="192.168.2.0/24"
$ juju add-machine
(wait)
$ openstack security group show $MODEL_GROUP -f json | jq -r '.rules[] | select(.port_range_min == 22) | .remote_ip_prefix'
192.168.0.0/24
```

### Verify multiple machines can be added to the same model
```
$ juju add-model m
$ juju add-machine
(wait)
$ juju add-machine
(wait)
$ juju status
Model  Controller  Cloud/Region           Version      SLA          Timestamp
m      mstack      microstack/microstack  3.2-beta4.1  unsupported  14:13:42+01:00

Machine  State    Address         Inst id                               Base          AZ    Message
0        started  192.168.222.61  9b7c14e8-dd6f-4cb3-ad19-0ffa8e852f48  ubuntu@22.04  nova  ACTIVE
1        started  192.168.222.54  5a51c5d4-af57-4231-b3d9-afc28736467a  ubuntu@22.04  nova  ACTIVE
```

### Verify `juju expose` still exposes
```
$ juju deploy postgresql
(wait)
$ juju expose postgresql
$ openstack security group show $MACHINE_GROUP
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2023-04-25T13:14:34Z |
| description | juju group |
| id | 1069e8a6-a3b1-4aee-8471-d02f48d395e1 |
| location | cloud='', project.domain_id=, project.domain_name='default', project.id='8910e2a31e8a4834a266bfd5dc99d80c', project.name='admin', region_name='', zone= |
| name | juju-20810b78-b20d-4543-837e-9715be4fcda1-01de2e1c-76a3-40a0-864f-694e16e2164b-2 |
| project_id | 8910e2a31e8a4834a266bfd5dc99d80c |
| revision_number | 3 |
| rules | created_at='2023-04-25T13:14:34Z', direction='egress', ethertype='IPv6', id='11996358-35f0-4316-90f4-9eef38a623fc', updated_at='2023-04-25T13:14:34Z' |
| | created_at='2023-04-25T13:14:34Z', direction='egress', ethertype='IPv4', id='22a74304-8426-42cf-b93b-2a8a8b85357e', updated_at='2023-04-25T13:14:34Z' |
| | created_at='2023-04-25T13:18:28Z', direction='ingress', ethertype='IPv4', id='a847c482-a5a7-4935-af77-dcd52267a652', port_range_max='5432', port_range_min='5432', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2023-04-25T13:18:28Z' |
| | created_at='2023-04-25T13:18:28Z', direction='ingress', ethertype='IPv6', id='ae88d48b-53ff-4cbb-94a4-9fafa9ae4e9a', port_range_max='5432', port_range_min='5432', protocol='tcp', remote_ip_prefix='::/0', updated_at='2023-04-25T13:18:28Z' |
| stateful | True |
| tags | [] |
| updated_at | 2023-04-25T13:18:28Z |
+-----------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```

### Verify api-port
```
$ juju bootstrap openstack openstack --config api-port=17777
(wait)
$ openstack security group show $MODEL_GROUP -f json | jq -r '.rules[] | select(.port_range_min == 17777) | .remote_ip_prefix'
0.0.0.0/0
```

### Verify autocert-dns-name
```
$ juju bootstrap openstack openstack --config autocert-dns-name=example.com
(wait)
$ openstack security group show $MODEL_GROUP -f json | jq -r '.rules[] | select(.port_range_min == 80) | .remote_ip_prefix'
0.0.0.0/0
```

### Test with networks with port_security disabled

Create some OpenStack networks
```
$ openstack network create net1
$ openstack network create net2 --disable-port-security
```

Bootstrap to a network with port_security disabled
```
juju bootstrap openstack openstack --config network=net2
```

Deploy a machine to net2
```
juju add-model m --config network=net2
juju add-machine
```
Verify the model sec group is not created

Change the config option to net1 and deploy another machine
```
juju model-config network=net1
juju add-machine
```
This should create the model sec group, attach it to the instance, and configure ssh to be open to the world

Change network back and add another machine
```
juju model-config network=net2
juju add-machine
```
This machine shouldn't be attached to a sec group

Then, reconfigure the model's ssh-allow
```
juju model-config ssh-allow=192.168.0.0/24
```
Then verify the model's sec group is re-configured
Conflicts:
- apiserver/errors/errors.go
- apiserver/facades/client/application/application.go
- apiserver/facades/client/application/application_unit_test.go
- apiserver/restrict_newer_client_test.go
- go.mod
- go.sum
- scripts/win-installer/setup.iss
- snap/snapcraft.yaml
- state/migration_import_test.go
- state/mocks/description_mock.go
- version/version.go
juju#15710

Forward ports:
- juju#15676
- juju#15672
- juju#15683
- juju#15673
- juju#15677
- juju#15691
- juju#15701

Conflicts:
- apiserver/errors/errors.go
- apiserver/facades/client/application/application.go
- apiserver/facades/client/application/application_unit_test.go
- apiserver/restrict_newer_client_test.go
- go.mod
- go.sum
- scripts/win-installer/setup.iss
- snap/snapcraft.yaml
- state/migration_import_test.go
- state/mocks/description_mock.go
- version/version.go
juju#15714

Fixes a cross model relations corner case:

Model 1
app A
offer app A

Model 2
app B
app C
offer app C

Model 1
relate A->C

Model 2
relate B->A

The unit relation hooks for C do not fire.
The root cause is that only one cmr token for app A is created and used for both the offer relation and consumer relation. So model 2 gets a duplicate remote entity token and the resolution of token to entity tag fails for the C->A relation.

The fix is to create the token for offering relations keyed on application offer tag, not application tag. This allows the same app to be in both an offering and consuming relation to a given model since different tokens are created for the different roles of the app.

There was also an unused app token parameter in the ingress address change watcher which was removed. This is lucky as it avoids the need to look up offers in the firewaller worker.
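
An illustrative (not actual Juju) sketch of why keying the exported token on the application offer tag avoids the duplicate-token collision described above; the tag strings and helper are placeholders:

```go
// Toy sketch of remote entity token keying.
package main

import "fmt"

// exportEntity records a token for a local entity exported to a remote model.
// It fails if a token already exists for the key.
func exportEntity(tokens map[string]string, key, token string) bool {
	if _, exists := tokens[key]; exists {
		return false // duplicate key: the second relation gets no distinct token
	}
	tokens[key] = token
	return true
}

func main() {
	tokens := map[string]string{}

	// Old behaviour: both the offering and consuming relation of app A keyed
	// on the application tag, so the second export collided.
	fmt.Println(exportEntity(tokens, "application-A", "token-1")) // true
	fmt.Println(exportEntity(tokens, "application-A", "token-2")) // false

	// New behaviour: offering relations key on the application offer tag,
	// so the same app can hold both roles towards one model.
	fmt.Println(exportEntity(tokens, "applicationoffer-A", "token-3")) // true
}
```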

## QA steps

I used modified dummy-source and dummy-sink charms with extra endpoints to set up a cross model scenario as per the description. After the relations were created, show-status-log on each of the various units showed relation created and joined hooks being run. Previously, the relation joined hooks for one of the units would have been missing.


## Bug reference

https://bugs.launchpad.net/juju/+bug/2022855
juju#15716

Merge 2.9

[Merge pull request](juju@6390036) juju#15714 [from wallyworld/offer-consume=sameapp](juju@6390036)

The conflict was a new auth param.

```
# Conflicts:
# apiserver/common/crossmodel/crossmodel.go
# apiserver/facades/controller/crossmodelrelations/crossmodelrelations.go
# apiserver/facades/controller/remoterelations/remoterelations.go
#
```

## QA steps

See PRs
juju#15717

Merge 3.1

juju#15676 [from wallyworld/newer-clients-migrate](juju@7c1f884)
juju#15672 [from hpidcock/bump-juju-description-v3.0.15](juju@acec126)
juju#15683 [from hpidcock/fix-persistent-storage-test](juju@460dd21)
juju#15673 [from barrettj12/check-merge](juju@5c253c3)
juju#15677 [from barrettj12/invalid-offer](juju@3cb3f8b)
juju#15654 [from ycliuhw/fix/backendRefCount](juju@4e5ae3c)
juju#15692 [from wallyworld/fix-secrets-cmr](juju@a1fb0c4)
juju#15709 [from wallyworld/hook-secret-revison](juju@840bc09)
juju#15701 [from hpidcock/fix-upgrade-podspec-sidecar](juju@d465c93)
juju#15681 [from anvial/JUJU-3882-fix-test-deploy-test-…](juju@f54c1ad)
juju#15714 [from wallyworld/offer-consume=sameapp](juju@6390036)

Conflicts were upgrade steps - the 3.1.3 step has been moved to 3.2.1.
Also an auth tweak to crossmodelrelations.

```
# Conflicts:
# apiserver/common/crossmodel/auth_test.go
# apiserver/facades/controller/crossmodelrelations/crossmodelrelations.go
# rpc/params/apierror.go
# state/upgrades.go
# state/upgrades_test.go
# upgrades/backend.go
#
```

## QA steps

See PRs
@wallyworld

/merge

jujubot merged commit 78b635d into juju:3.3 on Jun 9, 2023
18 of 20 checks passed