
Merge 2.9 #14078

Merged
merged 66 commits into juju:develop on May 26, 2022

Conversation


@wallyworld wallyworld commented May 25, 2022

Merge 2.9

#14049 JUJU-1129 Refactor resource client api to be consistent with other juju client api
#14051 JUJU-1125 Improve error message for local OCI image resource
#14048 JUJU-1119 work around golang http proxy caching by default - add more logging
#14055 Added two examples to juju status help
#14058 JUJU-1129 Refactor payload client facades to match standard practice
#14059 Update add-unit help to clarify lp#1971956
#14060 Increase the number of open file descriptors
#14057 JUJU-1157 Check that an AgentStreamKey value exists
#14062 Remove the componentisation of resources and payloads
#14054 Update debug-log help text
#14064 JUJU-1159 Improved help model-defaults message
#14066 Bump actions/upload-artifact from 3.0.0 to 3.1.0
#14070 JUJU-1129 Move resources files to a single top level package
#14071 Update the model-defaults help text

Conflicts were charm and resource import paths, deleted resource files, and struct initialisations.

# Conflicts:
#       api/client/resources/client.go
#       api/client/resources/client/base_test.go
#       api/client/resources/client/client_upload_test.go
#       apiserver/facades/controller/caasapplicationprovisioner/provisioner_test.go
#       caas/kubernetes/provider/k8s.go
#       caas/kubernetes/provider/k8s_test.go
#       cmd/containeragent/initialize/package_test.go
#       cmd/containeragent/unit/package_test.go
#       cmd/juju/application/bundle_test.go
#       cmd/juju/application/deployer/bundlehandler.go
#       cmd/juju/application/deployer/deployer.go
#       cmd/juju/application/refresh.go
#       cmd/juju/commands/main.go
#       cmd/juju/commands/main_test.go
#       core/watcher/package_test.go
#       go.sum
#       payload/api/private/helpers_test.go
#       payload/context/base_test.go
#       payload/context/register_test.go
#       payload/context/utils.go
#       resource/charmhub.go
#       resource/charmstore.go
#       resource/context/context.go
#       resource/context/internal/content.go
#       resource/context/internal/content_test.go
#       resource/context/internal/resourcedir.go
#       resource/context/internal/stub_test.go
#       resource/context/utils.go
#       resource/context/utils_test.go
#       resource/repositories/mocks/mocks.go
#       resource/repositories/operations.go
#       resource/repositories/operations_test.go
#       resource/repositories/repository.go
#       resource/resource.go
#       resource/resourceadapters/interfaces.go
#       resource/resourceadapters/mocks/opener_mock.go
#       resource/resourceadapters/opener.go
#       resource/unit.go
#       snap/snapcraft.yaml
#       state/mocks/resources_mock.go
#       worker/caasapplicationprovisioner/application_test.go
#       worker/uniter/manifold.go
#       worker/uniter/runner/context/contextfactory.go
#       worker/uniter/runner/jujuc/mocks/context_mock.go
#       worker/uniter/runner/jujuc/payload-register.go
#       worker/uniter/runner/jujuc/restricted.go
#       worker/uniter/runner/jujuc/server.go
#       worker/uniter/uniter.go
#       worker/uniter/uniter_test.go
#       worker/uniter/util_test.go

QA steps

See PRs

mthaddon and others added 30 commits February 15, 2022 10:20
juju#14049

The first of several PRs to bring resources into line with how facades and api clients are commonly implemented.
This PR starts with the api client layer. A lot of the changes are file moves and test fixes to use gomock. Some http helpers defined in the api layer but used in the apiserver layer were moved across. The agent api client is moved out to the agent sub-package.

## QA steps

- bootstrap a machine controller
- deploy the dummy-resource charm
- use `juju resources` to see the resource metadata; it will result in a hook error because the bar resource is not supplied
- attach the missing resource: `juju attach-resource dummy-resource bar=baz.txt`
- the status message now shows the resource has been downloaded with `resource-get`
- ssh into the charm and inspect the resource content
- use `juju resources` to inspect the resource metadata

- bootstrap a k8s controller
- deploy the mysql-k8s charm; it will stay allocating because there's no oci image in the store
- attach the mysql-image: `juju attach-resource mysql-k8s mysql-image=mariadb`
- the charm will complete deployment
- use `juju resources` to inspect the resource metadata
juju#14051

Fixes [lp#1958253](https://bugs.launchpad.net/juju/+bug/1958253). We now check whether the file provided to `XXXXXX-image` is json, yaml or binary, and return the corresponding error.

I've also rewritten the `TestDeployDockerResource*` tests to be table-driven, which made it easy to add new tests covering my changes.
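A minimal sketch of that kind of check, not the actual juju code: sniff for binary content first, then dispatch to JSON or YAML so each format yields the matching parse error seen in the QA steps below. The helper name is hypothetical.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"unicode/utf8"

	"gopkg.in/yaml.v2"
)

// validateImageDetails is a hypothetical helper: binary content gets the
// generic error, valid JSON syntax gets a "json parsing" error on bad
// structure, and anything else is handed to the YAML parser, whose
// errors already read "yaml: ...".
func validateImageDetails(data []byte) error {
	if !utf8.Valid(data) {
		return errors.New("expected json or yaml file")
	}
	if json.Valid(data) {
		var details map[string]interface{}
		if err := json.Unmarshal(data, &details); err != nil {
			return fmt.Errorf("json parsing: %w", err)
		}
		return nil
	}
	var details map[string]interface{}
	if err := yaml.Unmarshal(data, &details); err != nil {
		return err
	}
	return nil
}

func main() {
	// Binary bytes trigger the generic error.
	fmt.Println(validateImageDetails([]byte{0xff, 0xfe, 0x00}))
}
```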

### QA steps

```sh
$ juju deploy kubernetes-dashboard --resource dashboard-image=./invalid.json
ERROR resource "dashboard-image": json parsing: ...
$ juju deploy kubernetes-dashboard --resource dashboard-image=./invalid.yaml
ERROR resource "dashboard-image": yaml: ...
$ juju deploy kubernetes-dashboard --resource dashboard-image=./dashboard.tar
ERROR resource "dashboard-image": expected json or yaml file
```
Allow the http dump functionality in the juju http package to be used
juju#14048

Pick up changes to the juju/http package: http.ProxyFromEnvironment only gathers the environment proxy settings once per process and reuses the values. This is difficult for juju, as the agent proxy settings can change on the fly, so we now fetch the current proxy environment variables when running commands.

Added loggers to other places where juju/http client was used, to enable httpdump output to be logged when required.

The new version of juju/http adds log output of the actual proxy values being used: set the juju model logging-config to `#http=TRACE` or `juju.http.middleware=TRACE`.
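A minimal sketch of the workaround, not the actual juju/http code: wire up a Transport whose Proxy callback consults golang.org/x/net/http/httpproxy on every request, so proxy settings changed on the fly are picked up rather than cached once per process.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"

	"golang.org/x/net/http/httpproxy"
)

// newTransport builds a transport whose Proxy callback re-reads the
// proxy environment variables on every request, instead of the
// once-per-process caching done by http.ProxyFromEnvironment.
func newTransport() *http.Transport {
	return &http.Transport{
		Proxy: func(req *http.Request) (*url.URL, error) {
			// FromEnvironment reads HTTP_PROXY/HTTPS_PROXY/NO_PROXY afresh.
			return httpproxy.FromEnvironment().ProxyFunc()(req.URL)
		},
	}
}

func main() {
	client := &http.Client{Transport: newTransport()}
	fmt.Println(client != nil) // proxy changes now take effect per request
}
```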

## QA steps

Follow the steps in the bug.

# the following will enable logging of the actual proxy settings found for use.
$ juju model-config logging-config="juju.http.middleware=TRACE" -m controller

## Bug reference

https://bugs.launchpad.net/juju/+bug/1973738
LP1973811. The key above doesn't exist in the controller config map, so
our agent stream handling wasn't correct, nor was the warning output. Set
the modelString value if it is empty or does not exist.
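An illustrative sketch of the check described above; the map and key name are assumptions, not juju's actual controller config types:

```go
package main

import "fmt"

// agentStream returns the configured agent-stream value, falling back
// to a default when the key is absent or empty, instead of using a
// missing value blindly. Names here are illustrative.
func agentStream(cfg map[string]string, fallback string) string {
	if v, ok := cfg["agent-stream"]; ok && v != "" {
		return v
	}
	return fallback
}

func main() {
	fmt.Println(agentStream(map[string]string{}, "released")) // "released"
}
```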
juju#14055

Add two simple examples to the juju status help text that show how to filter by application/unit status. I actually had no idea you could do this...
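For flavour, the examples are along these lines (hedged: the exact filter syntax and wording live in the PR's help text, and the `active` status filter is an assumption here):

```sh
# Show only the mysql application and its units:
juju status mysql

# Show only applications/units whose workload status is active (assumed syntax):
juju status active
```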

## Bug reference

https://bugs.launchpad.net/juju/+bug/1727639
macOS tests require more file descriptors than the default for mongo to
behave.
Increase the number of open file descriptors.
juju#14059

Updates the `juju add-unit` help text to clarify the issue in [lp#1971956](https://bugs.launchpad.net/juju/+bug/1971956).
juju#14058

As was done for resources, the payload api and cli machinery is refactored to match standard practice. API clients for the cli and agents are moved to the api package, and the cli is moved to cmd/juju.
A lot of the changes are file renames and edits.

## QA steps

deploy a charm with payloads, eg
```
payloads:
  ingress-resource:
    type: crd
  api-token:
    type: secret
```

```
$ juju exec --unit test/0 "payload-register crd ingress-resource 1234 tag1,tag2"

$ juju payloads
[Unit Payloads]
Unit    Machine  Payload class     Status   Type  Id    Tags
test/0  1        ingress-resource  running  crd   1234  tag1,tag2

$ juju exec --unit test/0 "payload-status-set ingress-resource 1234 stopped"

$ juju payloads
[Unit Payloads]
Unit    Machine  Payload class     Status   Type  Id    Tags
test/0  1        ingress-resource  stopped  crd   1234  tag1,tag2
```
LP 1972001, reduce duplication of example commands and add
explanations of the examples.
Fix LP1935726, a panic with an invalid placement, found while fixing on the
client side. As the method here can be called by clients other than the juju
client, validate the pointer before using it.
Fix LP1935726: a placement spec of an empty string becomes a nil pointer, so
don't add it to the slice of Placements; otherwise panics and bad things happen
later in the client and server code. Fixing on the server side as well.
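A minimal sketch of the fix these two commits describe, using a stand-in Placement type (the real one lives in github.com/juju/juju/core/instance): skip empty directives so no nil pointer ever lands in the slice.

```go
package main

import "fmt"

// Placement stands in for juju's placement directive type. A hedged
// sketch of the fix: an empty directive must not produce a nil entry
// in the slice, or later client/server code will panic on it.
type Placement struct {
	Scope     string
	Directive string
}

func parsePlacements(directives []string) []*Placement {
	placements := make([]*Placement, 0, len(directives))
	for _, d := range directives {
		if d == "" {
			// "--to 0,1,2," splits into a trailing empty spec; skip it
			// instead of appending a nil pointer.
			continue
		}
		placements = append(placements, &Placement{Directive: d})
	}
	return placements
}

func main() {
	ps := parsePlacements([]string{"0", "1", "2", ""})
	fmt.Println(len(ps)) // 3, with no nil entries
}
```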
jameinel and others added 24 commits May 23, 2022 16:24
This just fixes a model default line that wasn't properly formatted. 

What it was actually saying was to reset the model-defaults of `default-series` in the cloud `test-mode`. This changes it to indicate that we want to reset `default-series` and `test-mode` for all clouds.
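The corrected example presumably looks like this (hedged: `--reset` takes a comma-delimited key list, but see the PR for the exact help text):

```sh
# Reset both keys for all clouds, rather than "default-series" for a cloud named "test-mode":
juju model-defaults --reset default-series,test-mode
```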
juju#14064

LP 1972001, reduce duplication of example commands and add
explanations of the examples.

## QA steps


```sh
juju help model-defaults
```

## Bug reference
https://bugs.launchpad.net/juju/+bug/1972001
Bump actions/upload-artifact from 3.0.0 to 3.1.0
Resolves the case where the discussion of the entries was updated but the
content itself wasn't fixed.
juju#14070

Move files from the resources sub-packages to a top level resources package and adjust imports.
Also move the resource deploy code used only by the deploy CLI to the application deployer package, to place it next to the other deploy related code.

Next PR will clean up some of the remaining abstractions.

## QA steps

deploy a charm with resources
Update the model-defaults help text
Change back to `backup` as a potential action in the snap description
juju#14069

When running juju refresh, even if the charm itself is not updated, we still need to allow new resources to be refreshed.
This tweaks the logic used in the refresh CLI to allow just the resources to be updated, as sketched below.
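A minimal sketch of the tweaked condition, with illustrative names rather than juju's actual identifiers: the refresh proceeds when either the charm changed or new resources were supplied.

```go
package main

import "fmt"

// shouldApplyRefresh sketches the tweak: a refresh is not a no-op just
// because the charm is up to date; supplying new resources is reason
// enough to proceed. Names here are illustrative.
func shouldApplyRefresh(charmChanged bool, resources map[string]string) bool {
	return charmChanged || len(resources) > 0
}

func main() {
	// Same charm revision, but a new image resource: refresh proceeds.
	fmt.Println(shouldApplyRefresh(false, map[string]string{
		"prometheus-image": "prom/prometheus",
	})) // true
}
```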

## QA steps

deploy a charm with a resource, eg `ch:prometheus-k8s` from beta
`juju refresh prometheus-k8s --resource prometheus-image=prom/prometheus`

Do the same with a cs charm, eg `cs:~juju/mariadb-k8s`

## Bug reference

https://bugs.launchpad.net/juju/+bug/1954462
juju#14073

This PR includes two changes:

- set application default status to waiting;
- use SetOperatorStatus to update the app status for caasapplicationprovisioner and caasunitprovisioner facade

## Checklist

 - [ ] ~Requires a [pylibjuju](https://github.com/juju/python-libjuju) change~
 - [ ] ~Added [integration tests](https://github.com/juju/juju/tree/develop/tests) for the PR~
 - [ ] ~Added or updated [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) related to packages changed~
 - [x] Comments answer the question of why design decisions were made

## QA steps

```console
$ juju add-model t1

$ juju deploy cs:~juju/mariadb-k8s-3
Located charm "mariadb-k8s" in charm-store, revision 3
Deploying "mariadb-k8s" from charm-store charm "mariadb-k8s", revision 3 in channel stable on kubernetes

$ juju deploy snappass-test
Located charm "snappass-test" in charm-hub, revision 9
Deploying "snappass-test" from charm-hub charm "snappass-test", revision 9 in channel stable on focal

$ juju status --relations --color --storage -m k1:t1
Model  Controller  Cloud/Region        Version  SLA          Timestamp
t1     k1          microk8s/localhost  2.9.30   unsupported  12:46:33+10:00

App            Version                  Status  Scale  Charm          Channel  Rev  Address         Exposed  Message
mariadb-k8s    res:mysql_image@e6ea77e  active      1  mariadb-k8s    stable     3  10.152.183.239  no
snappass-test                           active      1  snappass-test  stable     9  10.152.183.9    no

Unit              Workload  Agent  Address      Ports     Message
mariadb-k8s/0*    active    idle   10.1.97.231  3306/TCP
snappass-test/0*  active    idle   10.1.97.232            snappass started

Storage Unit   Storage id  Type        Pool        Mountpoint      Size   Status    Message
mariadb-k8s/0  database/0  filesystem  kubernetes  /var/lib/mysql  35MiB  attached  Successfully provisioned volume pvc-d5ded5dd-5da6-431b-9c30-99d7a260d3d9
```

## Documentation changes

No

## Bug reference

https://bugs.launchpad.net/juju/+bug/1975457
juju#14061

`--build-agent` is purely for local development purposes, so disallow its use with the snap juju client.
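A hedged sketch of the guard, assuming a simple flag on the build rather than juju's actual version plumbing:

```go
package main

import (
	"errors"
	"fmt"
)

// validateBuildAgent sketches the check: an official (snap) build
// refuses --build-agent, which only makes sense for locally compiled
// clients. The officialBuild parameter is an assumption; juju derives
// this from its version/build metadata.
func validateBuildAgent(buildAgent, officialBuild bool) error {
	if buildAgent && officialBuild {
		return errors.New("can not build agent for official build")
	}
	return nil
}

func main() {
	fmt.Println(validateBuildAgent(true, true)) // mirrors the QA error above
}
```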

## Checklist

 - [ ] ~Requires a [pylibjuju](https://github.com/juju/python-libjuju) change~
 - [ ] ~Added [integration tests](https://github.com/juju/juju/tree/develop/tests) for the PR~
 - [ ] ~Added or updated [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) related to packages changed~
 - [x] Comments answer the question of why design decisions were made

## QA steps

```console
$ which juju
/snap/bin/juju

$ juju bootstrap lxd k1 --build-agent
Creating Juju controller "k1" on lxd/localhost
Building local Juju agent binary version 2.9.31 for amd64
ERROR failed to bootstrap model: cannot package bootstrap agent binary: can not build agent for official build

```

## Documentation changes

No

## Bug reference

https://bugs.launchpad.net/juju/+bug/1812191
juju#14040

When Juju users call add-credential with their own LXD config.yaml they
need to make sure they make no mistakes with the paths to the various
pki components required.

It's possible for a user to set the wrong file for the server
certificate and for Juju to accept it and try and start using it for
secure communications with a LXD daemon. The LXD client doesn't
perform any checks on pki credentials like this so we have to do it ourselves.

What this change does is help to identify when mistakes like this have
been made by inspecting the LXD server certificate to make sure it
exists and that the certificate has been signed for server
authentication.

This is about the sum of all the checks we can perform on this
certificate without hard coding internal logic of LXD into Juju that
could change and introduce its own further complications.

One other change was made here to how we process credentials in
add-credential: file credentials are now expanded on the command line
before being passed on to providers for finalisation.
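A sketch of the certificate inspection described above, using only the standard library; juju's actual helper names and error text will differ:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
)

// checkServerCert verifies the three properties discussed: the file
// exists, parses as a PEM certificate, and is signed for server
// authentication. Illustrative only.
func checkServerCert(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return fmt.Errorf("reading server certificate: %w", err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return errors.New("server certificate is not valid PEM certificate data")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return fmt.Errorf("parsing server certificate: %w", err)
	}
	for _, usage := range cert.ExtKeyUsage {
		if usage == x509.ExtKeyUsageServerAuth {
			return nil
		}
	}
	return errors.New("certificate is not signed for server authentication")
}

func main() {
	fmt.Println(checkServerCert("/tmp/server.cert"))
}
```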

## Checklist

 - [x] Requires a [pylibjuju](https://github.com/juju/python-libjuju) change
 - [x] Added [integration tests](https://github.com/juju/juju/tree/develop/tests) for the PR
 - [x] Added or updated [doc.go](https://discourse.jujucharms.com/t/readme-in-packages/451) related to packages changed
 - [x] Comments answer the question of why design decisions were made

## QA steps

Test that valid credentials succeed. Set up a remote LXD to properly test.
```sh
# 1. Get your LXD client certificate and key
cp ~/snap/lxd/common/config/client.crt /tmp/client.cert
cp ~/snap/lxd/common/config/client.key /tmp/client.key

# 2. Upload the client certificate in /tmp/client.cert to the remote LXD server and add it to the trust store
scp /tmp/client.cert <remote-host>:/tmp/client.cert
ssh <remote-host>
lxc config trust add /tmp/client.cert

# 3. Get the server certificate while on the remote host from step 2
lxc info # copy the certificate contents to your local machine at /tmp/server.cert

# 4. Create a new lxd cloud in juju on the local host with the cloud name server-cert
juju add-cloud

# 5. Create the credentials.yaml file to test with at /tmp/credentials.yaml
# credentials:
#   server-cert:
#     test:
#       auth-type: certificate
#       client-cert: /tmp/client.cert
#       client-key: /tmp/client.key
#       server-cert: /tmp/server.cert

# 6. Add the credentials to Juju
juju add-credential server-cert -f /tmp/credentials.yaml

# 7. Bootstrap to the new cloud
juju bootstrap server-cert
```

If the above all works, it's now time to check that bad certificates fail. Steps 1-4 from above can be reused here if you delete the credentials added to Juju in step 6.
```sh
# 1. Get your LXD client certificate and key
cp ~/snap/lxd/common/config/client.crt /tmp/client.cert
cp ~/snap/lxd/common/config/client.key /tmp/client.key

# 2. Upload the client certificate in /tmp/client.cert to the remote LXD server and add it to the trust store
scp /tmp/client.cert <remote-host>:/tmp/client.cert
ssh <remote-host>
lxc config trust add /tmp/client.cert

# 3. Get the server certificate while on the remote host from step 2
lxc info # copy the certificate contents to your local machine at /tmp/server.cert

# 4. Create a new lxd cloud in juju on the local host with the cloud name server-cert
juju add-cloud

# 5. Create the credentials.yaml file to test with at /tmp/credentials.yaml
# credentials:
#   server-cert:
#     test:
#       auth-type: certificate
#       client-cert: /tmp/client.cert
#       client-key: /tmp/client.key
#       server-cert: /tmp/client.cert

# 6. Add the credentials to Juju. This step should fail with an appropriate error message telling you the server certificate is invalid
juju add-credential server-cert -f /tmp/credentials.yaml
```

## Documentation changes

N/A

## Bug reference

https://bugs.launchpad.net/juju/+bug/1973834
juju#14065

Ensure that the slice of Placements only holds valid pointers; otherwise you see panics on both the client and server sides of the deploy code.

## QA steps

```console
# should not panic, though won't succeed in a new model.
juju deploy -n 3 --to 0,1,2, --config vip=10.0.0.101 keystone
```

## Bug reference

https://bugs.launchpad.net/juju/+bug/1935726
juju#14074

The resources package used too many layers of abstraction.
This PR simplifies the implementation of the resource opener, which fetches a resource from the store and caches it in state.

## QA steps

See QA steps for this previous PR

juju#14049

Essentially, deploy charms with resources and run various hook commands.
juju#14068

- Remove the `arch` constraint conflicts for GCE, Azure and OpenStack.
- Dynamically resolve the conflict on AWS using instance type information, where the `arch` constraint matches the instance type's architecture info (see the sketch below).
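A hedged sketch of that AWS resolution, with an illustrative lookup table standing in for the real instance type metadata:

```go
package main

import "fmt"

// archMatchesInstanceType sketches the resolution: "arch" and
// "instance-type" constraints only conflict when the requested
// architecture differs from the instance type's architecture.
func archMatchesInstanceType(arch, instanceType string, typeArches map[string][]string) bool {
	for _, a := range typeArches[instanceType] {
		if a == arch {
			return true
		}
	}
	return false
}

func main() {
	typeArches := map[string][]string{"c6g.large": {"arm64"}}
	fmt.Println(archMatchesInstanceType("arm64", "c6g.large", typeArches)) // true: no conflict
}
```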

## QA steps

```
juju bootstrap aws
juju deploy ubuntu --constraints="arch=arm64 instance-type=c6g.large"
```

## Documentation changes

N/A

## Bug reference

https://bugs.launchpad.net/juju/+bug/1970462
@benhoyt benhoyt (Member) left a comment:

Definitely didn't review this merge line by line, but looks good at a skim.

@wallyworld (Member, Author):

/merge

@jujubot jujubot merged commit de38097 into juju:develop May 26, 2022