
2.9 into develop #13143

Merged
merged 47 commits into develop from 2.9 on Jul 8, 2021

Commits on Jun 22, 2021

  1. Ensure all DB errors when inserting logs are always logged to logsink

    This improves visibility in cases such as LP1930899, where an
    underprovisioned mongodb instance prevented log entries from being
    persisted with no additional error output.
    
    As the DB error was returned to the caller (the log sender worker), the
    worker would keep restarting, preventing models from logging anything.
    
    While this commit does not fix the underlying problem (an
    underprovisioned mongo), it should at least make the cause of the
    problem more obvious to users trying to diagnose similar issues by
    inspecting the logsink.log file contents.
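    The fix described above boils down to recording the DB failure locally before returning it. A minimal Go sketch under stated assumptions — `insertLogs` and its injected callbacks are hypothetical, not Juju's actual API:

    ```go
    package main
    
    import (
    	"errors"
    	"fmt"
    )
    
    // insertLogs is a hypothetical sketch of the pattern: when the DB write
    // fails, record the error to a local sink (here standing in for
    // logsink.log) *before* returning it to the caller, so the root cause
    // stays visible even though the log sender worker keeps restarting.
    func insertLogs(dbInsert func() error, sink func(string)) error {
    	if err := dbInsert(); err != nil {
    		sink(fmt.Sprintf("cannot insert log records: %v", err))
    		return err
    	}
    	return nil
    }
    
    func main() {
    	var sinkLines []string
    	err := insertLogs(
    		func() error { return errors.New("mongo: not enough resources") },
    		func(msg string) { sinkLines = append(sinkLines, msg) },
    	)
    	fmt.Println(err != nil, len(sinkLines)) // prints: true 1
    }
    ```

    The key design point is that the error is still propagated (the caller's restart behaviour is unchanged); only the local visibility is added.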
    achilleasa committed Jun 22, 2021
    SHA 9982c5d
  2. Merge pull request juju#13097 from achilleasa/2.8-logsink-error-if-persisting-logs-to-db-fails
    
    juju#13097
    
    This improves visibility in cases such as LP1930899, where an
    underprovisioned mongodb instance prevented log entries from being
    persisted with no additional error output.
    
    As the DB error was returned to the caller (the log sender worker), the
    worker would keep restarting, preventing models from logging anything.
    
    While this commit does not fix the underlying problem (an
    underprovisioned mongo), it should at least make the cause of the
    problem more obvious to users trying to diagnose similar issues by
    inspecting the logsink.log file contents.
    
    ## Bug reference
    
    https://bugs.launchpad.net/juju/+bug/1930899
    
    (NOTE: the PR does not fix ^ but helps diagnose the root cause of the issue)
    jujubot committed Jun 22, 2021
    SHA f505b12

Commits on Jun 24, 2021

  1. Support ssh/scp --proxy for containers with FAN addresses

    The juju ssh --proxy command uses the controller as a jumpbox and
    invokes 'nc' to forward ssh traffic to the target machine. While this
    approach works fine for regular machines, it fails (on substrates such
    as ec2) for container machines (lxd/kvm) when they only have a FAN
    address.
    
    This is generally acceptable, as juju only requires machines to be able
    to connect to the controller and not vice-versa. In particular, ec2
    does not allow FAN connectivity from the controller model to any other
    Juju model (and vice versa).
    
    To work around this limitation, we now first check whether the machine
    we are trying to connect to is a container. If that's the case, we
    identify the address of the machine that hosts the container and modify
    the ssh proxy command to use a two-level proxying scheme: we ssh to the
    controller, then ssh to the machine hosting the container, and then run
    'nc' to route the ssh traffic to the intended machine.
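    The two-level scheme amounts to nesting an extra ssh hop inside the ProxyCommand. A hedged sketch of the string construction — the function name, argument layout, and user are illustrative, not Juju's actual code:

    ```go
    package main
    
    import "fmt"
    
    // proxyCommand builds a hypothetical ssh ProxyCommand. For a regular
    // machine we hop through the controller and run nc; for a container we
    // add a second hop via the machine hosting it, since the controller
    // cannot reach the container's FAN address directly. %h and %p are
    // expanded by ssh to the target host and port.
    func proxyCommand(controllerAddr, hostMachineAddr string, isContainer bool) string {
    	if !isContainer {
    		return fmt.Sprintf("ssh -q ubuntu@%s nc %%h %%p", controllerAddr)
    	}
    	return fmt.Sprintf("ssh -q ubuntu@%s ssh -q ubuntu@%s nc %%h %%p",
    		controllerAddr, hostMachineAddr)
    }
    
    func main() {
    	fmt.Println(proxyCommand("54.1.2.3", "", false))
    	fmt.Println(proxyCommand("54.1.2.3", "172.31.40.132", true))
    }
    ```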
    achilleasa committed Jun 24, 2021
    SHA cbc9e30

Commits on Jun 25, 2021

  1. Merge pull request juju#13108 from achilleasa/2.8-support-ssh-with-proxy-to-fan-subnets
    
    juju#13108
    
    The juju ssh --proxy command uses the controller as a jumpbox and
    invokes 'nc' to forward ssh traffic to the target machine. While this
    approach works fine for regular machines, it fails (on substrates such
    as ec2) for container machines (lxd/kvm) when they only have a FAN
    address.
    
    This is generally acceptable, as juju only requires machines to be able
    to connect to the controller and not vice-versa. In particular, ec2
    does not allow FAN connectivity from the controller model to any other
    Juju model (and vice versa).
    
    To work around this limitation, we now first check whether the machine
    we are trying to connect to is a container. If that's the case, we
    identify the address of the machine that hosts the container and modify
    the ssh proxy command to use a two-level proxying scheme: we ssh to the
    controller, then ssh to the machine hosting the container, and then run
    'nc' to route the ssh traffic to the intended machine.
    
    Caveats: this workaround cannot be applied in a scenario where we
    `juju scp` to copy files between two different **container remotes**,
    as each remote in this case would need a custom ssh proxy command.
    
    ## QA steps
    
    Bootstrap to ec2. Then:
    
    ```sh
    $ juju deploy mysql --to lxd
    
    # should fail
    $ juju ssh mysql/0
    ERROR cannot connect to any address: [252.40.132.122:22]
    
    # should work with this patch
    $ juju ssh --proxy mysql/0 hostname
    juju-12d453-0-lxd-0
    Connection to 252.40.132.122 closed.
    
    # try scp
    $ echo "BTC" > wallet
    $ juju scp --proxy -- -3 wallet mysql/0:
    $ juju scp --proxy -- -3 wallet 0:
    
    # Verify the files have been uploaded
    $ juju ssh --proxy 0 cat wallet
    BTC
    Connection to 172.31.40.132 closed.
    
    $ juju ssh --proxy 0/lxd/0 cat wallet
    BTC
    Connection to 252.40.132.122 closed.
    ```
    
    ## Bug reference
    https://bugs.launchpad.net/juju/+bug/1932547
    jujubot committed Jun 25, 2021
    SHA c973d15

Commits on Jul 1, 2021

  1. Facades: Allow empty charm config values

    The following changes allow the charm config as a map and the charm
    config as a YAML to work as intended. The code previously assumed that
    there was always a YAML and that the charm config map would always
    overwrite it.
    The problem occurs when there isn't a charm YAML: the code would blindly
    overwrite values of the YAML. This caused empty strings to be converted
    to nil, which then triggered the --reset flow. The --reset flow
    essentially says: when you see nil, use the config defaults by deleting
    any user config. That's something we didn't want to trigger. The change
    causes us to exit early if there isn't a YAML.
    
    There is a much deeper problem, and one that we will need to understand.
    API versioning should essentially not touch the old code, or at the very
    least have enough testing to ensure that any new changes keep the old
    behaviour. 100% code coverage in these scenarios would be beneficial to
    ensure that we didn't trip up here. The problem with attempting to get
    to that level is the number of layers you have to test to get that
    far. From API server all the way down to state, it's a large task and
    maybe not worth it?
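    The nil-vs-empty-string distinction at the heart of this bug can be sketched as follows. This is a hypothetical helper, not the actual facade code: only a nil value should trigger the reset-to-default flow; an explicit empty string is a real user-supplied value and must be kept.

    ```go
    package main
    
    import "fmt"
    
    // applySetting is a hypothetical sketch of the distinction that matters
    // here: a nil value means "reset to the charm default" (delete the user
    // config), while an explicit empty string is a legitimate user value.
    func applySetting(current map[string]string, key string, value *string) {
    	if value == nil {
    		delete(current, key) // reset flow: fall back to the charm default
    		return
    	}
    	current[key] = *value // empty string included: keep the user's value
    }
    
    func main() {
    	cfg := map[string]string{"services": "old"}
    	empty := ""
    	applySetting(cfg, "services", &empty)
    	fmt.Printf("%q\n", cfg["services"]) // explicit "" is preserved
    	applySetting(cfg, "services", nil)
    	_, ok := cfg["services"]
    	fmt.Println(ok) // the reset removed the user value
    }
    ```

    The bug occurred because `""` was silently coerced to nil, collapsing these two distinct cases into one.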
    SimonRichardson committed Jul 1, 2021
    SHA 9413e44
  2. Facades: Parse via the config

    To correctly parse other values that aren't strings, we should parse via
    the config using ParseSettingsStrings.
    SimonRichardson committed Jul 1, 2021
    SHA 8a65070

Commits on Jul 2, 2021

  1. Merge pull request juju#13130 from SimonRichardson/set-config-empty-string
    
    juju#13130
    
    The following changes allow the charm config as a map and the charm
    config as a YAML to work as intended. The code previously assumed that
    there was always a YAML and that the charm config map would always
    overwrite it.
    
    The problem occurs when there isn't a charm YAML: the code would blindly
    overwrite values. This caused empty strings to be converted to `nil`,
    causing the --reset flow to be triggered. The --reset flow sets the
    config defaults when it sees nil.
    
    There is a much deeper problem, and one that we will need to understand.
    API versioning should essentially not touch old code _or_ at the very
    least have enough testing to ensure that any new changes keep the old
    behaviour. 100% code coverage in these scenarios would be beneficial to
    ensure that we didn't trip up here. The problem with attempting to get
    to that level is the number of layers you have to test to get that
    far. From API server all the way down to state, it's a large task and
    maybe not worth it?
    
    ## QA steps
    
    Save bundle.yaml
    
    ```sh
    cat >bundle.yaml<<EOF
    applications:
      haproxy:
        charm: cs:haproxy
        channel: stable
        options:
          services: ""
    EOF
    ```
    
    ```sh
    $ juju bootstrap lxd test
    $ juju deploy ./bundle.yaml
    $ juju config haproxy services
    
    $ juju config haproxy services=""
    WARNING the configuration setting "services" already has the value ""
    $ juju config haproxy services
    
    
    ```
    
    Notice that the default config shouldn't be shown.
    
    ## Bug reference
    
    https://bugs.launchpad.net/juju/+bug/1934151
    jujubot committed Jul 2, 2021
    SHA c139c11
  2. Kubernetes: Wait for pod ready timeout

    The following moves the timeout deadline from 1 minute to 10 minutes. If
    a pod hasn't become ready in that time, I think it's safe to say it
    won't ever!
    
    The change is simple, just add a longer duration.
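    A deadline change like this reduces to polling a readiness check under a longer timeout. A generic sketch with an injected check function (hypothetical helper, not the actual Juju/Kubernetes code — the real fix simply raised the duration from 1 to 10 minutes):

    ```go
    package main
    
    import (
    	"errors"
    	"fmt"
    	"time"
    )
    
    // waitReady polls check until it returns true or the timeout elapses.
    // Bumping the timeout (e.g. 1*time.Minute -> 10*time.Minute) is the
    // whole substance of the fix described above.
    func waitReady(check func() bool, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if check() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for pod to become ready")
    }
    
    func main() {
    	calls := 0
    	err := waitReady(func() bool { calls++; return calls >= 3 }, time.Second, time.Millisecond)
    	fmt.Println(err == nil, calls)
    }
    ```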
    SimonRichardson committed Jul 2, 2021
    SHA 3d145f5
  3. Merge pull request juju#13133 from SimonRichardson/pod-ready-timeout

    juju#13133
    
    The following moves the timeout deadline from 1 minute to 10 minutes. If
    a pod hasn't become ready in that time, I think it's safe to say it
    won't!
    
    The change is simple, just add a longer duration.
    
    ## QA steps
    
    See: https://discourse.charmhub.io/t/error-getting-started-with-juju-microk8s/3855/3
    
    ## Bug reference
    
    https://bugs.launchpad.net/juju/+bug/1934494
    jujubot committed Jul 2, 2021
    SHA 7b18431

Commits on Jul 4, 2021

  1. SHA 35e712e
  2. SHA dd8b18f

Commits on Jul 5, 2021

  1. SHA 06d8b93
  2. OAuth 2.8 credentials support in 2.9.

    This change updates the Juju facades to return Kubernetes credentials
    that pre-2.9 releases can understand. This has been done through a
    facade bump for all users of the CloudSpec facade.
    tlm committed Jul 5, 2021
    SHA c3d9838
  3. Merge pull request juju#13128 from tlm/oauth-cred-k8s-lp1929910

    juju#13128
    
    OAuth 2.8 credentials support in 2.9.
    
    This change updates the Juju facades to return Kubernetes credentials that pre-2.9 releases can understand. This has been done through a facade bump for all users of the CloudSpec facade.
    
    
    ## Checklist
    
     - [x] Requires a [pylibjuju](https://github.com/juju/python-libjuju) change
     - [x] Added [integration tests](https://github.com/juju/juju/tree/develop/tests) for the PR
     - [x] Added or updated [doc.go](https://discourse.jujucharms.com/t/readme-in-packages/451) related to packages changed
     - [x] Comments answer the question of why design decisions were made
    
    ## QA steps
    
    1. Deploy a Juju 2.8 controller with a new model and deploy a workload.
    2. Upgrade the controller to this PR
    3. Check that the workload agent and support components have no errors and are communicating with the Kubernetes api.
    4. Check controller logs for errors
    
    ## Bug reference
    
    https://bugs.launchpad.net/juju/+bug/1929910
    jujubot committed Jul 5, 2021
    SHA 25e8028
  4. Merge pull request juju#13129 from ycliuhw/fix/lp-1934180

    juju#13129
    
    *Delete cluster role bindings before creating, and ensure the resource
    has been deleted completely before creating the replacement.*
    
    Drive-by: refactor all caas code to use client-go pointer helper
    functions.
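    The delete-before-create dance can be sketched with injected operations. This is a hypothetical helper (the real code uses client-go against cluster role bindings): delete, poll until a Get reports the resource gone (deletion is asynchronous), then create the new one.

    ```go
    package main
    
    import (
    	"errors"
    	"fmt"
    )
    
    var errNotFound = errors.New("not found")
    
    // recreate is a hypothetical sketch: delete the existing resource,
    // wait until a Get reports it fully gone, then create the new one.
    // A bounded poll count stands in for a real timeout.
    func recreate(del, get, create func() error, maxPolls int) error {
    	if err := del(); err != nil && !errors.Is(err, errNotFound) {
    		return err
    	}
    	for i := 0; i < maxPolls; i++ {
    		if err := get(); errors.Is(err, errNotFound) {
    			return create()
    		}
    	}
    	return errors.New("resource was not fully deleted in time")
    }
    
    func main() {
    	polls := 0
    	err := recreate(
    		func() error { return nil },
    		func() error { polls++; if polls < 2 { return nil }; return errNotFound },
    		func() error { return nil },
    		5,
    	)
    	fmt.Println(err == nil, polls) // prints: true 2
    }
    ```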
    
    ## Checklist
    
     - [ ] ~Requires a [pylibjuju](https://github.com/juju/python-libjuju) change~
     - [ ] ~Added [integration tests](https://github.com/juju/juju/tree/develop/tests) for the PR~
     - [ ] ~Added or updated [doc.go](https://discourse.jujucharms.com/t/readme-in-packages/451) related to packages changed~
     - [x] Comments answer the question of why design decisions were made
    
    ## QA steps
    
    ```console
    $ juju deploy /tmp/charm-builds/mariadb-k8s/ --debug --resource mysql_image=mariadb
    
    $ yml2json /tmp/charm-builds/mariadb-k8s/reactive/k8s_resources.yaml| jq '.kubernetesResources.serviceAccounts'
    [
      {
        "name": "rbac-foo",
        "roles": [
          {
            "rules": [
              {
                "apiGroups": [
                  ""
                ],
                "verbs": [
                  "get",
                  "watch",
                  "list"
                ],
                "resources": [
                  "pods"
                ]
              }
            ],
            "global": true,
            "name": "pod-cluster-role"
          }
        ],
        "automountServiceAccountToken": true
      }
    ]
    
    $ mkubectl -nt1 get clusterrolebinding.rbac.authorization.k8s.io/rbac-foo-t1-pod-cluster-role -o yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        controller.juju.is/id: 1bb4c340-1536-4281-815b-ad893b5fcd73
        model.juju.is/id: b3bcd724-b4e0-4bb1-80e2-af346cbba787
      creationTimestamp: "2021-07-02T06:11:17Z"
      labels:
        app.juju.is/created-by: controller
        app.kubernetes.io/managed-by: juju
        app.kubernetes.io/name: mariadb-k8s
        model.juju.is/name: t1
      managedFields:
      - apiVersion: rbac.authorization.k8s.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:controller.juju.is/id: {}
              f:model.juju.is/id: {}
            f:labels:
              .: {}
              f:app.kubernetes.io/managed-by: {}
              f:app.kubernetes.io/name: {}
              f:model.juju.is/name: {}
          f:roleRef:
            f:apiGroup: {}
            f:kind: {}
            f:name: {}
          f:subjects: {}
        manager: juju
        operation: Update
        time: "2021-07-02T06:11:17Z"
      name: rbac-foo-t1-pod-cluster-role
      resourceVersion: "166248"
      selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/rbac-foo-t1-pod-cluster-role
      uid: ace7f3f1-6d2f-4cd3-aca1-b9211126d1d0
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: t1-pod-cluster-role
    subjects:
    - kind: ServiceAccount
      name: rbac-foo
      namespace: t1
    
    # change podspec a bit then build and upgrade charm
    $ juju upgrade-charm mariadb-k8s --path /tmp/charm-builds/mariadb-k8s/ --debug --resource mysql_image=mariadb
    
    $ mkubectl -nt1 get clusterrolebinding.rbac.authorization.k8s.io/rbac-foo-t1-pod-cluster-role -o yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        controller.juju.is/id: 1bb4c340-1536-4281-815b-ad893b5fcd73
        model.juju.is/id: b3bcd724-b4e0-4bb1-80e2-af346cbba787
      creationTimestamp: "2021-07-02T06:12:30Z"
      labels:
        app.juju.is/created-by: controller
        app.kubernetes.io/managed-by: juju
        app.kubernetes.io/name: mariadb-k8s
        model.juju.is/name: t1
      managedFields:
      - apiVersion: rbac.authorization.k8s.io/v1
        fieldsType: FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .: {}
              f:controller.juju.is/id: {}
              f:model.juju.is/id: {}
            f:labels:
              .: {}
              f:app.kubernetes.io/managed-by: {}
              f:app.kubernetes.io/name: {}
              f:model.juju.is/name: {}
          f:roleRef:
            f:apiGroup: {}
            f:kind: {}
            f:name: {}
          f:subjects: {}
        manager: juju
        operation: Update
        time: "2021-07-02T06:12:30Z"
      name: rbac-foo-t1-pod-cluster-role
      resourceVersion: "166401"
      selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/rbac-foo-t1-pod-cluster-role
      uid: 6ccb09f0-f931-408a-a478-f2c7a5ad817c
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: t1-pod-cluster-role
    subjects:
    - kind: ServiceAccount
      name: rbac-foo
      namespace: t1
    
    ```
    
    ## Documentation changes
    
    No
    
    ## Bug reference
    
    https://bugs.launchpad.net/juju/+bug/1934180
    jujubot committed Jul 5, 2021
    SHA 43b8197
  5. Add Equinix Metal entrypoint to cloud package

    This commit adds the Equinix Metal API entrypoint to the cloud package
    and regenerates the updated Go file.
    
    Signed-off-by: root <jasmin.gacic@gmail.com>
    Signed-off-by: Gianluca Arbezzano <ciao@gianarb.it>
    jasmingacic authored and achilleasa committed Jul 5, 2021
    SHA d518de2
  6. Implement Equinix Metal provider configuration and credentials

    This commit contains the struct and functions needed to correctly hook
    Equinix Metal API authentication into Juju.
    
    Signed-off-by: root <jasmin.gacic@gmail.com>
    jasmingacic authored and achilleasa committed Jul 5, 2021
    SHA f456dde
  7. Stub Equinix Metal environ/instance implementation

    This is an empty stub for the Equinix Metal Environ and instance
    implementation. It returns panics and notImplemented errors.
    
    Signed-off-by: Gianluca Arbezzano <ciao@gianarb.it>
    gianarb authored and achilleasa committed Jul 5, 2021
    SHA c68771b
  8. Equinix device abstraction (equinixDevice type) and methods

    This commit contains the device implementation for an Equinix Metal
    device.
    jasmingacic authored and achilleasa committed Jul 5, 2021
    SHA a51603a
  9. Implement Environ equinix-specific helpers

    This commit implements a couple of functions needed to bridge Juju
    requirements to the Equinix Metal API.
    jasmingacic authored and achilleasa committed Jul 5, 2021
    SHA 8365efb
  10. Cloudinit renderer

    This commit contains the logic to render valid user data for an Equinix
    Metal device.
    jasmingacic authored and achilleasa committed Jul 5, 2021
    SHA 6bb2ad3
  11. Implement the environ provider

    This commit bridges the Equinix Metal API to the Juju environ interface.
    gianarb authored and achilleasa committed Jul 5, 2021
    SHA 28f88e1
  12. Hook gocheck testsuite for Equinix Metal provider

    This commit hooks the Equinix Metal provider into the gocheck test
    suite.
    
    Signed-off-by: Gianluca Arbezzano <ciao@gianarb.it>
    gianarb authored and achilleasa committed Jul 5, 2021
    SHA 8831ca8
  13. SHA 17db150
  14. Initialise Equinix Metal provider

    This commit contains the init file that registers the Equinix Metal
    provider with Juju.
    
    Signed-off-by: root <jasmin.gacic@gmail.com>
    Signed-off-by: Gianluca Arbezzano <ciao@gianarb.it>
    jasmingacic authored and achilleasa committed Jul 5, 2021
    SHA 0796322
  15. Add subnets support to Equinix Metal provider

    This commit adds support for Juju subnets to the Equinix Metal
    provider.
    
    Signed-off-by: root <jasmin.gacic@gmail.com>
    jasmingacic authored and achilleasa committed Jul 5, 2021
    SHA 97ae725
  16. SHA 6dd8ccf
  17. SHA 12b24a2
  18. SHA 99d07aa
  19. Fix packngo stanza

    gianarb authored and achilleasa committed Jul 5, 2021
    SHA a5c5ddf
  20. SHA f4ea610
  21. Equinix: Allocate public ip constraint

    Allow disabling public IP addresses when the constraint is set.
    This is a workaround for the lack of a firewall for now.
    SimonRichardson authored and achilleasa committed Jul 5, 2021
    SHA 6fa6d20
  22. SHA 8393b57

Commits on Jul 6, 2021

  1. Merge pull request juju#13134 from achilleasa/2.9-backport-equinix-provider-support
    
    juju#13134
    
    This PR backports juju#12983 to 2.9 so it can be made available as part of the next point release.
    
    ## QA steps
    
    ```console
    # clone this branch and compile binaries
    $ cp cloud/fallback-public-cloud.yaml ~/.local/share/juju/public-clouds.yaml
    # This command shows Equinix as cloud provider now
    $ juju clouds --client --all
    # Add credentials
    $ juju add-credential equinix
    # bootstrap the controller
    $ juju bootstrap equinix/am test-equinix 
    ```
    
    At this point it should be juju business as usual; what I do is enable
    HA and check controller status with:
    
    ```console
    $ juju enable-ha
    $ juju status -m controller
    ```
    jujubot committed Jul 6, 2021
    SHA 2b67b0a
  2. Cache: Cache coherence

    The cache would fail to notify if you changed from a value and back to
    the original value. This is because the underlying hash value was never
    written. This has two consequences:
    
     1. The cache could never emit a change back to the original value.
     2. Because we never cached the new value, if the next value was the
     same as the old one, we would still emit the event.
    
    The fix is easy: just save the hash after the notify, and we can have
    better cache coherency.
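    The bug and its fix can be sketched with a miniature watcher (hypothetical, not Juju's actual cache code): if the stored hash is not written after notifying, a change back to the original value compares equal against the stale hash and the event is swallowed.

    ```go
    package main
    
    import "fmt"
    
    // watcher is a minimal sketch of the coherence bug: notify must be
    // followed by persisting the hash; otherwise flipping back to the
    // original value compares equal to a stale stored hash and no event
    // fires.
    type watcher struct {
    	hash   string
    	events []string
    }
    
    func (w *watcher) observe(value string) {
    	if value == w.hash {
    		return // no change detected
    	}
    	w.events = append(w.events, value) // notify
    	w.hash = value                     // the fix: save the hash after the notify
    }
    
    func main() {
    	w := &watcher{hash: "WARN"}
    	w.observe("DEBUG") // change away from the original value
    	w.observe("WARN")  // flip back: with the fix, this now emits too
    	fmt.Println(w.events) // prints: [DEBUG WARN]
    }
    ```

    Without the `w.hash = value` line, the second `observe("WARN")` would compare against the stale `"WARN"` hash and emit nothing, which is exactly the flip-flop failure described above.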
    SimonRichardson committed Jul 6, 2021
    SHA c725de1

Commits on Jul 7, 2021

  1. SHA fe8bdf7
  2. Merge pull request juju#13135 from wallyworld/cass-image-repo-port

    juju#13135
    
    A small fix for an issue found by QA folks when setting up a custom docker image repo: if the repo URL configured using `caas-image-repo` has a port, the parsing fails.
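    The pitfall is that a naive split on `:` treats the registry port as an image tag. A hedged sketch of host/path splitting for a repo like `docker.io:5000/jujusolutions` (illustrative only, not Juju's actual parser):

    ```go
    package main
    
    import (
    	"fmt"
    	"strings"
    )
    
    // splitRepo separates the registry host (which may carry a port) from
    // the repository path. Splitting on the first "/" keeps "docker.io:5000"
    // intact instead of misreading ":5000/jujusolutions" as a tag suffix.
    func splitRepo(repo string) (host, path string) {
    	if i := strings.Index(repo, "/"); i >= 0 {
    		return repo[:i], repo[i+1:]
    	}
    	return repo, ""
    }
    
    func main() {
    	host, path := splitRepo("docker.io:5000/jujusolutions")
    	fmt.Println(host, path) // prints: docker.io:5000 jujusolutions
    }
    ```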
    
    ## QA steps
    
    There's a unit test which ensures the expected behaviour. There are no
    2.9.8 images yet, but run
    
    ` juju bootstrap microk8s test --config caas-image-repo=docker.io:5000/jujusolutions`
    
    and it won't find the image, but you can check the correct image URL:
    
    ```
    microk8s.kubectl -n controller-test get pod/controller-0 -o json | jq .spec.containers[0].image
    "docker.io:5000/jujusolutions/jujud-operator:2.9.8"
    ```
    
    Also bootstrap a standard microk8s controller to regression test default behaviour.
    
    ## Bug reference
    
    https://bugs.launchpad.net/juju/+bug/1934707
    jujubot committed Jul 7, 2021
    SHA 185fac6
  3. Merge pull request juju#13137 from SimonRichardson/cache-coherence

    juju#13137
    
    The cache would fail to notify if you changed from a value and back to
    the original value (flip-flopping). This is because the underlying hash
    value was never written.
    
    This has two consequences:
    
     1. The cache could never emit a change back to the original value.
     2. Because we never cached the new value, if the next value was the
     same as the old one, we would still emit the event.
    
    The fix is easy: just save the hash after the notify, and we can have
    better cache coherency.
    
    ## QA steps
    
    If you change the model-config logging-config to a new value and back to the 
    old value, it should now correctly go back to the old value. Flip-flopping of values 
    in the logging-config exposes this relatively easily.
    
    ```sh
    $ juju bootstrap lxd test
    $ juju model-config logging-config="<root>=WARN"
    $ juju model-config logging-config="<root>=DEBUG"
    ```
    
    View `$ juju debug-log -m controller` to ensure that the flip-flop happens.
    jujubot committed Jul 7, 2021
    SHA 2ecb859

Commits on Jul 8, 2021

  1. SHA b06dca2
  2. Merge pull request juju#13140 from hpidcock/pull-secrets

    juju#13140
    
    Adds image pull secrets for sidecar charms with private images, such as
    those from Charmhub.
    
    ## QA steps
    
    Deploy charm with private repo images.
    Check image pull secrets exist and images pull.
    Refresh charm with public repo images.
    Check image pull secrets are cleaned up.
    
    ## Documentation changes
    
    N/A
    
    ## Bug reference
    
    https://bugs.launchpad.net/juju/+bug/1934416
    jujubot committed Jul 8, 2021
    SHA f4b4ab4
  3. SHA 9d9bb07
  4. SHA b360f96
  5. Merge pull request juju#13142 from SimonRichardson/2.8-into-2.9-cache

    juju#13142
    
    Merge forward 2.8 into 2.9
    
    2ecb859 (upstream/2.8, origin/2.8, 2.8) Merge pull request juju#13137 from SimonRichardson/cache-coherence
    c973d15 (achilleasa/2.8) Merge pull request juju#13108 from achilleasa/2.8-support-ssh-with-proxy-to-fan-subnets
    f505b12 Merge pull request juju#13097 from achilleasa/2.8-logsink-error-if-persisting-logs-to-db-fails
    
    Conflicts:
    
    CONFLICT (content): Merge conflict in cmd/juju/commands/ssh_unix_test.go
    CONFLICT (content): Merge conflict in cmd/juju/commands/ssh_machine.go
    CONFLICT (content): Merge conflict in cmd/juju/commands/ssh.go
    CONFLICT (content): Merge conflict in cmd/juju/commands/scp.go
    jujubot committed Jul 8, 2021
    SHA 6ad00ba
  6. SHA 1d61d73
  7. Facades: remove old code

    SetApplicationsConfig has been removed in favour of SetConfigs, so the
    tests are no longer required.
    SimonRichardson committed Jul 8, 2021
    SHA b798a86
  8. Cloudconfig: Fix test

    2.9 introduced different return types for GetControllerImagePath, and I
    missed that in the merge; this fixes it.
    SimonRichardson committed Jul 8, 2021
    SHA 0b47094
  9. Integration tests: shfmt tests

    I have no idea why this wasn't picked up; probably a CI day activity.
    For now, fix this so we can land it.
    SimonRichardson committed Jul 8, 2021
    SHA 47adcdb