v1.24 updates #38113

Merged: kinarashah merged 7 commits into rancher:release/v2.6 on Jun 29, 2022

Conversation

kinarashah (Member) commented Jun 27, 2022

vendor updates:

  • upstream k8s components
  • rancher:
    • client-go
    • steve
    • norman
    • wrangler
    • rke
    • aks-operator
    • helm
  • pinned go.opentelemetry.io/ packages because the latest ones cause build-time errors with the 1.24 changes
  • forked aws-iam-authenticator because upstream hasn't merged the fix for 1.24
  • using the beta for capi, because a GA version hasn't been released for 1.24 yet

other changes:

  • golang bumped to go1.17
  • local k3s bumped to 1.24
  • replaced deprecated clusterName field
  • code changes to handle secret creation for the service account token, since it's no longer created automatically in 1.24 (see the sketch below)
  • enabled cri_dockerd by default for 1.24
  • removed containerd from the zypper install; the correct version from the k3s image is used instead

#37711
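
For context on the service account token bullet above, here is a minimal, hypothetical client-go sketch of what creating such a secret explicitly looks like on 1.24+, where the token secret is no longer auto-created. The package and function names are made up; this is illustrative only, not Rancher's implementation:

```go
package serviceaccounttoken

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateTokenSecret creates a service-account-token secret for the given
// service account. On Kubernetes 1.24+ this secret is no longer created
// automatically, so it has to be created explicitly; the token controller
// then populates the "token" key asynchronously.
func CreateTokenSecret(ctx context.Context, client kubernetes.Interface, namespace, saName string) (*corev1.Secret, error) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: saName + "-token-",
			Namespace:    namespace,
			Annotations: map[string]string{
				// "kubernetes.io/service-account.name" ties the secret to the service account.
				corev1.ServiceAccountNameKey: saName,
			},
		},
		Type: corev1.SecretTypeServiceAccountToken,
	}
	created, err := client.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
	if err != nil {
		return nil, fmt.Errorf("creating token secret for %s/%s: %w", namespace, saName, err)
	}
	return created, nil
}
```

The caller still has to wait for the token controller to fill in the token before using it.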

jiaqiluo previously approved these changes Jun 29, 2022

jiaqiluo (Member) left a comment:

LGTM

pkg/apis/go.mod: review thread (outdated, resolved)
kinarashah dismissed stale reviews from jiaqiluo, brandond, and a-blender via 6882fb6 June 29, 2022 16:32
rosskirkpat previously approved these changes Jun 29, 2022
jiaqiluo previously approved these changes Jun 29, 2022
kinarashah (Member, Author) commented:

@jiaqiluo - addressed the go-client version in pkg/apis.
@rosskirkpat - squashed the commit to read secret from client.
Sorry for force pushing; it removed the reviews, so I'd need approvals again.

I'll consider @brandond's and @annablender's approvals as-is, since I didn't change anything around that code. Thanks much for reviewing!

kinarashah merged commit 8819343 into rancher:release/v2.6 on Jun 29, 2022
kinarashah deleted the 124 branch June 29, 2022 17:45
KevinJoiner pushed a commit to KevinJoiner/rancher that referenced this pull request Jan 23, 2023
The "cattle" service account on a downstream cluster is the account that
Rancher uses to connect as an admin to the downstream cluster. Without
this change, if the cattle service account's token is deleted, the
cluster agent will regenerate it identically. This is a problem because
it makes rotation of the token nontrivial.

We can't craft the JWT ourselves or influence what claims are included
in it; that is done within kubernetes. The only way to change the
resulting JWT is to change the values kubernetes uses for claims. The
only option is to make the secret name unique[1]. All other claims come
from the service account, which we do not want to have to change in
order to rotate the token.

This change addresses the problem by using GenerateName when creating
the secret so that it will be unique every time. However, since the name
is no longer predictable, this causes problems when Rancher tries to
look up the token. We now need to look up the name of the secret from
the service account object. A further complication is that for
kubernetes 1.24, the secret name is no longer stored on the service
account, so now we set it explicitly. An extra benefit of this approach
is that we no longer create multiple tokens for service accounts on k8s
<1.24, since creating the token is skipped if it is found on the service
account.

This change refactors any code that was creating a service account token
to use the serviceaccounttoken.EnsureSecretForServiceAccount function in
order to be consistent everywhere. The function is updated to use a
backoff routine instead of an infinite loop to check the state of the
secret. It is flexible enough to use controller caches for callers with
access to that, and falls back to regular clients for remote callers
such as the agent.

See also the change that introduced this functionality in Rancher[2].

[1] https://github.com/kubernetes/kubernetes/blob/v1.25.2/pkg/serviceaccount/legacy.go#L39
[2] rancher#38113

(cherry picked from commit 37c7ee1)
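
To make the mechanism described in this commit message concrete, below is a simplified, hypothetical sketch of an EnsureSecretForServiceAccount-style helper using client-go. The real Rancher function differs in detail (controller caches, backoff, error handling); this only illustrates the two ideas above: GenerateName for a unique secret name, and recording that name on the service account for 1.24+:

```go
package serviceaccounttoken

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureSecretForServiceAccount reuses the token secret already referenced by
// the service account if there is one; otherwise it creates a new secret with
// a generated (unique) name and records that name on the service account so
// later lookups work on 1.24+, where the reference is no longer set for us.
func ensureSecretForServiceAccount(ctx context.Context, client kubernetes.Interface, sa *corev1.ServiceAccount) (*corev1.Secret, error) {
	secrets := client.CoreV1().Secrets(sa.Namespace)

	// Skip creation if the service account already points at a secret.
	if len(sa.Secrets) > 0 {
		if existing, err := secrets.Get(ctx, sa.Secrets[0].Name, metav1.GetOptions{}); err == nil {
			return existing, nil
		}
	}

	// GenerateName guarantees a fresh name every time, so rotating the token
	// (deleting the secret and recreating it) always yields a different JWT.
	secret, err := secrets.Create(ctx, &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: sa.Name + "-token-",
			Namespace:    sa.Namespace,
			Annotations:  map[string]string{corev1.ServiceAccountNameKey: sa.Name},
		},
		Type: corev1.SecretTypeServiceAccountToken,
	}, metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}

	// Record the secret name on the service account explicitly, since 1.24
	// no longer does this automatically.
	sa = sa.DeepCopy()
	sa.Secrets = []corev1.ObjectReference{{Name: secret.Name}}
	if _, err := client.CoreV1().ServiceAccounts(sa.Namespace).Update(ctx, sa, metav1.UpdateOptions{}); err != nil {
		return nil, err
	}
	return secret, nil
}
```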
KevinJoiner pushed a commit to KevinJoiner/rancher that referenced this pull request Jan 24, 2023
cmurphy added a commit to jakefhyde/rancher that referenced this pull request Jan 27, 2023
cmurphy added a commit to jakefhyde/rancher that referenced this pull request Jan 27, 2023
MbolotSuse pushed a commit to jakefhyde/rancher that referenced this pull request Feb 8, 2023
MbolotSuse pushed a commit to jakefhyde/rancher that referenced this pull request Feb 9, 2023
vivek-shilimkar pushed a commit to vivek-shilimkar/rancher that referenced this pull request Mar 6, 2023
Abhijithang added a commit to verrazzano/rancher that referenced this pull request Apr 14, 2023
* Revert "Fix for cluster yaml"

This reverts commit 6c477a5.

* Enhance local auth

* test for Enhance local auth

* Fixing bugs with auth providers

* Tests for fixing bugs with auth providers

* Ensure cattle token secret has unique name

* Fix secretmigrator conditions and copies

Make the setting of conditions consistent. Conditions do not need to be
set explicitly when DoUntilTrue is used, but the object returned from
DoUntilTrue does need to be explicitly updated. Use copies of the object
until the object is ready to be updated.

* Move ACI credentials to cluster and ctr secrets

* Make secretmigrator assemblers safe

The function comments on the secretmigrator assembler functions
indicated they would never change the original object, but this was not
true. This change ensures the objects are deepcopied within the function
to make them consistent with their documentation. It also removes
now-unnecessary deepcopies from the calling functions. Also corrects a
badly copy-pasted comment on one assembler function.
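
For illustration, a tiny sketch of the deepcopy-first pattern the assemblers now follow; ClusterSpec and its field are stand-ins, not Rancher's actual types:

```go
package secretmigrator

import corev1 "k8s.io/api/core/v1"

// ClusterSpec stands in for the real spec type; only what the sketch needs.
type ClusterSpec struct {
	PrivateRegistryPassword string
}

// DeepCopy returns a copy; real API types have generated deepcopy methods.
func (c *ClusterSpec) DeepCopy() *ClusterSpec {
	out := *c
	return &out
}

// AssembleRegistryCredential copies the spec before filling in secret data,
// so the caller's object is never mutated, matching the documented contract.
func AssembleRegistryCredential(spec *ClusterSpec, secret *corev1.Secret) *ClusterSpec {
	out := spec.DeepCopy()
	out.PrivateRegistryPassword = string(secret.Data["password"])
	return out
}
```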

* Fix service account token secret backoff timing

Without this change, if a user is created, is added to a project or
cluster, and then makes a request in rapid succession (as in a CI case),
it is highly likely that the PRTB or CRTB controller will not have
finished adding the user's rolebindings to the project or cluster in
between steps 2 and 3. The cause is the refactoring of the service
account token generator in 37c7ee1 which added a wait to the service
account creator to ensure the token was populated before returning. The
wait loop would check the current secret object, then refresh the
secret, then wait 2 seconds before checking that secret again. The
original secret object was certainly not populated, and the first
refresh was only a few nanoseconds afterward and so the first-time
refreshed secret was also almost certainly not populated, so it was not
until the second refresh that the secret was populated and ready. With
the loop timing set to 2 seconds, this meant the wait took a full 4
seconds.

This change reorders the wait loop to refresh the secret first thing, to
avoid an extra loop, and reduces the step period to 2 milliseconds. This
is enough time for the token to populate on the 2nd or 3rd retry and
makes it much more likely the controller can finish setting up
rolebindings before the user needs them.
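
A rough sketch of the reordered wait loop described above, using apimachinery's wait.ExponentialBackoff; the exact timings and client wiring in Rancher may differ, so treat this as illustrative:

```go
package serviceaccounttoken

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForToken refreshes the secret at the top of every attempt (instead of
// checking the stale object that was just created) and starts with a very
// short step, so the token is typically found on the second or third retry
// rather than after several full seconds.
func waitForToken(ctx context.Context, client kubernetes.Interface, namespace, name string) (*corev1.Secret, error) {
	var populated *corev1.Secret
	backoff := wait.Backoff{
		Duration: 2 * time.Millisecond, // small initial step
		Factor:   2,
		Steps:    10,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		secret, err := client.CoreV1().Secrets(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if len(secret.Data[corev1.ServiceAccountTokenKey]) > 0 { // "token" key populated
			populated = secret
			return true, nil
		}
		return false, nil // not ready yet; back off and retry
	})
	if err != nil {
		return nil, fmt.Errorf("waiting for token in secret %s/%s: %w", namespace, name, err)
	}
	return populated, nil
}
```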

* Fixing git issues

* Tests for fixing git issues

* Update cluster service account token on change

Previously, if the cattle token was rotated on an RKE2 downstream, the
token would not propagate to the local cluster. This was not an issue
for RKE clusters because the service account token is updated through
the clusterdeploy controller and kontainer-engine driver. RKE2 clusters
only ever had the token set in the tunnel authorizer, and only upon
cluster creation. This change updates the tunnel authorizer to compare
and update the token if necessary. The local secret is fully rotated
rather than updated in-place in order to ensure the user context
controller is triggered so that user controllers are automatically
refreshed.

(cherry picked from commit ba7af6d)
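
A hypothetical sketch of the compare-and-rotate behavior described above; the names and structure are illustrative and not the actual tunnel authorizer code:

```go
package clusterauth

import (
	"bytes"
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// syncAgentToken compares the token reported by the downstream agent with the
// one stored in the local cluster and, when they differ, rotates the local
// secret by deleting and recreating it rather than patching it in place, so
// controllers watching for new secrets are retriggered.
func syncAgentToken(ctx context.Context, local kubernetes.Interface, namespace, name string, agentToken []byte) error {
	secrets := local.CoreV1().Secrets(namespace)

	existing, err := secrets.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if bytes.Equal(existing.Data["token"], agentToken) {
		return nil // token unchanged, nothing to do
	}

	// Full rotation: delete the old secret and create a fresh one.
	if err := secrets.Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	_, err = secrets.Create(ctx, &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Data:       map[string][]byte{"token": agentToken},
	}, metav1.CreateOptions{})
	return err
}
```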

* [2.7] Bump SAML library

* Minor refactoring

* Implement migration for RKE fields

* Minor refactoring

* Migrate RKE fields for cluster templates

* Add more tests

* Fix secretmigrator condition and tests

Use the correct condition and Do* function for the new RKE field
migration. We don't need to explicitly set the condition to True when
DoUntilTrue is used.

Fix the unit tests to check that the object doesn't change when the
migration is recalled, to show that the conditions aren't changed every
controller sync.

(cherry picked from commit b233d70)

* Remove rancher-runtime references

* Update system agent to 0.3.2 (un-rc)

* Forcing adoption of resource for rancher-webhook

The rancher webhook now includes the mutating webhook as part of
the chart. In order to avoid errors when upgrading the webhook,
we need to force adoption of the active resources.

* Check release images using skopeo

* Add CRTB, PRTB, GRB, User cleanup when an auth provider is disabled

* Handle the error when listing groups with Azure AD

* Revert "Add pspEnablement value to rancher ConfigMap and pass it to webhook chart"

This reverts commit c789e73.

* prefix the names of builtin PSACTs with 'rancher-'

* bump rancher-webhook to 2.0.2+up0.3.2-rc17

* OpenLDAP re-bind SA as fallback

* Wrap the returned errors

* Update comments

* Check if secret already exists before migrating

* Removing the "required" attribute for podSecurityPolicyTemplateId

* Remove the Manage PodSecurityAdmissionConfigurationTemplates role

* go generate

* bump ui tag to v2.7.2-rc2

* Encoded private registry auth config

* Allow for 503 when creating or mutating local cluster and retry (rancher#40337)

Signed-off-by: Chris Kim <oats87g@gmail.com>

* Delete webhook-related resources on cluster detachment

* pass specific values to SUC chart to enable/disable psp

* init test cases for rancher prime

* removing default value for corral repo to reduce overhead

* Use the unstructured client to update auth config objects

* bump ui tag to v2.7.2-rc3

* Add a cleanup entry for KeyCloak OIDC client secret

* Bump Wrangler to fix CVEs

Signed-off-by: Guilherme Macedo <guilherme.macedo@suse.com>

* Updating to Fleet v0.6.0-rc.3

* delay planner execution based off of SUC status

* merge etcd backup restore tests

* Updating to Fleet v0.6.0-rc.4

* Bump Rancher-webhook to v0.3.2-rc18

* Fix the issue that PSACT is not restored on RKE1 clusters.

Previously, when restoring a snapshot on an RKE1 cluster and choosing "restore cluster config, kubernetes version and etcd", the PSACT was not restored.
This happened because the field ".spec.defaultPodSecurityAdmissionConfigurationTemplateName" was not set back to the value from the backup.
This fix sets the field to the value from the backup.

* Skip provtest CI when local repo branch is used

* Fix the issue that PSACT is not restored on RKE2/K3S clusters.

* Bump Harvester node driver to v0.6.2

Signed-off-by: futuretea <Hang.Yu@suse.com>

* go get github.com/rancher/rke v1.4.4-rc1

* Fix custom clusters not deleting controlplane nodes

* Update wrangler to v1.1.0

* Update rancher/shell image version (rancher#40613)

* chore: Updated the content of the file "/tmp/updatecli/github/rancher/rancher/pkg/settings/setting.go"

Made with ❤️️ by updatecli

* chore: Updated the content of the file "/tmp/updatecli/github/rancher/rancher/tests/v2/validation/charts/monitoring.go"

Made with ❤️️ by updatecli

* chore: Updated the content of the file "/tmp/updatecli/github/rancher/rancher/tests/framework/extensions/clusters/import.go"

Made with ❤️️ by updatecli

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Bump golang.org/x/net from 0.2.0 to 0.7.0 (rancher#40595)

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.2.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](golang/net@v0.2.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump cryptography from 3.4.7 to 39.0.1 in /tests/validation (rancher#40454)

Bumps [cryptography](https://github.com/pyca/cryptography) from 3.4.7 to 39.0.1.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](pyca/cryptography@3.4.7...39.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* [dev-v2.7] Bump aks-operator to 1.1.0-rc5

Fixes: rancher#40214

* Bump golang.org/x/net from 0.2.0 to 0.7.0 in /pkg/apis (rancher#40594)

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.2.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](golang/net@v0.2.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump cryptography from 3.4.7 to 39.0.1 in /tests/integration (rancher#40453)

Bumps [cryptography](https://github.com/pyca/cryptography) from 3.4.7 to 39.0.1.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](pyca/cryptography@3.4.7...39.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump golang.org/x/text from 0.3.7 to 0.3.8 in /pkg/client (rancher#40661)

Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.7 to 0.3.8.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](golang/text@v0.3.7...v0.3.8)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Allow retry on encryption config migration and local cluster creation for clusters if error is a 409, 500, or 503 (rancher#40666)

Signed-off-by: Chris Kim <oats87g@gmail.com>

* Bump rancher-webhook to v0.3.2-rc19

* Adds kind/tech-debt label to stale bot ignore list

* Bump steve version

* Fix the issue where the returned default version for rke2/k3s is not the highest because versions are sorted as a list of strings.

* chore: Updated the content of the file "/tmp/updatecli/github/rancher/rancher/tests/v2/validation/charts/monitoring.go"

Made with ❤️️ by updatecli

* chore: Updated the content of the file "/tmp/updatecli/github/rancher/rancher/tests/framework/extensions/clusters/import.go"

Made with ❤️️ by updatecli

* chore: Updated the content of the file "/tmp/updatecli/github/rancher/rancher/pkg/settings/setting.go"

Made with ❤️️ by updatecli

* Changing webhook in remote to use the right image

The rancher webhook is now deployed in downstream clusters. This
caused an issue where, in airgap scenarios, the webhook and the
pods that were deploying the webhook would not use the specified
airgap registry. This change allows the agent to pass down an
override image, when applicable.

* Regenerating test mock and fixing broken tests

Regenerates a mock used for testing, fixes tests broken by an
interface change, and adds a new test case for installing the
webhook in the case an override image is present

* update rke to v1.4.4-rc2

* Validate kubeconfig on retrieval in kubeconfigmanager (rancher#40745)

Signed-off-by: Chris Kim <oats87g@gmail.com>

* Use one instance of the kubeconfig manager to eliminate potential for racing (rancher#40670)

Signed-off-by: Chris Kim <oats87g@gmail.com>

* add rke2 specific safe concat, update SUC concat

* refactor to remove testingT from session call

* go generate

* prepare for rc5

* add the missing sync of the field DefaultPodSecurityAdmissionConfigurationTemplateName between the mgmt cluster and provisioning cluster for completeness, although it doesn't affect anything functionally.

* update the descriptions of PSACT rancher-restricted and rancher-privileged to be more specific

* update uidashboardindex and uiindex settings to reference release-2.7.2

* go get github.com/rancher/rke v1.4.4-rc3

* Preserve dynamic schema on machine pool

* Address review comments

* Add DynamicSchemaSpec to provisioning cluster

* update the file delivery feature in v2prov such that only the configmap/secret having the expected content will be delivered

* Add dynamic schema spec to machine pools

* Updating README with v2.6.11

* update the SUC version in the Dockerfile

* Bump Harvester node driver to v0.6.3

Signed-off-by: futuretea <Hang.Yu@suse.com>

* Fix json marshaling

* Update webhook

* bump rancher-webhook to 0.3.2-rc21

* Ignore treating obj as a helm release obj if the data is too small

* Address review comments

* Bump aks-operator to rc6

* Add new scripts folder for standalone scripts

* newimage for sidekick

* update test image

* go get github.com/rancher/rke v1.4.4-rc5

* bump ui tag to v2.7.2-rc6

* add mirrored-sig-storage-snapshot-controller and mirrored-sig-storage-snapshot-controller-webhook to the image source code origins

* Log waiting for dynamic schema instead of error

* Ensure cluster controllers are always reenqueued

Before 23d89f9, when a cluster entered the managementapi user
controller, it was guaranteed to enqueue all cluster controllers at the
end of the sync unless it encountered an error. With 23d89f9, the sync
would exit early in many cases, thereby failing to enqueue the cluster
controllers. When a cluster is deleted, one of these cluster controllers
is supposed to call Stop on the cluster manager and thereby stop the
context, but since this was not always happening, the context sometimes
remained running and would mean the healthsyncer controller continued to
run and interminably emit errors for the now missing cluster.

This change addresses the problem by adding calls to enqueue the cluster
controllers after every success return.

It also prevents reenqueuing nil clusters by copying the cluster objects
and never nil-ifying the clusters on errors.

To improve readability and debuggability, all errors returned from the
sync method are wrapped, and the private const `_all_` is replaced by
the imported `AllKey` from wrangler, to make it clear that this key has
meaning to wrangler and is not a random string.
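
A schematic sketch of the "enqueue on every successful return" pattern this commit describes; every type and helper here is a stand-in rather than Rancher's real code:

```go
package usercontrollers

import "fmt"

// Cluster is a minimal stand-in for the management cluster object.
type Cluster struct {
	Name  string
	Ready bool
}

// DeepCopy returns a copy; real API types have generated deepcopy methods.
func (c *Cluster) DeepCopy() *Cluster {
	out := *c
	return &out
}

// Enqueuer is whatever re-enqueues the cluster controllers; in Rancher this
// goes through the cluster manager, here it is reduced to an interface.
type Enqueuer interface {
	EnqueueClusterControllers(clusterName string)
}

// sync enqueues the cluster controllers on every successful return path, so
// an early exit can no longer skip the enqueue that cluster deletion (calling
// Stop on the cluster manager) depends on. It works on a copy and never
// nil-ifies the cluster on error.
func sync(e Enqueuer, cluster *Cluster) (*Cluster, error) {
	if cluster == nil {
		return nil, nil
	}
	c := cluster.DeepCopy()

	if c.Ready {
		e.EnqueueClusterControllers(c.Name) // early exit still re-enqueues
		return c, nil
	}
	if err := reconcile(c); err != nil {
		return c, fmt.Errorf("syncing cluster %s: %w", c.Name, err) // wrap errors, keep the object
	}
	e.EnqueueClusterControllers(c.Name)
	return c, nil
}

func reconcile(c *Cluster) error {
	c.Ready = true
	return nil
}
```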

* [dev-2.7] Bump aks-operator to 102.0.0+up1.1.0-rc7

* Reset auth config's fields when admins disable auth provider through API

* Update tests to use rancher shell image's setting value.

* Bump rancher-webhook to v0.3.2-rc22

* Add cattle-elemental-system to list of system namespaces

This is needed to be able to backup Elemental resources.

Signed-off-by: Loic Devulder <ldevulder@suse.com>

* unset the value of AppliedPodSecurityPolicyTemplateName on clusters running 1.25 and above, where the PSP feature is removed

* Fix the issue that DefaultPodSecurityPolicyTemplateName is not restored on RKE2/K3S clusters

* update error message

* bump ui tag to v2.7.2-rc7

* bump rke to v1.4.4-rc6

* Update gke.tf

Updating to a newer Google Provider version to see if this resolves GKE HA Deploy errors

* Include additional resource checks

* Fix directory path for hardening K3s

* Revert "Merge pull request rancher#40391 from MbolotSuse/ldap-rebind-2.7"

This reverts commit f2faf1a, reversing
changes made to 8835c43.

* add kubelet certificate ip delivery to agent

* This is a standalone script for final rc checks that was previously stored locally and is now being added to the repo.

* Update norman

* Add validation test for post kdm OOB release checks

* Add validation test for etcd backup and restore with cluster config

* go get github.com/rancher/rke v1.4.4-rc7

* Adding automation script to validate PSA rbac

Signed-off-by: Anupama <38144301+anupama2501@users.noreply.github.com>

* bump ui tag to v2.7.2-rc8

* Add retry to cleanup

* Wait for user removal from project to finish

* Add script to populate ECR with Rancher images

* Un-rc rancher-webhook and rancher-csp-adapter

Bump rancher-webhook to v0.3.2 and rancher-csp-adapter to v2.0.1

* Use dynamic clients get

* Fix for local and wait on remove

Now short-circuits before waiting on a condition that will never
be true for the "local" cluster, if the passed cluster is
"local". Also now waits for clusterrolebindings to be deleted when
removing a user from a cluster.

* Add ability to list crbs

* Updating to Fleet v0.6.0

* Bump aks and eks operators to final versions

* Refactoring cluster provisioning for rke1, rke2, and k3s cluster creation to be used in other test suites

* Add etcd/cp shared node test cases to provisioning suite

* initial registries setup and validation

* Adding restricted admin P0 usecases to the current rbac automation suite

* Watch interface needs an admin client; in tests, if we try to use a standard user client, the watch interface gets a forbidden error.

* go get github.com/rancher/rke v1.4.4

* Updated Dockerfile and Jenkinsfile to account for the new Go path. Fixed some formatting/lint issues with some provisioning tests.

* Add missing resource checks to RKE2 custom cluster

* prepare for rc9

---------

Signed-off-by: Chris Kim <oats87g@gmail.com>
Signed-off-by: Guilherme Macedo <guilherme.macedo@suse.com>
Signed-off-by: futuretea <Hang.Yu@suse.com>
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Loic Devulder <ldevulder@suse.com>
Signed-off-by: Anupama <38144301+anupama2501@users.noreply.github.com>
Co-authored-by: Michael Bolot <michael.bolot@suse.com>
Co-authored-by: Colleen Murphy <colleen.murphy@suse.com>
Co-authored-by: Max Sokolovsky <genexpr@protonmail.com>
Co-authored-by: Jake Hyde <jakefhyde@gmail.com>
Co-authored-by: Sebastiaan van Steenis <mail@superseb.nl>
Co-authored-by: Israel Gomez <israel.gomez@suse.com>
Co-authored-by: Sebastiaan van Steenis <superseb@users.noreply.github.com>
Co-authored-by: Simon Bernier St-Pierre <simon.stpierre@suse.com>
Co-authored-by: Simon Bernier St-Pierre <sbstp@users.noreply.github.com>
Co-authored-by: Jiaqi Luo <6218999+jiaqiluo@users.noreply.github.com>
Co-authored-by: Jonathan Rial <jonathan.rial@hefr.ch>
Co-authored-by: Chad Roberts <chad.roberts@suse.com>
Co-authored-by: Chris Kim <30601846+Oats87@users.noreply.github.com>
Co-authored-by: Jake Hyde <33796120+jakefhyde@users.noreply.github.com>
Co-authored-by: Harrison Affel <harrisonaffel@gmail.com>
Co-authored-by: Caleb Warren <calebwarren10@yahoo.com>
Co-authored-by: Guilherme Macedo <guilherme.macedo@suse.com>
Co-authored-by: Tim Hardeck <thardeck@suse.com>
Co-authored-by: vivek-infracloud <vivek.shilimkar@infracloud.io>
Co-authored-by: Caleb Bron <cbron@users.noreply.github.com>
Co-authored-by: futuretea <Hang.Yu@suse.com>
Co-authored-by: rishabhmsra <36376154+rishabhmsra@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Andy Blendermann <andyblendermann@gmail.com>
Co-authored-by: Ricardo Weir <ricardo.weir@suse.com>
Co-authored-by: Rancher Security Bot <119513217+rancher-security-bot@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michal Jura <mjura@suse.com>
Co-authored-by: Michal Jura <mjura@users.noreply.github.com>
Co-authored-by: Kevin Joiner <kevinjoiner@users.noreply.github.com>
Co-authored-by: Kevin Joiner <10265309+KevinJoiner@users.noreply.github.com>
Co-authored-by: Caleb Bron <caleb@rancher.com>
Co-authored-by: Paulo Gomes <paulo.gomes@suse.com>
Co-authored-by: Kinara Shah <kinara@rancher.com>
Co-authored-by: Vivek Shilimkar <83208989+vivek-infracloud@users.noreply.github.com>
Co-authored-by: Nancy Butler <42977925+mantis-toboggan-md@users.noreply.github.com>
Co-authored-by: Sergey Nasovich <85187633+snasovich@users.noreply.github.com>
Co-authored-by: Geet Samra <amangeet.samra@suse.com>
Co-authored-by: Geet Samra <99695266+geethub97@users.noreply.github.com>
Co-authored-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
Co-authored-by: Arvind Iyengar <arvind.iyengar@rancher.com>
Co-authored-by: Markus Walker <markus.walker.25@gmail.com>
Co-authored-by: Colleen Murphy <cmurphy@users.noreply.github.com>
Co-authored-by: caliskanugur <iamugurcaliskan@gmail.com>
Co-authored-by: Loic Devulder <ldevulder@suse.com>
Co-authored-by: Klaus Kämpf <kkaempf@suse.de>
Co-authored-by: Sowmya Viswanathan <viswanathan.sowmya@gmail.com>
Co-authored-by: Jameson McGhee <jameson.mcghee1@gmail.com>
Co-authored-by: thaneunsoo <tim.han@suse.com>
Co-authored-by: rishabh <rishabh@infracloud.io>
Co-authored-by: Anupama <38144301+anupama2501@users.noreply.github.com>
Co-authored-by: fleet-bot <fleet@suse.de>
Co-authored-by: Izaac Zavaleta <izaac@rancher.com>