
Failed to find exact match for v1.Namespace by [kind, name, singularName, shortNames] #351

Closed
rabin-io opened this issue Jan 30, 2022 · 19 comments · Fixed by #371

@rabin-io

rabin-io commented Jan 30, 2022

SUMMARY

When trying to create a namespace for a new operator on OpenShift 4.9.15, I get the error message below when I run the playbook from a RHEL 8.2 node; from my local machine running Fedora 35, I don't have this issue.

TASK [olm-operator : Create Namespace for OLM operator] *************************************************
fatal: [localhost]: FAILED! => changed=false 
  msg: Failed to find exact match for v1.Namespace by [kind, name, singularName, shortNames]
ISSUE TYPE
  • Bug Report
COMPONENT NAME

kubernetes.core.k8s

ANSIBLE VERSION
  • From RHEL 8.2 node (with the problem)
ansible [core 2.12.1]
  config file = /root/workspace/deploy-cnv-4.9-on-ibmc-upi-ryasharz/cnv-qe-automation/ocp/bm-upi/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/.local/lib/python3.8/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /root/.local/bin/ansible
  python version = 3.8.0 (default, Mar  9 2020, 18:02:46) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
  jinja version = 3.0.3
  libyaml = True
  • From Fedora 35 node (without the problem)
ansible [core 2.12.1]
  config file = ${HOME}/src/ansible-openshift-upi-install/ansible.cfg
  configured module search path = ['${HOME}/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = ${HOME}/.local/lib/python3.10/site-packages/ansible
  ansible collection location = ${HOME}/.ansible/collections:/usr/share/ansible/collections
  executable location = ${HOME}/.local/bin/ansible
  python version = 3.10.2 (main, Jan 17 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
  jinja version = 3.0.3
  libyaml = True
COLLECTION VERSION
# ${HOME}/.local/lib/python3.10/site-packages/ansible_collections
Collection      Version
--------------- -------
kubernetes.core 2.2.3

# ${HOME}/.ansible/collections/ansible_collections
Collection      Version
--------------- -------
kubernetes.core 2.2.2  
CONFIGURATION
CACHE_PLUGIN(${WORKDIR}/bm-upi/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(${WORKDIR}/bm-upi/ansible.cfg) = ./.ansible/facts_cache
CACHE_PLUGIN_TIMEOUT(${WORKDIR}/bm-upi/ansible.cfg) = 86400
DEFAULT_GATHERING(${WORKDIR}/bm-upi/ansible.cfg) = smart
DEFAULT_HASH_BEHAVIOUR(${WORKDIR}/bm-upi/ansible.cfg) = merge
DEFAULT_HOST_LIST(${WORKDIR}/bm-upi/ansible.cfg) = ['${WORKDIR}/bm-upi/inventory/beaker.yaml']
DEFAULT_LOAD_CALLBACK_PLUGINS(${WORKDIR}/bm-upi/ansible.cfg) = True
DEFAULT_MANAGED_STR(${WORKDIR}/bm-upi/ansible.cfg) = This file is managed by Ansible.%n
template: {file}
date    : %Y-%m-%d %H:%M:%S
by      : {uid}@{host}
DEFAULT_STDOUT_CALLBACK(${WORKDIR}/bm-upi/ansible.cfg) = yaml
RETRY_FILES_SAVE_PATH(${WORKDIR}/bm-upi/ansible.cfg) = ${WORKDIR}/bm-upi/.ansible/retry

OS / ENVIRONMENT
  • The failing node is running RHEL 8.2
STEPS TO REPRODUCE
---
- hosts: localhost
  gather_facts: false
  vars:
  tasks:

    - name: Create Namespace for OLM operator
      kubernetes.core.k8s:
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: local-storage-operator
            labels: "{{ ns_labels | default(omit) }}"
            annotations: "{{ ns_annotations | default(omit) }}"
EXPECTED RESULTS

Apply/create the Namespace, or just skip it if the Namespace already exists.

ACTUAL RESULTS

Fail with the message

TASK [Create Namespace for OLM operator] ******************************************************************************************************
fatal: [localhost]: FAILED! => changed=false 
  msg: Failed to find exact match for v1.Namespace by [kind, name, singularName, shortNames]

https://gist.github.com/rabin-io/ac9e6f81c377e037804096bb61647ac9

@Akasurde
Member

@rabin-io Thanks for reporting this issue. Could you please mention the Kubernetes library version, for example:

pip list | grep kuber
kubernetes                         12.0.1
kubernetes-validate                1.19.0

@rabin-io
Author

On both nodes

kubernetes               12.0.1

@rabin-io
Author

I tried to diff the zipped AnsiballZ_k8s.py files to see if there is anything different in the generated output of the modules, but they seem to be the same.

grep 'ZIPDATA = ' /tmp/rhel.AnsiballZ_k8s.py | awk '{print($3)}' | tr -d '"' | base64 -d > /tmp/r/rhel.zip
grep 'ZIPDATA = ' /tmp/fedora.AnsiballZ_k8s.py | awk '{print($3)}' | tr -d '"' | base64 -d > /tmp/f/fedora.zip
# for folders r and f unzip each file
diff -y f r
Common subdirectories: f/ansible and r/ansible
Common subdirectories: f/ansible_collections and r/ansible_collections
Only in f: fedora.zip
Only in r: rhel.zip

@gravesm
Member

gravesm commented Feb 1, 2022

Per our debugging session, I'm going to close this as it seems to be working now. Feel free to reopen if needed.

gravesm closed this as completed Feb 1, 2022
@Akasurde
Member

Akasurde commented Feb 1, 2022

@gravesm Could you please add the debugging steps and output here so that we record them for future use?

@gravesm
Member

gravesm commented Feb 1, 2022

The steps just involved me watching it get run. That was enough to make it work. My best guess is that there was probably something cached somewhere that was causing the problem and it eventually just expired. I don't have any other explanation for it. There's no reason I can see why it would have failed in the first place.

@rabin-io
Author

rabin-io commented Feb 1, 2022

@gravesm I think you are right. It started working once I created a new user and ran the playbook from it. Running the playbook as the root user (yes, I know, bad idea) still reproduces the error.

Another thing I'd like to add here is a code snippet that can help debug the problem, based on the location where my playbook failed and the stack trace.

The full traceback is:
  File "/tmp/ansible_kubernetes.core.k8s_payload_78xg0aj9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/common.py", line 243, in find_resource
    return self.client.resources.get(api_version=api_version, short_names=[kind])
  File "/tmp/ansible_kubernetes.core.k8s_payload_78xg0aj9/ansible_kubernetes.core.k8s_payload.zip/ansible_collections/kubernetes/core/plugins/module_utils/client/discovery.py", line 140, in get
    results = self.search(**kwargs)
  File "/root/.local/lib/python3.8/site-packages/kubernetes/dynamic/discovery.py", line 237, in search                                                  
    results = self.__search(self.__build_search(**kwargs), self.__resources, [])                                                                        
  File "/root/.local/lib/python3.8/site-packages/kubernetes/dynamic/discovery.py", line 283, in __search                                                
    matches.extend(self.__search([key] + parts[1:], resources, reqParams))
  File "/root/.local/lib/python3.8/site-packages/kubernetes/dynamic/discovery.py", line 269, in __search
    return self.__search(parts[1:], resourcePart, reqParams + [part] )
  File "/root/.local/lib/python3.8/site-packages/kubernetes/dynamic/discovery.py", line 283, in __search
    matches.extend(self.__search([key] + parts[1:], resources, reqParams))
  File "/root/.local/lib/python3.8/site-packages/kubernetes/dynamic/discovery.py", line 269, in __search
    return self.__search(parts[1:], resourcePart, reqParams + [part] )
  File "/root/.local/lib/python3.8/site-packages/kubernetes/dynamic/discovery.py", line 261, in __search
    raise ResourceNotFoundError

# Standalone reproducer for the failing lookup, based on the stack trace above.
import kubernetes
from kubernetes.dynamic import DynamicClient

# Build a dynamic client from the local kubeconfig.
kubernetes.config.load_kube_config()
config = kubernetes.client.Configuration().get_default_copy()
client = DynamicClient(kubernetes.client.ApiClient(config))

# This is the lookup that raises ResourceNotFoundError on the failing node.
resource = client.resources.get(api_version="v1", kind="Namespace")

@gravesm
Member

gravesm commented Feb 1, 2022

Could you try running the following in a playbook and post the output?

- kubernetes.core.k8s_cluster_info:
  register: output

- debug:
    msg: "{{ output.apis.v1 }}"

@Akasurde
Member

Akasurde commented Feb 2, 2022

Thanks @gravesm @rabin-io

@rabin-io
Author

rabin-io commented Feb 2, 2022

Hi @gravesm, see the output below.
What's weird is that I re-deployed the cluster last night and didn't encounter this issue, but today I re-ran the job and it failed again with the above message.


PLAY [localhost] ***************************************************************

TASK [kubernetes.core.k8s_cluster_info] ****************************************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => 
  msg:
    Binding:
      categories: []
      name: bindings
      namespaced: true
      preferred: true
      short_names: []
      singular_name: binding
    ComponentStatus:
      categories: []
      name: componentstatuses
      namespaced: false
      preferred: true
      short_names:
      - cs
      singular_name: componentstatuse
    ConfigMap:
      categories: []
      name: configmaps
      namespaced: true
      preferred: true
      short_names:
      - cm
      singular_name: configmap
    Endpoints:
      categories: []
      name: endpoints
      namespaced: true
      preferred: true
      short_names:
      - ep
      singular_name: endpoint
    Event:
      categories: []
      name: events
      namespaced: true
      preferred: true
      short_names:
      - ev
      singular_name: event
    LimitRange:
      categories: []
      name: limitranges
      namespaced: true
      preferred: true
      short_names:
      - limits
      singular_name: limitrange
    List:
      categories: []
      name: null
      namespaced: null
      preferred: null
      short_names: []
      singular_name: null
    Namespace:
      categories: []
      name: namespaces
      namespaced: false
      preferred: true
      short_names:
      - ns
      singular_name: namespace
    Node:
      categories: []
      name: nodes
      namespaced: false
      preferred: true
      short_names:
      - 'no'
      singular_name: node
    PersistentVolume:
      categories: []
      name: persistentvolumes
      namespaced: false
      preferred: true
      short_names:
      - pv
      singular_name: persistentvolume
    PersistentVolumeClaim:
      categories: []
      name: persistentvolumeclaims
      namespaced: true
      preferred: true
      short_names:
      - pvc
      singular_name: persistentvolumeclaim
    Pod:
      categories:
      - all
      name: pods
      namespaced: true
      preferred: true
      short_names:
      - po
      singular_name: pod
    PodTemplate:
      categories: []
      name: podtemplates
      namespaced: true
      preferred: true
      short_names: []
      singular_name: podtemplate
    ReplicationController:
      categories:
      - all
      name: replicationcontrollers
      namespaced: true
      preferred: true
      short_names:
      - rc
      singular_name: replicationcontroller
    ResourceQuota:
      categories: []
      name: resourcequotas
      namespaced: true
      preferred: true
      short_names:
      - quota
      singular_name: resourcequota
    Secret:
      categories: []
      name: secrets
      namespaced: true
      preferred: true
      short_names: []
      singular_name: secret
    Service:
      categories:
      - all
      name: services
      namespaced: true
      preferred: true
      short_names:
      - svc
      singular_name: service
    ServiceAccount:
      categories: []
      name: serviceaccounts
      namespaced: true
      preferred: true
      short_names:
      - sa
      singular_name: serviceaccount

TASK [Create Namespace for OLM operator] ***************************************
fatal: [localhost]: FAILED! => changed=false 
  msg: Failed to find exact match for v1.Namespace by [kind, name, singularName, shortNames]

@gravesm
Member

gravesm commented Feb 2, 2022

Reopening this one. We'll need to find a way to reliably reproduce this. There's no obvious reason why it's randomly failing. The cluster info shows that the namespace resource exists.

gravesm reopened this Feb 2, 2022
@rabin-io
Author

rabin-io commented Feb 3, 2022

Background info

  • The playbook runs on a Jenkins agent node, running RHEL (8.5).
  • The job deploys an OpenShift cluster every few hours for testing.
  • The Jenkins agent node is dedicated to this job only.

Insights

One workaround we found is related to the cache files created under /tmp and prefixed with k8srcp-*: they remain on the node between deployments, so we end up with a stale cache file from an older deployment. Clearing them before running the job seems to avoid the error.

sudo  find /tmp -maxdepth 1 -name "k8srcp*" -ls -delete

One more thing, not sure if it is related: running a packet capture to monitor the API calls of the module shows that the code queries an API endpoint which returns 404; there are 4 attempts and then the connection terminates. Can it be that the ResourceNotFoundError exception is thrown because of the HTTP status code?

(screenshot: Wireshark capture of the API requests returning 404)

@gravesm
Member

gravesm commented Feb 3, 2022

Thanks @rabin-io for the wireshark capture. I think this explains things. I manually forced the kubernetes client to throw a 404 when trying to fetch resource information to write to the cache and am able to get the same failure behavior you are experiencing. I strongly suspect these 404 errors you are seeing in the wireshark capture are responsible.

I don't know why openshift is returning 404 for these ceph apis, though. In the k8s client, when generating the resource cache, there's an initial request to get the list of apis (you should see a GET request for /apis) and then a subsequent request for each of the api groups. These requests are only done once to generate the cache in /tmp. Further k8s client calls will use this resource cache and skip these initial calls. I would think if the ceph api does not exist then it shouldn't be in the initial list of apis (that first /apis request), but that does not seem to be the case. I don't know enough about openshift to know whether this is intentional behavior or a bug. My suspicion is that this is happening shortly after you spin up a new openshift cluster and the api has not fully populated, but I don't know.

If you wanted to try and further investigate, it might be worth seeing if this reliably happens right after the cluster gets spun up. That would be further evidence of a possible race condition. You could try and reproduce this at the openshift level by just directly making calls against the openshift api. First, a GET request to /apis to confirm that ceph.rook.io is in that list and then another GET to /apis/ceph.rook.io/v1 to see if you get a 404.
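
In case it is useful, here is a rough Python sketch of those two checks, reusing the same kubernetes client as the reproducer above. Treat it as an untested illustration: the auth settings and response handling are my assumptions, not something verified against this cluster.

# Sketch (assumptions mine): probe the discovery endpoints directly.
# 1) GET /apis and check whether ceph.rook.io is advertised.
# 2) GET /apis/ceph.rook.io/v1 and see whether it returns 404.
import kubernetes
from kubernetes.client.rest import ApiException

kubernetes.config.load_kube_config()
api_client = kubernetes.client.ApiClient()

# /apis: the API group list the client fetches first when building its cache
groups = kubernetes.client.ApisApi(api_client).get_api_versions()
print([g.name for g in groups.groups if "ceph" in g.name])

# /apis/ceph.rook.io/v1: the group/version request that showed 404 in the capture
try:
    _, status, _ = api_client.call_api(
        "/apis/ceph.rook.io/v1", "GET",
        auth_settings=["BearerToken"], response_type="object",
    )
    print("status:", status)
except ApiException as exc:
    print("status:", exc.status)  # a 404 here would support the race-condition theory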

@rabin-io
Author

rabin-io commented Feb 3, 2022

Can it be that the module is trying to "refresh" the cache based on the last cached values?

As part of the OpenShift deployment I do install ODF/Ceph, so it is possible the cache file gets updated, and when I reset the cluster and redeploy it, the cache file contains a stale reference to the Ceph API, which the module then tries to query.
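
If it helps to confirm that theory, here is a small sketch that points the dynamic client at a fresh cache file so it has to rediscover everything; the cache_file argument is my assumption about the kubernetes dynamic client, so treat this as untested. If the lookup succeeds with a fresh cache but fails with the old /tmp/k8srcp-* file in place, that would point to the stale cache.

# Sketch (assumptions mine): force rediscovery by using a cache file that
# does not exist yet, bypassing any stale /tmp/k8srcp-* cache.
import os
import tempfile

import kubernetes
from kubernetes.dynamic import DynamicClient

kubernetes.config.load_kube_config()
fresh_cache = os.path.join(tempfile.mkdtemp(), "fresh-discovery-cache.json")

client = DynamicClient(
    kubernetes.client.ApiClient(),
    cache_file=fresh_cache,  # assumed parameter; by default the cache lands under /tmp
)
print(client.resources.get(api_version="v1", kind="Namespace"))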

@gravesm
Member

gravesm commented Feb 3, 2022

@rabin-io After talking with @fabianvf I think we may have a workaround for this in #364. If you'd like, you can try using the fix and see if that addresses the failure.

@rabin-io
Author

rabin-io commented Feb 6, 2022

@gravesm I will, but at the moment I have a problem in my infrastructure which is causing problems with the nodes' boot process, so I can't test this right now. I will the moment I resolve my boot issue.

Thanks for the quick response.

softwarefactory-project-zuul bot pushed a commit that referenced this issue Feb 10, 2022
Use resource prefix when apiVersion is v1

SUMMARY
When getting a resource from the core api group, the prefix was not
passed, leading the lookup to happen in all api groups. This broad
search is not really necessary and leads to problems in some corner
cases, for example, when an api is deleted after the api group list is
cached.
This fix uses the 'api' prefix when the apiVersion is 'v1', as this is
almost certainly what the user wants. As a fallback, to retain backwards
compatibility, the old behavior is used if the first lookup failed to
find a resource. Given that the module defaults to 'v1' for the
apiVersion, there are likely many cases where a resource, such as
StatefulSet, is used while failing to provide an apiVersion. While
technically incorrect, this has worked in most cases, so we probably
shouldn't break this behavior.
Fixes #351
ISSUE TYPE

Bugfix Pull Request

COMPONENT NAME
changelogs/fragments/364-use-resource-prefix.yaml
plugins/module_utils/common.py
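
A minimal sketch of the lookup strategy described above, built on the kubernetes dynamic client as an illustration only; this is not the collection's actual code.

# Illustration only (assumptions mine), not the collection's implementation.
from kubernetes.dynamic.exceptions import ResourceNotFoundError

def find_core_or_any(client, kind, api_version):
    if api_version.lower() == "v1":
        try:
            # Core resources (Namespace, Pod, ...) are served under /api/v1,
            # so restrict the lookup to the 'api' prefix first.
            return client.resources.get(prefix="api", api_version=api_version, kind=kind)
        except ResourceNotFoundError:
            pass  # fall back to the old broad search for backwards compatibility
    # Old behaviour: search every cached API group for a match.
    return client.resources.get(api_version=api_version, kind=kind)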
@rabin-io
Author

Hi @gravesm, I can confirm that your branch did work. I ran 3 deploys, with and without my workaround, and none of them failed.

Thanks again.

@houshym

houshym commented Mar 11, 2022

Hi
I have the same error and did not find a resolution for this issue. When I run it from my macOS it works, but it fails from my colleague's macOS. The only difference is that I have ansible [core 2.12.1] and the other workstation has 2.12.2:
ansible [core 2.12.2]
  config file = None
  configured module search path = ['/Users/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/5.3.0/libexec/lib/python3.10/site-packages/ansible
  ansible collection location = /Users/myuser/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.2 (main, Feb 2 2022, 06:19:27) [Clang 13.0.0 (clang-1300.0.29.3)]
  jinja version = 3.0.3
  libyaml = True

@rabin-io
Author

@houshym did you try removing the temp/cache files and running it again?

sudo  find /tmp -maxdepth 1 -name "k8srcp*" -ls -delete

StinkyBenji pushed a commit to StinkyBenji/ansible-tekton-demo that referenced this issue Nov 1, 2023

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[kubernetes.core](https://togithub.com/ansible-collections/kubernetes.core)
| galaxy-collection | minor | `2.2.3` -> `2.4.0` |

---

### Release Notes

<details>
<summary>ansible-collections/kubernetes.core (kubernetes.core)</summary>

###
[`v2.4.0`](https://togithub.com/ansible-collections/kubernetes.core/blob/HEAD/CHANGELOG.rst#v240)

[Compare
Source](https://togithub.com/ansible-collections/kubernetes.core/compare/2.3.2...2.4.0)

\======

## Major Changes

- refactor K8sAnsibleMixin into module_utils/k8s/
([ansible-collections/kubernetes.core#481).

## Minor Changes

- Adjust k8s_user_impersonation tests to be compatible with Kubernetes
1.24
([ansible-collections/kubernetes.core#520).
- add support for dry run with kubernetes client version >=18.20
([ansible-collections/kubernetes.core#245).
-   added ignore.txt for Ansible 2.14 devel branch.
- fixed module_defaults by removing routing hacks from runtime.yml
([ansible-collections/kubernetes.core#347).
- helm - add support for -set-file, -set-json, -set and -set-string
options when running helm install
([ansible-collections/kubernetes.core#533).
- helm - add support for helm dependency update
([ansible-collections/kubernetes.core#208).
- helm - add support for post-renderer flag
([ansible-collections/kubernetes.core#30).
- helm - add support for timeout cli parameter to allow setting Helm
timeout independent of wait
([ansible-collections/kubernetes.core#67).
- helm - add support for wait parameter for helm uninstall command.
(https://github.com/ansible-collections/kubernetes/core/issues/33).
- helm - support repo location for helm diff
([ansible-collections/kubernetes.core#174).
- helm - when ansible is executed in check mode, return the diff between
what's deployed and what will be deployed.
- helm, helm_plugin, helm_info, helm_plugin_info, kubectl - add support
for in-memory kubeconfig.
([ansible-collections/kubernetes.core#492).
- helm_info - add hooks, notes and manifest as part of returned
information
([ansible-collections/kubernetes.core#546).
- helm_info - add release state as a module argument
([ansible-collections/kubernetes.core#377).
- helm_info - added possibility to get all values by adding
get_all_values parameter
([ansible-collections/kubernetes.core#531).
- helm_plugin - Add plugin_version parameter to the helm_plugin module
([ansible-collections/kubernetes.core#157).
-   helm_plugin - Add support for helm plugin update using state=update.
- helm_repository - Ability to replace (overwrite) the repo if it
already exists by forcing
([ansible-collections/kubernetes.core#491).
- helm_repository - add support for pass-credentials cli parameter
([ansible-collections/kubernetes.core#282).
- helm_repository - added support for `host`, `api_key`,
`validate_certs`, and `ca_cert`.
- helm_repository - mark `pass_credentials` as no_log=True to silence
false warning
([ansible-collections/kubernetes.core#412).
- helm_template - add name (NAME of release) and disable_hook as
optional module arguments
([ansible-collections/kubernetes.core#313).
- helm_template - add show_only and release_namespace as module
arguments
([ansible-collections/kubernetes.core#313).
- helm_template - add support for -set-file, -set-json, -set and
-set-string options when running helm template
([ansible-collections/kubernetes.core#546).
- k8s - add no_proxy support to k8s\*
[ansible-collections/kubernetes.core#272).
- k8s - add support for server_side_apply.
([ansible-collections/kubernetes.core#87).
- k8s - add support for user impersonation.
(https://github.com/ansible-collections/kubernetes/core/issues/40).
- k8s - allow resource definition using metadata.generateName
([ansible-collections/kubernetes.core#35).
- k8s lookup plugin - Enable turbo mode via environment variable
([ansible-collections/kubernetes.core#291).
- k8s, k8s_scale, k8s_service - add support for resource definition as
manifest via.
([ansible-collections/kubernetes.core#451).
- k8s_cp - remove dependency with 'find' executable on remote pod when
state=from_pod
([ansible-collections/kubernetes.core#486).
- k8s_drain - Adds `delete_emptydir_data` option to
`k8s_drain.delete_options` to evict pods with an `emptyDir` volume
attached
([ansible-collections/kubernetes.core#322).
- k8s_exec - select first container from the pod if none specified
([ansible-collections/kubernetes.core#358).
- k8s_exec - update deprecation warning for `return_code`
([ansible-collections/kubernetes.core#417).
- k8s_json_patch - minor typo fix in the example section
([ansible-collections/kubernetes.core#411).
- k8s_log - add the `all_containers` for retrieving all containers' logs
in the pod(s).
- k8s_log - added the `previous` parameter for retrieving the previously
terminated pod logs
([ansible-collections/kubernetes.core#437).
- k8s_log - added the `tail_lines` parameter to limit the number of
lines to be retrieved from the end of the logs
([ansible-collections/kubernetes.core#488).
- k8s_rollback - add support for check_mode.
(https://github.com/ansible-collections/kubernetes/core/issues/243).
- k8s_scale - add support for check_mode.
(https://github.com/ansible-collections/kubernetes/core/issues/244).
- kubectl - wait for dd command to complete before proceeding
([ansible-collections/kubernetes.core#321).
- kubectl.py - replace distutils.spawn.find_executable with shutil.which
in the kubectl connection plugin
([ansible-collections/kubernetes.core#456).

## Bugfixes

- Fix dry_run logic - Pass the value dry_run=All instead of dry_run=True
to the client, add conditional check on kubernetes client version as
this feature is supported only for kubernetes >= 18.20.0
([ansible-collections/kubernetes.core#561).
- Fix kubeconfig parameter when multiple config files are provided
([ansible-collections/kubernetes.core#435).
- Helm - Fix issue with alternative kubeconfig provided with
validate_certs=False
([ansible-collections/kubernetes.core#538).
- Various modules and plugins - use vendored version of
`distutils.version` instead of the deprecated Python standard library
`distutils`
([ansible-collections/kubernetes.core#314).
- add missing documentation for filter plugin
kubernetes.core.k8s_config_resource_name
([ansible-collections/kubernetes.core#558).
- common - Ensure the label_selectors parameter of \_wait_for method is
optional.
-   common - handle `aliases` passed from inventory and lookup plugins.
- helm_template - evaluate release_values after values_files, insuring
highest precedence (now same behavior as in helm module).
([ansible-collections/kubernetes.core#348)
-   import exception from `kubernetes.client.rest`.
- k8s - Fix issue with check_mode when using server side apply
([ansible-collections/kubernetes.core#547).
- k8s - Fix issue with server side apply with kubernetes release
'25.3.0'
([ansible-collections/kubernetes.core#548).
- k8s_cp - add support for check_mode
([ansible-collections/kubernetes.core#380).
- k8s_drain - fix error caused by accessing an undefined variable when
pods have local storage
([ansible-collections/kubernetes.core#292).
- k8s_info - don't wait on empty List resources
([ansible-collections/kubernetes.core#253).
- k8s_info - fix issue when module returns successful true after the
resource cache has been established during periods where communication
to the api-server is not possible
([ansible-collections/kubernetes.core#508).
- k8s_log - Fix module traceback when no resource found
([ansible-collections/kubernetes.core#479).
- k8s_log - fix exception raised when the name is not provided for
resources requiring.
([ansible-collections/kubernetes.core#514)
- k8s_scale - fix waiting on statefulset when scaled down to 0 replicas
([ansible-collections/kubernetes.core#203).
- module_utils.common - change default opening mode to read-bytes to
avoid bad interpretation of non ascii characters and strings, often
present in 3rd party manifests.
- module_utils/k8s/client.py - fix issue when trying to authenticate
with host, client_cert and client_key parameters only.
- remove binary file from k8s_cp test suite
([ansible-collections/kubernetes.core#298).
- use resource prefix when finding resource and apiVersion is v1
([ansible-collections/kubernetes.core#351).

## New Modules

- helm_pull - download a chart from a repository and (optionally) unpack
it in local directory.

###
[`v2.3.2`](https://togithub.com/ansible-collections/kubernetes.core/compare/2.3.1...2.3.2)

[Compare
Source](https://togithub.com/ansible-collections/kubernetes.core/compare/2.3.1...2.3.2)

###
[`v2.3.1`](https://togithub.com/ansible-collections/kubernetes.core/blob/HEAD/CHANGELOG.rst#v231)

[Compare
Source](https://togithub.com/ansible-collections/kubernetes.core/compare/2.3.0...2.3.1)

\======

## Bugfixes

- Catch exception raised when the process is waiting for resources
([ansible-collections/kubernetes.core#407).
- Remove `omit` placeholder when defining resource using template
parameter
([ansible-collections/kubernetes.core#431).
- k8s - fix the issue when trying to delete resources using
label_selectors options
([ansible-collections/kubernetes.core#433).
- k8s_cp - fix issue when using parameter local_path with file on
managed node.
([ansible-collections/kubernetes.core#421).
- k8s_drain - fix error occurring when trying to drain node with
disable_eviction set to yes
([ansible-collections/kubernetes.core#416).

###
[`v2.3.0`](https://togithub.com/ansible-collections/kubernetes.core/blob/HEAD/CHANGELOG.rst#v230)

[Compare
Source](https://togithub.com/ansible-collections/kubernetes.core/compare/2.2.3...2.3.0)

\======

## Minor Changes

- add support for dry run with kubernetes client version >=18.20
([ansible-collections/kubernetes.core#245).
- fixed module_defaults by removing routing hacks from runtime.yml
([ansible-collections/kubernetes.core#347).
- helm - add support for timeout cli parameter to allow setting Helm
timeout independent of wait
([ansible-collections/kubernetes.core#67).
- helm - add support for wait parameter for helm uninstall command.
(https://github.com/ansible-collections/kubernetes/core/issues/33).
- helm - support repo location for helm diff
([ansible-collections/kubernetes.core#174).
- helm - when ansible is executed in check mode, return the diff between
what's deployed and what will be deployed.
- helm_info - add release state as a module argument
([ansible-collections/kubernetes.core#377).
- helm_plugin - Add plugin_version parameter to the helm_plugin module
([ansible-collections/kubernetes.core#157).
-   helm_plugin - Add support for helm plugin update using state=update.
- helm_repository - add support for pass-credentials cli parameter
([ansible-collections/kubernetes.core#282).
- helm_repository - added support for `host`, `api_key`,
`validate_certs`, and `ca_cert`.
- helm_template - add show_only and release_namespace as module
arguments
([ansible-collections/kubernetes.core#313).
- k8s - add no_proxy support to k8s\*
[ansible-collections/kubernetes.core#272).
- k8s - add support for server_side_apply.
([ansible-collections/kubernetes.core#87).
- k8s - add support for user impersonation.
(https://github.com/ansible-collections/kubernetes/core/issues/40).
- k8s - allow resource definition using metadata.generateName
([ansible-collections/kubernetes.core#35).
- k8s lookup plugin - Enable turbo mode via environment variable
([ansible-collections/kubernetes.core#291).
- k8s_drain - Adds `delete_emptydir_data` option to
`k8s_drain.delete_options` to evict pods with an `emptyDir` volume
attached
([ansible-collections/kubernetes.core#322).
- k8s_exec - select first container from the pod if none specified
([ansible-collections/kubernetes.core#358).
- k8s_rollback - add support for check_mode.
(https://github.com/ansible-collections/kubernetes/core/issues/243).
- k8s_scale - add support for check_mode.
(https://github.com/ansible-collections/kubernetes/core/issues/244).
- kubectl - wait for dd command to complete before proceeding
([ansible-collections/kubernetes.core#321).

## Bugfixes

- Various modules and plugins - use vendored version of
`distutils.version` instead of the deprecated Python standard library
`distutils`
([ansible-collections/kubernetes.core#314).
- common - Ensure the label_selectors parameter of \_wait_for method is
optional.
- helm_template - evaluate release_values after values_files, insuring
highest precedence (now same behavior as in helm module).
([ansible-collections/kubernetes.core#348)
-   import exception from `kubernetes.client.rest`.
- k8s_drain - fix error caused by accessing an undefined variable when
pods have local storage
([ansible-collections/kubernetes.core#292).
- k8s_info - don't wait on empty List resources
([ansible-collections/kubernetes.core#253).
- k8s_scale - fix waiting on statefulset when scaled down to 0 replicas
([ansible-collections/kubernetes.core#203).
- module_utils.common - change default opening mode to read-bytes to
avoid bad interpretation of non ascii characters and strings, often
present in 3rd party manifests.
- remove binary file from k8s_cp test suite
([ansible-collections/kubernetes.core#298).
- use resource prefix when finding resource and apiVersion is v1
([ansible-collections/kubernetes.core#351).

## New Modules

-   k8s_taint - Taint a node in a Kubernetes/OpenShift cluster

</details>


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>