Switch to using k8s_exec module instead of shell module + kubectl #8
Working on testing this out again in a new branch (I'll push it up shortly) so I can post an update to operator-framework/operator-sdk#2204.
Also need to complete #18 first, just to make sure I'm on the latest version of the operator for testing purposes.
Still getting:
Using base image version v0.12.0, as well as v0.14.0.
Steps to reproduce:
The logs will end (after the first failed operator playbook run) with something like:
Updated the upstream issue. I'm going to do some digging in the proxy that's set up in the operator and see if there's an easy fix or not.
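For background on the proxy behavior being investigated: a Kubernetes `exec` call rides on a WebSocket upgrade, and a compliant server (or any intermediary proxy) must answer the upgrade request with HTTP `101 Switching Protocols` rather than a plain `200`. A minimal sketch of that check (illustrative only; `validate_upgrade_response` is a hypothetical helper, not code from the operator or its proxy):

```python
# Illustration of why a WebSocket-based exec call fails when a proxy
# answers 200 instead of 101 (not the operator's actual proxy code).

def validate_upgrade_response(status_code, headers):
    """Return True only if the response completes a WebSocket handshake."""
    return (
        status_code == 101
        and headers.get("Upgrade", "").lower() == "websocket"
        and headers.get("Connection", "").lower() == "upgrade"
    )

# A compliant proxy passes the 101 Switching Protocols response through:
ok = validate_upgrade_response(101, {"Upgrade": "websocket", "Connection": "Upgrade"})
# A proxy that swallows the upgrade and replies 200 breaks the handshake:
broken = validate_upgrade_response(200, {"Content-Type": "application/json"})
print(ok, broken)
```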
This issue has been marked 'stale' due to lack of recent activity. If there is no further activity, the issue will be closed in another 30 days. Thank you for your contribution! Please read this blog post to see the reasons why I mark issues as stale.
This issue is no longer marked for closure. |
This should be fixable now; see operator-framework/operator-sdk#2204.
Now that the operator is being maintained in the Ansible namespace, I am not going to maintain this project anymore (I consider it a historic artifact which was used as a base for the ansible-namespace version), and I'm redirecting all issues and PRs to the Ansible-maintained version: https://github.com/ansible/awx-operator. Moved this issue to: ansible/awx-operator#3
In #5, I discovered the `k8s_exec` module from this PR (ansible/ansible#55029) does not work when running inside an Ansible-based Operator, due to some proxy request handling the Operator does for Kubernetes API requests.

The gist of the problem is that `k8s_exec` uses a WebSocket to communicate with Kubernetes to run an `exec` command, but the proxy does not handle the `101` handshake response correctly (it returns a `200` instead), which results in a failure of the `k8s_exec` module.

I was going to try to get that issue fixed in #5, but as a workaround, I'm currently using `kubectl`, which is installed in the operator image with the following line:

This is a little fragile, as it means the `kubectl` currently shipping with this operator is locked to a specific version (which likely won't cause issues, but isn't wonderful, especially since it could become an attack vector if a vulnerability is found in whatever that version is).

So for this issue to be complete, the following should be done:
- Add `roles/tower/library/k8s_exec.py` based on this PR.
- Update `main.yml` to use `k8s_exec` instead of `shell` + `kubectl`.
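A rough sketch of the intended change, assuming hypothetical task names, variables (`tower_pod_name`), and command; the actual tasks in `main.yml` may differ:

```yaml
# Before: shelling out to a pinned kubectl binary (hypothetical task).
- name: Run a management command via kubectl
  shell: >-
    kubectl -n {{ meta.namespace }} exec {{ tower_pod_name }} --
    bash -c 'awx-manage migrate'

# After: using the k8s_exec module vendored in roles/tower/library/.
- name: Run a management command via k8s_exec
  k8s_exec:
    namespace: "{{ meta.namespace }}"
    pod: "{{ tower_pod_name }}"
    command: bash -c 'awx-manage migrate'
```

Besides removing the pinned binary, `k8s_exec` returns structured stdout/stderr to the playbook instead of a raw shell result.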