Install juju-bundle and kubectl #23
Conversation
Support for juju-bundle was added to pytest-operator in [PR #31][], so this ensures that juju-bundle is installed. It also installs kubectl, or aliases it to microk8s.kubectl when using that provider, so that tests which need kubectl can use it without becoming tightly coupled to the provider.

[PR #31]: charmed-kubernetes/pytest-operator#31
@@ -165,6 +166,13 @@ async function run() {
      core.endGroup();
    }
    core.exportVariable('CONTROLLER_NAME', controller_name);
    core.startGroup("Install kubectl");
    if (provider === "microk8s") {
      await exec.exec("sudo snap alias microk8s.kubectl kubectl");
Do we need to check to see if a `kubectl` binary already exists?
I don't think so. The expectation is that the jobs will be run on a clean VM / container that needs to be set up each time, so we don't assume that any of the dependencies, like the other snaps, are already present.
    if (provider === "microk8s") {
      await exec.exec("sudo snap alias microk8s.kubectl kubectl");
    } else {
      await exec.exec("sudo snap install kubectl --classic");
Will we need to do the equivalent of `microk8s config > ~/.kube/config` here?
No, because this path is only used when the provider is not microk8s. The action doesn't support other K8s providers, so it would arguably be more correct not to install kubectl at all in that case. However, I assume we might add support for a CK provider at some point, and kubectl is also useful in tests that depend on K8s but deploy it themselves, so it seemed easier and nicer to just go ahead and guarantee that it's available either way.
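The provider branch under discussion can be sketched as a small helper that maps a provider name to the shell commands the step runs. The function name and shape here are illustrative, not the action's actual code; only the two commands come from the diff above:

```javascript
// Illustrative sketch of the provider-dependent kubectl setup discussed
// above; the helper name is an assumption, not the action's real code.
function kubectlSetupCommands(provider) {
  if (provider === "microk8s") {
    // microk8s bundles its own kubectl; aliasing it lets tests call
    // plain `kubectl` without caring which provider is in use.
    return ["sudo snap alias microk8s.kubectl kubectl"];
  }
  // Any other provider: install the standalone kubectl snap so it is
  // available even to tests that deploy K8s themselves.
  return ["sudo snap install kubectl --classic"];
}

console.log(kubectlSetupCommands("microk8s"));
// → [ 'sudo snap alias microk8s.kubectl kubectl' ]
```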
LGTM, though not merging due to test failures
Test failures are env issues with the self-hosted runners and not related to the PR. 🤠
Workflows which expect kubectl to be runnable without the microk8s group break after #23, because the alias requires membership in that group. Always installing the actual kubectl snap and copying the microk8s config over instead avoids the issue.
* Use explicit kubectl rather than alias for mk8s. Workflows which expect kubectl to be runnable without the microk8s group break after #23 due to the alias needing said group. Always using the actual kubectl snap and copying the microk8s config over instead avoids the issue.
* Fix microk8s.config requiring group as well
* Skip model / controller cleanup entirely for microk8s. It fails frequently and can take an inordinately long time even when it does succeed, and we expect the entire environment to be cleaned up when the runner finishes anyway.
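Under the revised approach described above, the kubectl snap is installed unconditionally and the microk8s config is copied into place instead of aliasing. A minimal sketch follows; the exact commands, and in particular the `sg microk8s` wrapper for the group requirement mentioned in the second bullet, are assumptions, not the action's actual code:

```javascript
// Sketch of the revised setup: always install the standalone kubectl
// snap, and for microk8s copy its config into ~/.kube/config. The `sg`
// invocation is an assumption about how the group issue was handled.
function revisedKubectlSetupCommands(provider) {
  const cmds = ["sudo snap install kubectl --classic"];
  if (provider === "microk8s") {
    // `microk8s config` itself needs the microk8s group, so run it
    // via `sg` rather than relying on the runner's login session
    // having picked up the freshly added group membership.
    cmds.push("sg microk8s -c 'microk8s config > ~/.kube/config'");
  }
  return cmds;
}
```

This keeps `kubectl` usable by any process regardless of group membership, which the alias approach could not guarantee.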