Install juju-bundle and kubectl #23

Merged: 1 commit from johnsca/juju-bundle-and-kubectl into main on Oct 26, 2021

Conversation

johnsca (Contributor) commented Oct 7, 2021

Support for juju-bundle was added to pytest-operator in [PR #31][], so
this change ensures that it is installed. It also installs kubectl, or
aliases it to microk8s.kubectl when using that provider, so that tests
which need it can use it without becoming tightly coupled to the
provider.

[PR #31]: charmed-kubernetes/pytest-operator#31
johnsca requested a review from knkski on October 7, 2021 at 13:52
```diff
@@ -165,6 +166,13 @@ async function run() {
     core.endGroup();
   }
   core.exportVariable('CONTROLLER_NAME', controller_name);
+  core.startGroup("Install kubectl");
+  if (provider === "microk8s") {
+    await exec.exec("sudo snap alias microk8s.kubectl kubectl");
```
knkski: Do we need to check to see if a kubectl binary already exists?

johnsca (Author): I don't think so. The expectation is that the jobs will be run on a clean VM / container that needs to be set up each time, so we don't assume that any of the dependencies, like the other snaps, are already present.
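If such a guard were ever wanted, a minimal sketch might use @actions/io alongside the exec helper the action already uses (the @actions/io import here is an assumption; the action does not do this):

```js
// Hypothetical guard, not part of this PR: skip the alias/install when
// a kubectl binary is already on the PATH.
const io = require("@actions/io");

// which() with check=false resolves to '' when the tool is not found.
const existing = await io.which("kubectl", false);
if (!existing) {
  if (provider === "microk8s") {
    await exec.exec("sudo snap alias microk8s.kubectl kubectl");
  } else {
    await exec.exec("sudo snap install kubectl --classic");
  }
}
```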

```diff
+  if (provider === "microk8s") {
+    await exec.exec("sudo snap alias microk8s.kubectl kubectl");
+  } else {
+    await exec.exec("sudo snap install kubectl --classic");
```
knkski: Will we need to do the equivalent of `microk8s config > ~/.kube/config` here?

johnsca (Author): No, because this path is only used when the provider is not microk8s. The action doesn't support other K8s providers, so it would probably be more correct to just not install kubectl in that case. However, I assume we might add support for a CK provider at some point, and kubectl would also be helpful in other tests that depend on K8s but deploy it themselves, so I figured it was easier / nicer to just go ahead and say that it's provided either way.
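Putting the two review excerpts together, the full block added by this PR is seven lines (per the hunk header); the closing brace and core.endGroup() are inferred rather than visible in the excerpts:

```js
core.startGroup("Install kubectl");
if (provider === "microk8s") {
  // microk8s ships its own kubectl; expose it under the usual name.
  await exec.exec("sudo snap alias microk8s.kubectl kubectl");
} else {
  await exec.exec("sudo snap install kubectl --classic");
}
core.endGroup(); // inferred closing lines, not shown in the excerpts
```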

knkski left a comment:

LGTM, though not merging due to test failures

johnsca (Author) commented Oct 26, 2021:

Test failures are env issues with the self-hosted runners and not related to the PR. 🤠

johnsca merged commit 5e21f24 into main on Oct 26, 2021
johnsca deleted the johnsca/juju-bundle-and-kubectl branch on October 26, 2021
johnsca added a commit that referenced this pull request on Oct 26, 2021:
Workflows which expect kubectl to be runnable without the microk8s group
break after #23 due to the alias needing said group. Always using the
actual kubectl snap and copying the microk8s config over instead avoids
the issue.
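A minimal sketch of what that follow-up might look like in the action, assuming the same exec helper and a HOME-relative kubeconfig path (both assumptions; the actual commit is not shown here):

```js
// Hypothetical sketch of the follow-up described above: always install
// the real kubectl snap, then copy the microk8s kubeconfig into place
// so kubectl works without microk8s group membership.
core.startGroup("Install kubectl");
await exec.exec("sudo snap install kubectl --classic");
if (provider === "microk8s") {
  // Equivalent of `microk8s config > ~/.kube/config`; routed through a
  // shell because exec.exec does not perform redirection itself, and
  // run with sudo because microk8s.config also requires the group.
  await exec.exec("mkdir", ["-p", `${process.env.HOME}/.kube`]);
  await exec.exec("bash", [
    "-c",
    `sudo microk8s config > ${process.env.HOME}/.kube/config`,
  ]);
}
core.endGroup();
```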
johnsca added a commit that referenced this pull request on Oct 27, 2021:
* Use explicit kubectl rather than alias for mk8s

Workflows which expect kubectl to be runnable without the microk8s group
break after #23 due to the alias needing said group. Always using the
actual kubectl snap and copying the microk8s config over instead avoids
the issue.

* Fix microk8s.config requiring group as well

* Skip model / controller cleanup entirely for microk8s

It fails frequently and can take an inordinately long time even when it
does succeed, and we expect the entire environment to be cleaned up when
the runner finishes anyway.
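The cleanup change might look like the following guard; the destroy command and its flags are illustrative assumptions, not taken from the commit:

```js
// Hypothetical sketch: skip model/controller teardown on microk8s and
// rely on the ephemeral runner environment being discarded instead.
if (provider !== "microk8s") {
  core.startGroup("Destroy controller");
  await exec.exec(
    `juju destroy-controller -y ${controller_name} --destroy-all-models --destroy-storage`
  );
  core.endGroup();
}
```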