
Add support for loadbalancer interface #153

Merged
Merged 13 commits from johnsca/lp/1921776/add-loadbalancer-interface into master on May 19, 2021

Conversation

@johnsca (Contributor) commented Apr 6, 2021

This adds support for the loadbalancer interface so that cloud-native LBs can be provided by the integrator charms. Additionally, it simplifies the confusing way the relations between the masters and workers change depending on whether kubeapi-load-balancer is in use: both cases now use the same lb-provider endpoint, and the API endpoint URLs are always forwarded via the kube-control relation.

Part of lp:1921776
Depends on:

metadata.yaml (outdated)
@@ -29,6 +29,9 @@ peers:
interface: kube-masters
provides:
kube-api-endpoint:
# kube-api-endpoint is deprecated. Its functionality has been rolled into
# the kube-control interface. The relation endpoint will be removed in a
# future release.
Contributor

I think we lose something by deprecating kube-api-endpoint. It was possible for users to stand up a reverse-proxy (e.g. apache/nginx) in front of the Kubernetes API. Do we know for sure that users aren't dependent on this?

@johnsca (Contributor, Author) commented Apr 7, 2021

The relation pattern doesn't affect whether you can put a reverse proxy in front of the API server; in fact, that's exactly what the kubeapi-load-balancer charm is doing. It just changes how the information about that proxy is disseminated between the charms.

For comparison, let me outline the current situation vs this proposal.

Current

The current setup depends on whether you're using kubeapi-load-balancer or not:

Without kubeapi-load-balancer

  • k8s-master ← kube-api-endpoint → k8s-worker (communicates API endpoint address to workers)
  • k8s-master ← kube-control → k8s-worker (communicates all other control plane info to workers)
  • k8s-master ← lb-provider → cloud integrator, (maybe) haproxy, etc (optional; provides some form of LB / proxy / HA for the API server)

With kubeapi-load-balancer

  • k8s-master ← loadbalancer → kubeapi-load-balancer (communicates the proxy API endpoint address to the master)
  • k8s-master ← kube-api-endpoint → kubeapi-load-balancer (communicates the actual API endpoint addresses to the proxy)
  • k8s-master ← kube-control → k8s-worker (communicates all the other control plane info to the workers)
  • kubeapi-load-balancer ← kube-api-endpoint → k8s-worker (communicates the proxy API endpoint address to the workers)

Proposal

For all cases it would be:

  • k8s-master ← kube-control → k8s-worker (communicates all control plane info, including API endpoint, to the workers)
  • k8s-master ← lb-provider → kubeapi-load-balancer, cloud integrator, (maybe) haproxy, etc (optional; provides some form of LB / proxy / HA for the API server)

In this case, the kubeapi-load-balancer becomes just another optional LB provider.
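To make the proposed topology concrete, here is a minimal sketch of the relations expressed with python-libjuju (which the integration tests for this PR already use). The endpoint names loadbalancer-external / loadbalancer-internal and lb-consumers are taken from later in this thread; the charms are assumed to already be deployed, and this is illustrative rather than part of the PR.

from juju.model import Model

async def relate_proposed_topology():
    # Illustrative sketch; assumes kubernetes-master, kubernetes-worker,
    # and kubeapi-load-balancer are already deployed in the current model.
    model = Model()
    await model.connect()
    try:
        # All control plane info, including the API endpoint URLs, flows
        # over kube-control.
        await model.add_relation(
            "kubernetes-master:kube-control", "kubernetes-worker:kube-control"
        )
        # kubeapi-load-balancer becomes just another optional LB provider.
        await model.add_relation(
            "kubernetes-master:loadbalancer-external",
            "kubeapi-load-balancer:lb-consumers",
        )
        await model.add_relation(
            "kubernetes-master:loadbalancer-internal",
            "kubeapi-load-balancer:lb-consumers",
        )
    finally:
        await model.disconnect()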

Contributor

Right, I get that. My concern is that kubeapi-load-balancer might not be the only reverse proxy people are using. Are we going to add new loadbalancer relations to charms like haproxy and apache2 as well? Just how much work are we creating by making this change?

reactive/kubernetes_master.py: three outdated review threads, resolved
@Cynerva (Contributor) left a comment

After relating kubernetes-master:loadbalancer-internal and kubernetes-master:loadbalancer-external to kubeapi-load-balancer:lb-consumers:

unit-kubernetes-master-0: 15:54:04 ERROR unit.kubernetes-master/0.juju-log loadbalancer-internal:18: Hook error:
Traceback (most recent call last):
  File "/var/lib/juju/agents/unit-kubernetes-master-0/.venv/lib/python3.8/site-packages/charms/reactive/__init__.py", line 74, in main
    bus.dispatch(restricted=restricted_mode)
  File "/var/lib/juju/agents/unit-kubernetes-master-0/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 390, in dispatch
    _invoke(other_handlers)
  File "/var/lib/juju/agents/unit-kubernetes-master-0/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 359, in _invoke
    handler.invoke()
  File "/var/lib/juju/agents/unit-kubernetes-master-0/.venv/lib/python3.8/site-packages/charms/reactive/bus.py", line 181, in invoke
    self._action(*args)
  File "/var/lib/juju/agents/unit-kubernetes-master-0/charm/reactive/kubernetes_master.py", line 2087, in build_kubeconfig
    internal_url = kubernetes_master.get_api_url(internal_endpoints)
  File "lib/charms/layer/kubernetes_master.py", line 154, in get_api_url
    return urls[kubernetes_common.get_unit_number() % len(urls)]
ZeroDivisionError: integer division or modulo by zero
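The failing line indexes the endpoint list with a modulo by len(urls), so an empty endpoint list from the new loadbalancer relation divides by zero. A minimal sketch of the kind of guard that avoids the crash (not necessarily the fix actually adopted in this PR; kubernetes_common is the charm layer module shown in the traceback):

def get_api_url(urls):
    if not urls:
        # Sketch only: no LB endpoints have been published yet, so return
        # None and let callers treat this as "not ready" instead of
        # crashing the hook.
        return None
    return urls[kubernetes_common.get_unit_number() % len(urls)]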

@johnsca requested a review from Cynerva on April 22, 2021 14:44
@johnsca force-pushed the johnsca/lp/1921776/add-loadbalancer-interface branch from 059414e to 7df111e on May 4, 2021 21:28
@johnsca marked this pull request as draft on May 7, 2021 21:27
@johnsca force-pushed the johnsca/lp/1921776/add-loadbalancer-interface branch from 7df111e to a1682ae on May 10, 2021 20:30
@johnsca (Contributor, Author) commented May 11, 2021

Here's a manual test run with this plus #156 and charmed-kubernetes/layer-kubernetes-common#17 cherry-picked in, and a worker built from charmed-kubernetes/charm-kubernetes-worker#84 pre-deployed into the model. Note that to get the test to work with the pre-deployed worker, you have to use the master branch of python-libjuju (for juju/python-libjuju#488) until the next version is released to PyPI:

# Create the integration test environment without running the tests yet.
tox -re integration --notest
# Replace the released python-libjuju with the master branch (for juju/python-libjuju#488).
.tox/integration/bin/pip uninstall juju
.tox/integration/bin/pip install --no-cache https://github.com/juju/python-libjuju/archive/refs/heads/master.zip#egg=juju

The lint failures are fixed on the auth PRs.

@johnsca (Contributor, Author) commented May 13, 2021

New manual integration test results after rebase from master.

@johnsca force-pushed the johnsca/lp/1921776/add-loadbalancer-interface branch from 8ed9273 to 6710736 on May 18, 2021 20:49
@Cynerva (Contributor) commented May 19, 2021

I walked through the upgrade path for this. The port served by kubeapi-load-balancer changed from 443 to 6443, which meant I had to redownload the kubeconfig from kubernetes-master. Make sure to include that in the documentation upgrade notes.
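For the upgrade notes, the re-download step might look like the following python-libjuju sketch (the local destination path is an illustrative assumption; the usual juju scp command works equally well):

import os
from juju.model import Model

async def refresh_kubeconfig():
    # Sketch only: fetch a fresh kubeconfig (now pointing at port 6443)
    # from the first kubernetes-master unit.
    model = Model()
    await model.connect()
    try:
        unit = model.applications["kubernetes-master"].units[0]
        await unit.scp_from("config", os.path.expanduser("~/.kube/config"))
    finally:
        await model.disconnect()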

@Cynerva (Contributor) left a comment

LGTM

"kubernetes-master:kube-api-endpoint", "kubernetes-worker:kube-api-endpoint"
)
await ops_test.model.wait_for_idle(wait_for_active=True, timeout=10 * 60)
_check_status_messages(ops_test)
Contributor

Maybe remove the relation at the end of this test. That way future tests won't be testing with weird relations in play.
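A sketch of what that cleanup could look like inside the async test, assuming python-libjuju's Application.destroy_relation helper is available here:

# Sketch only: tear the deprecated relation back down so later tests
# start from the default topology.
master = ops_test.model.applications["kubernetes-master"]
await master.destroy_relation(
    "kube-api-endpoint", "kubernetes-worker:kube-api-endpoint"
)
await ops_test.model.wait_for_idle(wait_for_active=True, timeout=10 * 60)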

req = lb_provider.get_request("api-server-" + lb_type)
req.protocol = req.protocols.tcp
api_port = kubernetes_master.STANDARD_API_PORT
req.port_mapping = {api_port: api_port}
Contributor

It might be worth considering mapping 443 to api_port since that's how it has been with kubeapi-load-balancer in the past.
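For illustration, the suggested mapping applied to the snippet above would look like this (a sketch of the reviewer's suggestion, not necessarily the change adopted in the PR):

req = lb_provider.get_request("api-server-" + lb_type)
req.protocol = req.protocols.tcp
api_port = kubernetes_master.STANDARD_API_PORT
# Expose the LB on 443, as kubeapi-load-balancer did historically, while
# still forwarding to the API server's standard port.
req.port_mapping = {443: api_port}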

@Cynerva merged commit a69a6c1 into master on May 19, 2021
@Cynerva deleted the johnsca/lp/1921776/add-loadbalancer-interface branch on May 19, 2021 18:24
Cynerva pushed a commit to charmed-kubernetes/bundle that referenced this pull request May 24, 2021
* Support loadbalancer interface between master and API LB

Simplify the relations between the master, worker, and API LB charms in
favor of the newer `loadbalancer` interface relation.

Part of [lp:1897818][]
Depends on:
  * juju-solutions/loadbalancer-interface#13
  * juju-solutions/interface-kube-control#33
  * charmed-kubernetes/charm-kubernetes-control-plane#153
  * charmed-kubernetes/charm-kubernetes-worker#84
  * charmed-kubernetes/charm-kubeapi-load-balancer#11

[lp:1897818]: https://bugs.launchpad.net/charmed-kubernetes-testing/+bug/1897818

* Split k8s-master:lb-provider relation into external / internal

* Update core and converged bundle fragments
johnsca added a commit that referenced this pull request Aug 26, 2021
The new LB support in #153 unintentionally changed the endpoint that the
charm client uses to talk to the API server making it use the internal
LB address instead of always talking to the API server locally. This
introduced an issue during bootstrap where the initial local-only token
would try to be used on other API servers and fail.

Fixes [lp:1941763](https://bugs.launchpad.net/charm-kubernetes-master/+bug/1941763)
Cynerva pushed a commit that referenced this pull request Aug 26, 2021
* Ensure charm only accesses local API server

The new LB support in #153 unintentionally changed the endpoint that the
charm client uses to talk to the API server making it use the internal
LB address instead of always talking to the API server locally. This
introduced an issue during bootstrap where the initial local-only token
would try to be used on other API servers and fail.

Fixes [lp:1941763](https://bugs.launchpad.net/charm-kubernetes-master/+bug/1941763)

* Ensure all local charm kubectl usage uses the local url