k8s: support federation, merge config.JUJU_MODEL into separate contexts in a single config #61

Open
ktsakalozos opened this Issue Apr 7, 2017 · 14 comments

ktsakalozos commented Apr 7, 2017

When you create multiple clusters you end up with multiple config.<JUJU_MODEL> files under ~/.kube.

As a result, if you run kubectl config get-contexts you do not see your new cluster; you may actually see an old one instead.

This becomes rather problematic when you want to build a federation of clusters.
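
For illustration, this is roughly what you end up with after two deployments (the model names below are made up):

$ ls ~/.kube
config  config.conjure-up-kubernetes-core-abc  config.conjure-up-kubernetes-core-def

$ kubectl config get-contexts
# kubectl only reads ~/.kube/config (or $KUBECONFIG) by default, so the newer clusters never show up here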

battlemidget commented Apr 7, 2017

The thought here is that you run ~/bin/kubectl.JUJU_MODEL so that you always get the correct cluster for whatever model you are inspecting. Are you saying this is not happening?
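
The idea is that each wrapper just pins kubectl to that model's config file, roughly like this (a sketch; the exact generated script may differ):

#!/bin/sh
# hypothetical ~/bin/kubectl.<JUJU_MODEL> shim: always use this model's kubeconfig
exec kubectl --kubeconfig="$HOME/.kube/config.<JUJU_MODEL>" "$@"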

ktsakalozos commented Apr 7, 2017

The kubernetes user is expected to be able to run kubectl config get-contexts to list all the clusters they have available, and to set the active one with kubectl config use-context my-cluster-name. This is particularly useful in the context of a cluster federation: in a federation you have many clusters, and you can issue commands to each one by setting the --context param in kubectl (e.g. kubectl --context=federation create -f ingress/k8shserver.yaml). The approach we have now with ~/bin/kubectl.JUJU_MODEL is OK for a single-cluster environment. Federation was an alpha feature in v1.5 but is available now.
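
That is, with everything merged into a single config, the expected workflow looks like:

# list every cluster known to ~/.kube/config
kubectl config get-contexts

# switch the default cluster for subsequent commands
kubectl config use-context my-cluster-name

# or target one cluster for a single command
kubectl --context=federation create -f ingress/k8shserver.yaml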

battlemidget commented Apr 7, 2017

OK, can we discuss the best course of action for handling this in our spells? Bringing in @marcoceppi, @chuckbutler, @johnsca for more input.

battlemidget commented Apr 7, 2017

First question I have is: Should we be merging all configs into a single ~/.kube/config file?

mbruzek commented Apr 7, 2017

I think we should discuss having one config file with many contexts. We can use the kubectl config subcommand to create contexts for each environment rather than having separate files. It is my understanding that, now that kubectl is snapped up, we are installing it via snap. The spell could be modified to add these contexts to the config (using the kubectl config subcommand). The tough part would be removing configs when clusters are torn down via conjure-down or removed by juju.

battlemidget commented Apr 7, 2017

@mbruzek OK, this sounds good. I think we can use the same naming convention that we do now for the separate files and just clean those up on a conjure-down (this feature needs to be added in a generic way, depending on what deployments are there).

battlemidget changed the title from "K8s configs are not merged" to "k8s: support federation, merge config.JUJU_MODEL into separate contexts in a single config" Apr 7, 2017

marcoceppi commented Apr 7, 2017

I have a script for this already, I'll pastebin it shortly.

marcoceppi commented Apr 7, 2017

#!/usr/bin/python3
#
# Copyright 2017 Canonical, Ltd. Authored by Marco Ceppi
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.

import os
import sys
import yaml


def get_kubeconfig(cfg):
    if not os.path.exists(cfg):
        return

    with open(cfg) as f:
        return yaml.safe_load(f.read())


def merge_cfg(cfg, new_cfg):
    # Recursively merge new_cfg into cfg: lists are concatenated, nested dicts merged.
    if not isinstance(cfg, dict) or not isinstance(new_cfg, dict):
        return None
    for k, v in new_cfg.items():
        if k not in cfg:
            cfg[k] = v
        elif isinstance(v, list):
            cfg[k] = cfg[k] + v
        else:
            cfg[k] = merge_cfg(cfg[k], v)

    return cfg


def merge_config(kubecfg_file, newcfg_file, name):
    kubecfg = get_kubeconfig(kubecfg_file)
    newcfg = get_kubeconfig(newcfg_file)

    if not kubecfg or not newcfg:
        print(kubecfg, kubecfg_file)
        return 1

    # Rename the incoming cluster/user/context so each deployment gets a unique entry.
    newcfg['clusters'][0]['name'] = '{}-cluster'.format(name)
    newcfg['users'][0]['name'] = '{}-user'.format(name)
    newcfg['contexts'][0]['name'] = name
    newcfg['contexts'][0]['context']['cluster'] = '{}-cluster'.format(name)
    newcfg['contexts'][0]['context']['user'] = '{}-user'.format(name)

    kubecfg = merge_cfg(kubecfg, newcfg)

    with open(kubecfg_file, 'r+') as f:
        f.seek(0)
        f.write(yaml.dump(kubecfg, default_flow_style=False))
        f.truncate()


if __name__ == '__main__':
    KUBEHOME = os.path.expanduser('~/.kube')
    KUBECONFIG_FILE = os.path.join(KUBEHOME, 'config')

    if len(sys.argv) != 3:
        print('Where da args at?' if len(sys.argv) < 3 else 'Too many args bruv')
        sys.exit(1)

    NEWCONFIG_FILE = os.path.expanduser(sys.argv[1])
    NAME = sys.argv[2]

    rc = merge_config(KUBECONFIG_FILE, NEWCONFIG_FILE, NAME) or 0
    sys.exit(rc)

This is what we use at the Linux Foundation when creating a large number of clusters that we expect one user to interact with. The first config that gets set up is run through sed to replace ubuntu and juju with appropriate names. Each subsequent ~/.kube/config.... is then merged into that one with this script.

There are some implications, however, since we can't just use the model name: it's usually over 35 characters long (that is woefully long to hand-type every time, with no autocomplete on kubectl). Instead we should aim for a context name which is short and sweet.

I'd prefer models and contexts to be named something like cdk-XXX or k8s-XXX, where XXX is a short random string to avoid collisions.

Alternatively, having the user provide a name for the cluster after the deployment but before downloading the binaries might be helpful. They've already done the hard part (installing the cluster); it's just a matter of providing an alias for this deploy. The model would remain conjure-up..., but the context / user / cluster names would be based on the alias.
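
For example, a merge would then be invoked roughly like this (the script filename and model name are placeholders):

# merge a per-model kubeconfig into ~/.kube/config under a short alias
python3 merge-kubeconfig.py ~/.kube/config.conjure-up-kubernetes-core-abc cdk-abc

# afterwards the new cluster shows up alongside the existing ones
kubectl config get-contexts
kubectl --context=cdk-abc get nodes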

battlemidget commented Apr 7, 2017

> There are some implications, however, since we can't just use the model name: it's usually over 35 characters long (that is woefully long to hand-type every time, with no autocomplete on kubectl). Instead we should aim for a context name which is short and sweet.
>
> I'd prefer models and contexts to be named something like cdk-XXX or k8s-XXX, where XXX is a short random string to avoid collisions.

+1

> Alternatively, having the user provide a name for the cluster after the deployment but before downloading the binaries might be helpful. They've already done the hard part (installing the cluster); it's just a matter of providing an alias for this deploy. The model would remain conjure-up..., but the context / user / cluster names would be based on the alias.

We could definitely do this in a step.

OK, so what I'll do is review this script, see how best to incorporate it into the spells, and solicit your feedback. I'd like to start on this early next week so that we have something finalized (at the very least) by EOW.

battlemidget self-assigned this Apr 7, 2017

mbruzek commented Apr 7, 2017

Actually @marcoceppi, wouldn't it be better if we just distilled these into actual kubectl config commands? That way, if the data format changes, we are less likely to break.

kubectl config --kubeconfig={0} set-cluster {1} --server={2} --certificate-authority={3} --embed-certs=true
kubectl config --kubeconfig={0} set-credentials {1} --client-key={2} --client-certificate={3} --embed-certs=true
kubectl config --kubeconfig={0} set-context {1} --cluster={2} --user={3}

Something like this ^
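
Filled in, that might look something like the following (the server address, certificate paths, and names are placeholders):

kubectl config --kubeconfig=$HOME/.kube/config set-cluster cdk-abc-cluster --server=https://10.0.0.10:6443 --certificate-authority=ca.crt --embed-certs=true
kubectl config --kubeconfig=$HOME/.kube/config set-credentials cdk-abc-user --client-certificate=client.crt --client-key=client.key --embed-certs=true
kubectl config --kubeconfig=$HOME/.kube/config set-context cdk-abc --cluster=cdk-abc-cluster --user=cdk-abc-user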

marcoceppi commented Apr 7, 2017

Considering you have to parse the kube config file that we download in order to update a new one anyway, both methods seem fraught with problems. What happens when the command line changes? It feels simpler to just programmatically merge files. Neither method solves future-proofing.

ktsakalozos commented Apr 10, 2017

This issue+PR could be related: kubernetes/kubernetes#38791

And here is another tool to merge config files: https://github.com/Collaborne/load-kubeconfig

ktsakalozos commented May 8, 2017

@battlemidget did you have the chance to look at the script provided?

Thanks

battlemidget commented May 8, 2017

not yet, it's on my todo
