Review apps #62

Merged (9 commits) Apr 27, 2018
Conversation

ashkan18
Contributor

Problem

We want to be able to easily deploy different images of the same app on k8s.

Possible Solution

Add the following command:

hokusai staging review_app create <name of the app>

This command copies staging.yml, adds a new namespace definition for this app instance, updates the namespace references in the yaml to point to the newly created namespace, and stores the result in <app_name>.yml.
The idea is that we provide this yaml file, then manually push the image with the proper tag and create the deployment.
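
A minimal sketch of the proposed flow, assuming PyYAML and an illustrative helper name (not the final implementation): read the source yaml, prepend a Namespace object named after the review app, and write the result to hokusai/<app_name>.yml. The namespace rewrite on the copied objects is discussed further down in this conversation.

import yaml

def create_new_app_yaml(source_file, app_name):
    # Parse every document in the source spec (hokusai/staging.yml by default)
    with open(source_file, 'r') as stream:
        yaml_content = list(yaml.safe_load_all(stream))
    # Prepend a Namespace object named after the review app
    namespace = {"apiVersion": "v1", "kind": "Namespace",
                 "metadata": {"name": str(app_name)}}
    # Write the combined documents out as the review app's spec
    with open("hokusai/%s.yml" % app_name, 'w') as output:
        yaml.safe_dump_all([namespace] + yaml_content, output,
                           default_flow_style=False)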

@ashkan18 ashkan18 requested review from saolsen and izakp April 19, 2018 14:36
with open(original_yaml_file, 'r') as stream:
    try:
        yaml_content = list(yaml.load_all(stream))
        # @TODO: replace namespace: default with new namespace name
Contributor
@izakp izakp Apr 19, 2018

I would not make any assumptions that the user configures their app in the "default" namespace - it should support any given namespace. So we will need to traverse the keys and replace any namespace keys with the new one.

Contributor

We may need to dig into the Kubernetes object specs a bit more here, but AFAIK all objects support the key metadata.namespace, so it might just be enough to iterate through all top-level objects defined in the source yml and simply overwrite this key.

@joeyAghion
Contributor

In general I think hokusai commands should wrap the underlying operations as transparently as possible. Why not just add a --namespace argument to the existing create command rather than a new, triply-nested one?

new_namespace = {"apiVersion": "v1", "kind": "Namespace", "metadata": { "name": str(app_name) }}
yaml_content = [new_namespace] + yaml_content
with open("hokusai/%s.yml" % app_name, 'w') as output:
    yaml.dump_all(yaml_content, output)
Contributor

Not sure if this will reorder keys in the yaml serializations of the k8s objects (as dict keys are unsorted), but this is why we add an OrderedDict YAML representer: https://github.com/artsy/hokusai/blob/master/hokusai/lib/representers.py
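
For reference, a minimal sketch of such a representer with PyYAML (the project's actual version lives in hokusai/lib/representers.py and may differ): it emits OrderedDict keys in insertion order instead of letting the default dumper re-sort them.

from collections import OrderedDict
import yaml

def represent_odict(dumper, data):
    # Emit the mapping with keys in their existing (insertion) order
    return dumper.represent_mapping('tag:yaml.org,2002:map', data.items())

yaml.add_representer(OrderedDict, represent_odict)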

def create(app_name, verbose):
    set_verbosity(verbose)
    # copy staging yml file
    create_new_app_yaml('hokusai/staging.yml', app_name)
Member

[Ceiling cat] Seems like if the review app is by definition based on the staging environment, then maybe the proposed cli syntax could drop that detail? So just hokusai review_app create <name of the app>?

Contributor Author

Good point, that actually makes it easier later if we want to do things like hokusai review_app deploy <app name>.

@ashkan18
Contributor Author

ashkan18 commented Apr 19, 2018

Made some changes in the last commit:

  • Moved the command to base, so now it is
hokusai review_apps create <name of app>
  • Added a few new options:

    • --source-file, -sf: defaults to hokusai/staging.yml, but we now have the option to change it
    • --destination-namespace, -dn: defaults to the name of the app, but we can override it as needed
  • We now iterate through the parsed yaml and replace all namespace instances with destination_namespace.

Question 1: Should we actually create the deployment on k8s when we do hokusai review_apps create? If we want to do that, I also need to update the image tag to be app_name instead of staging. Otherwise, I was thinking we make the deploy part manual, and it is the deployer's job to make sure the new yaml has the proper image tag.

Question 2: Should we (could we) copy the configMap from the source namespace to the destination?

@izakp
Contributor

izakp commented Apr 20, 2018

Question 1: Should we actually create the deployment on k8s when we do hokusai review_apps create?

We can have this command actually create the k8s resources if:

  • The new namespace is created before the app itself
  • All the review app's resources are within this namespace

Given that we will want to spot-check the new resource config before applying, I would tie this to state dependent on a local file / directory structure. I.e. hokusai review_apps create would create the appropriate file with a UUID appended to the filename (PR number?) to then reference via hokusai review_apps show or whatnot.
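
A hypothetical illustration of that naming scheme (the file layout and variable names are assumptions, not part of the PR):

import uuid

# e.g. hokusai/myapp-3f9c2b1a.yml; a PR number could replace the UUID suffix
review_app_file = "hokusai/%s-%s.yml" % (app_name, uuid.uuid4().hex[:8])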

Question 2: Should we (could we) copy configMap from source namespace to destination?

I think, given the process above - if the steps taken are:

  1. create
  2. apply

then we can copy the configMap over from the source at apply / launch time...

I would make it a point that we think of review apps as having a lifecycle.

@izakp
Contributor

izakp commented Apr 20, 2018

In general I think hokusai commands should wrap the underlying operations as transparently as possible. Why not just add a --namespace argument to the existing create command rather than a new, triply-nested one?

@joeyAghion I don't think passing the --namespace argument through to kubectl is viable as what we want to achieve here is a more declarative syntax and to operate across namespaces. If you kubectl apply with the --namespace option it will only scope its actions to that particular namespace.

@ashkan18 changed the title from "[WIP] Review apps" to "Review apps" on Apr 24, 2018
@ashkan18
Contributor Author

Having trouble with an app of the same name in a different namespace; getting this error:

> hokusai review_app apply test_app
Error from server (AlreadyExists): error when creating "/Users/ashkinas/src/volt/hokusai/test_app.yml": deployments.extensions "volt-web" already exists
Error from server (AlreadyExists): error when creating "/Users/ashkinas/src/volt/hokusai/test_app.yml": services "volt-web" already exists
ERROR: Command 'kubectl --context staging create --save-config -f /Users/ashkinas/src/volt/hokusai/test_app.yml' returned non-zero exit status 1

print(exc)

def update_namespace(yaml_section, destination_namespace):
    if 'namespace' in yaml_section: yaml_section['namespace'] = destination_namespace
Contributor

Would just overwrite namespace here - i.e. drop the if

Contributor Author

But that way this will set namespace in each yaml section; we want to only update it if that yaml portion already has namespace. I use this method recursively.

Contributor

Reading more here about what happens when a given namespace is defined in the kubectl context versus a given yaml spec. This might be working for us in tests because we are using the default namespace in the context definition in our ~/.kube/config file, and operations against yaml resource specs without defining the namespace work as intended. In any case, it seems this is the behavior implemented:

If no namespace is specified in the schema, then I'd expect the one from kubeconfig to be used.

If a namespace is explicitly specified on the command line and that conflicts with the one in the schema, kubectl should raise an error.

This is why I am in favor of putting namespace in the schema, as it takes precedence over the implicit namespace provided by the context.

def apply(app_name, verbose):
    """Create the Kubernetes resources defined in ./hokusai/%s.yml""" % app_name
    set_verbosity(verbose)
    hokusai.k8s_create(KUBE_CONTEXT, app_name, app_name)
Contributor

Would use hokusai.k8s_update here (also update the function k8s_update to accept yaml_file_name as a keyword argument) - this will then use kubectl apply so you can apply updates to the spec.
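
A rough sketch of that suggestion, with the k8s_update signature assumed (the real function may take different arguments):

def apply(app_name, verbose):
    set_verbosity(verbose)
    # kubectl apply (via k8s_update) so re-running the command updates existing resources
    hokusai.k8s_update(KUBE_CONTEXT, app_name, yaml_file_name=app_name)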

set_verbosity(verbose)
# copy staging yml file
if destination_namespace is None: destination_namespace = app_name
create_new_app_yaml(source_file, app_name, destination_namespace)
Contributor

Perhaps pause here (ask for confirmation?) and then call hokusai.k8s_create(KUBE_CONTEXT, app_name, app_name)?

Contributor Author

Oh, so you are thinking this would immediately call create? I was thinking this would just create the yaml file, and once everything is verified, whoever is creating this app does the apply separately.

Contributor

That works as well - but whereas kubectl create will fail (as you saw) if the given resources already exist, kubectl apply will update existing resources, so the former is safer when you want to create.

with open("hokusai/%s.yml" % app_name, 'w') as output:
yaml.safe_dump_all(yaml_content, output, default_flow_style=False)
except yaml.YAMLError as exc:
print(exc)
Contributor
@izakp izakp Apr 25, 2018

raise HokusaiError("Some error message") and it will be caught here, or just don't catch the exception and it will bubble up and be caught here.
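
A small sketch of the first option, inside the existing function (the import path for HokusaiError is an assumption):

from hokusai.lib.exceptions import HokusaiError  # assumed import path

try:
    yaml_content = list(yaml.safe_load_all(stream))
except yaml.YAMLError as exc:
    # Surface a project-level error instead of printing and continuing
    raise HokusaiError("Cannot parse yaml: %s" % exc)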

            yaml.safe_dump_all(yaml_content, output, default_flow_style=False)
    except yaml.YAMLError as exc:
        print(exc)

Contributor
@izakp izakp Apr 25, 2018

To dump the env configmap out, you can use the ConfigMap class and, instead of calling save on it, update its struct attribute (which serves as a cache), then follow what is defined in the save method to write out a new NamedTemporaryFile and create it in the new namespace.

import os
from hokusai.services.kubectl import Kubectl
from hokusai.services.configmap import ConfigMap

kctl = Kubectl(context)
configmap = ConfigMap(context)

configmap.load()
configmap.struct['metadata']['namespace'] = review_app_namespace
f = configmap._to_file()
try:
  shout(kctl.command("apply -f %s" % f))
finally:
  os.unlink(f)

    if 'namespace' in yaml_section: yaml_section['namespace'] = destination_namespace
    for k, v in yaml_section.iteritems():
        if isinstance(v, dict):
            update_namespace(v, destination_namespace)
Contributor
@izakp izakp Apr 25, 2018

Ah, I see what you mean - it doesn't have to be recursive; it only needs to operate on the top-level Kubernetes API objects. All top-level k8s API objects should have apiVersion and kind keys. metadata may or may not be present, so for each apiObject: if 'metadata' in apiObject, set apiObject['metadata']['namespace'] = destination_namespace; if not, set apiObject['metadata'] = { 'namespace': destination_namespace }.

For example ending up with something like this:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: fooNamespace
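
A short sketch of that approach in Python (the helper name is illustrative; the PR's final implementation may differ):

def set_namespace(yaml_content, destination_namespace):
    # yaml_content is the list of top-level API objects parsed from the source yml
    for api_object in yaml_content:
        if 'metadata' in api_object:
            api_object['metadata']['namespace'] = destination_namespace
        else:
            api_object['metadata'] = {'namespace': destination_namespace}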

@ashkan18
Contributor Author

Alright, made some changes to the command structure here. To follow the commands for staging and production, review apps now have:

hokusai review_app setup <name>  # creates yaml file
hokusai review_app create <name> # applies yaml file
hokusai review_app update <name> # updates :)

@ashkan18
Contributor Author

Alright, latest command format:

hokusai review_app setup <name>    # creates the yaml file
hokusai review_app create <name>   # creates namespace/service/deployment; at this stage it will fail until we copy the ConfigMap
hokusai review_app copy_env <name> # copies the staging ConfigMap from the default namespace to the new namespace
hokusai review_app update <name>   # updates deployment/service/namespace based on the yaml

@click.option('-v', '--verbose', type=click.BOOL, is_flag=True, help='Verbose output')
@click.option('-sf', '--source-file', type=click.STRING, default='hokusai/staging.yml', help='Source deployment file')
@click.option('-dn', '--destination-namespace', type=click.STRING, default=None, help='Target Namespace')
def setup(app_name, verbose, source_file, destination_namespace):
Contributor

Minor, but the click library will automatically parse any @click.command-wrapped function docstrings and display them as --help :)
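
For instance, a sketch of how that might look for this command (the argument decorator and docstring text are assumptions, not the PR's actual code):

@click.command()
@click.argument('app_name', type=click.STRING)
@click.option('-v', '--verbose', type=click.BOOL, is_flag=True, help='Verbose output')
@click.option('-sf', '--source-file', type=click.STRING, default='hokusai/staging.yml', help='Source deployment file')
@click.option('-dn', '--destination-namespace', type=click.STRING, default=None, help='Target Namespace')
def setup(app_name, verbose, source_file, destination_namespace):
    """Create hokusai/<APP_NAME>.yml for a review app based on the source deployment file"""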

@@ -3,6 +3,7 @@ apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: {{ project_name }}-{{ component }}
+  namespace: default
Contributor

👍

@izakp
Contributor

izakp commented Apr 27, 2018

LGTM! Will get this merged and into a release!

@ashkan18 ashkan18 merged commit 4a78c0e into artsy:master Apr 27, 2018