This repository has been archived by the owner on Mar 23, 2019. It is now read-only.

Deploy on Kubernetes/OpenShift using ansible-container run #152

Closed
concaf opened this issue Aug 8, 2016 · 18 comments
@concaf
Contributor

concaf commented Aug 8, 2016

ISSUE TYPE
  • Feature Idea
SUMMARY

Hi,

Right now, to deploy my application on Kubernetes, first I have to run ansible-container shipit kube and then deploy the resulting playbook using the ansible-playbook command.

This workflow is definitely useful if I am exporting the playbook to a remote Kubernetes cluster and running it using ansible there, but if I want to deploy my application on my local Kubernetes cluster, then would it make more sense to deploy the application using something like ansible-container run --provider kubernetes? It could be a wrapper around shipit and ansible-playbook commands.
Similar workflow for OpenShift.

Does this make sense? Thoughts?

@chouseknecht
Contributor

ansible-container shipit kube has a --save-config option to generate the K8s configuration files. It might be interesting to have ansible-container pipe those configuration files directly through kubectl.
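Piping the saved configs through kubectl could look something like the sketch below. Note the config directory layout and the .json extension are assumptions for illustration, not necessarily what shipit actually emits.

```python
# Hedged sketch: build one "kubectl create -f <file>" invocation per saved
# config file. Directory layout and .json extension are assumptions.
import glob
import os

def kubectl_commands(config_dir):
    """Return the kubectl commands to create each saved config file."""
    paths = sorted(glob.glob(os.path.join(config_dir, "*.json")))
    return [["kubectl", "create", "-f", p] for p in paths]
```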

@concaf
Contributor Author

concaf commented Aug 9, 2016

@chouseknecht, yep, would you want to pipe to the kubectl command, or deploy using the relevant modules in playbooks?

@j00bar
Contributor

j00bar commented Aug 9, 2016

This would be a separate engine module - presently we implement docker/docker-compose. It would require a parallel implementation of what's in container/docker for kubes.

@concaf
Contributor Author

concaf commented Aug 10, 2016

Thanks for the pointers @j00bar.
What would you suggest for the implementation of container/kubernetes? A couple of ways that come to my mind are:

  • Using a python library like pykube
  • Making API calls to k8s directly
  • Leveraging the clustering/kubernetes module in ansible-modules-extras

Could you point me to the way forward for this?
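For reference, the second option (making API calls to k8s directly) is mostly a matter of building the right REST paths. A minimal sketch, assuming a kube-apiserver reachable over plain HTTP (the address is illustrative, and real clusters usually need auth):

```python
# Minimal sketch of talking to the Kubernetes REST API directly.
# The apiserver address is an assumption; real clusters usually need auth.
import json
import urllib.request

def pods_url(api, namespace="default"):
    """Core v1 endpoint that lists pods in a namespace."""
    return "{}/api/v1/namespaces/{}/pods".format(api, namespace)

def list_pods(api="http://localhost:8080", namespace="default"):
    """Return pod names in a namespace via the core v1 REST API."""
    with urllib.request.urlopen(pods_url(api, namespace)) as resp:
        body = json.load(resp)
    return [item["metadata"]["name"] for item in body.get("items", [])]
```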

@j00bar
Contributor

j00bar commented Aug 10, 2016

Either of the first two would be preferable to the third - don't forget that the client isn't required to have Ansible installed. Not knowing as much about k8s as I'm sure you do, my advice is to consider what is going to be the most maintainable as Kubernetes and its API evolve.

@dustymabe
Contributor

We'll soon be splitting out the library that we have been using as a backend for atomicapp to communicate with the Kubernetes and OpenShift APIs. We might be able to re-use that here.

cc @cdrage

@concaf
Contributor Author

concaf commented Aug 10, 2016

+1, @cdrage's work covers almost all of the operations required here, awesome :)

@cdrage
Contributor

cdrage commented Aug 10, 2016

Hey @j00bar

I'll be splitting out the library we use in atomicapp into a separate repo for people to consume. Rather than relying on kubectl being installed on the machine in order to deploy a Kubernetes app, you can go through the HTTP API instead. Same goes for OpenShift.

This will be an agnostic library for use with Kubernetes and OpenShift.
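This is not the library's actual API, but one way such an agnostic client can route requests: at the time, OpenShift-specific resource kinds lived under /oapi/v1 while core Kubernetes kinds lived under /api/v1. A hypothetical dispatcher:

```python
# Illustrative only (not the library's actual API): route a resource kind
# to the right API base path for Kubernetes vs. OpenShift.
OPENSHIFT_KINDS = {"DeploymentConfig", "Route", "ImageStream", "BuildConfig"}

def base_path(provider, kind):
    """Pick /oapi/v1 for OpenShift-specific kinds, /api/v1 otherwise."""
    if provider == "openshift" and kind in OPENSHIFT_KINDS:
        return "/oapi/v1"
    return "/api/v1"
```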

I should have this up, tested and properly packaged by the end of Week 34 (August 21st-26th).


@chouseknecht
Contributor

@cdrage if that's available, then it seems we should just scrap the whole playbook/role generation thing. It feels like an unnecessary step. I'm guessing what we really want is:

  • Deploy the application directly to Kube or Openshift via the API.
  • Optionally, generate the Kube or Openshift config templates for debugging/testing purposes
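The second bullet boils down to emitting manifests like the one sketched below. The input field names here are illustrative, not ansible-container's actual data model.

```python
# Hedged sketch: build a minimal Kubernetes v1 Pod manifest from a
# service-like description. Input field names are illustrative.
def to_pod_manifest(name, image, ports=()):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "ports": [{"containerPort": p} for p in ports],
            }]
        },
    }
```

The same dict could either be serialized to a config template for debugging, or POSTed straight to the API for direct deployment.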

@concaf
Contributor Author

concaf commented Aug 11, 2016

if that's available, then it seems we should just scrap the whole playbook/role generation thing.

@chouseknecht by scrapping, do you mean eliminating the need to generate ansible roles/k8s configuration using shipit? In my opinion, it's super useful for someone to convert the artifacts from docker compose to k8s/oc, and the ansible roles fit nicely if someone needs to integrate them with their already existing production infrastructure.

Did I get you wrong?

@chouseknecht
Contributor

I was just putting it out there. If generating a role is useful, then we should keep it. If it's not useful, and it's just an extra step that does not get used, then it should be eliminated. Whether to eliminate it will depend on user feedback.

In the meantime, when @cdrage's work lands in the separate, consumable repo, it seems we should move quickly to incorporate it into ansible-container and create a one-step shipit option that deploys the app directly to Kube/Openshift via the API.

@chouseknecht
Contributor

Honestly, I had not actually looked at atomicapp before now. Seems the thing we want to co-opt is the run command: atomicapp run projectatomic/helloapache --provider=kubernetes. I might poke at the code a bit and see what's involved...

@chouseknecht
Contributor

And before we get too far down this road, @detiber is working on a Swagger-generated library that we need to consider as well.

@cdrage
Contributor

cdrage commented Aug 29, 2016

Hey all,

I've been working hard on increasing the test coverage as well as doing some bug hunting with Kubeshift. I've updated the repo https://github.com/cdrage/kubeshift to include the initial release (0.0.2) and I've also uploaded it to PyPI so you can easily install it via pip.

There are still some features that need to be added / fixed (mainly, the 'certificate-data' issues when deploying to a cert-specific configuration online).

Otherwise, feel free to try it out.

I'll be pushing to get this repo upstream to CentOS / RHEL and Debian, although of course that takes quite a while to land in a mainstream release.

@cdrage
Contributor

cdrage commented Sep 6, 2016

I've updated https://github.com/cdrage/kubeshift to address the certificate-data issues and added some functional tests. I should be doing another release this week (0.0.2) that will include the fixes on PyPI.

@cdrage
Contributor

cdrage commented Sep 14, 2016

I've worked hard these past few days to implement some missing methods that I thought were needed for this functionality to work.

This was needed to gather and grep information on which pods are running, which services exist, what secrets are available, etc.

I've also updated the Kubeshift documentation to reflect this (https://github.com/cdrage/kubeshift). Again, since it was a large update, I'll be releasing another version of Kubeshift this week to PyPI so that people can retrieve the latest release via pip.

@chouseknecht
Contributor

Moving this discussion to a new issue: #362
