
LIBCLOUD-781 Introducing Container-as-a-Service cloud as an API base driver to libcloud #666

Closed

Conversation

@tonybaloney
Contributor

tonybaloney commented Dec 22, 2015

The intention of this PR is to enable support for pure container hosts such as Docker and rkt, as well as to support the introduction of container-as-a-service providers.

@tonybaloney tonybaloney mentioned this pull request Dec 22, 2015
1 of 4 tasks complete
@tonybaloney
Contributor Author

tonybaloney commented Dec 23, 2015

Docs available here: http://libcloud-fork.readthedocs.org/en/libcloud-781_containers/
Docker implementation completed.

@tonybaloney
Contributor Author

tonybaloney commented Dec 23, 2015

@erjohnso I've reviewed the https://cloud.google.com/container-engine/reference/rest/ docs for Container Engine. I'm not sure whether it exposes a Docker endpoint as well as a Kubernetes endpoint, or how you deploy containers.

@tonybaloney
Contributor Author

tonybaloney commented Dec 23, 2015

Further considerations for the design:

  • IP addresses: add an ip_addresses[] field to the Container class.
  • Clusters: some CaaS providers support a notion of clusters (e.g. Kubernetes). The ContainerDriver should have a supports_clusters flag; where it is set, the cluster methods (create_cluster, list_clusters, etc.) are implemented.
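A rough sketch of what those two considerations might look like (class and attribute names here are illustrative, not the final libcloud API):

```python
# Illustrative sketch only, not the merged libcloud code.

class Container:
    def __init__(self, id, name, image, ip_addresses=None, extra=None):
        self.id = id
        self.name = name
        self.image = image
        # Proposed: the IP addresses assigned to the container
        self.ip_addresses = ip_addresses or []
        self.extra = extra or {}


class ContainerDriver:
    # Cluster-capable providers (e.g. Kubernetes, ECS) set this to True
    # and implement create_cluster, list_clusters, etc.
    supports_clusters = False

    def list_containers(self):
        raise NotImplementedError()

    def list_clusters(self):
        if not self.supports_clusters:
            raise NotImplementedError("Driver does not support clusters")
        raise NotImplementedError()
```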
@SamuelMarks
Contributor

SamuelMarks commented Dec 26, 2015

This feature sounds evil >_>

@tonybaloney
Contributor Author

tonybaloney commented Dec 26, 2015

@SamuelMarks because the PR number is 666, the mark of the devil, or because trying to abstract something as new and relatively unstandardised as containerisation is bad?

@SamuelMarks
Contributor

SamuelMarks commented Dec 26, 2015

Just the 666 :P

Although in all honesty, integrating containers is a slippery slope towards PaaS. We should be wary of scope creep.

If we are interested in supporting clusters, then let's look at putting it in another repository.

@tonybaloney
Contributor Author

tonybaloney commented Dec 26, 2015

I'm about to write a blog post on that very topic to clarify the inherent difference between CaaS and PaaS. The scope and interfaces are essentially complete: you can talk to a containerised hosting solution in a similar way to the compute driver (see Joyent Triton as the first example).

Ideally the PR should have more implementations in the form of other drivers; I would like to include Amazon ECS and Google Containers, which is where the additional methods come from. Those are optional, gated behind a class property (supports_clusters). If ECS drives the pattern in other CaaS providers then this interface should be suitable; if not, at least the base methods are good for Docker-based systems.
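A minimal sketch of that consumer-side pattern, with stand-in classes (the real driver names and signatures may differ):

```python
# Stand-in container and driver classes; illustrative, not the real API.

class _Container:
    def __init__(self, name):
        self.name = name


class DockerLikeDriver:
    # A plain container host: no cluster concept
    supports_clusters = False

    def list_containers(self):
        return [_Container("web"), _Container("db")]


def summarise(driver):
    """Works against any CaaS driver, cluster-aware or not."""
    names = [c.name for c in driver.list_containers()]
    if getattr(driver, "supports_clusters", False):
        # Only cluster-capable providers (e.g. ECS) implement these
        names += ["cluster:" + cl.name for cl in driver.list_clusters()]
    return names
```

The point of the `supports_clusters` property is that consumer code like `summarise` stays portable across drivers with and without cluster support.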

@tonybaloney
Contributor Author

tonybaloney commented Dec 26, 2015

I also need to develop a driver for rkt, but the API is still "experimental", so it's really a waste of time until it's in beta. At a glance, the API spec would be suitable for these interfaces.

@tonybaloney
Contributor Author

tonybaloney commented Dec 26, 2015

Plus, we already support DNS, storage and load balancing, so this is not that different. The consensus on the discussion thread was to introduce a new driver type.

@tonybaloney tonybaloney changed the title LIBCLOUD-781 Introducing Containers as an API base driver to libcloud LIBCLOUD-781 Introducing Container-as-a-Service cloud as an API base driver to libcloud Dec 26, 2015
@tonybaloney tonybaloney mentioned this pull request Jan 8, 2016
tonybaloney added 11 commits Jan 9, 2016
# Conflicts:
#	CHANGES.rst
#	libcloud/test/secrets.py-dist
… updated docs with examples and renamed the package to utils
…th tests and doc examples
… also with support for basic http auth
@tonybaloney
Contributor Author

tonybaloney commented Jan 13, 2016

@erjohnso ^^ see the start of the Kubernetes API implementation. I'm corroborating this against GCE to make sure they aren't too dissimilar.

tonybaloney added 2 commits Jan 13, 2016
…gle-container pods. also added destroying containers (pods) via the API.
…g the correct class name.
@erjohnso
Member

erjohnso commented Jan 14, 2016

@tonybaloney - for the k8s stuff, it looks like you're equating nodes to containers and clusters to namespaces, correct?

Without having thought through this too deeply, I was imagining that a cluster would represent the k8s cluster. I think that would work pretty well for create/delete. A create call could even go so far as to install a new k8s cluster (either an open-source local install, a GKE cluster, etc.). In that context, list_clusters likely only works for a meta-service like GKE.

I was also imagining containers would map more closely to pods, since pods are likely the closest representation (e.g. 'the smallest deployable unit' in k8s) and work well with CRUD operations.

wdyt?

@tonybaloney
Contributor Author

tonybaloney commented Jan 14, 2016

@erjohnso in the driver you connect to the API server of the cluster. When you call list_clusters, the results equate to namespaces. When you call list_containers, it looks at all the pods in that cluster (namespace) and lists all the containers within them.
If you deploy a container to a cluster, it creates a single-container pod for that container and places it into the namespace.
list_locations will be the method that shows the clusters in GKE; you can provision a cluster to those locations via the API.

@tonybaloney
Contributor Author

tonybaloney commented Jan 14, 2016

That design also reserves the opportunity to add extension methods that create pods with multiple containers in them, without breaking the design. Each container carries its owning pod and namespace in the extra dict.
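The mapping described above could be sketched like this (a simplified stand-in, not the actual driver code; the `api` client object and payload shapes are assumptions based on the Kubernetes REST API):

```python
# Simplified sketch of the namespace -> cluster, pod -> container mapping.
# `api` is a stand-in for a Kubernetes API client with a get(path) method.

class ContainerCluster:
    def __init__(self, name):
        self.name = name


class Container:
    def __init__(self, name, image, extra):
        self.name = name
        self.image = image
        self.extra = extra  # carries the owning pod and namespace


def list_clusters(api):
    # Kubernetes namespaces are surfaced as libcloud "clusters"
    return [ContainerCluster(ns["metadata"]["name"])
            for ns in api.get("/api/v1/namespaces")["items"]]


def list_containers(api, cluster):
    # Every container inside every pod of the namespace is listed
    containers = []
    pods = api.get("/api/v1/namespaces/%s/pods" % cluster.name)["items"]
    for pod in pods:
        for c in pod["spec"]["containers"]:
            containers.append(Container(
                name=c["name"],
                image=c["image"],
                extra={"pod": pod["metadata"]["name"],
                       "namespace": cluster.name}))
    return containers
```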

@tonybaloney
Contributor Author

tonybaloney commented Jan 17, 2016

Let me recap.

  1. The idea of the PR is to introduce and support container-as-a-service cloud providers
  2. The PR should not bias toward a particular vendor or provider API
  3. The API should be easy to use

http://www.slideshare.net/AnthonyShaw5/introducing-container-asaservice-support-to-apache-libcloud

The drivers still need polish, but I would like to get an RC out this week for feedback from the community.

@erjohnso are you +1 on the comments above? Then I will resync and merge.

@erjohnso
Member

erjohnso commented Jan 19, 2016

@tonybaloney

Please don't gate progress on this with my input. I'm by no means a k8s expert, nor am I as well versed in public container/Docker services as you are.

While I think what you propose will work, my biggest worry (non-blocking!) is Kubernetes users having to map resource primitives and methods to the CaaS driver's point of view. A common deployment option for k8s is to use Google Container Engine (GKE) to spin up clusters, and I do think GKE fits very well with the CaaS driver (same with ECS). But the CaaS driver isn't as good a fit for k8s itself (especially within GKE). For instance, you'd have list_clusters listing k8s clusters if the connection points to GKE, but returning a list of namespaces if the connection points to the k8s endpoint. Having two separate concepts of 'clusters' in this context seems odd to me.

I also feel the CaaS driver is not a good fit as a client library for k8s. I would expect k8s users to look for methods like create_pod, list_namespaces, etc. My two cents: either drop k8s from the CaaS driver, or clutter it up with a bunch of ex_ methods that actually fit the k8s resources (which goes against your point 2 above).
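For illustration, k8s-specific resources would surface through libcloud's ex_ extension-method convention, along these lines (the method names and return values here are hypothetical, not part of the proposed base driver):

```python
class ContainerDriver:
    """Stub of the proposed base driver, for illustration."""

    def list_containers(self):
        raise NotImplementedError()


class KubernetesContainerDriver(ContainerDriver):
    # Provider-specific resources surface as ex_ extension methods,
    # per libcloud convention. Bodies are hard-coded for illustration.
    def ex_list_namespaces(self):
        return ["default", "kube-system"]

    def ex_create_pod(self, name, containers):
        return {"pod": name, "containers": containers}
```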

But please feel free to blaze ahead. As I said, I'm no expert, and I'm not directly affiliated with the k8s team or its community. Please consider my input as just another libcloud user's. :)

tonybaloney added 2 commits Jan 19, 2016
# Conflicts:
#	.travis.yml
…as subject to change.
@tonybaloney
Contributor Author

tonybaloney commented Jan 19, 2016

@erjohnso I've marked the driver as subject to change. I agree with your observations; of all the drivers, k8s is the one I've lost the most sleep over.

I've been asking people who are using either GKE or ECS how they would use a k8s driver and what for. The common response is this:

  • We have ECS/GKE for production containers.
  • We want to emulate some of that functionality locally because we have spare hardware; k8s gives us that.
  • K8s would be a good solution, but we don't want to tie ourselves to its concepts because we want to use ECS/GKE.

For that use case, this makes sense. But if you want to use k8s directly, I don't think you would use this driver.

Either way, further community feedback would be ideal.

@asfgit asfgit closed this in a313623 Jan 20, 2016
@@ -8,7 +8,6 @@ python:
- 3.4
- 3.5
- "pypy"
sudo: false

@Kami
Member

Kami commented Jan 23, 2016

Ah, I missed this change.

You use sudo below, so even if the sudo attribute is removed, it will still result in using the legacy (non-container) Travis CI infrastructure.

I think they added support for installing Debian packages in a way that still allows you to use the new container-based infrastructure. I will look into it.
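The approach Kami mentions would look something like this in .travis.yml (an illustrative sketch, not the actual change; the package is an example):

```yaml
# Illustrative only: with sudo disabled, apt packages come via the
# addons key, which keeps the build on the container-based infrastructure.
sudo: false
addons:
  apt:
    packages:
      - graphviz
```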

env: ENV=docs
before_script: TOX_ENV=docs-travis
before_install:
- sudo apt-get update -qq

@Kami
Member

Kami commented Jan 23, 2016

I'm not 100% sure if that's the root cause, but the lint and docs jobs seem to have started getting stuck after this change was merged (see the build history at https://travis-ci.org/apache/libcloud; the current build is stuck as well). At first I thought it might be an issue with Travis.

Will see if my change (17384ca) fixes that.

@Kami
Member

Kami commented Jan 23, 2016

Sadly the addons stuff doesn't appear to work (I didn't dig in too much), but at least the "jobs getting stuck" part is fixed now.

I believe builds got stuck because "sudo: required" is not valid syntax, or something related to that.

@Kami
Member

Kami commented Jan 23, 2016

I can't figure out why the addons stuff is not working (I tried many different things) :/

It seems that it works for all the tasks except the docs one (https://travis-ci.org/apache/libcloud/jobs/104312529).

I checked other projects and they use the same syntax and it works just fine for them.

@@ -44,7 +44,7 @@
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx',
'sphinx.ext.viewcode']
'sphinx.ext.viewcode', 'sphinx.ext.graphviz']

@Kami
Member

Kami commented Jan 23, 2016

Nice 👍

b64encode(
('%s:%s' % (self.username,
self.password))
.encode('latin1'))

@Kami
Member

Kami commented Jan 23, 2016

Probably safer to use utf-8, right? :)
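Concretely, the suggestion is to encode the credentials as UTF-8 before base64-encoding them (a self-contained sketch; the surrounding HubClient code is assumed):

```python
import base64


def basic_auth_header(username, password):
    # utf-8 rather than latin1: survives usernames/passwords containing
    # characters outside the Latin-1 range
    token = base64.b64encode(
        ('%s:%s' % (username, password)).encode('utf-8'))
    return 'Basic %s' % token.decode('ascii')
```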

:param password: (optional) Your hub account password
:type password: ``str``
"""
super(HubClient, self).__init__(self.host, username,

@Kami
Member

Kami commented Jan 23, 2016

Minor thing: please use keyword arguments when calling methods where possible.

This means fewer surprises in case the method signature changes or similar.
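For example (an illustrative signature, not the actual HubClient constructor):

```python
def connect(host, username=None, password=None, secure=True):
    """Illustrative function standing in for a client constructor."""
    return (host, username, password, secure)


# Fragile: breaks silently if the parameter order ever changes
connect('hub.docker.com', 'alice', 's3cret')

# Robust: keyword arguments survive signature reordering
connect(host='hub.docker.com', username='alice', password='s3cret')
```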

@coveralls

coveralls commented Nov 19, 2017

Coverage Status

Changes Unknown when pulling a477303 on DimensionDataCBUSydney:LIBCLOUD-781_containers into apache:trunk.
