This repository has been archived by the owner on Sep 26, 2021. It is now read-only.

Drivers: add joyent triton driver #1197

Closed
wants to merge 1 commit into from

Conversation

twhiteman

Adds support for the Joyent SDC docker platform - fixes #1196

@hairyhenderson
Contributor

I literally just finished watching the demo video not 10 minutes ago, and I was thinking man, someone should write a docker-machine driver for SDC, this stuff looks cool!

No lie!

Anyway, thanks for the PR, @twhiteman! Looks like you need to sign your commit, then the CI build should pass.

Also, looks like some weird mergeness going on - maybe you just need to squash the commit?

/cc @ehazlett @nathanleclaire

@twhiteman
Author

Yeah, my first attempt failed due to go fmt, then I squashed and pushed again... now it's failing due to something that's not my commit... I guess I messed up the squash.

@hairyhenderson
Contributor

@twhiteman - actually it's just failing because the signature is missing. The build checks for that :)

@twhiteman
Author

@hairyhenderson but the failed build is complaining about c840be8, which is not my commit

@hairyhenderson
Contributor

@twhiteman OH - sorry, I see that now. Just squash it down to a single commit and force-push, then in theory it should be fine :)

@ehazlett
Contributor

Yeah a rebase / squash should be fine.

@hairyhenderson
Contributor

@twhiteman - I wonder if something like joyentsdc would be a better name for the driver... sdc is a little cryptic to people who aren't already familiar with it... It would also match more closely the convention with other drivers (such as amazonec2 or vmwarefusion). WDYT?

@twhiteman
Author

Nod, I could change to that easily enough. The one thing against using "joyent" in the name (and this is a minor point) was that since SDC is open-source, there could be other operators out there (not joyent) that run SDC.

Anyway, I'll go ahead and update to joyentsdc and we can cross that bridge (sdc/joyent naming) if/when we get to it.

@hairyhenderson
Contributor

This is a good point. What might make sense, then, is to have an sdc driver and then add another one (named joyenttriton, maybe?) that is based on sdc but sets certain Joyent-specific things (the API endpoint, for one). This might be similar to the rackspace and openstack drivers.


If you want to try running SDC on your laptop, please visit [the getting started with Cloud on a Laptop section](https://github.com/joyent/sdc#getting-started). If you are already running SmartDataCenter, [here](https://github.com/joyent/sdc-docker) is what you need to do to enable the Docker service

$ docker-machine create --driver sdc --sdc-region=$REGION --sdc-account=$ACCOUNT --sdc-key=$PATH_TO_SSH_KEY


Perhaps this would be preferable?

docker-machine create \
  --driver triton \
  --triton-url=$CLOUDAPI_URL_OR_JOYENT_DATA_CENTER_NAME \
  --triton-account=$ACCOUNT \
  --triton-key=$PATH_TO_PRIVATE_SSH_KEY

@twhiteman twhiteman changed the title Drivers: add joyent sdc driver Drivers: add joyent triton driver May 21, 2015
@twhiteman twhiteman force-pushed the sdc-driver-v2 branch 2 times, most recently from cc66eb4 to bba7223 Compare May 21, 2015 21:23
@twhiteman
Author

Okay, the driver name and flags have changed to "Triton" and the readme/docs mention it as "Joyent Triton".

As for the SDC (open source platform) driver - there are no differences between SDC and Triton (as yet), so any SDC users will be able to use this Triton driver (or we'll refactor the driver to share/inherit in the future).

cli.StringFlag{
	Name:  "triton-url",
	Usage: "Triton Cloudapi URL",
	Value: "",
},
Contributor


API URLs generally have defaults in other drivers - I wonder if there's a sane default you could set here?

@hairyhenderson
Contributor

@twhiteman - there's an awful lot of shelling out going on in the driver. For the places where you shell out to openssl, could you use anything from utils/certs.go instead? And for the ssh shell-outs, could you use the stuff in the ssh package instead?

@twhiteman
Author

Thanks for feedback @hairyhenderson.

For command line default values - yes, agreed, I think I can provide better ones.

There's no mention of coal

I'll update the docs to better reflect the "coal" naming, and I'll combine it with a --triton-datacenter (short name) flag so that the URL flag isn't doing double duty (which was confusing).

--triton-tls-skip-verify

Yes, I think a --triton-tls-skip-verify flag is a good idea, as it could be useful in other situations too.

shelling out

Shell is used for two cases:

  • coal automatic URL service discovery - I will try to use the ssh library, though my first attempt has it prompting me for a password (for a password-less key)
  • openssl key/cert generation - I did look at using utils/certs and also crypto/x509, but I couldn't work out how to do the same as what openssl does (this may be my lack of Go familiarity)

Will update with new PR once I finish these items.
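For the openssl key/cert generation, the standard library alone can cover it. Here is a minimal sketch using crypto/x509 (the common name, validity period, and key usages are placeholders, not necessarily what the driver needs):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// selfSignedCertPEM generates an RSA key and a self-signed client
// certificate, roughly what the driver currently shells out to openssl for.
func selfSignedCertPEM() ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "docker-machine-client"}, // placeholder CN
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	// Template doubles as parent, which makes the cert self-signed.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	certPEM, err := selfSignedCertPEM()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(certPEM)[:27])
}
```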

@arnos

arnos commented Jun 2, 2015

+1

@twhiteman
Author

A few things changed in this latest PR:

  • dropped the coal-specific handling - this also removed the need to shell out for SSH
  • added a --triton-skip-tls-verify argument
  • added a --triton-datacenter argument, as an easy way to use a shorthand version of the URL
  • added default values for arguments
  • updated documentation

Note: I could not figure out how to replace the shell out for openssl and ssh-keygen, so I've marked them as dependencies in the documentation.
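As an illustration of the --triton-datacenter shorthand, the expansion could look something like this (the URL pattern follows Joyent's public CloudAPI naming and is an assumption here, not necessarily what the driver emits):

```go
package main

import "fmt"

// cloudAPIURL expands a short datacenter name into a full CloudAPI
// endpoint. The host pattern is illustrative; other SDC operators
// would need to pass a full --triton-url instead.
func cloudAPIURL(datacenter string) string {
	return fmt.Sprintf("https://%s.api.joyentcloud.com", datacenter)
}

func main() {
	fmt.Println(cloudAPIURL("us-east-1"))
}
```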

@misterbisson

@hairyhenderson, @ehazlett: any feedback here?

@corbinu

corbinu commented Jul 1, 2015

Any progress here? I am constantly having to switch between different Triton locations and boot2docker

@hairyhenderson
Contributor

Sorry for the inattention on my part here... I started a new job and had a new kid in the past 4 weeks, so I haven't had a lot of personal time lately ;)

I'm going to give this a try tonight.

@hairyhenderson
Contributor

$ docker-machine create -d triton --triton-account hairyhenderson --triton-key ~/.ssh/id_rsa triton     
Generating triton user certificates - you will be prompted for
your SSH private key password (if it's password protected).
Success!
To see how to connect Docker to this machine, run: machine env triton
$ eval "$(machine env triton)"
$ docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from hello-world (abf65824-5d7a-4cc2-b0a7-834c9cf34013)
91c95931e552: Already exists. 
a8219747be10: Already exists. 
Status: Downloaded newer image for docker.io/hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
...

Seems to be working!

Code-wise, things look fine (except for one super-minor outstanding suggestion that I'd like an opinion on).

The BATS integration tests don't pass with the driver, but I'm not sure they should, since a bunch of the functionality isn't supported (stop/kill/upgrade/etc... all make no sense for triton).

So from a quick pass this is a 👍 from me. Will need a review from @ehazlett and @nathanleclaire before it can be merged.

@hairyhenderson
Contributor

This is likely unrelated to the driver, but I'm encountering it while testing: it looks like Triton ignores the --memory flag and always gives me the 1GB package.

$ docker run -d -m 128m --name foo debian:jessie
e7107503e0434b40a02801282b6a3926057268bbad01495ea35396fdb5e581d0
$ docker inspect -f '{{.HostConfig.Memory}}' foo
1073741824

And if I look in the Triton UI (https://my.joyent.com/main/#!/docker/containers), I see it's using the t4-standard-1G package, where it should be using t4-standard-128M.

But, as I said, this probably isn't related to this PR!

@trentm

trentm commented Jul 3, 2015

@hairyhenderson The problem that you hit with "-m 128m" is a bug with Triton + docker 1.7 (it works with docker <= 1.6) that is on me to fix. The ticket can be viewed here: https://smartos.org/bugview/DOCKER-458

And yah, that issue isn't related to this PR. :)

@misterbisson

@hairyhenderson thanks for giving this a look and 👍, and congratulations on all the exciting new changes in your life!

@twhiteman
Author

@hairyhenderson

The BATS integration tests don't pass with the driver, but I'm not sure they should,
since a bunch of the functionality isn't supported (stop/kill/upgrade/etc... all make
no sense for triton).

I noticed the test infrastructure has changed a few times (since I started this); what is the recommended way to run the tests against a particular driver?

@dekobon

dekobon commented Jul 14, 2015

+1 Please merge this!

@misterbisson

@ehazlett or @nathanleclaire is there anything we can do to move this forward?

Signed-off-by: Todd Whiteman <todd.whiteman@joyent.com>
@corbinu

corbinu commented Jul 26, 2015

Anything I can do to help? I have been having to build my own version every time a new docker-machine is released.

@bahamat

bahamat commented Jul 30, 2015

+1 I'd love this.

@ehazlett
Contributor

Hi, thanks for your efforts and persistence in submitting this driver. We are extremely excited that there is so much interest in Docker Machine and we really appreciate your interest. However, at this time it is proving to be extremely difficult for us to keep up with reviewing and testing each of these drivers for inclusion in the Machine core. We really want to switch to a more pluggable model, as well as polish up a few things about the driver model which need to be changed to ensure a smooth and sustainable future.

Therefore, we will be moving to a plugin model for 0.5 and would love to have you involved in the design and development process. We are closing the outstanding driver PRs at this time, but please keep the code. We will stick closely to the current driver interface and you should be able to re-use a lot (if not all) of the existing driver along with the new plugin model. We will be moving all of the drivers which are merged directly into Machine today to the plugin model when it is available, so there will be no special treatment of those, and there will be documentation outlining the process of developing and using a Docker Machine driver plugin.

With all of that being said, we want to apologize for the lack of feedback on your pull request. As contributors ourselves, we understand that being left in limbo is no fun. We would have liked to address this sooner, and in the future we will be more responsive around these kinds of issues.

Once again, we thank you for the contribution and the tremendous support. Keep hacking strong!

If you want to contribute to the design of the plugin model, we'd love to get your input on this issue where we will be planning it:

#1626

@ehazlett & @nathanleclaire

@ehazlett ehazlett closed this Jul 31, 2015
@hairyhenderson
Contributor

@twhiteman @misterbisson - FYI, there's now a list of available driver plugins in the Docker Machine docs: https://github.com/docker/machine/blob/master/docs/AVAILABLE_DRIVER_PLUGINS.md

Not sure if you guys have a working driver plugin yet, but if so it'd be great to see it listed there.

@misterbisson

Thanks for the followup, @hairyhenderson. @tianon did some investigation in https://github.com/tianon/docker-machine-driver-triton and got the driver plugin working. Sadly, however, the driver plugin isn't enough.

Machine's workflow is to provision a VM using the driver plugin, then configure it (including setting certs) by SSHing into it. That's the part that doesn't work on Triton, since there is no VM. All containers on Triton run on multi-tenant bare metal, so there's nothing that a customer can SSH into to set certs.

Instead, the certs are set at account creation time when the user uploads an SSH public key. We use the user's SSH key to generate a TLS cert, so there's no additional configuration for Machine to do there.

So, while the driver works, Docker Machine doesn't work on Triton because of that incompatibility. Do you have any thoughts on how we can move forward?

@corbinu

corbinu commented Dec 10, 2015

@hairyhenderson I would also like to know if there is any way I can help move this forward. At the moment I am stuck on my own custom fork of the original implementation.

@tianon
Contributor

tianon commented Dec 17, 2015

My only possible thought was to use the Go SSH package to pretend we're an SSH server that happens to only implement the exact commands Machine uses to grab certs, etc, but that feels like going way too far in the wrong direction. 😢

Having Machine be a generic interface for managing all possible configurations of my DOCKER_* environment variables to talk to multiple Docker API endpoints is my own personal dream -- even things like Rackspace's Carina could benefit from loosening the coupling of SSH <-> Docker host.

@hairyhenderson
Contributor

@tianon - In concept it seems pretty straightforward to be able to let drivers override the provisioner, though it probably would require a Driver API change. @twhiteman logged #886 for this sort of thing a while back...
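To make the idea concrete, a driver-overridable provisioner could take roughly this shape (none of these types exist in Machine's actual Driver API; this is only a sketch of what such a change might look like):

```go
package main

import "fmt"

// Provisioner abstracts post-create host setup. Hypothetical interface:
// a driver could supply its own implementation instead of the SSH default.
type Provisioner interface {
	Provision() error
}

// sshProvisioner sketches the default behaviour: connect over SSH
// and install certs on the newly created VM.
type sshProvisioner struct{ host string }

func (p sshProvisioner) Provision() error {
	fmt.Println("would SSH into", p.host, "and install certs")
	return nil
}

// noopProvisioner suits platforms like Triton, where certs are already
// derived from the account's SSH key and there is no VM to SSH into.
type noopProvisioner struct{}

func (noopProvisioner) Provision() error { return nil }

func main() {
	var p Provisioner = noopProvisioner{}
	fmt.Println(p.Provision() == nil)
}
```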
