Conversation
I literally just finished watching the demo video not 10 minutes ago, and I was thinking man, someone should write a docker-machine driver for SDC, this stuff looks cool! No lie! Anyway, thanks for the PR, @twhiteman! Looks like you need to sign your commit, then the CI build should pass. Also, looks like some weird mergeness going on - maybe you just need to squash the commits?
Yeah, my first attempt failed due to go fmt, then I squashed and pushed again... now it's failing due to something that's not my commit... I guess I messed up the squash.
@twhiteman - actually it's just failing because the signature is missing. The build checks for that :)
@hairyhenderson but the failed build is complaining about c840be8, which is not my commit
@twhiteman OH - sorry, I see that now. Just squash it down to a single commit and force-push, then in theory it should be fine :)
Yeah, a rebase / squash should be fine.
@twhiteman - I wonder if something like
Nod, I could change to that easily enough. The one thing against using "joyent" in the name (and this is a minor point) was that since SDC is open source, there could be other operators out there (not Joyent) that run SDC. Anyway, I'll go ahead and update to joyentsdc and we can cross that bridge (sdc/joyent naming) if/when we get to it.
This is a good point. What might make sense, then, is to have an sdc driver and then add another one (named joyenttriton maybe?) which is based on sdc but sets certain Joyent-specific things (the API endpoint, for one). This might be similar to the rackspace and openstack drivers.
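The rackspace/openstack-style layering being discussed could be sketched with Go struct embedding. This is a hypothetical illustration only - the type names (`SDCDriver`, `JoyentTritonDriver`), the constructor, and the default endpoint are assumptions, not the actual docker-machine driver API:

```go
// Hypothetical sketch of the proposed layering: a generic "sdc" driver
// plus a thin "joyenttriton" driver that embeds it and pins
// Joyent-specific defaults. Names and the endpoint are illustrative.
package main

import "fmt"

// SDCDriver is the generic driver any SmartDataCenter operator could use.
type SDCDriver struct {
	APIEndpoint string
	Account     string
}

func (d *SDCDriver) DriverName() string { return "sdc" }

// JoyentTritonDriver embeds SDCDriver and overrides only what is
// Joyent-specific, mirroring how the rackspace driver builds on openstack.
type JoyentTritonDriver struct {
	SDCDriver
}

// NewJoyentTritonDriver bakes in a Joyent public-cloud endpoint
// (assumed value) so users only have to supply their account details.
func NewJoyentTritonDriver(account string) *JoyentTritonDriver {
	return &JoyentTritonDriver{SDCDriver{
		APIEndpoint: "https://us-east-1.api.joyent.com",
		Account:     account,
	}}
}

func (d *JoyentTritonDriver) DriverName() string { return "joyenttriton" }

func main() {
	d := NewJoyentTritonDriver("demo")
	fmt.Println(d.DriverName(), d.APIEndpoint)
}
```

The embedded base driver keeps all the SDC logic in one place; the derived type only shadows `DriverName` and the defaults.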
|
If you want to try running SDC on your laptop, please visit [the getting started with Cloud on a Laptop section](https://github.com/joyent/sdc#getting-started); if you are already running SmartDataCenter, [here](https://github.com/joyent/sdc-docker) is what you need to do to enable the Docker service.
|
$ docker-machine create --driver sdc --sdc-region=$REGION --sdc-account=$ACCOUNT --sdc-key=$PATH_TO_SSH_KEY |
Perhaps this would be preferable?

    docker-machine create \
      --driver triton \
      --triton-url=$CLOUDAPI_URL_OR_JOYENT_DATA_CENTER_NAME \
      --triton-account=$ACCOUNT \
      --triton-key=$PATH_TO_PRIVATE_SSH_KEY
Force-pushed from cc66eb4 to bba7223
Okay, the driver name and flags have changed to "Triton" and the readme/docs mention it as "Joyent Triton". As for the SDC (open source platform) driver - there are no differences between SDC and Triton (as yet), so any SDC users will be able to use this Triton driver (or we'll refactor the driver to share/inherit in the future).
cli.StringFlag{
    Name:  "triton-url",
    Usage: "Triton Cloudapi URL",
    Value: "",
API URLs generally have defaults in other drivers - I wonder if there's a sane default you could set here?
@twhiteman - there's an awful lot of shelling out going on in the driver. For the places where you shell out to
Thanks for feedback @hairyhenderson. For command line default values - yes, agreed, I think I can provide better ones.
I'll update the docs to better reflect the "coal" naming and I'll combine it with a
Yes, I think a
Shell is used for two cases:
Will update with new PR once I finish these items.
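One of the shell-outs under discussion - generating a key for the machine user - could in principle be done natively in Go. The sketch below is an assumption about the approach, not the driver's actual code: it generates an RSA private key with `crypto/rsa` and serializes it as PEM, the same artifact `ssh-keygen -t rsa` would write, without forking a process:

```go
// Replacing a shell-out to ssh-keygen with native key generation.
// This is a sketch of the idea; the driver's real key handling may differ.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// generatePrivateKeyPEM returns a PEM-encoded RSA private key,
// equivalent to what ssh-keygen writes to the private key file.
func generatePrivateKeyPEM(bits int) (string, error) {
	key, err := rsa.GenerateKey(rand.Reader, bits)
	if err != nil {
		return "", err
	}
	block := &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}
	return string(pem.EncodeToMemory(block)), nil
}

func main() {
	pemStr, err := generatePrivateKeyPEM(2048)
	if err != nil {
		panic(err)
	}
	fmt.Println(pemStr[:31]) // the PEM header line
}
```

Serializing the matching public key in authorized_keys format would additionally need the golang.org/x/crypto/ssh package, which may be why some shell-outs were harder to remove.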
+1
A few things changed in this latest PR:
Note: I could not figure out how to replace the shell out for
@hairyhenderson, @ehazlett: any feedback here?
Any progress here? I am constantly having to switch between different Triton locations and boot2docker
Sorry for the inattention on my part here... I started a new job and had a new kid in the past 4 weeks, so I haven't had a lot of personal time lately ;) I'm going to give this a try tonight.
$ docker-machine create -d triton --triton-account hairyhenderson --triton-key ~/.ssh/id_rsa triton
Generating triton user certificates - you will be prompted for
your SSH private key password (if it's password protected).
Success!
To see how to connect Docker to this machine, run: machine env triton
$ eval "$(machine env triton)"
$ docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from hello-world (abf65824-5d7a-4cc2-b0a7-834c9cf34013)
91c95931e552: Already exists.
a8219747be10: Already exists.
Status: Downloaded newer image for docker.io/hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
... Seems to be working! Code-wise, things look fine (except for one super-minor outstanding suggestion that I'd like an opinion on). The BATS integration tests don't pass with the driver, but I'm not sure they should, since a bunch of the functionality isn't supported ( So from a quick pass this is a 👍 from me. Will need a review from @ehazlett and @nathanleclaire before it can be merged.
This likely is unrelated to the driver, but I'm encountering this while I'm testing it out, so - it looks like Triton ignores the

    $ docker run -d -m 128m --name foo debian:jessie
    e7107503e0434b40a02801282b6a3926057268bbad01495ea35396fdb5e581d0
    $ docker inspect -f '{{.HostConfig.Memory}}' foo
    1073741824

And if I look in the Triton UI (https://my.joyent.com/main/#!/docker/containers), I see it's using the

But, as I said, this probably isn't related to this PR!
@hairyhenderson The problem that you hit with "-m 128m" is a bug with Triton + docker 1.7 (it works with docker <= 1.6) that is on me to fix. The ticket can be viewed here: https://smartos.org/bugview/DOCKER-458 And yah, that issue isn't related to this PR. :)
@hairyhenderson thanks for giving this a look and 👍, and congratulations on all the exciting new changes in your life!
I noticed the test infrastructure has changed a few times (since I started this) - what is the recommended way to run the tests against a particular driver?
+1 Please merge this!
@ehazlett or @nathanleclaire is there anything we can do to move this forward?
Signed-off-by: Todd Whiteman <todd.whiteman@joyent.com>
Anything I can do to help? I have been having to build my own version every time a new docker-machine is released.
+1 I'd love this.
Hi, thanks for your efforts and persistence in submitting this driver. We are extremely excited that there is so much interest in Docker Machine and we really appreciate your interest. However, at this time it is proving to be extremely difficult for us to keep up with reviewing and testing each of these drivers for inclusion in the Machine core. We really want to switch to a more pluggable model, as well as polish up a few things about the driver model which need to be changed to ensure a smooth and sustainable future.

Therefore, we will be moving to a plugin model for 0.5 and would love to have you involved in the design and development process. We are closing the outstanding driver PRs at this time, but please keep the code. We will stick closely to the current driver interface and you should be able to re-use a lot (if not all) of the existing driver along with the new plugin model. We will be moving all of the drivers which are merged directly into Machine today to the plugin model when it is available, so there will be no special treatment of those, and there will be documentation outlining the process of developing and using a Docker Machine driver plugin.

With all of that being said, we want to apologize for the lack of feedback on your pull request. As contributors ourselves, we understand that being left in limbo is no fun. We would have liked to address this sooner, and in the future we will be more responsive around these kinds of issues. Once again, we thank you for the contribution and the tremendous support. Keep hacking strong!

If you want to contribute to the design of the plugin model, we'd love to get your input on this issue where we will be planning it:
@twhiteman @misterbisson - FYI, there's now a list of available driver plugins in the Docker Machine docs: https://github.com/docker/machine/blob/master/docs/AVAILABLE_DRIVER_PLUGINS.md Not sure if you guys have a working driver plugin yet, but if so it'd be great to see it listed there.
Thanks for the followup, @hairyhenderson. @tianon did some investigation in https://github.com/tianon/docker-machine-driver-triton and got the driver plugin working.

Sadly, however, the driver plugin isn't enough. Machine's workflow is to provision a VM using the driver plugin, then configure it (including setting certs) by SSHing into it. That's the part that doesn't work on Triton, since there is no VM. All containers on Triton run on multi-tenant bare metal, so there's nothing that a customer can SSH into to set certs. Instead, the certs are set at account creation time when the user uploads an SSH public key. We use the user's SSH key to generate a TLS cert, so there's no additional configuration for Machine to do there.

So, while the driver works, Docker Machine doesn't work on Triton because of that incompatibility. Do you have any thoughts on how we can move forward?
@hairyhenderson I would also like to know if there is any way I can help move this forward. At the moment I am stuck on my own custom fork of the original implementation.
My only possible thought was to use the Go SSH package to pretend we're an

Having Machine be a generic interface for managing all possible
@tianon - In concept it seems pretty straightforward to be able to let drivers override the provisioner, though it probably would require a
Adds support for the Joyent SDC docker platform - fixes 1196