Provide (optional) ability to use Kubernetes as the runtime engine instead of Docker #1815
Comments
|
@JeanMertz right now there are no plans for this sort of deep integration with kubernetes. I'm certainly keeping an eye on the project and where it goes in the future, but I'm not sure this would be the right decision for drone at this time. |
bradrydzewski
closed this
Oct 12, 2016
|
I should point out, however, that the engine used to run builds (docker) is exposed as an interface, so in theory it could be swapped with a different implementation. This is the interface that is defined for running builds: And this is the docker implementation: I would certainly encourage a community effort to create a kubernetes implementation. I know @tboerger expressed some interest here. I've discussed with @gtaylor as well. Bottom line is I think making drone kubernetes-only would not be a wise decision for the project, but supporting multiple runtime engines is certainly of interest. This sort of thing, of course, depends on community engagement, since I cannot volunteer to take on all these tasks. So while it isn't something I would work on, I would certainly make myself available to provide technical guidance to individuals interested in contributing an implementation. |
JeanMertz
referenced this issue
Oct 12, 2016
Closed
Multi-Machine Parallel Builds (fan-in, fan-out) #1814
|
That engine abstraction looks interesting. Kubernetes has a very good first-class Golang API, and the feature set required here (Start/Stop/Remove/Wait/Logs) seems really limited, so it wouldn't be too hard to implement that on top of Kubernetes. Maybe I'll give it a stab some time in the near future, if I ever manage to put more than 24 hours in a day. |
|
cool, if you end up looking into an implementation give a shout-out in the drone developer channel at https://gitter.im/drone/drone-dev . I'm sure you could find some others interested in lending a hand :) |
|
Let's re-open this but with a slightly adjusted scope of adding experimental support for an alternate kubernetes engine, alongside the existing docker engine. I would love to hear what @gtaylor thinks about this and what might be possible. |
bradrydzewski
reopened this
Oct 12, 2016
bradrydzewski
changed the title
Using Kubernetes Jobs as "workers"
Provide (optional) ability to use Kubernetes as the runtime engine instead of Docker
Oct 12, 2016
|
When I'm more familiar with the k8s codebase, I'd really like to try building a real k8s agent. That's something my team lead also asked for. |
|
I've got a crappy custom scheduler that can parse a .drone.yml and fire up pods with some Drone plugins working. This isn't useful for Drone itself, but served as a nice exercise to get a feel for what this would look like. A few notes about what I did:
Things I haven't got around to:
|
|
@gtaylor thanks for the detailed reply. Regarding using kubernetes secret store, I'm wondering how well that would work with drone. Thoughts on #1808 (comment) ? |
|
@bradrydzewski I think you create multiple Kubernetes Secret resources, perhaps one per pipeline step (a container within the build Pod). You can pull these secrets into each container (step) individually. Drone would determine which secrets go to which steps and stuff those secrets in the step's respective Secret object. Also note that each step container can use multiple Secret objects to pull env vars (or mount as files) from. That may or may not be useful to you. While what I am describing above doesn't lend itself to a ton of value over what Drone has, the values won't be viewable in |
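A minimal sketch of that per-step mapping, assuming Drone generates one Secret per pipeline step and injects it only into that step's container (all resource names here are illustrative, not anything Drone actually produces):

```yaml
# Illustrative only — names are hypothetical.
apiVersion: v1
kind: Secret
metadata:
  name: build-42-step-test      # one Secret per pipeline step
type: Opaque
stringData:
  DOCKER_PASSWORD: "..."
---
apiVersion: v1
kind: Pod
metadata:
  name: build-42                # one Pod per build
spec:
  restartPolicy: Never
  containers:
    - name: step-test           # one container per step
      image: golang:1.11
      envFrom:
        - secretRef:
            name: build-42-step-test   # only this step sees these values
```

Because each container references only its own Secret via `envFrom`, a step never receives env vars destined for a different step.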
|
Also, I don't think it'd be worth heavily pursuing deep Kubernetes integration until Kubernetes 1.5 lands. The Job system is still shaking out in major ways. Secrets are going to be seeing lots of expansion soon, as are the various network/resource ACLs. The jump from 1.3 to the recent 1.4 saw a cron-like Job scheduling system become available in alpha, so that's still super raw as well. It'd certainly be worth tinkering with and building some familiarity, but this is going to take a good bit of thought and care to do well. We'd need to get kind of hacky with build pods and plugins to make it work well right now. Things look super bright in the not-so-distant future. |
derekperkins
commented
Dec 20, 2016
|
@gtaylor Now that 1.5 is out, do you feel more confident about tackling this? |
|
FWIW I am also interested, at some point, in trying to figure out what a "serverless" drone would look like. I think the concept of a build queue and pending builds could be eliminated by using the on-demand capabilities of services like hyper.sh and rackspace carina. I'm sure other vendors will launch similar on-demand capabilities as well. I'm not sure how Kubernetes fits into the picture here, but am interested in the overall concept. |
|
@derekperkins It's definitely more possible now. It would still be a whole lot of work to do really well, in that the perfect situation is that we're scattering the work out across multiple Pods. Failing to achieve that means that we're not any better off than we currently are. It's one of those things where this could be really awesome if done right, but it could also be thoroughly underwhelming and a black mark otherwise. We'd have to provide something more compelling and capable than all of these Jenkins + Kubernetes whitepapers (a well-trodden path at this point), at the very minimum.
It could be neat, but is there any money in that? At what point do you just run Circle CI/Travis/CodeShip/Shippable or one of the infinite other hosted solutions that are effectively "serverless" from the customer's perspective? Can't imagine the bigger money on-prem orgs using those services with their metal. If you really don't want to maintain servers, fire up a Google Container Engine (hosted Kubernetes) cluster and install Drone. They maintain the VMs and it's cheap ($5/month at the lowest level). You can still get your fingers in if you want to have the cluster auto-scale up/down as jobs pile up, and you can mix in their equivalent of Spot instances (pre-emptible VMs). If and when you eventually want to take more direct control with your own cluster, Container Engine runs the same Kubernetes that is found in the open source project's repo.
It's still probably a little early for Drone and Kubernetes to go down this road too much yet, but it fits into the picture in that it's not a proprietary, closed-source option like hyper.sh and Rackspace Carina :) It also now has far more adoption and mindshare than those two relatively niche services. |
|
I'm going to list some challenges based on my conversation with @gtaylor. I'm not a kubernetes expert, so my apologies if I misinterpreted the discussion.
These are some of the main challenges that we will face with native kubernetes support, as I understand it. We could definitely create a very basic prototype implementation that showcases drone using kubernetes as the backend, but it would have some initial limitations:
Perhaps with a basic implementation in place, we could engage the kubernetes community and use it as a starting point and figure out how to fill in the remaining gaps. |
|
As a reference point: we are currently using Jenkins on top of Kubernetes, together with some plugins (one of them being the kubernetes-plugin), to simulate what I'd like Drone to do/represent. Jenkins comes with a lot of baggage (mostly good, some bad, some ugly), but the current set-up looks something like this:
|
|
@JeanMertz is this something you would be willing to help implement? I have no real world experience with Kubernetes and have quite a lot on my plate. Perhaps if this were a community effort it would have more of a chance of succeeding. What do you think? |
jmn
commented
Feb 2, 2017
|
Hi,
I am not sure if this is what is meant but there are Init Containers:
|
|
@jmn I think perhaps a better way of describing the issue is that kubernetes does not easily map to the drone yaml at this time. The drone yaml executes batch steps, with linked services, and needs to evaluate whether or not the step should be executed at runtime based on results of prior steps. Consider this configuration:
This doesn't mean it is impossible, though. The suggestion by @JeanMertz is really interesting. His suggestion is that each step should be its own pod, with its own set of services, and Drone would handle orchestrating sequential pod execution to emulate build steps. Unfortunately I do not have any experience with kubernetes outside of reading a few blog posts, so it is not something I will be able to implement at this time. Community contributions very welcome :) |
|
I should point out that I'm also not connected to the kubernetes community. If there are individuals in the kubernetes community that you think might be interested in helping implement a native-kubernetes CI system, please help them get in touch @ gitter.im/bradrydzewski |
webwurst
commented
Feb 2, 2017
|
I would like to help out where I can. We have been using a small/cheap Kubernetes cluster for some time for some open-data projects: https://github.com/codeformuenster/kubernetes-deployment
And we used Drone to create Docker images for ARM a while ago: https://github.com/armhf-drone-plugins
Haven't played with Drone 0.5 yet, unfortunately. And the constraint would be time, as always ;)
|
derekperkins
commented
Feb 8, 2017
|
@clintberry just got drone working on kubernetes and is working on a |
|
I've used a slightly modified version of this repo from @vallard to deploy Drone to Kubernetes: https://github.com/vallard/drone-kubernetes Works well enough, though it does bypass the Kubernetes scheduler and connects to the docker daemon on the host directly. Would definitely be interested in a solution that could work within that context. One note: I think trying to map some of Drone's concepts to Kubernetes directly isn't going to help make this happen. In general, I think the design @gtaylor suggested above has some merit, though I'm not sold that having a new namespace-per-build is necessary. Working within the context of a pod isn't too far off what Drone is doing right now to run builds; it would just require some Kubernetes-specific logic to ensure that the various "components" within a Pod are a) started in the right order and b) executed serially. Keep in mind that the main benefit of a Pod within Kubernetes is to link together dependent containers w/o relying on cluster-wide functions like service discovery, services, ingress controllers, etc. I think starting w/ a set of requirements around what is expected from the build scheduler, including some of the newer features around shared volumes, matrix builds, fan-out, fan-in, could help clarify what is required. The effort should start small, too: just solve a simple use case that 1) clones a repo, 2) executes a build step, and 3) collects logs from those steps. I'd be happy to carve out some time to look into this deeper, but I doubt I'd have much time to contribute much in the way of coding in the near future. |
|
@benschumacher my ultimate goal is to create a compiler of sorts (note that it is not entirely vaporware, as I do have certain pieces working). The compiler would consist of frontends which would take different configuration formats (drone.yml, travis.yml, bitbucket-pipelines.yml) and compile them down to an intermediate representation. I have an implementation of this that works with the .drone.yml and bitbucket-pipelines.yml. I am working on a formal specification for the intermediate representation: Ideally this intermediate representation would work with multiple backends, where a backend is a container engine (such as Docker) or an orchestration engine (such as Kubernetes). I have a working backend for Docker and am confident this could work for LXD and Rocket. I am not sure what changes would be required for this to work with Kubernetes, however, I am optimistic that this is a solvable problem.
I think the specification for the IR gets us closer to this goal. It might still be too docker specific though, so would love to hear your feedback, and would love to have you as part of the working group (that invite extends to anyone in this thread as well).
At this stage if you can participate in more of an architectural (non-coding) capacity it would still be tremendously helpful. The backend implementations tend to be quite small, so if we get the IR specification in a good place the implementation should hopefully be pretty straightforward. |
clintberry
commented
Feb 8, 2017
|
Forgive my ignorance, but I don't understand what you are all trying to accomplish here. Kubernetes just uses docker under the hood, and Drone runs great in kubernetes right now. I installed the drone server as a deployment, and drone agents as a deployment. I can scale the agents now with 2 clicks as needed. I don't think adding a tighter integration to kubernetes gives you anything special, but this is where my ignorance comes in. What features are you looking for with a deeper kubernetes integration? |
@clintberry, such a setup is very limited. With the Kubernetes scheduler, the following features can be achieved:
|
clintberry
commented
Feb 8, 2017
I don't know if I am comfortable letting my CI engine spin up Kubernetes pods in production. I think I would rather keep the Docker-in-Docker methods for isolation/security. I'm sure you could still get some sort of isolation/security with drone-created pods, but is it worth the hassle?
I am new to drone, so maybe I was wrong in assuming this, but I assumed I would be able to run only one concurrent job/build per agent that is connected to my drone server. For me, this is ideal because I can control the amount of resources that my build system uses. I don't want infinite scaling of my build system using precious production resources.
I can see why you would like that. Especially if you have large services you need to spin up. I still don't want anything running outside of my agent docker, but I totally could see why you would want this. But at the same time, each agent gets distributed to kube workers according to load, so you get at least some distribution of resources, but certainly not to the level you are suggesting here. I understand I am probably being too narrow-minded on this. I apologize if I am coming across confrontational. I am just trying to understand your use cases a bit more. |
ekozan
commented
Aug 30, 2017
|
I go to DInd |
lflare
commented
Sep 8, 2017
|
Sorry to cut in, but my development workflow has recently been prioritizing budget. My current setup involves a single RPi 3B serving as my central server, hosting both Gitea and the Drone server. However, just a while ago, I came across hyper.sh and felt that its key concepts are very much in tune with what a CI agent needs. In particular, the build server doesn't have to run 24/7: hyper.sh bills per second, and it's more or less based around Docker as a whole. Unfortunately, as of this writing, it seems drone agents are fairly limited in capabilities? Is integration with something like hyper.sh feasible? |
|
@lflare can we take hyper.sh discussion to another issue or better yet, discourse? This issue is already getting kind of out of hand with off-topic stuff. |
lflare
commented
Sep 8, 2017
|
@gtaylor I unfortunately do not intend to discuss this in a thread-like manner akin to that of Discourse. Never mind that, I don't think @bradrydzewski has the time for this anyways. |
We can plug in different container backends (kubernetes, hyper, etc.) as long as they can implement the interface below. I personally do not see an issue with drone being flexible enough. If anything, I would question whether or not hyper will be flexible enough.
// Engine defines a container orchestration backend and is used
// to create and manage container resources.
type Engine interface {
// Setup the pipeline environment.
Setup(*Config) error
// Start the pipeline step.
Exec(*Step) error
// Kill the pipeline step.
Kill(*Step) error
// Wait for the pipeline step to complete and returns
// the completion results.
Wait(*Step) (*State, error)
// Tail the pipeline step logs.
Tail(*Step) (io.ReadCloser, error)
// Destroy the pipeline environment.
Destroy(*Config) error
}
I think the real issue here is not technical. The problem is that despite clear interest in additional container runtimes and backends, we do not have volunteers to implement or, alternatively, sponsor features (financially). I agree with Greg that we should track hyper as a separate issue. I also think we should probably lock this issue pending a volunteer or sponsor. If anyone is reading this thread and is interested in implementing native kubernetes or hyper support, or willing to fund feature development, please message me at https://discourse.drone.io/ I will unlock this issue once we have progress to report. |
drone
locked and limited conversation to collaborators
Sep 8, 2017
bradrydzewski
added this to In Progress
in Version 1.0
Jun 14, 2018
bradrydzewski
moved this from In Progress
to Done
in Version 1.0
Jun 18, 2018
drone
unlocked this conversation
Jun 20, 2018
|
Unlocking this issue now that we have a volunteer and a base implementation in place. If you have any questions, please direct them to @metalmatze. The kubernetes runtime source code is available here: https://github.com/drone/drone-kubernetes-runtime @metalmatze gave a demo of the kubernetes runtime at Container Days. I will post a link to the video here when one is available. https://twitter.com/thenewstack/status/1009059265032671232 Note that the kubernetes runtime will require drone version 0.9, which is not yet released. I do not have an estimated release date to share at this time. In the meantime, you can build, test and run the kubernetes runtime from the command line if you want to take a look or get your hands dirty. |
bradrydzewski
removed this from Done
in Version 1.0
Nov 8, 2018
combor
commented
Nov 28, 2018
|
@bradrydzewski is this included in upcoming 1.0? |
zetaab
commented
Nov 29, 2018
|
@combor the drone-kubernetes-runtime repo says WIP. Anyway, in drone-k8s-engine it seems there is a requirement for RWX persistent volumes; at least, we do not have those in our environment. |
|
The kubernetes runtime repo was the initial development; AFAIK it has been moved into the drone-runtime repo as part of a different development branch: https://github.com/drone/drone-runtime/tree/kubernetes |
zetaab
commented
Nov 29, 2018
|
oh, okay, then there have been some updates recently :) |
|
I am very excited to CLOSE this issue as COMPLETE |
bradrydzewski
closed this
Dec 7, 2018
|
I haven't responded here in a while, but I wanted to thank you and everyone else for the hard work on making this happen @bradrydzewski The combination of Drone and Kubernetes (especially when using a hosted solution such as GKE), makes a self-hosted CI/CD setup simpler, yet more powerful than ever. Having Drone be open-source and as extensible as it is, is icing on the cake, as it'll allow us to make Drone work just the way we need it to. I'm looking forward to start experimenting with this. Awesome job. |
|
@JeanMertz thank you so much for the kind words. I think there is still a ton of room for improvement and I am really excited for what is next (not sure what that is, but I'm excited for it). Also, huge thanks to @metalmatze. He started working on this in October 2017, so I really appreciate his help (and patience) getting everything rolled out. |
zetaab
commented
Dec 7, 2018
|
This is a cool thing! However, we would at least like to see builds without a docker agent. The problem with docker is that we need to mount docker sockets inside the container, which is not really secure. I am wondering whether we could somehow use kaniko instead? https://github.com/GoogleContainerTools/kaniko Well… if I think this through, it is quite a big and maybe difficult change :( |
Wait... unless I'm misreading, I believe this is without mounting Docker sockets. The blog mentions a job is started as a native Kubernetes job, which in turn spins up native Kubernetes pods and "secrets" to share any configuration between the jobs. The "old" way of doing Drone on Kubernetes was indeed with mounted Docker sockets (with all the security risk and lack of maintainability included), but this announcement removes all of that, and you no longer have to concern yourself with the container engine running below Kubernetes. |
zetaab
commented
Dec 7, 2018
|
@JeanMertz WOW if that is true! need to test this |
It is! Check out this work-in-progress guide:
|
|
yep, this is correct. The Pipeline steps are launched as Kubernetes Pods. There are no more agents, and no more mounting Docker sockets :) |
malcolmholmes
commented
Dec 11, 2018
|
@bradrydzewski any pointers to how to start this (as opposed to Docker based Drone)? This is very exciting as I was just starting to battle with Drone in Kubernetes. Sample k8s yaml manifests, or other startup hints would be extremely useful. Thanks! |
|
@malcolmholmes there is work being done on an updated helm chart that might be helpful. See helm/charts#9617. There is also a thread here in our forum where some helm and general kubernetes discussions are happening. There are also some very high level docs here, but they only really show you the absolute basics and would require you to setup a more complete configuration. |
malcolmholmes
commented
Dec 12, 2018
|
@bradrydzewski thanks for those. Very helpful. |
malcolmholmes
commented
Dec 16, 2018
|
Hi, I have followed the instructions as best I can. I am using gitlab.com, and get this:
Here's the container config:
|
|
@malcolmholmes many of those environment variable names are incorrect. I also recommend creating a thread in our mailing list as opposed to seeking support through this existing issue: https://discourse.drone.io
- env:
- - name: DRONE_GITLAB
- value: "true"
- - name: DRONE_ADMIN
- value: *A_VALUE_HERE*
- name: DRONE_LOGS_DEBUG
value: "true"
- name: DRONE_KUBERNETES_ENABLED
value: "true"
- name: DRONE_KUBERNETES_NAMESPACE
value: build-jobs
- - name: DRONE_GITLAB_CLIENT
+ - name: DRONE_GITLAB_CLIENT_ID
valueFrom:
secretKeyRef:
key: drone_gitlab_client
name: drone
- - name: DRONE_GITLAB_SECRET
+ - name: DRONE_GITLAB_CLIENT_SECRET
valueFrom:
secretKeyRef:
key: drone_gitlab_secret
name: drone
- name: DRONE_RPC_SECRET
valueFrom:
secretKeyRef:
key: drone_shared_secret
name: drone |
JeanMertz commented Oct 12, 2016
I was wondering, has there been a consideration to move drone up the stack, and have it rely on Kubernetes features to function?
It could potentially ease the burden of a lot of things Drone currently has to manage on top of Docker, and with minikube, running drone locally could become as simple as
minikube create && drone deploy.
I know this is an extreme oversimplification of things, and it would obviously mean giving up some freedom (depending on a container scheduler instead of only Docker), but there are obviously also a lot of upsides to this.