
Provide (optional) ability to use Kubernetes as the runtime engine instead of Docker #1815

Closed
JeanMertz opened this Issue Oct 12, 2016 · 96 comments

@JeanMertz (Contributor) commented Oct 12, 2016

I was wondering, has there been a consideration to move drone up the stack, and have it rely on Kubernetes features to function?

It could potentially ease the burden of a lot of things Drone currently has to manage on top of Docker, and with minikube, running drone locally could become as simple as minikube create && drone deploy.

I know this is an extreme oversimplification of things, and it would obviously mean giving up some freedom (dependent on a container scheduler, instead of only Docker), but there are obviously also a lot of upsides to this.

@bradrydzewski (Member) commented Oct 12, 2016

@JeanMertz right now there are no plans for this sort of deep integration with kubernetes. I'm certainly keeping an eye on the project and where it goes in the future, but I'm not sure this would be the right decision for drone at this time.

@bradrydzewski (Member) commented Oct 12, 2016

I should point out, however, that the engine used to run builds (docker) is exposed as an interface, so in theory it could be swapped with a different implementation. This is the interface that is defined for running builds:
https://github.com/drone/drone/blob/master/build/engine.go

And this is the docker implementation:
https://github.com/drone/drone/tree/master/build/docker

I would certainly encourage a community effort to create a kubernetes implementation. I know @tboerger expressed some interest here. I've discussed with @gtaylor as well. Bottom line is I think making drone kubernetes-only would not be a wise decision for the project, but supporting multiple runtime engines is certainly of interest.

This sort of thing, of course, depends on community engagement since I cannot volunteer to take on all these tasks. So while it isn't something I would work on, I would certainly make myself available to provide technical guidance to individuals interested in contributing an implementation.

@JeanMertz (Contributor, Author) commented Oct 12, 2016

That engine abstraction looks interesting.

Kubernetes has a very good first-class Golang API, and the feature set required here (Start/Stop/Remove/Wait/Logs) seems really limited, so it wouldn't be too hard to implement that on top of Kubernetes.

Maybe I'll give it a stab some time in the near future, if I ever manage to put more than 24 hours in a day.

@bradrydzewski (Member) commented Oct 12, 2016

Cool, if you end up looking into an implementation, give a shout-out in the drone developer channel at https://gitter.im/drone/drone-dev. I'm sure you could find some others interested in lending a hand :)

@bradrydzewski (Member) commented Oct 12, 2016

Let's re-open this but with a slightly adjusted scope of adding experimental support for an alternate kubernetes engine, alongside the existing docker engine. I would love to hear what @gtaylor thinks about this and what might be possible.

@bradrydzewski bradrydzewski reopened this Oct 12, 2016

@bradrydzewski bradrydzewski changed the title from Using Kubernetes Jobs as "workers" to Provide (optional) ability to use Kubernetes as the runtime engine instead of Docker on Oct 12, 2016

@tboerger (Member) commented Oct 12, 2016

Once I'm more familiar with the k8s codebase I would really like to try building a real k8s agent. That's something my team lead also asked for.

@gtaylor (Member) commented Oct 13, 2016

I've got a crappy custom scheduler that can parse a .drone.yml and fire up pods with some Drone plugins working. This isn't useful for Drone itself, but served as a nice exercise to get a feel for what this would look like. A few notes about what I did:

  • Each build gets its own Kubernetes Namespace. This makes cleanup as easy as one DELETE call, and also means we don't need to get weird with service names (to avoid collisions). Eventually we'll be able to set network ACLs to prevent cross-Namespace interactions, and we can already set quotas to avoid resource exhaustion.
  • Each build's pipeline section is a Job (not a Replication Controller or Deployment), with each step being a container within the Job's Pod. Using a Job means that we don't restart the whole thing when failures occur.
  • In my experimentation, all of the pipeline is in one Pod so we can mount the same emptyDir volume on all of the pipeline containers.
  • Dependent services run in separate Pods. These are accessible by hostname.

Things I haven't got around to:

  • .netrc injection.
  • I'm not sure whether multiple containers within one Pod start in order and block on their predecessor. We wouldn't want multiple pipeline steps executing simultaneously by default. In the ideal case, we can run all pipeline steps in one Pod. In a less ideal case, we run each pipeline step in its own Pod and pass a volume around, which would potentially be much slower.
  • As Brad mentioned, secrets. One way to address this is for Drone to figure out what should be injected into the build's Namespace as a Secret object. In other words: the Kubernetes part should be handed secrets that have already been resolved.
  • Build Namespaces should probably eventually be cleaned up. Drone can pull the logs and the build states as it goes, so the only reason to keep them around would be for troubleshooting. This is going to vary by org, but we'd probably want to keep them for 24 hours over here.
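
To make the namespace-per-build idea above concrete, here is a minimal sketch using a recent client-go; the helper names are hypothetical and not part of Drone or the prototype described here.

package kube

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createBuildNamespace gives one build its own Namespace, so avoiding name
// collisions is free and teardown is a single DELETE call.
func createBuildNamespace(ctx context.Context, client kubernetes.Interface, buildID int) (string, error) {
    name := fmt.Sprintf("drone-build-%d", buildID)
    ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
    if _, err := client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
        return "", err
    }
    return name, nil
}

// cleanupBuildNamespace removes the Namespace and everything created inside
// it (pods, services, secrets) in one call, after logs and state have been
// pulled back into Drone.
func cleanupBuildNamespace(ctx context.Context, client kubernetes.Interface, name string) error {
    return client.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{})
}

Because every pod, service, and secret for the build lives inside that namespace, the single Delete call is the entire cleanup story.
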
@bradrydzewski (Member) commented Oct 14, 2016

@gtaylor thanks for the detailed reply. Regarding using kubernetes secret store, I'm wondering how well that would work with drone. Thoughts on #1808 (comment) ?

@gtaylor (Member) commented Oct 15, 2016

@bradrydzewski I think you create multiple Kubernetes Secret resources, perhaps one per pipeline step (a container within the build Pod). You can pull these secrets into each container (step) individually. Drone would determine which secrets go to which steps and stuff those secrets in the step's respective Secret object.

Also note that each step container can use multiple Secret objects to pull env vars (or mount as files) from. That may or may not be useful to you.

While what I'm describing above doesn't add a ton of value over what Drone already has, the values won't be viewable in kubectl describe pod, unlike plain env vars. It'd be very important not to show secret values in pod descriptions.
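
A minimal sketch of that idea using the Kubernetes Go types (a recent k8s.io/api); the step name, secret name, and helper are hypothetical:

package kube

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// stepSecretAndContainer shows one Secret per pipeline step: Drone has
// already resolved the values, Kubernetes only stores them, and the step's
// container pulls them in through EnvFrom, so they do not show up as plain
// env vars in the pod description.
func stepSecretAndContainer(namespace string, resolved map[string]string) (*corev1.Secret, corev1.Container) {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "step-publish-secrets", Namespace: namespace},
        StringData: resolved, // e.g. {"DOCKER_PASSWORD": "..."} resolved by Drone
    }
    container := corev1.Container{
        Name:  "publish",
        Image: "plugins/docker",
        EnvFrom: []corev1.EnvFromSource{{
            SecretRef: &corev1.SecretEnvSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
            },
        }},
    }
    return secret, container
}
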

@gtaylor (Member) commented Oct 15, 2016

Also, I don't think it'd be worth heavily pursuing deep Kubernetes integration until Kubernetes 1.5 lands. The Job system is still shaking out in major ways. Secrets are going to be seeing lots of expansion soon, as are the various network/resource ACLs. The jump from 1.3 to the recent 1.4 saw a cron-like Job scheduling system become available in alpha, so that's still super raw as well.

It'd certainly be worth tinkering with and building some familiarity, but this is going to take a good bit of thought and care to do well. We'd need to get kind of hacky with build pods and plugins to make it work well right now.

Things look super bright in the not-so-distant future.

@derekperkins commented Dec 20, 2016

@gtaylor Now that 1.5 is out, do you feel more confident about tackling this?

@bradrydzewski (Member) commented Dec 20, 2016

FWIW I am also interested, at some point, in trying to figure out what a "serverless" drone would look like. I think the concept of a build queue and pending builds could be eliminated by using the on-demand capabilities of services like hyper.sh and Rackspace Carina. I'm sure other vendors will launch similar on-demand capabilities as well.

I'm not sure how Kubernetes fits into the picture here, but am interested in the overall concept.

@gtaylor (Member) commented Dec 22, 2016

@derekperkins It's definitely more possible now. It would still be a whole lot of work to do really well, in that the perfect situation is that we're scattering the work out across multiple Pods. Failing to achieve that means that we're not any better off than we currently are.

It's one of those things where this could be really awesome if done right, but it could also be thoroughly underwhelming and a black mark otherwise. We'd have to provide something more compelling and capable than all of these Jenkins + Kubernetes whitepapers (a well-trodden path at this point), at the very minimum.

FWIW I am also interested, at some point, in trying to figure out what a "serverless" drone would look like.

It could be neat, but is there any money in that? At what point do you just run Circle CI/Travis/CodeShip/Shippable or one of the infinite other hosted solutions that are effectively "serverless" from the customer's perspective? Can't imagine the bigger money on-prem orgs using those services with their metal.

If you really don't want to maintain servers, fire up a Google Container Engine (hosted Kubernetes) cluster and install Drone. They maintain the VMs and it's cheap ($5/month at the lowest level). You can still get your fingers in if you want to have the cluster auto-scale up/down as jobs pile up, and you can mix in their equivalent of Spot instances (pre-emptible VMs). If and when you eventually want to take more direct control with your own cluster, Container Engine runs the same Kubernetes that is found in the open source project's repo.

I'm not sure how Kubernetes fits into the picture here

It's still probably a little early for Drone and Kubernetes to go down this road too much yet, but it fits into the picture in that it's not a proprietary, closed-source option like hyper.sh and Rackspace Carina :) It also now has far more adoption and mindshare than those two relatively niche services.

@bradrydzewski (Member) commented Jan 24, 2017

I'm going to list some challenges based on my conversation with @gtaylor. I'm not a kubernetes expert, so my apologies if I misinterpreted the discussion.

  1. kubernetes has no notion of sequential or chained jobs. This will need to be simulated.
  2. drone uses a single shared network for all containers in the build process. This means service containers are available at localhost. This could pose a challenge depending on how we implement kubernetes support; using multiple pods would prevent this.
  3. drone supports single-machine fan-in / fan-out. This means build steps can run in parallel and have access to the underlying build workspace (where the code is cloned). In kubernetes, if using multiple pods, this could be difficult since we can only mount the volume to a single pod at a time.

These are some of the main challenges that we will face with native kubernetes support, as I understand it.

We could definitely create a very basic prototype implementation that showcases drone using kubernetes as the backend, but it would have some initial limitations:

  1. steps run sequentially, without any parallelism
  2. service containers would use custom hostnames, and not localhost

Perhaps with a basic implementation in place, we could engage the kubernetes community and use it as a starting point and figure out how to fill in the remaining gaps.
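
As a sketch of the second limitation, a services: entry such as redis could be turned into a pod plus a Service in the build namespace, so build steps reach it at the hostname redis rather than at localhost. The helper below is hypothetical and uses recent k8s.io API types:

package kube

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// redisService sketches how a drone `services:` entry could become a pod
// plus a Service in the build namespace, so build steps reach it at the
// hostname "redis" rather than at localhost.
func redisService(namespace string) (*corev1.Pod, *corev1.Service) {
    labels := map[string]string{"drone-service": "redis"}
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "redis", Namespace: namespace, Labels: labels},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{Name: "redis", Image: "redis:latest"}},
        },
    }
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "redis", Namespace: namespace},
        Spec: corev1.ServiceSpec{
            Selector: labels,
            Ports:    []corev1.ServicePort{{Port: 6379}},
        },
    }
    return pod, svc
}
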

@JeanMertz (Contributor, Author) commented Jan 26, 2017

As a reference point: we are currently using Jenkins on top of Kubernetes, together with some plugins (one of them being the kubernetes-plugin), to simulate what I'd like Drone to do/represent.

Jenkins comes with a lot of baggage (mostly good, some bad, some ugly), but the current set-up looks something like this:

  • We run Jenkins on top of GKE (Google Container Engine)
  • the concept of build nodes is translated to single pods
    • each job on Jenkins creates a single pod
    • this pod represents a one-time-use "node"
    • within this pod, the job is started
    • we run our tests in parallel; for each parallel process we launch one container, so some test runs use 22 containers in a single pod
    • this ensures localhost works across containers
    • it also ensures data is shared between containers via volumes
  • we also have the "autoscale" feature of GKE enabled; this means that if too many jobs are being scheduled, GKE will start up a new node and add it to the node pool of our Kubernetes cluster
  • all our nodes run on preemptible machines (cheaper, but "unreliable"); in practice this means that once every 1000 or so runs a job fails because the node was deleted on GCE, but we accept this
  • in effect, this means we have near-infinite scalability of our CI environment
@bradrydzewski (Member) commented Jan 26, 2017

@JeanMertz is this something you would be willing to help implement? I have no real world experience with Kubernetes and have quite a lot on my plate. Perhaps if this were a community effort it would have more of a chance of succeeding. What do you think?

@jmn commented Feb 2, 2017

Hi,

kubernetes has no notion of sequential or chained jobs. This will need to be simulated

I am not sure if this is what is meant but there are Init Containers:

An init container is exactly like a regular container, except that it always runs to completion and each init container must complete successfully before the next one is started.
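
As a hedged illustration of how init containers could map onto sequential steps (expressed with the Go API types; the images, step names, and mount path are only examples, not Drone's implementation):

package kube

import corev1 "k8s.io/api/core/v1"

// sequentialStepsPod sketches the init-container idea: the earlier pipeline
// steps run as init containers, which Kubernetes starts one at a time and
// only advances past when the previous one exits successfully. All
// containers share the same emptyDir workspace.
func sequentialStepsPod() corev1.PodSpec {
    workspace := corev1.VolumeMount{Name: "workspace", MountPath: "/drone/src"}
    return corev1.PodSpec{
        RestartPolicy: corev1.RestartPolicyNever,
        Volumes: []corev1.Volume{{
            Name:         "workspace",
            VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
        }},
        InitContainers: []corev1.Container{
            {Name: "clone", Image: "plugins/git", VolumeMounts: []corev1.VolumeMount{workspace}},
            {Name: "test", Image: "golang", Command: []string{"go", "test", "./..."}, VolumeMounts: []corev1.VolumeMount{workspace}},
        },
        Containers: []corev1.Container{
            {Name: "publish", Image: "plugins/docker", VolumeMounts: []corev1.VolumeMount{workspace}},
        },
    }
}
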

@bradrydzewski (Member) commented Feb 2, 2017

@jmn I think perhaps a better way of describing the issue is that kubernetes does not easily map to the drone yaml at this time. The drone yaml executes batch steps, with linked services, and needs to evaluate at runtime whether or not a step should be executed, based on the results of prior steps.

Consider this configuration:

pipeline:
  backend:
    image: golang
    commands:
      - go build
      - go test
  frontend:
    image: node
    commands:
      - npm install
      - npm build
  publish:
    image: plugins/docker
    repo: foo/bar
    when:
      event: push
  deploy:
    image: plugins/ssh
    shell:
      - docker pull foo/bar
      - docker stop foo/bar
      - docker run foo/bar
    when:
      event: deployment
      branch: master
  notify:
    image: plugins/slack
    channel: dev
    when:
      status: [ success, failure ]

services:
  redis:
    image: redis:latest
  mysql:
    image: mysql:latest

This doesn't mean it is impossible, though. The suggestion by @JeanMertz is really interesting: each step would be its own pod, with its own set of services, and Drone would handle orchestrating sequential pod execution to emulate build steps.

Unfortunately I do not have any experience with kubernetes outside of reading a few blog posts, so it is not something I will be able to implement at this time. Community contributions very welcome :)
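
A rough sketch of what "Drone orchestrates sequential pod execution" could mean in practice, assuming a recent client-go; the helper is hypothetical and polls rather than watching to keep it short:

package kube

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// runStepPod creates one pod for a single pipeline step and blocks until it
// reaches a terminal phase, returning an error on failure so the caller can
// decide which of the remaining steps still run (for example a notify step
// guarded by `when: status: failure`).
func runStepPod(ctx context.Context, client kubernetes.Interface, namespace string, pod *corev1.Pod) error {
    if _, err := client.CoreV1().Pods(namespace).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        return err
    }
    for {
        p, err := client.CoreV1().Pods(namespace).Get(ctx, pod.Name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        switch p.Status.Phase {
        case corev1.PodSucceeded:
            return nil
        case corev1.PodFailed:
            return fmt.Errorf("step %s failed", pod.Name)
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(2 * time.Second): // polling keeps the sketch simple; a watch would be better
        }
    }
}

The caller would run one step pod after another and use the returned error to evaluate the when: conditions of the later steps.
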

@bradrydzewski (Member) commented Feb 2, 2017

I should point out that I'm also not connected to the kubernetes community. If there are individuals in the kubernetes community that you think might be interested in helping implement a native-kubernetes CI system, please help them get in touch @ gitter.im/bradrydzewski

@webwurst commented Feb 2, 2017

@derekperkins commented Feb 8, 2017

@clintberry just got drone working on kubernetes and is working on a helm install package

@benschumacher (Contributor) commented Feb 8, 2017

I've used a slightly modified version of this repo from @vallard to deploy Drone to Kubernetes:

https://github.com/vallard/drone-kubernetes

Works well enough, though it does bypass the Kubernetes scheduler, and connects to the docker daemon on the host directly. Would definitely be interested in a solution that could work within that context.

One note: I think trying to map some of Drone's concepts to Kubernetes directly isn't going to help make this happen. In general, I think the design @gtaylor suggested above has some merit, though I'm not sold that a new namespace per build is necessary. Working within the context of a pod isn't too far off what Drone is doing right now to run builds; it would just require some Kubernetes-specific logic to ensure that the various "components" within a Pod are a) started in the right order and b) executed serially, etc. Keep in mind that the main benefit of a Pod within Kubernetes is to link together dependent containers without relying on cluster-wide functions like service discovery, services, ingress controllers, etc.

I think starting with a set of requirements around what is expected from the build scheduler, including some of the newer features around shared volumes, matrix builds, fan-out, and fan-in, could help clarify what is required. The effort should start small, too: just solve a simple use case that 1) clones a repo, 2) executes a build step, and 3) collects logs from these steps. I'd be happy to carve out some time to look into this deeper, but I doubt I'd have much time to contribute in the way of coding in the near future.

@bradrydzewski (Member) commented Feb 8, 2017

@benschumacher my ultimate goal is to create a compiler of sorts (note that it is not entirely vaporware, as I do have certain pieces working). The compiler would consist of frontends which would take different configuration formats (drone.yml, travis.yml, bitbucket-pipelines.yml) and compile them down to an intermediate representation. I have an implementation of this that works with the .drone.yml and bitbucket-pipelines.yml.

I am working on a formal specification for the intermediate representation:
https://github.com/cncd/cncd/blob/master/content/spec/ir.md

Ideally this intermediate representation would work with multiple backends, where a backend is a container engine (such as Docker) or an orchestration engine (such as Kubernetes). I have a working backend for Docker and am confident this could work for LXD and Rocket. I am not sure what changes would be required for this to work with Kubernetes, however, I am optimistic that this is a solvable problem.
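
This is not the actual cncd specification (see the link above); purely as a hypothetical shape, the frontend / intermediate representation / backend split could look something like:

package ir

// These types only illustrate the compiler idea described above: a frontend
// parses a configuration format into an intermediate representation, and a
// backend (Docker, Kubernetes, ...) executes it. The real specification
// lives at the cncd link above and will differ.

// Step is one unit of work in the intermediate representation.
type Step struct {
    Name     string
    Image    string
    Commands []string
    Environ  map[string]string
}

// Spec is the compiled pipeline handed to a backend.
type Spec struct {
    Steps    []Step
    Services []Step
}

// Frontend compiles a configuration file (.drone.yml,
// bitbucket-pipelines.yml, ...) into the intermediate representation.
type Frontend interface {
    Compile(raw []byte) (*Spec, error)
}

// Backend executes the intermediate representation on a container or
// orchestration engine (Docker, Kubernetes, LXD, ...).
type Backend interface {
    Run(spec *Spec) error
}
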

I think starting with a set of requirements around what is expected from the build scheduler, including some of the newer features around shared volumes, matrix builds, fan-out, and fan-in, could help clarify what is required

I think the specification for the IR gets us closer to this goal. It might still be too docker specific though, so would love to hear your feedback, and would love to have you as part of the working group (that invite extends to anyone in this thread as well).

I'd be happy to carve out some time to look into this deeper, but I doubt I'd have much time to contribute in the way of coding in the near future.

At this stage if you can participate in more of an architectural (non-coding) capacity it would still be tremendously helpful. The backend implementations tend to be quite small, so if we get the IR specification in a good place the implementation should hopefully be pretty straightforward.

@clintberry commented Feb 8, 2017

Forgive my ignorance, but I don't understand what you are all trying to accomplish here. Kubernetes just uses docker under the hood, and Drone runs great in kubernetes right now. I installed the drone server as a deployment, and drone agents as a deployment. I can scale the agents now with 2 clicks as needed. I don't think adding a tighter integration to kubernetes gives you anything special, but this is where my ignorance comes in. What features are you looking for with a deeper kubernetes integration?

@kop (Contributor) commented Feb 8, 2017

Forgive my ignorance, but I don't understand what you are all trying to accomplish here. Kubernetes just uses docker under the hood, and Drone runs great in kubernetes right now. I installed the drone server as a deployment, and drone agents as a deployment. I can scale the agents now with 2 clicks as needed. I don't think adding a tighter integration to kubernetes gives you anything special, but this is where my ignorance comes in. What features are you looking for with a deeper kubernetes integration?

@clintberry, such a setup is very limited. With the Kubernetes scheduler, the following can be achieved:

  • Avoid Docker in Docker usage;
  • "Real" scaling, where a K8S pod is created for every task and the cluster scales automatically up and down to fulfil CPU/RAM requests;
  • When you run a DIND setup, any services you create in your drone.yml will be deployed to the same host your agent is running on. This is bad; they should be deployed to the least busy host instead;
  • I believe there are a lot more; those are just the cases that came to mind immediately.
@clintberry commented Feb 8, 2017

Avoid Docker In Docker usage

I don't know if I am comfortable letting my CI engine spin up Kubernetes pods in production. I think I would rather keep the Docker-in-Docker methods for isolation/security. I'm sure you could still get some sort of isolation/security with drone-created pods, but is it worth the hassle?

"Real" scaling...

I am new to drone, so maybe I was wrong in assuming this, but I assumed I would be able to run only one concurrent job/build per agent that is connected to my drone server. For me, this is ideal because I can control the amount of resources that my build system uses. I don't want infinite scaling of my build system using precious production resources.

When you run a DIND setup, any services you create in your drone.yml will be deployed to the same host your agent is running on. This is bad; they should be deployed to the least busy host instead;

I can see why you would like that. Especially if you have large services you need to spin up. I still don't want anything running outside of my agent docker, but I totally could see why you would want this. But at the same time, each agent gets distributed to kube workers according to load, so you get at least some distribution of resources, but certainly not to the level you are suggesting here.

I understand I am probably being too narrow-minded on this. I apologize if I am coming across confrontational. I am just trying to understand your use cases a bit more.

@ekozan commented Aug 30, 2017

@lflare commented Sep 8, 2017

Sorry to cut in, but my development workflow has recently been prioritizing budget. My current setup involves a single RPi 3B serving as my central server, hosting Gitea and the Drone server.

However, just a while ago I came across hyper.sh, and I felt that the key concepts behind it seem very much in tune with what a CI agent needs. In particular, the build server doesn't have to run 24/7, since hyper.sh bills per second, and it's more or less based around Docker as a whole.

Unfortunately, as of this writing, it seems drone agents are fairly limited in capabilities? Is integration with something like hyper.sh feasible?

@gtaylor (Member) commented Sep 8, 2017

@lflare can we take hyper.sh discussion to another issue or better yet, discourse? This issue is already getting kind of out of hand with off-topic stuff.

@lflare commented Sep 8, 2017

@gtaylor I unfortunately do not intend to discuss this in a thread-like manner akin to that of Discourse. Never mind that, I don't think @bradrydzewski has the time for this anyways.

@bradrydzewski (Member) commented Sep 8, 2017

Unfortunately, as of this writing, it seems drone agents are fairly limited in capabilities? Is integration with something like hyper.sh feasible?

We can plug in different container backends (kubernetes, hyper, etc.) as long as they can implement the interface below. I personally do not see an issue with drone being flexible enough. If anything, I would question whether or not hyper will be flexible enough.

// Engine defines a container orchestration backend and is used
// to create and manage container resources.
type Engine interface {
	// Setup the pipeline environment.
	Setup(*Config) error
	// Start the pipeline step.
	Exec(*Step) error
	// Kill the pipeline step.
	Kill(*Step) error
	// Wait for the pipeline step to complete and returns
	// the completion results.
	Wait(*Step) (*State, error)
	// Tail the pipeline step logs.
	Tail(*Step) (io.ReadCloser, error)
	// Destroy the pipeline environment.
	Destroy(*Config) error
}
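
For illustration only, here is a rough sketch of what a Kubernetes-backed implementation of that interface could look like, using a recent client-go. The placeholder types and their fields (Namespace, Name) are assumptions for the sketch, not Drone's actual definitions, and only a few methods are filled in; this is not the actual kubernetes runtime.

package kube

import (
    "context"
    "io"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// Placeholder types for the sketch; Drone's real Config, Step and State
// carry more fields.
type Config struct{ Namespace string }
type Step struct{ Name string }

// engine is a partial sketch of a Kubernetes-backed Engine. Exec, Kill and
// Wait are omitted here.
type engine struct {
    client    kubernetes.Interface
    namespace string
}

// Setup creates a per-pipeline namespace so Destroy can remove everything
// (pods, services, secrets) in a single call.
func (e *engine) Setup(c *Config) error {
    e.namespace = c.Namespace
    ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: c.Namespace}}
    _, err := e.client.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{})
    return err
}

// Tail streams the logs of the pod that runs the given step, assuming the
// pod is named after the step.
func (e *engine) Tail(s *Step) (io.ReadCloser, error) {
    req := e.client.CoreV1().Pods(e.namespace).GetLogs(s.Name, &corev1.PodLogOptions{Follow: true})
    return req.Stream(context.Background())
}

// Destroy tears down the entire pipeline environment.
func (e *engine) Destroy(c *Config) error {
    return e.client.CoreV1().Namespaces().Delete(context.Background(), c.Namespace, metav1.DeleteOptions{})
}
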

I think the real issue here is not technical. The problem is that, despite clear interest in additional container runtimes and backends, we do not have volunteers to implement these features, or sponsors to fund them financially.

I agree with Greg that we should track hyper as a separate issue.

I also think we probably should lock this issue pending a volunteer or sponsor. If anyone is reading this thread and interested in implementing native kubernetes or hyper support, or willing to fund feature development, please message me at https://discourse.drone.io/

I will unlock this issue once we have progress to report.

@drone drone locked and limited conversation to collaborators Sep 8, 2017

@gardener-ci gardener-ci referenced this issue Jan 13, 2018: Garden Operator CI #12 (Closed)

@bradrydzewski bradrydzewski added this to In Progress in Version 1.0 Jun 14, 2018

@bradrydzewski bradrydzewski moved this from In Progress to Done in Version 1.0 Jun 18, 2018

@drone drone unlocked this conversation Jun 20, 2018

@bradrydzewski (Member) commented Jun 20, 2018

Unlocking this issue now that we have a volunteer and a base implementation in place. If you have any questions please direct them to @metalmatze. The kubernetes runtime source code is available here: https://github.com/drone/drone-kubernetes-runtime

@metalmatze gave a demo of the kubernetes runtime at container days. I will post a link here to the video when one is available. https://twitter.com/thenewstack/status/1009059265032671232

Note that the kubernetes runtime will require drone version 0.9, which is not yet released. I do not have an estimated release date to share at this time. In the meantime, you can build, test and run the kubernetes runtime from the command line if you want to take a look or get your hands dirty.

@bradrydzewski bradrydzewski removed this from Done in Version 1.0 Nov 8, 2018

@combor commented Nov 28, 2018

@bradrydzewski is this included in the upcoming 1.0?

@zetaab

This comment has been minimized.

Copy link

zetaab commented Nov 29, 2018

@combor the drone-kubernetes-runtime repo says WIP. Anyway, in drone-k8s-engine it seems that there is a requirement for RWX persistent volumes; at least, we do not have those in our environment.

@tboerger (Member) commented Nov 29, 2018

The kubernetes runtime repo was the initial development; AFAIK it has been moved into the drone-runtime repo as part of a separate development branch: https://github.com/drone/drone-runtime/tree/kubernetes

@zetaab commented Nov 29, 2018

oh, okay, then there have been some updates recently :)

@bradrydzewski (Member) commented Dec 7, 2018

I am very excited to CLOSE this issue as COMPLETE
https://blog.drone.io/drone-goes-kubernetes-native/

@JeanMertz (Contributor, Author) commented Dec 7, 2018

I haven't responded here in a while, but I wanted to thank you and everyone else for the hard work on making this happen @bradrydzewski ❤️

The combination of Drone and Kubernetes (especially when using a hosted solution such as GKE), makes a self-hosted CI/CD setup simpler, yet more powerful than ever. Having Drone be open-source and as extensible as it is, is icing on the cake, as it'll allow us to make Drone work just the way we need it to.

I'm looking forward to experimenting with this. Awesome job.

@bradrydzewski (Member) commented Dec 7, 2018

@JeanMertz thank you so much for the kind words. I think there is still a ton of room for improvement and I am really excited for what is next (not sure what that is, but I'm excited for it). Also huge thanks to @metalmatze. He started working on this in October 2017, so I really appreciate his help (and patience) getting everything rolled out.

@zetaab commented Dec 7, 2018

This is a cool thing! However, we would at least like to see builds without a docker agent. The problem with docker is that we need to mount docker sockets inside the container, which is not really secure. I am wondering whether we could somehow use kaniko instead: https://github.com/GoogleContainerTools/kaniko. Well, thinking it through, that is quite a big and maybe difficult change :(

@JeanMertz (Contributor, Author) commented Dec 7, 2018

However, we would at least like to see builds without a docker agent

Wait... unless I'm misreading, I believe this is without mounting Docker sockets. The blog mentions a job is started as a native Kubernetes job, which in turn spins up native Kubernetes pods and "secrets" to share any configuration between the jobs.

The "old" way of doing Drone on Kubernetes was indeed with mounted Docker sockets (with all the security risk and lack of maintainability included), but this announcement removes all of that, and you no longer have to concern yourself with the container engine running below Kubernetes.

@zetaab commented Dec 7, 2018

@JeanMertz WOW if that is true! need to test this

@JeanMertz (Contributor, Author) commented Dec 7, 2018

@JeanMertz WOW if that is true! need to test this

It is! Check out this work in progress guide:

The Drone Kubernetes Runtime takes a .drone.yml configuration and translates it into Kubernetes native pods, secrets, and services. So what does this mean if you are already running Drone on Kubernetes today? It means no more build agents. No more mounting the host machine Docker socket or running Docker in Docker. If you are into buzzwords, it means Drone is now Kubernetes Native.

@bradrydzewski (Member) commented Dec 7, 2018

yep, this is correct. The Pipeline steps are launched as Kubernetes Pods. There are no more agents, and no more mounting Docker sockets :)

@malcolmholmes commented Dec 11, 2018

@bradrydzewski any pointers on how to start this (as opposed to Docker-based Drone)? This is very exciting, as I was just starting to battle with Drone in Kubernetes. Sample k8s yaml manifests or other startup hints would be extremely useful. Thanks!

@bradrydzewski (Member) commented Dec 12, 2018

@malcolmholmes there is work being done on an updated helm chart that might be helpful. See helm/charts#9617. There is also a thread here in our forum where some helm and general kubernetes discussions are happening.

There are also some very high level docs here, but they only really show you the absolute basics and would require you to set up a more complete configuration.

@malcolmholmes commented Dec 12, 2018

@bradrydzewski thanks for those. Very helpful.

@malcolmholmes commented Dec 16, 2018

Hi, I have followed the instructions as best I can. I am using gitlab.com, and get this:

{"error":"Source code management system not configured","level":"fatal","msg":"main: source code management system is not configured","time":"2018-12-16T22:40:53Z"}. Would you expect this version to work with Gitlab?

Here's the container config:

    spec:
      containers:
      - env:
        - name: DRONE_GITLAB
          value: "true"
        - name: DRONE_ADMIN
          value: *A_VALUE_HERE*
        - name: DRONE_LOGS_DEBUG
          value: "true"
        - name: DRONE_KUBERNETES_ENABLED
          value: "true"
        - name: DRONE_KUBERNETES_NAMESPACE
          value: build-jobs
        - name: DRONE_GITLAB_CLIENT
          valueFrom:
            secretKeyRef:
              key: drone_gitlab_client
              name: drone
        - name: DRONE_GITLAB_SECRET
          valueFrom:
            secretKeyRef:
              key: drone_gitlab_secret
              name: drone
        - name: DRONE_RPC_SECRET
          valueFrom:
            secretKeyRef:
              key: drone_shared_secret
              name: drone
        image: drone/drone:1.0.0-rc.2
        name: drone
        ports:
        - containerPort: 80
        - containerPort: 443
@bradrydzewski (Member) commented Dec 16, 2018

@malcolmholmes many of those environment variable names are incorrect. I also recommend creating a thread in our mailing list as opposed to seeking support through this existing issue: https://discourse.drone.io

      - env:
-       - name: DRONE_GITLAB
-         value: "true"
-       - name: DRONE_ADMIN
-         value: *A_VALUE_HERE*
        - name: DRONE_LOGS_DEBUG
          value: "true"
        - name: DRONE_KUBERNETES_ENABLED
          value: "true"
        - name: DRONE_KUBERNETES_NAMESPACE
          value: build-jobs
-       - name: DRONE_GITLAB_CLIENT
+       - name: DRONE_GITLAB_CLIENT_ID
          valueFrom:
            secretKeyRef:
              key: drone_gitlab_client
              name: drone
-       - name: DRONE_GITLAB_SECRET
+       - name: DRONE_GITLAB_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              key: drone_gitlab_secret
              name: drone
        - name: DRONE_RPC_SECRET
          valueFrom:
            secretKeyRef:
              key: drone_shared_secret
              name: drone