Currently, the only officially supported compute backend in Digger is GitHub Actions. A few people have managed to make it work with GitLab and Jenkins, but most likely with an older version, and almost certainly in backendless mode. So as of today, users of any CI other than GitHub Actions cannot get parallelism and the other benefits of the orchestrator.
Changes to Digger Orchestrator
There are a few options for designing the TeamCity integration on the backend side of Digger.
Option 1: TeamCity-specific CiBackend interface
Continue the work that has already been started in the backend/ci_backends dir. We could have a CiBackend implementation for each of the most popular CI backends - there are not that many (a rough sketch of the split follows the list):
TeamCity (this RFC)
GitLab
Jenkins
Bitbucket Pipelines
Azure DevOps
BuildKite
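For illustration, here is a rough Go sketch of what that split could look like; the method set below is an assumption for this RFC, not the actual interface currently in backend/ci_backends:

```go
package ci_backends

import "context"

// JobSpec stands in for Digger's real job spec type; fields omitted here.
type JobSpec struct{}

// CiBackend is implemented once per CI system (GitHub Actions, TeamCity, ...).
// It converts a generic job spec into a CI-specific payload and starts a run.
type CiBackend interface {
	TriggerJob(ctx context.Context, spec JobSpec) error
}

// TeamCityBackend would live alongside the extracted GitHub implementation.
type TeamCityBackend struct {
	ServerURL   string // e.g. https://teamcity.example.com
	Token       string // TeamCity access token
	BuildTypeID string // ID of the build configuration to queue
}

func (t *TeamCityBackend) TriggerJob(ctx context.Context, spec JobSpec) error {
	// POST <ServerURL>/app/rest/buildQueue with a <build> payload;
	// see "Calling TeamCity API" below for a sketch of that call.
	return nil
}
```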
As a prerequisite, we will probably need to extract the GitHub-specific functionality into its own standalone lib that implements the CiBackend interface. That is likely the first step regardless of the option we choose, just so it is cleanly separated from the rest of the codebase.
Option 2: Generic Webhooks
Instead of implementing a TeamCity-specific integration as part of the orchestrator, we can expose a generic webhook for subscribing to new jobs. The user will then need to handle the webhooks and convert the job spec into a TeamCity-specific payload, which is the obvious downside of this approach.
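A minimal sketch of what the user-side consumer could look like, assuming the orchestrator POSTs a JSON job spec to a user-configured URL (the endpoint path and JobSpec fields here are illustrative, not an agreed contract):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// JobSpec mirrors whatever the orchestrator would send; fields are illustrative.
type JobSpec struct {
	JobID   string `json:"job_id"`
	Command string `json:"command"` // e.g. "digger plan"
}

func main() {
	http.HandleFunc("/digger/jobs", func(w http.ResponseWriter, r *http.Request) {
		var spec JobSpec
		if err := json.NewDecoder(r.Body).Decode(&spec); err != nil {
			http.Error(w, "bad job spec", http.StatusBadRequest)
			return
		}
		// Here the user converts the generic spec into a TeamCity payload
		// and queues a build (see "Calling TeamCity API" below).
		log.Printf("received job %s, queueing TeamCity build", spec.JobID)
		w.WriteHeader(http.StatusAccepted)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```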
Option 3: TeamCity-specific service that pulls from the orchestrator
The orchestrator becomes a "naked queue" that only supports pull-based consumption of jobs (see RFC #1526).
For each CI backend, we can supply a small service that polls the orchestrator, pulls the jobs, and converts them into CI-specific payloads. Design-wise it'd be similar to Option 1 (the CiBackend interface), but split at the service level rather than at the code level.
The downside of this approach is that it'll be 2 services instead of 1. But splitting them apart is kind of inevitable anyway, and most people will host them in K8S regardless.
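Sketch of such a puller service, assuming the orchestrator exposes some pull endpoint per RFC #1526 (the URL and response shape below are placeholders, not the agreed API):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

type JobSpec struct {
	JobID   string `json:"job_id"`
	Command string `json:"command"`
}

func main() {
	for {
		// Hypothetical pull endpoint; assume it returns 204 when the queue is empty.
		resp, err := http.Get("https://orchestrator.example.com/api/jobs/next")
		if err != nil {
			log.Printf("poll failed: %v", err)
		} else {
			if resp.StatusCode == http.StatusOK {
				var spec JobSpec
				if err := json.NewDecoder(resp.Body).Decode(&spec); err == nil {
					// Convert into a TeamCity payload and queue the build.
					log.Printf("pulled job %s, queueing TeamCity build", spec.JobID)
				}
			}
			resp.Body.Close()
		}
		time.Sleep(5 * time.Second) // plain polling; long-polling would also work
	}
}
```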
Calling TeamCity API
Regardless of the backend design we choose, integrating with the TeamCity API will be more or less the same.
To start a job in TeamCity, we will likely need to use the Start Custom Build API:
POST <teamcity_server_url>/app/rest/buildQueue
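A minimal Go sketch of that call, assuming token-based auth and a hypothetical env.DIGGER_JOB_SPEC property carrying the serialized spec (TeamCity exposes env.-prefixed properties as environment variables inside the build):

```go
package teamcity

import (
	"bytes"
	"fmt"
	"net/http"
)

// TriggerBuild queues a build of the given build configuration, passing the
// job spec as a single environment variable. Real code would also need to
// XML-escape the spec value.
func TriggerBuild(serverURL, token, buildTypeID, jobSpecJSON string) error {
	body := fmt.Sprintf(
		`<build><buildType id="%s"/><properties>`+
			`<property name="env.DIGGER_JOB_SPEC" value="%s"/>`+
			`</properties></build>`,
		buildTypeID, jobSpecJSON)

	req, err := http.NewRequest("POST", serverURL+"/app/rest/buildQueue",
		bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/xml")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("teamcity returned %s", resp.Status)
	}
	return nil
}
```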
How do we pass the job spec to Digger CLI? 2 options:
as a single env var (as in the sketch above); this will work but is kinda dirty
as a list of Properties; cleaner and TeamCity-specific, but the downside is that the user will need to pass each property to Digger CLI in the BuildConfiguration (sketched below)
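For comparison, a sketch of the Properties variant; the digger.* property names are made up for illustration, not an agreed schema:

```go
package teamcity

import "fmt"

// perFieldProperties renders one TeamCity property per job spec field.
// The user's build configuration would then have to feed each property
// into the Digger CLI invocation. Values still need XML escaping.
func perFieldProperties(jobID, command, projectDir string) string {
	return fmt.Sprintf(
		`<properties>`+
			`<property name="digger.job.id" value="%s"/>`+
			`<property name="digger.job.command" value="%s"/>`+
			`<property name="digger.job.project_dir" value="%s"/>`+
			`</properties>`,
		jobID, command, projectDir)
}
```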
User-defined part in TeamCity
To run Digger in the job, the user will need to define a Command Line runner build step as part of their pipeline that invokes the Digger CLI.