[RFC] TeamCity support as compute backend #1531

ZIJ commented May 30, 2024

Currently, the only officially supported compute backend in Digger is GitHub Actions. A few people have managed to make it work with GitLab and Jenkins, but most likely with an older version, and almost certainly in backendless mode. So as of today, users of any CI other than GitHub Actions cannot get parallelism and the other benefits of the orchestrator.

Changes to Digger Orchestrator

There are a few options to design the integration with TeamCity on the backend side of Digger.

Option 1: TeamCity-specific CiBackend interface

Continue the work that has already been started in the backend/ci_backends dir. We could have a CiBackend implementation for each of the most popular CI backends - there are not that many:

  • TeamCity (this RFC)
  • GitLab
  • Jenkins
  • Bitbucket Pipelines
  • Azure DevOps
  • BuildKite

As a prerequisite, we will probably need to extract the GitHub-specific functionality into its own standalone lib that implements the CiBackend interface. That is likely the first step regardless of the option we choose, just so we get it cleanly separated from the rest of the codebase.
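
For discussion, a rough sketch of what such an interface and a TeamCity implementation could look like is below. The type names, fields and method signature are illustrative assumptions, not the actual code in backend/ci_backends:

// Illustrative sketch only: the actual CiBackend interface in backend/ci_backends
// may differ; names and signatures below are assumptions for discussion.
package ci_backends

import "context"

// JobSpec stands in for whatever serialized job representation the
// orchestrator already produces (hypothetical type for this sketch).
type JobSpec struct {
	JobID   string
	Payload string // serialized spec passed to Digger CLI
}

// CiBackend is the abstraction each CI provider would implement.
type CiBackend interface {
	// TriggerWorkflow starts a CI job that runs Digger CLI with the given spec.
	TriggerWorkflow(ctx context.Context, spec JobSpec) error
}

// TeamCityBackend would implement CiBackend by calling TeamCity's REST API
// (see "Calling TeamCity API" below).
type TeamCityBackend struct {
	ServerURL   string // e.g. https://teamcity.example.com
	Token       string // bearer token for the REST API
	BuildTypeID string // TeamCity build configuration ID
}

func (t *TeamCityBackend) TriggerWorkflow(ctx context.Context, spec JobSpec) error {
	// POST to <ServerURL>/app/rest/buildQueue with the job spec as a build property.
	// Implementation sketched in the "Calling TeamCity API" section.
	return nil
}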

Option 2: Generic Webhooks

Instead of implementing a TeamCity-specific integration as part of the orchestrator, we can expose a generic webhook for subscribing to new jobs. The user will then need to handle the webhooks and convert the job spec into a TeamCity-specific payload, which is obviously the downside of this approach.
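
For illustration, a user-hosted webhook receiver could look roughly like this. The endpoint path, payload shape and field names are assumptions, since the webhook contract is not defined yet:

// Hypothetical sketch of a user-hosted webhook receiver for Option 2.
// The payload shape and endpoint are assumptions, not a defined Digger contract.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// DiggerJobEvent is a guess at what the orchestrator could send for new jobs.
type DiggerJobEvent struct {
	JobID   string `json:"job_id"`
	JobSpec string `json:"job_spec"` // serialized spec to forward to the CI system
}

func handleNewJob(w http.ResponseWriter, r *http.Request) {
	var event DiggerJobEvent
	if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	// The user is responsible for converting the job spec into a
	// TeamCity-specific payload and queueing the build (see below).
	log.Printf("received job %s, forwarding to TeamCity", event.JobID)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/digger/jobs", handleNewJob)
	log.Fatal(http.ListenAndServe(":8080", nil))
}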

Option 3: TeamCity-specific service that pulls from the orchestrator

The orchestrator becomes a "naked queue" that only supports pull-based consumption of jobs [RFC #1526].

For each CI backend, we can supply a small service that polls the orchestrator, pulls the jobs and converts them into CI-specific payloads. Design-wise it'd be similar to Option 1 (CiBackend interface) but split at the service level, not at the code level.

The downside of this approach is that it'd be 2 services instead of 1. But splitting them apart is probably inevitable, and most people will host it in K8s anyway.
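
A minimal sketch of such a TeamCity-specific poller, assuming a pull endpoint along the lines of RFC #1526 (the endpoint path and response shape are assumptions, not an existing API):

// Sketch of a TeamCity-specific poller for Option 3. The orchestrator pull
// endpoint (/api/jobs/next) and response shape are assumptions based on the
// pull-based queue idea in RFC #1526.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

type queuedJob struct {
	JobID   string `json:"job_id"`
	JobSpec string `json:"job_spec"`
}

func pollOnce(orchestratorURL string) (*queuedJob, error) {
	resp, err := http.Get(orchestratorURL + "/api/jobs/next") // hypothetical endpoint
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusNoContent {
		return nil, nil // nothing queued
	}
	var job queuedJob
	if err := json.NewDecoder(resp.Body).Decode(&job); err != nil {
		return nil, err
	}
	return &job, nil
}

func main() {
	for {
		job, err := pollOnce("https://orchestrator.example.com")
		if err != nil {
			log.Printf("poll failed: %v", err)
		} else if job != nil {
			// Convert the job spec into a TeamCity payload and queue the build
			// (see "Calling TeamCity API" below).
			log.Printf("dispatching job %s to TeamCity", job.JobID)
		}
		time.Sleep(5 * time.Second)
	}
}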

Calling TeamCity API

Regardless of the backend design we choose, integrating with the TeamCity API will be more or less the same.

To start a job in TeamCity, we will likely need to use the Start Custom Build API:

POST <teamcity_server_url>/app/rest/buildQueue

{
    "buildType": {"id": "BuildConfigurationID"},
    "properties": {
        "property": [
            {"name": "env.DIGGER_JOB_SPEC", "value": "<job spec>"}
        ]
    }
}
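
As a sketch, queueing that build from Go could look like the following. The server URL, token handling and build configuration ID are placeholders:

// Sketch of queueing a TeamCity build with the job spec as a build property.
// Server URL, token and build configuration ID are placeholders.
package teamcity

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type property struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

type buildRequest struct {
	BuildType struct {
		ID string `json:"id"`
	} `json:"buildType"`
	Properties struct {
		Property []property `json:"property"`
	} `json:"properties"`
}

func QueueBuild(serverURL, token, buildTypeID, jobSpec string) error {
	var req buildRequest
	req.BuildType.ID = buildTypeID
	req.Properties.Property = []property{
		{Name: "env.DIGGER_JOB_SPEC", Value: jobSpec},
	}

	body, err := json.Marshal(req)
	if err != nil {
		return err
	}

	httpReq, err := http.NewRequest(http.MethodPost, serverURL+"/app/rest/buildQueue", bytes.NewReader(body))
	if err != nil {
		return err
	}
	httpReq.Header.Set("Content-Type", "application/json")
	httpReq.Header.Set("Accept", "application/json")
	httpReq.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("teamcity returned %s", resp.Status)
	}
	return nil
}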

How do we pass the job spec to Digger CLI? 2 options:

  • as a single env var (example above); this will work but is somewhat dirty (see the sketch after this list)
  • as a list of Properties; cleaner and TeamCity-specific, but the downside is that the user will need to pass each property to Digger CLI in the BuildConfiguration.
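
For the single-env-var option, consumption on the CI side is straightforward. A sketch, assuming the spec is JSON; how Digger CLI actually ingests it is not decided here, and the jobSpec fields below are illustrative:

// Sketch: reading the spec from the env var set by TeamCity (a build property
// named env.DIGGER_JOB_SPEC is exposed to the build as DIGGER_JOB_SPEC).
// The jobSpec shape here is illustrative, not Digger's actual model.
package main

import (
	"encoding/json"
	"log"
	"os"
)

type jobSpec struct {
	ProjectName string `json:"project_name"`
	Command     string `json:"command"`
}

func main() {
	raw := os.Getenv("DIGGER_JOB_SPEC")
	if raw == "" {
		log.Fatal("DIGGER_JOB_SPEC is not set")
	}
	var spec jobSpec
	if err := json.Unmarshal([]byte(raw), &spec); err != nil {
		log.Fatalf("invalid job spec: %v", err)
	}
	log.Printf("running %s for project %s", spec.Command, spec.ProjectName)
}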

User-defined part in TeamCity

To run Digger in the job, the user will need to define a Command Line runner build step as part of their pipeline, invoking Digger CLI.
