
build go binary with coverage integration tests #6684

Open · 8 tasks · Tracked by #7933
muhlemmer opened this issue Oct 9, 2023 · 5 comments
Labels: ci (Improve continuous integration), devops, improvement

muhlemmer (Contributor) commented Oct 9, 2023

As a developer I want a fast pipeline.

Integration tests are basically API calls from a client to a running zitadel server. Currently, each of the integration test packages starts its own zitadel server through the ./cmd package, so for each package time is lost on starting and stopping zitadel. Also, if the tests are run in parallel, they seem to choke as multiple servers try to start up and obtain spooler-related locks. Therefore we currently run the tests sequentially.

Starting from Go 1.20, it is possible to build a binary with coverage profiling support. This way we could build a zitadel server binary with coverage support, launch it during pipeline initialization, and then run the package tests in parallel. The coverage report can then still be obtained and uploaded to codecov.

Acceptance criteria

  • adapt integration tests:
    • build a coverage binary of zitadel
    • run the binary as a background service
    • SIGTERM the binary when the tests are completed
    • transform the binary coverage reports to text format and upload them to codecov
  • adapt *_integration_test.go files:
    • disable server start in TestMain when running in the pipeline
    • server start should still be supported for local development
muhlemmer (Contributor, Author)

IMO we should start prioritizing this. After #7388 it's really a pain to test zitadel locally, because Go uses a build cache under /tmp on *nix systems that grows with every package being built/tested within the same execution of the go test command. It now needs more than 7 GB of space when testing the full scope of unit and integration tests.

On (some?) Linux distributions /tmp is a tmpfs, which is an in-memory file system. By default it has the size of 50% of RAM. In my case I have 16 GB of RAM, so 8 GB available under /tmp, with programs like Chrome and VSCode already using roughly 800 MB of that. In this setup the tests now fail with a "no space left on device" error somewhere in the last of the integration tests.

I hope that can be greatly reduced if we build the server binary once for the integration tests, so that only the client objects have to be built for each package.

As a workaround I moved /tmp to the root filesystem. This is not great, as it will wear down the laptop's M.2 NVMe drive quicker. Increasing /tmp to use more RAM was not an option, as VSCode with gopls and Chrome with the Google Workspace apps are also memory hogs.

hifabienne (Member)

@eliobischof or @stebenz can you estimate this issue?

eliobischof (Member)

@stebenz will have a look at what exactly we should do.

stebenz (Collaborator) commented Mar 4, 2024

@muhlemmer @eliobischof
Currently, for each package in which an integration test is implemented, we start a new server, create the clients, check if the machine users exist, and create new PATs for each machine user (IAM, OrgOwner, and Login).

Going forward, I would split the integration tests into a binary for a single server and tests that call the server through clients.
But there are still some open questions:

  • How do we start and stop the binary? Do we do it in the pipeline or somewhere in the code?
  • How do we provide credentials for the clients? (Currently stored in a map in the Tester)
  • Do we eliminate all command- and query-side calls in the integration tests, so that only the API is used?

muhlemmer (Contributor, Author)

How do we start and stop the binary? Do we do it in the pipeline or somewhere in the code?

That's what I wrote in the acceptance criteria:

  • build a coverage binary of zitadel
  • run the binary as a background service
  • SIGTERM the binary when the tests are completed

I didn't specify the exact method, because I don't know which works best in the pipeline (bash, services, or docker compose). I imagine building a docker image and running it just as we do with the e2e tests would be the most straightforward. Just make sure you make the coverage reports accessible through a volume. You can also go old-school:

```shell
zitadel &
zitadel_pid=$!
go test ....
kill -s SIGTERM "$zitadel_pid"
wait "$zitadel_pid"   # wait for exit so the coverage data is fully written
```

From my perspective that's up to the implementer in the end.

  • How do we provide credentials for the clients? (Currently stored in a map in the Tester)
  • Do we eliminate all command and query side calls in the integration tests, to only use the API?

In principle we should be able to create service users and obtain credentials over the API. We can still store them in the same map, so they can be cached for multiple uses. We can then get rid of the direct command and query calls.
But if for some reason we still need those calls, the "client" (running the tests) should be able to dial the postgres server.
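The cached-credentials idea could look roughly like this. A minimal sketch; the type and field names (UserType, patCache, mint) are made up for illustration and the mint callback stands in for the real API call that creates a PAT:

```go
package main

import (
	"fmt"
	"sync"
)

// UserType identifies a machine user; the constants mirror the users
// mentioned in the thread (IAM, OrgOwner, Login).
type UserType string

const (
	IAMOwner UserType = "iam-owner"
	OrgOwner UserType = "org-owner"
	Login    UserType = "login"
)

// patCache caches one personal access token per machine user, so the
// API only has to be called on first use. All names are hypothetical.
type patCache struct {
	mu     sync.Mutex
	tokens map[UserType]string
	mint   func(UserType) string // stand-in for the PAT-creating API call
}

// PAT returns a cached token, minting one over the API on first use.
func (c *patCache) PAT(u UserType) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if tok, ok := c.tokens[u]; ok {
		return tok
	}
	tok := c.mint(u)
	c.tokens[u] = tok
	return tok
}

func main() {
	calls := 0
	c := &patCache{
		tokens: map[UserType]string{},
		mint: func(u UserType) string {
			calls++
			return "pat-" + string(u)
		},
	}
	fmt.Println(c.PAT(OrgOwner)) // prints "pat-org-owner"
	fmt.Println(c.PAT(OrgOwner)) // same token, served from the cache
	fmt.Println(calls)           // prints 1: the API was hit only once
}
```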

@stebenz stebenz mentioned this issue May 14, 2024