Replies: 2 comments
-
It looks like most of your problems can be summarized as orchestrating Docker builds, which dagger can handle pretty easily. Before I dig into your points below, I'd recommend reading these 2 guides from the dagger docs:
I am not familiar with Pants, but I would either: integrate
I would install dagger natively on Windows and use Docker Desktop. That way, everything dagger does ends up in WSL containers. Then I would build the devcontainer from dagger by integrating the vscode cli into a dagger package (let me know if you need help implementing that one). Here are some related docs: https://code.visualstudio.com/docs/remote/devcontainer-cli#_building-a-dev-container-image - not sure how that will play with the "Rebuild devcontainer" button in VSCode, though. Once the integration is done, you would just run
As briefly explained above, you can run those in dagger actions like we do on the internal CI. The advantage is that all of them run inside containers (so the behavior will be the same locally and in your CI), and you can also either call all tests:
Dagger implements caching, so this is another thing that is simplified. You can just build container images and binaries or run tests without tracking changes yourself: dagger will re-run an action only when its inputs (e.g. the source code) have changed. Let me know if you need help building some of this.
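To make the "actions + caching" idea concrete, here is a minimal CUE sketch of such a plan, assuming dagger 0.2-style plans and the universe.dagger.io packages — the action names and the `./pants test ::` command are placeholders, not taken from this thread:

```cue
package ci

import (
	"dagger.io/dagger"
	"universe.dagger.io/docker"
)

dagger.#Plan & {
	// Mount the repository root into the plan as an input
	client: filesystem: ".": read: contents: dagger.#FS

	actions: {
		// Build the image from the repo's Dockerfile; dagger caches
		// this and only rebuilds when the source contents change
		build: docker.#Dockerfile & {
			source: client.filesystem.".".read.contents
		}

		// Run the test suite inside the freshly built image, so the
		// behavior is identical locally and on CI
		test: docker.#Run & {
			input: build.output
			command: {
				name: "./pants"
				args: ["test", "::"]
			}
		}
	}
}
```

Because `test` takes `build.output` as its input, running it re-executes the build action only when the mounted sources changed; otherwise both steps come straight from cache.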
-
I think portability and reproducibility will be a great enhancement in terms of CI/CD with dagger. I can test those pipelines locally and be sure they run the same on CI. And for the specific case of orchestrating docker builds for all the microservices, it should be very helpful. I am a little worried about two things:
Appreciate your thoughts on that end!
-
I just came across dagger today and have a few questions. I would really appreciate some guidance on whether dagger is what I am looking for, which I hope it is:
Currently, we are in the process of moving to a Python monorepo where we will use https://www.pantsbuild.org/ as the build system (we will have to rewrite CI/CD as well). It is a great tool for packaging our application build artifacts from the monorepo. It also supports building docker images, but I have the feeling that in my setup this limits us a bit. This is not due to pants but, I think, due to missing Docker features (mostly for DRY purposes).
I will try to give a minimal example of the setup to illustrate the issues I am facing, which I hope can be solved with dagger. The project structure looks roughly as follows:
As you can see, we use a VSCode devcontainer on WSL2. In that container I can run something like
./pants run django
and pants will infer all Python dependencies (e.g. my common lib and third-party requirements), build an artifact and run it so I can develop my application. For deployment I will add this artifact to a Docker image (Dockerfile), define a
docker_image
target in pants and publish it to a registry. The same holds true for my service (I intentionally mentioned weasyprint; I will outline the reason below) - here I will even use the Pants Lambda integration and test it locally using localstack. Now comes the Docker pain. Django as well as weasyprint need certain system dependencies. So in order to do
./pants run django
or
./pants test django
in my devcontainer, those need to be added to the devcontainer Dockerfile; for the deployment, also to that service's Dockerfile. And of course tests should run on CI as well, so they need to be available there too. So I started drafting something like this in
.devcontainer/Dockerfile
:
FROM python:3.8-bullseye as build
# "django-sys-dependencies" is a placeholder package name;
# --download-only fetches the .deb files without installing them
RUN apt-get update && apt-get install -y --download-only -o Dir::Cache::archives=/django-build django-sys-dependencies
Now in
django/Dockerfile
:
With pants I can define dependencies such that the build stage will be built before the Django Dockerfile is built. Honestly, I could not think of any better solution with Docker regarding those system dependencies.
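To make the pattern concrete, here is a hypothetical sketch of what such a per-service Dockerfile could look like - every name and path is illustrative, and "build" must resolve to the stage (or a tagged image) produced from the devcontainer Dockerfile above:

```
# django/Dockerfile - illustrative sketch, not the original snippet
FROM python:3.8-slim-bullseye
# Install the pre-downloaded system packages from the shared build stage
COPY --from=build /django-build /tmp/debs
RUN dpkg -i /tmp/debs/*.deb && rm -rf /tmp/debs
# Add the artifact pants built for this service (illustrative path)
COPY django.pex /app/django.pex
ENTRYPOINT ["/app/django.pex"]
```

Note that COPY --from can reference either a named stage in the same Dockerfile or an external image by tag, which is how the separately-built stage would be wired in here.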
The very same thing holds true for weasyprint. So I will add the system dependencies for this service to the first Dockerfile's build stage again and copy them into the weasyprint service Dockerfile as well. Now, I don't have just one service but multiple, and in most cases their Dockerfiles look completely the same - I just add the build artifact. That would be a lot of repetition.
I found this article using a "hidden" feature of Docker with multi-stage + ONBUILD. I tried it and noticed that this only works without BuildKit, which I had enabled. Then I came across earthly, which seems to resolve the places where I repeat a lot, in particular for Docker images. I am aware that this can be solved with dagger in a similar way. Am I right, or did I totally misunderstand dagger? From my first impression, earthly feels way more Dockerfile-focused.
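For reference, the ONBUILD trick that kind of article describes looks roughly like this - a hedged sketch where the package and path names are illustrative, and which, as noted above, does not play well with BuildKit:

```
# shared-base/Dockerfile - illustrative sketch of the ONBUILD pattern:
# one common base image whose triggers fire in each service's build
FROM python:3.8-slim-bullseye
RUN apt-get update && apt-get install -y --no-install-recommends \
        libpango-1.0-0 libcairo2 \
    && rm -rf /var/lib/apt/lists/*
# These instructions run when a downstream image does "FROM shared-base":
ONBUILD COPY service.pex /app/service.pex
ONBUILD ENTRYPOINT ["/app/service.pex"]
```

Each service then only needs a one-line Dockerfile ("FROM shared-base"), which is exactly the DRY property being asked for here.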
As I would like to solve those Docker issues and have a CI/CD pipeline with fast iteration, I would really appreciate some guidance on how to use dagger in this case. Here are some questions I have: