Hey Team Digger,
I promised to roast the Digger Helm chart installation guide. It took me a while to get to it, but I now feel I have a good understanding of how the chart is structured and can offer some constructive feedback.
I think that the Digger Helm Chart installation guide is a bit cumbersome and has room for improvement in the Developer Experience Department.
Here are the reasons:
- Asking users to create their own values.yaml from scratch and edit it manually over multiple steps isn't the best approach. Ideally we don't want users spending cognitive effort writing a YAML file by hand; we should take advantage of the fact that Helm can generate one for them:
helm inspect values digger-backend > values.yaml
The output of the command is an out-of-the-box, well-formatted, well-documented file listing every value the chart accepts, so when the user opens values.yaml it feels like filling out a form rather than building one from scratch. Take, for example, the way Atlantis does it: their documentation lets users accomplish more in fewer steps, which saves developers' time and gets them up and running faster. (A minimal sketch of this workflow follows this list.)
- The Helm chart should be published to a Helm repository. That way users always have access to the latest version of the chart, and the docs don't need to hardcode the Docker image tag or make the user responsible for finding the right tag among the GitHub releases. A Helm repo also future-proofs upgrades: existing users only need to run
helm repo update
and the new chart version lands in their local Helm cache. Without a repo, they would have to manually reinstall the Digger chart whenever it changes, for example when a new Kubernetes resource is added. A repo can also host multiple charts, in case Digger decides to ship different charts for different use cases. (An example end-user flow is sketched after this list.)
- The built-in PostgreSQL ("Testing Only") simply doesn't work. I understand how the chart is structured, but I don't see why we would want two separate setups for testing and prod; why can't testing use a StatefulSet as well? Additionally, the DATABASE_URL env variable resolved to postgres://postgres:$(POSTGRES_PASSWORD)@pg-postgresql.db:5432/digger?sslmode=disable even when the conditional was set to false. I think it has something to do with the way values in the values.yaml file take precedence. (Correct me if I'm wrong here and the testing PostgreSQL works for you; a quick way to reproduce what I'm seeing is sketched after this list.)
- The chart would benefit from applying some configuration-management best practices (making the chart more easily configurable for the user). There is a lot of information online on how to achieve enterprise readiness; I found this article very useful, as it gives a great example of how a popular Helm chart (MLflow) got there.
- It would be great not to have to reinstall the chart with new values for GitHub authentication, but instead to figure out how to make it work in a single step, mimicking the way Atlantis does it. (A rough single-step install is sketched after this list.)
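To make the first point concrete, here is a minimal sketch of the values-first workflow. The chart reference digger-backend is taken from the command above; the release name and namespace are placeholders of my own, so adjust them to taste:

```shell
# Dump the chart's full, commented default values into a local file.
helm inspect values digger-backend > values.yaml

# Edit only the keys you care about in values.yaml, then install with it.
helm install digger digger-backend -f values.yaml \
  --namespace digger --create-namespace
```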
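For the Helm repo point, the end-user flow I have in mind looks roughly like this. The repo URL below is a placeholder, not an existing Digger endpoint, and the chart name digger-backend is again assumed:

```shell
# One-time setup: register the (hypothetical) Digger chart repository.
helm repo add digger https://example.github.io/helm-charts

# Picking up a new chart version later is then just:
helm repo update
helm upgrade --install digger digger/digger-backend -f values.yaml
```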
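On the built-in PostgreSQL issue, the quickest way to reproduce what I'm seeing is to render the templates locally with the toggle switched off and grep for DATABASE_URL. The value key postgres.enabled is my guess at the toggle's name; substitute whatever the chart actually uses:

```shell
# Render the chart without installing it, with the built-in Postgres
# conditional (assumed here to be postgres.enabled) switched off,
# then check whether DATABASE_URL still resolves to the
# pg-postgresql.db connection string.
helm template digger ./digger-backend --set postgres.enabled=false \
  | grep -A 2 DATABASE_URL
```

If the connection string still shows up in the output, the conditional and the values precedence need another look.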
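Finally, a single-step install in the spirit of Atlantis could look something like the sketch below. The value keys (github.appID, github.webhookSecret, github.appPrivateKey) are hypothetical names I'm using for illustration; they would map to whatever keys the chart actually exposes:

```shell
# Hypothetical one-shot install: GitHub App credentials are supplied
# at install time instead of requiring a reinstall with new values.
helm upgrade --install digger digger/digger-backend \
  --set github.appID=123456 \
  --set github.webhookSecret="$WEBHOOK_SECRET" \
  --set-file github.appPrivateKey=./digger-app.private-key.pem
```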

I think addressing the points above is a good start and will definitely make the Digger Helm chart easier to deploy for new users adopting Digger, since it would follow a pattern they're already familiar with. Hope this is helpful; let me know what you think. Feel free to shoot me questions and poke at my suggestions.