helm: add helm chart #363

Open · roobre wants to merge 10 commits into master
Conversation


@roobre roobre commented Aug 28, 2022

This PR adds a basic helm chart, which allows easy deployment of scrutiny in Kubernetes.

This is a pretty rough first iteration, but as far as I can tell it is functional and allows a certain degree of customization.

There are some rough edges I've left as review comments, just to double-check that what I'm doing makes sense.

Please let me know what you think!

{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
Author
From what I can tell, the collector writes to /opt/scrutiny.
Good K8s hygiene mandates that the container's root filesystem be read-only, with every location where temporary files are written mounted as a dedicated emptyDir. Is there any other location this docker image writes to?
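For illustration, the pattern I have in mind would look roughly like this in the pod spec (just a sketch with placeholder names, not the actual chart template):

containers:
  - name: collector                  # placeholder name
    securityContext:
      readOnlyRootFilesystem: true   # root filesystem becomes immutable
    volumeMounts:
      - name: scrutiny-data          # placeholder name
        mountPath: /opt/scrutiny     # the one writable path we know of
volumes:
  - name: scrutiny-data
    emptyDir: {}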

Contributor

I tried to make it use a read-only file system, but it really freaks out. It wants to write files for its entrypoint and cron job machinery. I think the collector would need significant changes to facilitate this.

@AnalogJ (Owner) commented Aug 31, 2022

woah, this is great, thanks! 🥳
I should be able to give you a detailed review this weekend.

@@ -0,0 +1,61 @@
apiVersion: apps/v1
Owner

I'd love to use Kubernetes's built-in cron scheduling instead of my kludgy cron-in-docker solution, if we can.
I've used https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ in the past, but I'm not sure whether it can force a container onto every node. Is that something you're familiar with?

@roobre (Author) commented Sep 3, 2022

I completely agree; I don't particularly like the cron-in-docker thing either. Unfortunately, as far as I know there is no reliable way to run a CronJob on all nodes :(
I've seen hacks here and there, like forcing the number of replicas to equal the number of nodes and setting anti-affinity rules, but I don't think that's very reliable.
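For reference, that hack usually relies on a hard pod anti-affinity on the hostname topology key, roughly like this (a sketch; the label is a placeholder):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: scrutiny-collector   # placeholder label
        topologyKey: kubernetes.io/hostname

With replicas pinned to the node count, each pod is forced onto a different node, but nothing keeps the replica count in sync as nodes come and go.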

An alternative to the cron in docker could be a simpler image that just does:

#!/bin/sh
while true; do
    /opt/scrutiny/bin/something something
    sleep 6h
done

This script is simple enough to inline in the K8s manifest. WDYT?
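Inlined in the manifest, that could look something like this (a sketch only; the image is a placeholder and the binary path is the same stand-in as above):

containers:
  - name: collector
    image: analogj/scrutiny            # placeholder; tag omitted
    command: ["/bin/sh", "-c"]
    args:
      - |
        while true; do
            /opt/scrutiny/bin/something something
            sleep 6h
        done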

charts/scrutiny/Chart.yaml (outdated review thread, resolved)
roobre commented Sep 3, 2022

Thanks for the in-depth review @AnalogJ!
On my side:

  • Get rid of the autoscaling functionality
  • Rewrite default tags to use appVersion

What I'd like to confirm:

  • Whether we need to mount /run/udev
  • If you're ok with maintaining appVersion in sync
  • If you like the idea of replacing the cron-in-docker with the custom script inline in the manifest (no changes required to the docker image)

AnalogJ commented Sep 3, 2022

Hey @roobre

  • Whether we need to mount /run/udev
    • Yes, I think this is still required with --privileged (see the hostPath sketch at the end of this comment)
  • If you're ok with maintaining appVersion in sync
  • If you like the idea of replacing the cron-in-docker with the custom script inline in the manifest (no changes required to the docker image)
    • Unfortunately I don't think we can use a simple custom script like you mentioned; some of our users definitely depend on being able to customize the cron schedule.

I hope that answers all your questions!
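For clarity, the /run/udev mount I mean is just a hostPath volume, roughly like this (a sketch; the volume name is a placeholder):

volumeMounts:
  - name: run-udev
    mountPath: /run/udev
    readOnly: true
volumes:
  - name: run-udev
    hostPath:
      path: /run/udev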

roobre commented Sep 3, 2022

@AnalogJ
We could expose an entry in values.yaml and use it to feed the script. For example:

collector:
  period: 6h

And then:

while true; do
    /opt/scrutiny/bin/something something
    sleep {{ .Values.collector.period }}
done

It is definitely not the same as cron, but it might fill the gap and be more idiomatic than cron-in-docker.
Otherwise, we can just keep the cron in place.

AnalogJ commented Sep 23, 2022

Hey @roobre I promise I'm not ignoring your hard work, I just got a bit distracted with some other projects -- https://www.reddit.com/r/selfhosted/comments/xj9rx7/introducing_fasten_a_selfhosted_personal/

I need to make some tweaks to my "bumpr" tool (to keep the versions in the helm manifest in sync with the scrutiny version) but this is looking pretty good.

@Starttoaster

Just wanted to say thanks @AnalogJ for the update, and @roobre for the hard work. Wanted to chime in as an interested third party who is personally waiting for this to merge before running scrutiny.

@issmirnov

Hey folks,

Sorry to necromance old PRs, but I too am interested in deploying scrutiny. Is this helm chart in a good enough state to be used?

P.S. Fasten looks amazing, looking forward to running it on my k8s cluster too!

roobre commented Sep 27, 2023

Hey @issmirnov,

I'm using this chart for my NAS, and while some things UX-wise can definitely be improved, it should work well for the most part.

@uhthomas (Contributor) commented Jan 20, 2024

I am not sure this can be a Deployment. My understanding is that there is a SQLite database somewhere in /opt/scrutiny/config which is required; issues occur if it's lost.

This should probably be a StatefulSet with one replica, or at least /opt/scrutiny/config should have persistence.
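Roughly what I mean, as a sketch (untested against this chart; the names, image, and storage size are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: scrutiny                     # placeholder
spec:
  replicas: 1
  serviceName: scrutiny              # placeholder headless Service
  selector:
    matchLabels:
      app: scrutiny
  template:
    metadata:
      labels:
        app: scrutiny
    spec:
      containers:
        - name: web
          image: analogj/scrutiny    # placeholder; tag omitted
          volumeMounts:
            - name: config
              mountPath: /opt/scrutiny/config
  volumeClaimTemplates:
    - metadata:
        name: config
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

The volumeClaimTemplates entry gives the single replica a PersistentVolumeClaim that survives pod restarts, which is the point here.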


privileged: true
capabilities:
add:
- SYS_RAWIO
Contributor

SYS_ADMIN is also required for NVMe.
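i.e. the securityContext would become something like this (sketch):

securityContext:
  privileged: true
  capabilities:
    add:
      - SYS_RAWIO
      - SYS_ADMIN   # needed for NVMe device access

(Strictly speaking, privileged: true already grants all capabilities, so the explicit adds mainly matter if privileged is ever dropped.)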

Co-authored-by: Thomas <9749173+uhthomas@users.noreply.github.com>
codecov-commenter commented Feb 24, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 31.91%. Comparing base (0febe3f) to head (1d527c9).
Report is 84 commits behind head on master.


Additional details and impacted files
@@            Coverage Diff             @@
##           master     #363      +/-   ##
==========================================
- Coverage   32.54%   31.91%   -0.64%     
==========================================
  Files          54       30      -24     
  Lines        3045     2717     -328     
  Branches       66        0      -66     
==========================================
- Hits          991      867     -124     
+ Misses       2018     1811     -207     
- Partials       36       39       +3     
| Flag | Coverage Δ |
|------|------------|
| unittests | 31.91% <ø> (-0.64%) ⬇️ |


@adepssimius

This looks cool and would be awesome to have in my cluster. I've been using scrutiny for years on a homelab server, and I hope I can continue to use it now that I'm migrating to k8s.
