[helm] Add ingress #517
Conversation
Thank you, I think this is super valuable. I had to write an ingress like this myself a couple of times.
I have commented in-line where I would like you to revisit your changes. Let me know if you have any questions or would like me to take another look.
```yaml
ingress:
  enabled: false
```
This should be added to values.yaml instead.
Nice catch, but do you mean additionally instead of instead?
No, I actually mean instead here. The values.yaml is always used, while values-micro-services.yaml just overrides it when micro-services mode is actually wanted.
Oh, alright :D
If you don't mind my dumb questions: by which mechanism? (Wild guess: helm uses a values.yaml by default and we add overrides via `-f my-file.yaml`?)
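For context, that guess matches standard Helm behaviour (not specific to this chart): the chart's values.yaml is always loaded as the base, and every file passed with `-f` is merged over it key by key, later files winning. A small illustration (file names other than values.yaml are assumptions here):

```yaml
# values.yaml (always loaded as the base)
ingress:
  enabled: false

# values-micro-services.yaml (only merged in when passed explicitly,
# e.g. `helm install phlare ./phlare -f values-micro-services.yaml`)
ingress:
  enabled: true

# effective merged result: ingress.enabled is true, every key not
# overridden keeps its values.yaml default
```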
```yaml
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-query-frontend
      port:
        number: 4100
  path: /querier.v1.QuerierService/
  pathType: Prefix
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-distributor
      port:
        number: 4100
  path: /push.v1.PusherService/
  pathType: Prefix
```
This will not work when we deploy a single-binary version of Phlare. I suggest you either:
- have an if/else for that case and just use the one service, or
- always create the distributor / query-frontend services, but in single-binary mode have them point to the same pod(s).
I think I will go with the if/else for minimal changes, if that's OK with you.
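A minimal sketch of what the if/else approach could look like in the ingress template. The `.Values.microServicesMode` flag is purely hypothetical (the actual condition chosen in the PR may differ); the single-binary branch assumes the all-in-one service listens on the same port:

```yaml
{{- if .Values.microServicesMode }}  # hypothetical flag, illustrative only
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-query-frontend
      port:
        number: 4100
  path: /querier.v1.QuerierService/
  pathType: Prefix
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-distributor
      port:
        number: 4100
  path: /push.v1.PusherService/
  pathType: Prefix
{{- else }}
# single binary: one service handles both push and query traffic
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}
      port:
        number: 4100
  path: /
  pathType: Prefix
{{- end }}
```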
Yes, that sounds good to me. Also feel free to try any other idea you might have; those two just came to mind thinking about it.
Is counting the number of keys to check whether it is in micro-services mode satisfactory to you? It feels weird, but I am not used to the Helm world.
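One hypothetical way to express such a check in a Helm template (the values key `phlare.components` and the threshold are illustrative, not taken from the chart):

```yaml
{{/* Illustrative only: treat more than one configured component
     under .Values.phlare.components as micro-services mode */}}
{{- $isMicroServices := gt (len .Values.phlare.components) 1 -}}
{{- if $isMicroServices }}
# render one ingress rule per component service
{{- else }}
# render a single rule pointing at the all-in-one service
{{- end }}
```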
Hello Simon, I will make the review changes soon and, hopefully, this will be included in the next release! Quick questions though:
All you need is the Docker CLI and Go; it has been tested mostly on macOS and Linux. No idea what happens if you're on Windows.
Quick addition: in case you are wondering about a way to deploy micro-services, there is
Everything goes fine with
If that rings a bell (I am on a Mac M1). Also
Yes it does.
Is that also the case on main?
Yep, on main also. I just rebased to make sure. A
And here we go for the review. Unfortunately, I still struggle a bit to test this. I got issues with the commands. Using
But, probably a gap in my knowledge, I was not able to provide a host and make it accessible through Grafana.
I was not able to hit it with a Grafana instance on the same kind cluster. Feel free to educate me on where I am wrong in testing the ingresses locally / retest if needed. I am confident this new template should now do the trick 🎉
And here we go; I force-pushed the CI fixes, so the CI should be green now. Ready for another review whenever you have time @simonswine, thanks for your time!
I'll have a look this week too.
You'll need to rebase off master; there was a lint issue for helm that is fixed now. I wanted to push to your branch but can't.
```yaml
          name: {{ include "phlare.fullname" $ }}
          {{- end }}
          port:
            number: 4100
```
Should this be the service port configuration instead of 4100, i.e. `{{ .Values.service.port }}`?
There's no way, though, to change the default port, so this is really just a nit, but I think we should probably fix that later.
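The suggested follow-up would look roughly like this, assuming the chart exposes a `service.port` key in values.yaml (a common Helm chart convention, not confirmed for this chart):

```yaml
# values.yaml
service:
  port: 4100

# ingress template: reference the value instead of hard-coding it
port:
  number: {{ .Values.service.port }}
```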
LGTM
I was able to test this with kind.
I had to use a kind config like this:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: phlare-dev
nodes:
- role: worker
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```
Then install the ingress controller using:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
```
And add a host in the values file:
```yaml
ingress:
  enabled: true
  hosts:
    - localhost
```
I was able to push and query using `localhost/`.
Thanks for the test workflow, I will try this for my personal knowledge. Personally, I had to remap the ports, because some of the ports on my PC seem reserved. I was wondering, it costs hardly anything to add the
Thanks for the review @cyriltovena, |
Head branch was pushed to by a user without write access
* [helm] Add ingress
* review
* helm docs
* code review
* helm docs last fixup :fingerscrossed:
---------
Co-authored-by: Louis Fruleux <louis.fruleux@teads.tv>
Rationale
The ingress makes the distributor accessible to an external Grafana agent.
Also, the query-frontend rule makes the querier accessible to an external Grafana.
Tests
I struggled to test this locally.
```shell
curl -H 'Host: lfr.hack' 127.0.0.1/read
```
does not work, so I just compared the generated files with the ones we have in prod on our side and they seem okay.