
Conversation

bluesheeptoken
Contributor

@bluesheeptoken bluesheeptoken commented Feb 13, 2023

Rationale

The ingress makes the distributor accessible to an external Grafana Agent.
It also exposes the query-frontend so an external Grafana instance can query it.

Tests

I struggled to test this locally.

  • I installed the nginx ingress controller from the kind documentation (don't forget to remove the selector for the pod; we can probably patch the test cluster if testing the ingress regularly becomes a thing)
  • Modified the ingress to route /ready
  • Tried to test it via curl -H 'Host: lfr.hack' 127.0.0.1/ready <= does not work (see the example after this list)
  • Tried via dnsmasq without any more success (probably a setup issue on my side)

So I just compared the generated files with the ones we have in prod on our side, and they seem okay.
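(For reference, this kind of Host-based routing can usually be exercised locally in one of two ways, assuming the ingress controller listens on 127.0.0.1:80; lfr.hack is just my test host name:)

# send the request to 127.0.0.1 and set the Host header explicitly
curl -H 'Host: lfr.hack' http://127.0.0.1/ready

# or let curl map the host name to 127.0.0.1 itself, without touching /etc/hosts
curl --resolve lfr.hack:80:127.0.0.1 http://lfr.hack/ready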

@CLAassistant

CLAassistant commented Feb 13, 2023

CLA assistant check
All committers have signed the CLA.

Collaborator

@simonswine simonswine left a comment

Thank you, I think this is super valuable. I had to write an ingress like this myself a couple of times.

I have commented in-line, where I would like you to revisit your changes. Let me know if you have any questions or would like me to take another look.

Comment on lines 65 to 66
ingress:
  enabled: false
Collaborator

This should be added to values.yaml instead

Contributor Author

@bluesheeptoken bluesheeptoken Feb 20, 2023

Nice catch, but do you mean additionally rather than instead?

Collaborator

No, I actually mean instead here. values.yaml is always used, while values-micro-services.yaml just overrides it when micro-services mode is actually wanted.
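For illustration, a rough sketch of how the layering works on the command line (the exact invocation in the Makefile may differ):

# values.yaml ships with the chart and is always applied first;
# every file passed with -f is layered on top and overrides matching keys
helm upgrade --install phlare-dev . -f values-micro-services.yaml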

Contributor Author

Oh, alright :D

If you don't mind my dumb question: by which mechanism? (Wild guess: helm uses values.yaml by default and we add overrides via -f my-file.yaml?)

Comment on lines 18 to 31
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-query-frontend
      port:
        number: 4100
  path: /querier.v1.QuerierService/
  pathType: Prefix
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-distributor
      port:
        number: 4100
  path: /push.v1.PusherService/
  pathType: Prefix
Collaborator

This will not work when we deploy a single-binary version of Phlare. I suggest you either:

  • Have an if/else for that case and just use the one service
  • Always create the distributor / query-frontend services, but in the single binary have them just point to the same pod(s)

Contributor Author

I think I will go with the if/else for minimal changes, if that's OK with you.

Collaborator

Yes that sounds good to me, also feel free to try any other idea you might have. Those two just came to my mind thinking about it.

Contributor Author

Is counting the number of keys to check whether it is in micro-services mode satisfactory to you? It feels weird, but I am not used to the Helm world.
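Roughly what I have in mind (a sketch only; detecting micro-services mode via the number of component keys is an assumption, the final template may differ):

{{/* assumed check: more than one component key means micro-services mode */}}
{{- if gt (len (.Values.phlare.components | default dict)) 1 }}
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-query-frontend
      port:
        number: 4100
  path: /querier.v1.QuerierService/
  pathType: Prefix
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}-distributor
      port:
        number: 4100
  path: /push.v1.PusherService/
  pathType: Prefix
{{- else }}
- backend:
    service:
      name: {{ include "phlare.fullname" $ }}
      port:
        number: 4100
  path: /
  pathType: Prefix
{{- end }}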

@bluesheeptoken
Contributor Author

Thank you, I think this is super valuable. I had to write an ingress like this myself a couple of times.

I have commented in-line, where I would like you to revisit your changes. Let me know if you have any questions or would like me to take another look.

Hello Simon,
Thank you for reviewing this PR, I appreciate it.

I will make the review changes soon and hopefully, this will be included in the next release!

Quick questions though

  • To install the cluster in monolith mode, it's make deploy, right?
  • Is there a way to easily test Helm charts automatically? (Might be a newbie question, I am completely new to Helm and k8s in general)

@cyriltovena
Collaborator

cyriltovena commented Feb 21, 2023

Thank you, I think this is super valuable. I had to write an ingress like this myself a couple of times.
I have commented in-line, where I would like you to revisit your changes. Let me know if you have any questions or would like me to take another look.

Hello Simon, Thank you for reviewing this PR, I appreciate it.

I will make the review changes soon and hopefully, this will be included in the next release!

Quick questions though

  • To install the cluster in monolith mode, it's make deploy, right?
  • Is there a way to easily test Helm charts automatically? (Might be a newbie question, I am completely new to Helm and k8s in general)

make deploy and make deploy-monitoring should create a cluster with everything. You can then port-forward nginx 80 to your machine and access Phlare from Grafana.

All you need is docker cli and go, it has been tested mostly on macos and linux. No idea what happens if you're on windows.
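(Roughly something like this; the namespace and service name are guesses, adjust to whatever make deploy-monitoring actually creates:)

# forward the monitoring nginx to a local port
kubectl --namespace monitoring port-forward service/nginx 8080:80
# Grafana / Phlare should then be reachable on http://localhost:8080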

@simonswine
Collaborator

Quick addition: in case you are wondering about a way to deploy micro-services, there is make deploy-micro-services. As far as I am aware, the nginx in the monitoring stack is not using Kubernetes Ingress objects.

@bluesheeptoken
Contributor Author

Thank you, I think this is super valuable. I had to write an ingress like this myself a couple of times.
I have commented in-line, where I would like you to revisit your changes. Let me know if you have any questions or would like me to take another look.

Hello Simon, Thank you for reviewing this PR, I appreciate it.
I will make the review changes soon and hopefully, this will be included in the next release!
Quick questions though

  • To install the cluster in monolith mode, it's make deploy, right?
  • Is there a way to easily test Helm charts automatically? (Might be a newbie question, I am completely new to Helm and k8s in general)

make deploy and make deploy-monitoring should create a cluster with everything. You can then port-forward nginx 80 to your machine and access Phlare from Grafana.

All you need is docker cli and go, it has been tested mostly on macos and linux. No idea what happens if you're on windows.

Everything goes fine with make deploy-micro-services, but make deploy fails with an error

Release "phlare-dev" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: kind not set

If that rings a bell (I am on Mac M1)

Also make deploy-monitoring creates the grafana part to check everything, right?

@cyriltovena
Collaborator

cyriltovena commented Feb 21, 2023

Yes it does.

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: kind not set

Is that also the case on main?

@bluesheeptoken
Contributor Author

Yep, on main also. I just rebased to make sure.

A helm install phlare-dev . -f values.yaml seems to work though

@bluesheeptoken
Contributor Author

And here we go for the review. Unfortunately, I still struggle a bit with testing this.

I ran into issues with the make deploy and make deploy-monitoring commands that I was not able to solve without digging more into the project structure.

Using make deploy-micro-services or helm upgrade phlare . -f values.yaml, I was able to check that the paths were correctly resolved. I also checked against our current prod cluster that the ingresses are similar. 👍

But, probably a gap in my knowledge, I was not able to provide a host and make it accessible through Grafana.
With the following ingress

NAMESPACE   NAME                            CLASS    HOSTS               ADDRESS     PORTS   AGE
default     phlare-micro-services-ingress   <none>   phlare.louis.hack   localhost   80      26m

I was not able to hit http://phlare.louis.hack:80 from a Grafana instance on the same kind cluster. (I tried a few things, such as adding a host resolution in /etc/hosts.) 👎
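(i.e. an /etc/hosts entry along the lines of:)

127.0.0.1   phlare.louis.hack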

Feel free to educate me on where I am going wrong testing the ingresses locally, or retest if needed. I am confident this new template should now do the trick 🎉

@bluesheeptoken
Contributor Author

And here we go, I force pushed the CI fixes, the CI should be green now.

Ready for another review whenever you have time @simonswine, thanks for your time!

@cyriltovena
Collaborator

I'll have a look this week too.

@cyriltovena
Collaborator

You'll need to rebase onto master; there was a Helm lint issue that is fixed now. I wanted to push to your branch but can't.

name: {{ include "phlare.fullname" $ }}
{{- end }}
port:
  number: 4100
Collaborator

Should this be the service port configuration instead of a hard-coded 4100, i.e. {{ .Values.service.port }}?
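Something along these lines (a sketch only; whether the key is .Values.service.port depends on the chart's values layout):

port:
  number: {{ .Values.service.port | default 4100 }}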

Collaborator

@cyriltovena cyriltovena Mar 2, 2023

There's no way to change the default port though, so this is really just a nit, but I think we should probably fix that later.

Collaborator

@cyriltovena cyriltovena left a comment

LGTM

I was able to test this with kind.

I had to use a kind config like this

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: phlare-dev
nodes:
  - role: worker
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP

Then install the ingress controller using

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
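(It can help to wait for the controller to become ready before testing, e.g. with the wait command from the kind ingress guide:)

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s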

And add a host in the values file:

ingress:
  enabled: true
  hosts:
    - localhost

I was able to push and query using localhost/

@bluesheeptoken
Contributor Author

LGTM

I was able to test this with kind.

I had to use a kind config like this

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: phlare-dev
nodes:
  - role: worker
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP

Then install the ingress controller using

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

And add a host in the values file:

ingress:
  enabled: true
  hosts:
    - localhost

I was able to push and query using localhost/

Thanks for the test workflow, I will try this for my personal knowledge

Personally I had to remap the ports, because some of the ports on my PC seem to be reserved.
And that's true, I had not tried with a simple localhost. I probably took an overcomplicated approach for this!
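(i.e. changing the extraPortMappings in the kind config above to free host ports, for example:)

extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP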

I was wondering: it costs hardly anything to add the "ingress-ready=true" node label, and it makes it easy to install the nginx controller straight from GitHub instead of downloading the file and removing the condition. Maybe that's something we want to keep doing? 🤷

@bluesheeptoken
Contributor Author

Thanks for the review @cyriltovena,
we should be alright, I fixed up everything, including the helm-docs that I forgot 🙈

@cyriltovena cyriltovena enabled auto-merge (squash) March 2, 2023 10:34
auto-merge was automatically disabled March 2, 2023 10:39

Head branch was pushed to by a user without write access

@cyriltovena cyriltovena enabled auto-merge (squash) March 2, 2023 10:44
@cyriltovena cyriltovena merged commit dd79086 into grafana:main Mar 2, 2023
simonswine pushed a commit to simonswine/pyroscope that referenced this pull request Jun 30, 2023
* [helm] Add ingress

* review

* helm docs

* code review

* helm docs last fixup :fingerscrossed:

---------

Co-authored-by: Louis Fruleux <louis.fruleux@teads.tv>