querier: per-endpoint configuration #4389
Conversation
Feedback from our 1:1, great job!
Let's make sure to test it too. Is the plan NOT to do dynamic reloading in this PR? (That's a great idea, BTW.)
Nice work!
As mentioned in our 1:1, I wonder if we can help @hitanshu-mehta (and help ourselves) by doing this work on #4421 and reviewing his PR 🤔
I am stuck with testing mTLS in the querier.
Edit: I also tried creating certificates with rsa.certificate, but no luck.

Finally I was able to test mutual TLS in the querier. Earlier I was configuring TLS only in the querier, for both client and server. That was wrong: in our case the sidecar is the server, and after configuring the server-side TLS in the sidecar it worked 😄. This PR needs another round of review. PS: The failing tests seem unrelated, as all tests pass locally.
Nice! Looks good, just some nits.
Looking great! Only a few wrinkles to iron out!
Force-pushed from 493fa83 to 7cf6ed0
Thanks! It looks good to me, except for one part. I added a suggestion for what we can do to ensure a single storeset. Otherwise LGTM! 💪🏽
@@ -422,94 +472,89 @@ func runQuery(
	dialOpts,
Am I wrong in saying that if we somehow passed the dial options per address through the spec, we could have just one storeset? Do you think it would be worth doing?
Thanks for the review.

> if we somehow pass through spec the dial options per addresses we could have just one storeset right?

But the dial options are different for each endpoint config (they depend on `config.tlsConfig`). So even if we somehow passed them through the spec, we would not have one storeset, because the storeset would still be defined inside the loop in order to cover every `tlsConfig`.
To have only one storeset I thought about moving the storeset initialization outside the loop, but we can't, because it uses a `dnsProvider` that needs to be different for each endpoint config, otherwise it errors.
So we need the storeset inside the loop (which means more than one storeset) to have a different `dnsProvider` and `tlsConfig` for each endpoint set.

> Do you think it would be worth doing?

As discussed with @kakkoyun (in our 1:1), we expect only a few configs (like 4 or 5) from the user, so having that many storesets would not affect performance much. :)
> But dialopts are different for each endpoint config (as it depends on config.tlsConfig). So even if we somehow pass it through spec, we would not have one storeset as storeset would be defined inside the loop to have all tlsConfig.

Yeah, I think we could do that per address.

> As discussed with @kakkoyun (in our 1:1) we expect only few configs (like 4 or 5) from the user so having these much of storesets would not affect the performance much. :)

I know companies running Thanos against 300 StoreAPIs, so I would be careful here. But I am not too worried about performance here; rather about maintainability and debuggability when you potentially have different metrics, and different health loops, per 100 of those storesets. Not a biggie; maybe I can try to propose a PR against yours if we want to compare different solutions? (:
> I can try propose a PR to yours if we want to compare different solutions

That would be really helpful 🤗
	grpcProbe,
	prober.NewInstrumentation(comp, logger, extprom.WrapRegistererWithPrefix("thanos_", reg)),
	var (
	// Adding a separate for loop for each client func() below because storeSets is populated in a goroutine and this code executes before it.
Yeah, this is not the best. I mentioned an idea above for how we can ensure a single storeset.
Force-pushed from 13ee05b to e0a0667
Force-pushed from e0a0667 to ee33fa5
Rebased successfully onto main 😵‍💫 Further tasks:
Force-pushed from ee33fa5 to e705d87
Signed-off-by: Namanl2001 <namanlakhwani@gmail.com>
Force-pushed from bdf3391 to f19a67b
This PR works fine (see approval), but there is a suggestion to ensure a single storeset; that work will be carried out in another PR. This was my Google Summer of Code 2021 project. ❤️ Closing.
The Thanos Querier component supports basic mTLS configuration for internal gRPC communication. This works well for simple use cases, but it still requires extra forward proxies to work for bigger deployments.
This PR adds support for per-endpoint TLS configuration in the Thanos Query component for internal gRPC communication. It introduces a new CLI option, `--endpoints.config`, which accepts the content of, or a path to, a YAML file containing a list of endpoint configurations.

Changes
Based on proposal #4377:
- `--endpoint.config` CLI option
- `pkg/store/config.go` to load config from CLI options (in list form)
- `query.go` to iterate over the list items

Verification
Added e2e tests for `--endpoints.config` with mutual TLS.
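For illustration, the YAML list the option accepts might look like the sketch below. The field names mirror Thanos's existing client TLS flags (CA, cert, key, server name) but are assumptions here, not the final merged schema.

```yaml
# Hypothetical endpoints config: one entry per group of endpoints that
# share TLS settings. Field names are illustrative.
- endpoints:
    - sidecar-1:10901
    - sidecar-2:10901
  tls_config:
    ca_file: /certs/ca.crt
    cert_file: /certs/client.crt
    key_file: /certs/client.key
    server_name: sidecar.example.com
- endpoints:
    - store-gateway:10901
  # no tls_config: plain gRPC for this group
```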