Kubernetes SD: Allow restricting namespaces #2280
Let's try to avoid additional types of relabelling; they cause enough confusion as-is. A simple regex is the most we should need.

Agreed.

Hm, if I cannot get a list of all namespaces due to access control, how can I apply a regex to it to filter namespaces?
brancz
commented
Dec 14, 2016

@fabxc Yes, that was my thought with

> it requires the ability to watch/list namespaces in the first place, I'm not sure if this is ok.
matthiasr
commented
Dec 14, 2016
Listing namespaces is a different level of access than looking inside them.
I'd like to have a way to select namespaces by label – since namespaces are
not nested, the only way to group by multiple dimensions is to construct a
namespace name _somehow_ and use labels for the real breakdown. Selecting
with a regex is ambiguous in this case.
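As a sketch of the labeling approach described above (the namespace name and labels are made up for illustration): instead of encoding team and environment into the namespace name, a namespace can carry them as labels, which a discovery mechanism could then select on unambiguously:

```yaml
# Hypothetical example: two grouping dimensions expressed as labels
# rather than packed into the namespace name.
apiVersion: v1
kind: Namespace
metadata:
  name: payments-prod
  labels:
    team: payments
    environment: production
```

A label selector such as `team=payments,environment=production` matches exactly this combination, whereas a regex over the name would have to guess at the naming scheme.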
fabxc
commented
Dec 14, 2016
I agree that label selection is the way to do it. Any SD-specific mechanism we add should also stick with the idiomatic methods of that particular SD, and Kubernetes' design is fairly explicitly against encoding grouping information into the plain name.

We probably need both label selection and plain names, for cases where Prometheus doesn't have permission to list all namespaces or to list resources without a namespace specification.

Have there been any scalability issues yet? The former could be solved via relabeling too, but the collection under the hood would be quite exotic, and I certainly wouldn't treat it as a target label anymore with that level of indirection.
nsams
commented
Dec 14, 2016
see #1951
@fabxc What I mainly meant in terms of the "scalability" issue is that if we have the informers set up to list/watch all namespaces, then we are filling and updating our cache far more than necessary, when we could restrict it to the objects we are actually interested in.

Agreed; however, then it would probably make sense to adapt here as well.
Yes, I totally agree that there's overhead and it will most likely become a scalability issue eventually. I was just wondering whether it had come up yet – probably not, given that Kubernetes clusters are still fairly small on average. How do you mean "adapt"? Just in terms of generating the appropriate config if possible, or changing the selection field in the operator in some way?
widgetpl
commented
Dec 22, 2016
I think I have hit a problem which needs restricting namespaces or something similar. My problem is described here.
@widgetpl This is about a different issue. I commented on the Stack Overflow thread to keep this on-topic.
widgetpl
commented
Dec 23, 2016
@brancz I won't fully agree that this is a different issue; to me it sounds only a little bit different. One of the stated reasons for this issue, I think, is the same problem I have (a large cluster and a big amount of events/metrics), unless you are talking only about the Kubernetes cluster itself without annotated services, pods, etc., but I think those are included. In my opinion it depends on how you look at it.

I suppose that Prometheus is watching and listing changes inside the cluster through the API to know where it can find metrics, and as the cluster grows it has more and more Kubernetes objects to check. Allowing Prometheus to watch and list only specific namespaces, or even specific services, deployments, and daemonsets inside those namespaces, would reduce those events while still scraping the relevant metrics. Then we would be able to do functional sharding inside Kubernetes.

Unfortunately I have not found out how I can do functional sharding inside a Kubernetes cluster when everything is a Kubernetes object, and my question was not about what I should do but how I can do it inside Kubernetes. One solution I mentioned is static_config, but it is not the best solution, as my pod can float between Kubernetes nodes. Another solution would be to set the Prometheus target in the static config to the external IP of the service, but then Prometheus will not use the Kubernetes API for service discovery. Please correct me if I am wrong.
@widgetpl This is a different issue; please don't pollute the bug tracker.
brancz
referenced this issue
Feb 7, 2017
Closed
tectonic: Prometheus doesn't create RBAC rules for service account #134
bakins
referenced this issue
Apr 19, 2017
Merged
Allow limiting Kubernetes service discover to certain namespaces #2642
carlpett
commented
Apr 24, 2017
We're in need of being able to run Prometheus without it having access to all namespaces as well. I'm willing to put together a PR, but is there a decision on what approach to take? Our use case would be satisfied with just having a list of namespaces (that is, no need for automagically inspecting what namespaces are available to the service account). Is that something everyone is satisfied with? If no config is given by the user, the current behavior would be kept.

From a quick glance at the code, I'm guessing the changes would be around here? Anything in particular that would need to be done beyond just looping over those?
Yes, that's where I would start as well. I'm thinking that we actually want both; in fact, I'm imagining something like namespace discovery, similar to target discovery: you could either specify a static list of namespaces if they are known at configuration time, or specify a label selector for listing namespaces (which would require RBAC roles to access the namespace list endpoint).

Although labeling namespaces seems to be getting more widely adopted, it may also be desirable to filter based on the namespace name, although I would hold off on that until it is a wanted feature. Wdyt @fabxc @brian-brazil @matthiasr ?
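A hypothetical sketch of what the label-selector flavor could look like in a `kubernetes_sd_config` (the `namespaces.label_selector` key is purely illustrative, not an agreed-upon or existing syntax):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
        # Hypothetical: discover target namespaces by label rather than
        # naming them statically. This would require RBAC permission to
        # list/watch namespaces at the cluster level.
        namespaces:
          label_selector: "team=payments"
```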
Do we still need this given RBAC? I'd presume it'll just return the namespaces you are allowed to see.
If you have the role to list namespaces, I would expect to get all namespace objects.
Can we check what the k8s plans are there? In particular, what happens when you don't have that role?
Currently, you can either list all namespaces or none. This is the same with all resources in Kubernetes – the list action applies to all of that resource type. For namespaced resources (pods, services, etc.), a role can be granted access on a per-namespace basis; however, namespaces themselves are cluster-level. Unsure of future plans in k8s, though.
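To illustrate the distinction, a minimal sketch of the two RBAC manifests involved (names are illustrative): a namespaced Role can grant Prometheus list/watch on pods within a single namespace, but listing namespaces themselves requires a cluster-scoped ClusterRole:

```yaml
# Namespaced: list/watch pods only within "monitoring".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-pod-reader
  namespace: monitoring
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "watch"]
---
# Cluster-scoped: there is no per-namespace variant of this permission.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-reader
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["list", "watch"]
```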
Implemented in #2642.
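For reference, the syntax introduced by #2642 (and present in current Prometheus releases) restricts each `kubernetes_sd_config` to a static list of namespaces; if the list is omitted, all namespaces are used:

```yaml
scrape_configs:
  - job_name: kubernetes-services
    kubernetes_sd_configs:
      - role: service
        # Only discover services in these namespaces.
        namespaces:
          names:
            - monitoring
            - production
```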
grobie
closed this
May 25, 2017
johscheuer
referenced this issue
Jan 16, 2018
Closed
Kubernetes namespace filtering with regex #3692
lock[bot]
commented
Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
brancz
commented
Dec 14, 2016
Currently the Kubernetes SD lists and watches all changes for each role. This is problematic for two reasons:
I see two possibilities (this part is up for discussion):
The second solution seems like the desirable one, but besides adding complexity it requires the ability to watch/list namespaces in the first place; I'm not sure if this is ok. Possibly this is not an either/or decision, and both are required to cover everyone's needs.
This will make #2191 a bit more complicated for the Kubernetes SD, but still doable.
@fabxc @beorn7 @juliusv @brian-brazil @matthiasr @alexsomesan