Set the server_addr for the dashboard and prometheus modules #2408
Conversation
Looks good.
To capture the thought I mentioned in today's meeting: I wish the operator could do more of the config work so we could get rid of the need for calling the rook binary for init. This PR is good as-is, but it takes a fair bit of additional code to add this support into the binary init path, and it would be nice to not have to keep doing this in the future.
@BlaineEXE @galexrt Per our discussion: leaving this PR as-is I believe would work with both IPv4 and IPv6, since we would be binding to the correct pod IP either way. I do prefer the approach of setting 0.0.0.0 centrally in the operator, although that would require a new setting in operator.yaml to indicate whether we're running IPv4 or IPv6. Any other suggestions?
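To make the "bind to the correct pod IP either way" point concrete: if the operator were instead to set a wildcard bind address centrally, the wildcard itself differs by IP family (0.0.0.0 for IPv4, :: for IPv6), which is why a family indicator would be needed. A minimal Go sketch of detecting the family from the pod IP; `bindAllAddr` is a hypothetical helper, not code from this PR:

```go
package main

import (
	"fmt"
	"net"
)

// bindAllAddr returns the wildcard bind address matching the IP family of
// the given pod IP: "0.0.0.0" for IPv4 and "::" for IPv6. This only
// sketches the detection the discussion alludes to; the PR as written
// binds to the pod IP directly instead.
func bindAllAddr(podIP string) (string, error) {
	ip := net.ParseIP(podIP)
	if ip == nil {
		return "", fmt.Errorf("invalid pod IP %q", podIP)
	}
	if ip.To4() != nil {
		return "0.0.0.0", nil // IPv4 wildcard
	}
	return "::", nil // IPv6 wildcard
}

func main() {
	for _, ip := range []string{"10.0.0.5", "fd00::5"} {
		addr, err := bindAllAddr(ip)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s\n", ip, addr)
	}
}
```

Detecting the family from the pod IP like this would avoid a user-facing setting in operator.yaml, at the cost of assuming the pod IP is representative of the network the mgr binds to.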
pkg/daemon/ceph/mgr/init.go (Outdated)
	clusterName := "ceph"
	context.ConfigDir = "/etc"
	settingPath := fmt.Sprintf("mgr/prometheus/server_addr")
Why use fmt.Sprintf() here?
I'll clean this up.
pkg/daemon/ceph/mgr/init.go (Outdated)
		return fmt.Errorf("setting prometheus server_addr failed. %+v", err)
	}

	settingPath = fmt.Sprintf("mgr/dashboard/server_addr")
Same here
@@ -102,7 +102,8 @@ func (c *Cluster) toggleDashboardModule() error {
 }

 func (c *Cluster) configureDashboardModule() error {
-	hasChanged, err := client.MgrSetConfig(c.context, c.Namespace, c.cephVersion.Name, "mgr/dashboard/url_prefix", c.dashboard.UrlPrefix)
+	allMgrs := ""
+	hasChanged, err := client.MgrSetConfig(c.context, c.Namespace, allMgrs, c.cephVersion.Name, "mgr/dashboard/url_prefix", c.dashboard.UrlPrefix)
allMgrs is empty. It is used in the MgrSetConfig func to build the mgr ID. Is that right?
If it applies to all mgrs, the empty string is needed. I'll rework this to make it more readable.
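The "empty string means all mgrs" convention can be made more readable with a named constant, which is presumably the kind of rework meant here. A hypothetical sketch; `configKey` and its key layout are illustrative only, not Rook's actual MgrSetConfig implementation:

```go
package main

import "fmt"

// allMgrsID names the empty mgr daemon ID; per the review discussion, an
// empty ID means the setting applies to all mgr daemons.
const allMgrsID = ""

// configKey sketches how a config key might be built from a module name,
// a mgr daemon ID, and a setting name. The exact layout is hypothetical.
func configKey(module, mgrID, setting string) string {
	if mgrID == allMgrsID {
		// No daemon ID: the setting applies to every mgr daemon.
		return fmt.Sprintf("mgr/%s/%s", module, setting)
	}
	// A daemon ID scopes the setting to that single mgr.
	return fmt.Sprintf("mgr/%s/%s/%s", module, mgrID, setting)
}

func main() {
	fmt.Println(configKey("dashboard", allMgrsID, "server_addr"))
	fmt.Println(configKey("dashboard", "a", "server_addr"))
}
```

Passing `allMgrsID` at the call site documents the intent that an anonymous `""` argument hides.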
@@ -98,6 +98,7 @@ func (c *Cluster) makeConfigInitContainer(mgrConfig *mgrConfig) v1.Container {
 		}}},
 		k8sutil.PodIPEnvVar(k8sutil.PrivateIPEnvVar),
 		k8sutil.PodIPEnvVar(k8sutil.PublicIPEnvVar),
+		k8sutil.PodIPEnvVar("ROOK_MGR_MODULE_SERVER_ADDR"),
We already have two other env vars that contain the pod IP (see lines 99-100); do we really need a new env var here?
While it's not strictly necessary, I wanted to be clear about the purpose of this setting. If the public and private IP configuration changes in the future, we should reevaluate this setting as well, and a dedicated env var makes that more obvious.
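For context, a pod-IP env var of this kind is typically wired up through the Kubernetes downward API (`fieldRef: status.podIP`). A self-contained sketch using minimal stand-in types; the real code uses k8s.io/api/core/v1 and Rook's k8sutil.PodIPEnvVar helper, whose exact implementation is assumed here:

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes core/v1 types so this sketch is
// self-contained; the real code uses k8s.io/api/core/v1.
type ObjectFieldSelector struct{ FieldPath string }
type EnvVarSource struct{ FieldRef *ObjectFieldSelector }
type EnvVar struct {
	Name      string
	ValueFrom *EnvVarSource
}

// podIPEnvVar exposes the pod's IP to the container via the downward
// API, which is presumably what k8sutil.PodIPEnvVar does.
func podIPEnvVar(name string) EnvVar {
	return EnvVar{
		Name: name,
		ValueFrom: &EnvVarSource{
			FieldRef: &ObjectFieldSelector{FieldPath: "status.podIP"},
		},
	}
}

func main() {
	e := podIPEnvVar("ROOK_MGR_MODULE_SERVER_ADDR")
	fmt.Println(e.Name, "->", e.ValueFrom.FieldRef.FieldPath)
}
```

All three env vars on lines 99-101 of the diff would resolve to the same pod IP; the distinct name only signals intent, as the reply above explains.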
Signed-off-by: travisn <tnielsen@redhat.com>
136811a to 261aae3
I think I would lean toward the solution that supports both IPv4 and IPv6 without special support for either. There could be many ways to do that, though. I'm not sure what might happen if the k8s network is IPv4 and the host network is on IPv6 (or vice versa).

If we know that the mgr binding will always be on the cluster network, we could go the route of setting a config parameter with the 4/6 type of IP for the overlay network, but it would seem better to me to detect that automatically and not make the user configure it at all. Or, if we know for certain that the mgr will use the host network, we could detect that type.

A lot of my concerns are based around the fact that many Ceph clusters will likely have 3 networks (mgmt, client, and cluster data). And to be fair, I haven't seen anywhere that Rook has plans for how to achieve that kind of support, and my level of ignorance about this subject is high.

My bottom-line opinion at present is that if Rook is supposed to support IPv4 and IPv6 now, we keep the initial patch. But if Rook currently just supports IPv4, setting 0.0.0.0 is easier, and we can cross the IPv6 hurdle later when we are ready to tackle it.
I second this sentiment. The same applies to EdgeFS, even more so, as it is trying to optimize backend networking as much as possible, leveraging UDP/IP and PF-RING. The backend network needs to be isolated from client and mgmt for many reasons, not all of them performance-related.

I was thinking that we should start working on a multi-network interface CRD design, perhaps incorporating:
https://github.com/intel/multus-cni
I think that bears more looking into, and it seems to meet a lot of my current thoughts on what is needed for host networking. It also seems similar in scope to CNI Genie, which I have been strongly advised to avoid, so I do have some hesitation.

With Ceph at least, we will also be fighting any sort of layering between the pods and the network. @sebastian-philipp has quoted a figure, which I now forget, that Ceph performance degrades substantially just by enabling iptables with non-blocking rules on bare metal hosts; I believe it was around a 60% performance degradation.

I think for all Rook storage backends, host networking's main feature will be increased network throughput and reduced latency, so we may want to make some decisions around what Rook wants to support. Do we risk degrading performance by going through heavy CNI layers in exchange for increased security, or do we leave the networks as unimpeded as possible and instead give strong recommendations about isolating the different networks for security purposes? And do we want to give the user the option of either?
hostNetwork: true will clearly boost performance when a separate backend interface is configured. We, however, need to try to avoid the use of hostNetwork, as it breaks cross-pod isolation. I'm surprised it is only a 60% drop in performance...

Isolation on the same physical interface will be a challenge and will likely affect performance; that holds true when CNI is on the data path.

With multi-interface pod networking, where the Ceph or EdgeFS backend network runs on a dedicated interface, performance will actually improve!

Genie is another interesting alternative, but given that Intel is backing Multus, that would be my first choice.
Great discussion around networking. I would propose that this PR go ahead and merge, since it is compatible with either IPv4 or IPv6.
+1
Signed-off-by: travisn tnielsen@redhat.com
Description of your changes:
The server_addr needs to be set on the mgr so the prometheus and dashboard mgr modules can bind to the pod IP when starting their endpoints. The mgr key is not sufficient to set this setting, so the admin key is used in the config-init container, and the admin key is then deleted before the init container completes.

Which issue is resolved by this Pull Request:
Resolves #2335

Checklist:
Code generation (make codegen) has been run to update object specifications, if necessary.