Allow distinguishing between private and public network interfaces #591
Comments
We rely on this functionality of RKE1 as well. Please clarify this in the documentation, and preferably make it configurable via environment variables. That would be awesome, thank you!
Same problem here. This would be useful for the multiple-interface case.
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.
This is still relevant for us.
Does adding a CCM that sets the
RKE2 will auto-detect its IP address via the interface with the default route. You can override this with `--node-ip`. If you want to have an external address for your node, you must specify `--node-external-ip`; RKE2 will not auto-detect one.
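For illustration, the same settings can live in RKE2's config file instead of on the command line; a minimal sketch, assuming `/etc/rancher/rke2/config.yaml` as the config location and placeholder addresses:

```yaml
# /etc/rancher/rke2/config.yaml (keys are the CLI flag names without dashes)
node-ip: 10.0.0.5              # private interface, used for inter-node traffic
node-external-ip: 203.0.113.7  # public address; RKE2 will not auto-detect this
```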
It seems this can be solved by setting
Some cloud providers (and on-prem deployments) provide two network interfaces: one on a public network (or at least with a default route), and one "private" interface, which often provides a lower-latency, more direct connection to other nodes, so inter-node communication (both pod network traffic and etcd traffic) should go over it.
RKE1's `cluster.yml` allowed specifying `internal_address` and `address` for each node, and configuring `internal_address` to the IP address on the "internal interface" caused internal traffic to go via that interface. Given this `cluster.yml` was generated on some admin host, it could be done after all machines were up and IP addresses were known.

With RKE2, setup is different. There's a `config.yml` on each node. `rke2 server` has `--bind-address`, `--advertise-address` and `--node-ip` options (and it's currently not documented which of these need to be set for the above scenario). I planned to initialize all nodes with a (semi-static) cloud-init file, and without setting these options I would end up with just three different, static cloud-init files ("first" server, other servers, agent).
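To make the "three different files" point concrete, a hypothetical server-node config might pin all three options to the node's private address (10.0.0.10 is a placeholder, and which subset of these options is actually required is exactly the undocumented part):

```yaml
# config.yml for one server node: every value here is node-specific,
# which is what would force a distinct cloud-init file per node
bind-address: 10.0.0.10       # listen on the private interface
advertise-address: 10.0.0.10  # address advertised to other cluster members
node-ip: 10.0.0.10            # IP used for inter-node (pod/etcd) traffic
```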
If I need to explicitly specify these addresses, I either need to provide individual cloud-init files per node (which might be challenging: at some cloud providers, I might only know the internal IP address after the machine has booted up, but the cloud-init file already needs to be passed by then), or replace the static `config.yml` with some script splicing together a base YAML and adding the exact IP addresses determined at runtime.

It'd really help if I could pass in a CIDR instead of a static IP address for private networking, and RKE2 would deduce the exact IPs from there.
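Until something like that exists, the runtime-splicing workaround might look like the following sketch, assuming a Linux host with iproute2 (for `ip -j`) and the default RKE2 config path; the script name and behavior are illustrative, not an existing RKE2 feature:

```python
#!/usr/bin/env python3
"""Boot-time sketch of the workaround described above: pick the local IPv4
address that falls inside a given CIDR and splice it into the RKE2 config
as node-ip. Hypothetical helper, not an RKE2 feature."""
import ipaddress
import json
import subprocess
import sys

CONFIG_PATH = "/etc/rancher/rke2/config.yaml"  # assumed default location


def address_in_cidr(cidr: str) -> str:
    """Return the first local IPv4 address inside `cidr`."""
    network = ipaddress.ip_network(cidr, strict=False)
    # `ip -j addr` prints interface/address info as JSON (iproute2 >= 4.x).
    links = json.loads(subprocess.check_output(["ip", "-j", "addr"], text=True))
    for link in links:
        for addr in link.get("addr_info", []):
            if addr.get("family") == "inet" and \
                    ipaddress.ip_address(addr["local"]) in network:
                return addr["local"]
    raise SystemExit(f"no local address inside {cidr}")


if __name__ == "__main__":
    node_ip = address_in_cidr(sys.argv[1])  # e.g. "10.0.0.0/24"
    # Append to the static base config instead of templating the whole file.
    with open(CONFIG_PATH, "a") as f:
        f.write(f"node-ip: {node_ip}\n")
    print(f"node-ip set to {node_ip}")
```

Run at boot (for example from cloud-init's runcmd) with the private CIDR as its argument, the same static script would work on every node, since only the CIDR is shared across them.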