
In-cluster didn't format correct kube_host_port for IPv6 cluster #874

Closed
somnusfish opened this issue Apr 12, 2022 · 5 comments · Fixed by #876
Labels
bug (Something isn't working), config (Kube config related)

Comments

@somnusfish
Contributor

Current and expected behavior

Issue:

Get InvalidAuthority on IPv6 cluster using kube::client::Client::try_default()

Sample error:

Controller: "Error: ClientCreate { source: InferConfig(InferConfigError { in_cluster: ParseClusterUrl(InvalidUri(InvalidAuthority)), kubeconfig: ReadConfig(Os { code: 2, kind: NotFound, message: "No such file or directory" }, "/.kube/config") }) }"

Cause:

kube_host_port() in incluster_config.rs did not handle IPv6 addresses correctly: an IPv6 host must be wrapped in square brackets to form a valid URI authority.

fn kube_host_port() -> Option<String> {
    let host = kube_host()?;
    let port = kube_port()?;
    // Bug: an IPv6 `host` is interpolated without brackets, producing an
    // invalid URI authority such as "https://fd49:683:e486::1:443".
    Some(format!("https://{}:{}", host, port))
}

Expected address: "https://[fd49:683:e486::1]:443"
Actual address: "https://fd49:683:e486::1:443"
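A small standard-library illustration (not from the issue itself) of why the unbracketed form is invalid: the whole string is itself parseable as an IPv6 address, so the trailing port cannot be told apart from an address group.

```rust
use std::net::{Ipv6Addr, SocketAddr};

fn main() {
    // The bracketed form parses unambiguously as "IPv6 address + port".
    assert!("[fd49:683:e486::1]:443".parse::<SocketAddr>().is_ok());

    // The unbracketed form is rejected as a socket address...
    assert!("fd49:683:e486::1:443".parse::<SocketAddr>().is_err());

    // ...because the whole string is itself a valid IPv6 address, so the
    // trailing ":443" cannot be distinguished from an address group.
    assert!("fd49:683:e486::1:443".parse::<Ipv6Addr>().is_ok());

    println!("ok");
}
```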

Possible solution

Option 1: Add extra logic in kube_host_port() to bracket IPv6 hosts.
Option 2: Use KUBERNETES_PORT with an adjustment of the scheme.
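A minimal sketch of Option 1 (the helper name and structure here are illustrative, not the actual patch that landed in #876): bracket the host whenever it is a bare IPv6 literal.

```rust
/// Hypothetical helper sketching Option 1: wrap bare IPv6 literals in
/// '[...]' so the resulting URI authority is valid (RFC 3986).
fn format_kube_url(host: &str, port: &str) -> String {
    // An IPv6 literal contains ':' and must be bracketed inside a URI
    // authority; IPv4 addresses and hostnames pass through unchanged.
    if host.contains(':') && !host.starts_with('[') {
        format!("https://[{}]:{}", host, port)
    } else {
        format!("https://{}:{}", host, port)
    }
}

fn main() {
    assert_eq!(
        format_kube_url("fd49:683:e486::1", "443"),
        "https://[fd49:683:e486::1]:443"
    );
    assert_eq!(
        format_kube_url("10.100.0.1", "443"),
        "https://10.100.0.1:443"
    );
    println!("ok");
}
```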

Additional context

No response

Environment

EKS

$ kubectl version --short
Client Version: v1.18.9-eks-d1db3c
Server Version: v1.21.5-eks-bc4871b

Configuration and features

No response

Affected crates

No response

Would you like to work on fixing this bug?

yes

@somnusfish
Contributor Author

Hi developer team,

We want to do a release to patch this issue this week, since our customer complained that the service didn't work on the IPv6 cluster. Is there any suggested workaround before the actual patch goes live?

We were just trying to use the default way to build the client, sample code:

let k8s_client = kube::client::Client::try_default()
    .await
    .context(error::ClientCreate)?;

@clux clux added the config Kube config related label Apr 12, 2022
@clux
Member

clux commented Apr 12, 2022

Hey there. Thanks for the report and quick fix for this. We were not aware.

A possible workaround is to make the client use the exported kube_dns function to talk to the apiserver rather than an explicit IP. This is going to become the default at some point anyway, as it is the new recommended way.
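As a rough sketch of what the workaround amounts to (the helper below is hypothetical; the real exported function is kube_dns in kube's in-cluster config module), the DNS form avoids raw IP literals entirely:

```rust
// Hypothetical stand-in for the exported kube_dns helper: the name
// "kubernetes.default.svc" resolves through cluster DNS on both IPv4 and
// IPv6 clusters, so no bracketing logic is ever needed.
fn kube_dns_url() -> String {
    String::from("https://kubernetes.default.svc/")
}

fn main() {
    let url = kube_dns_url();
    // No raw IP literal, so the URI authority is valid on any IP family.
    assert!(!url.contains('['));
    println!("{}", url);
}
```

In kube itself this URL would become the Config's cluster_url before building the Client; the exact construction API depends on the kube version.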

That said, with a good bugfix in hand, this seems like a good time to make a release, so I will try to start one later today.

@clux
Member

clux commented Apr 12, 2022

Ok 0.71.0 is released with the fix and more. We can either close this issue as a result of the pr, or we can consider changing our in-cluster default for cluster_url defined at https://github.com/kube-rs/kube-rs/blob/91c054c277a520aad617bd8752097336632a7270/kube-client/src/config/mod.rs#L195-L201

No one has said anything about rustls being the default since it was instated half a year ago in #587, and while rustls usage is lower, I would imagine it is a pretty safe thing to change at this point.

@somnusfish
Contributor Author

Thanks for the fast reaction to get 0.71.0 out. Really appreciate your work.

There still seems to be an issue with IPv6 support when using kube_server(); here is the sample error:

{"v":0,"name":"brupop-controller","msg":"failed with error error trying to connect: error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1882:: hostname mismatch","level":50,"hostname":"brupop-controller-deployment-5fdff957c8-fzm6t","pid":1,"time":"2022-04-13T02:49:14.527250505+00:00","target":"kube_client::client::builder","line":164,"file":"/src/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-client-0.71.0/src/client/builder.rs"}

I think this might be caused by using the IP address to build the SSL connection, which ends up with a hostname mismatch. But I am not familiar with SSL, so I am not sure how to fix it. Error line here.

I was able to enable the rustls-tls feature and use incluster_config::kube_dns() to build my client. I fully support switching the default to kube_dns().
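For reference, the rustls switch described above is just a Cargo feature selection; a hedged sketch, with version and feature names assumed from kube 0.71:

```toml
[dependencies.kube]
version = "0.71.0"
default-features = false
features = ["client", "rustls-tls"]
```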

clux added a commit that referenced this issue Apr 13, 2022
This is the recommended, and only documented method on https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/
The legacy method has issues with ipv6 and it's time to retire it.

We trialled the new method for 6 months via #587 without any reports.

Closes #874

Signed-off-by: clux <sszynrae@gmail.com>
@clux clux linked a pull request Apr 13, 2022 that will close this issue
@clux clux closed this as completed in #876 Apr 13, 2022
clux added a commit that referenced this issue Apr 13, 2022
* Switch to kubernetes dns for incluster url everywhere

This is the recommended, and only documented method on https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/
The legacy method has issues with ipv6 and it's time to retire it.

We trialled the new method for 6 months via #587 without any reports.

Closes #874

Signed-off-by: clux <sszynrae@gmail.com>

* remove code for legacy methods, was never actually made public

Signed-off-by: clux <sszynrae@gmail.com>

* simplify kube_dns fn with less unwraps

Signed-off-by: clux <sszynrae@gmail.com>
@clux
Member

clux commented Apr 13, 2022

The fix has been merged to master. It has been tested on both TLS stacks, so you can pin kube to a git sha to try it before a new version is out.

[dependencies.kube]
features = ["runtime", "client", "derive"]
git = "https://github.com/kube-rs/kube-rs.git"
rev = "dd0b2585729dab5c140ab96dc35c00484cc992bc"
