Allow the connector to run outside of a kube pod #2613
Conversation
I'm not sure how I feel about this. Running the connector outside of Kubernetes is a great change but keeping support for Kubernetes while in this mode seems unnecessary. Perhaps what is needed is to decouple the connector from the Kubernetes connector so it can be run separately with possibly other integrations
The goal here isn't to support different kinds of connectors, but to allow the kubernetes connector to run in different environments. For "getting started" and while developing/contributing, being able to run the connector locally outside of a kube pod is a big improvement. It removes all the complications of helm, and makes it much easier to iterate and try things out. Even in production environments, having the authentication proxy run inside the environment for which it is authenticating requests could cause problems, especially when trying to recover that environment.
I think that would also be a good change, but there's not much value in doing that until we are ready to add a different kind of destination.
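The decoupling idea discussed above could eventually take the shape of a small destination interface that the Kubernetes connector implements. The names below are hypothetical, not from this PR, since the PR deliberately does not make that change yet:

```go
package main

import "fmt"

// destination is a hypothetical interface sketching how the connector
// could be decoupled from Kubernetes: each integration would implement
// it, and the sync loop would depend only on this interface.
type destination interface {
	// Endpoint returns the host and port the server should route to.
	Endpoint() (host string, port int, err error)
	// Name returns a stable name for the destination.
	Name() (string, error)
}

// kubeDestination is a stand-in for the existing Kubernetes connector.
type kubeDestination struct {
	host string
	port int
}

func (k kubeDestination) Endpoint() (string, int, error) { return k.host, k.port, nil }
func (k kubeDestination) Name() (string, error)          { return "kubernetes." + k.host, nil }

// syncOnce only sees the interface, so other integrations could be
// added without touching the sync logic.
func syncOnce(d destination) (string, error) {
	host, port, err := d.Endpoint()
	if err != nil {
		return "", err
	}
	name, err := d.Name()
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s -> %s:%d", name, host, port), nil
}

func main() {
	out, _ := syncOnce(kubeDestination{host: "10.0.0.1", port: 443})
	fmt.Println(out)
}
```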
Marking this as ready for review. I've cleaned up the changes, and tested it manually.
I cut out some of the changes required to run locally and I'll re-introduce those in a follow up PR.
```diff
 	}
 
-	k.SecretReader = secrets.NewKubernetesSecretProvider(clientset, namespace)
```
This field was not used anywhere, so I was able to remove it.
```diff
-	clusterCACert, err := kubernetes.CA()
-	if err != nil {
-		return fmt.Errorf("reading CA file: %w", err)
-	}
-
 	certPool := x509.NewCertPool()
 
-	if ok := certPool.AppendCertsFromPEM(clusterCACert); !ok {
+	if ok := certPool.AppendCertsFromPEM(k8s.Config.CAData); !ok {
```
The CA data is populated by `rest.InClusterConfig` as long as we call `rest.LoadTLSFiles` (which I added to `NewKubernetes`). This removes the need to look it up again, and helps keep most of the in-cluster logic in the `NewKubernetes` constructor.
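The `ok` check on `AppendCertsFromPEM` in the diff above matters because the method reports failure through its return value rather than an error. The sketch below demonstrates that pattern with a throwaway self-signed certificate; in the connector the PEM bytes come from `k8s.Config.CAData` instead:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// selfSignedCAPEM generates a throwaway CA certificate so the example is
// self-contained; in the connector these bytes come from k8s.Config.CAData,
// populated by rest.LoadTLSFiles.
func selfSignedCAPEM() ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

// poolFromCAData mirrors the pattern in the diff: AppendCertsFromPEM
// returns false when no certificate could be parsed, so the ok value
// must be checked explicitly.
func poolFromCAData(caData []byte) (*x509.CertPool, error) {
	pool := x509.NewCertPool()
	if ok := pool.AppendCertsFromPEM(caData); !ok {
		return nil, fmt.Errorf("invalid CA data")
	}
	return pool, nil
}

func main() {
	caPEM, err := selfSignedCAPEM()
	if err != nil {
		panic(err)
	}
	if _, err := poolFromCAData(caPEM); err != nil {
		panic(err)
	}
	if _, err := poolFromCAData([]byte("not a cert")); err == nil {
		panic("expected an error for invalid CA data")
	}
	fmt.Println("ok")
}
```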
```go
checkSum := k8s.Checksum()
logging.L.Debug().Str("uniqueID", checkSum).Msg("Cluster uniqueID")
```
I've tested this manually for now, since we don't yet have an easy way of testing this outside of kube. I tested it by running `infra connector` locally using my kubectl config, and again by running it in a pod. I compared the checksum in both cases to ensure it produced the same value.
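The property that manual test verified is that the checksum depends only on a cluster-unique identifier, not on where the connector runs. A minimal hypothetical stand-in for `k8s.Checksum` (the real implementation and its input are not shown in this PR) could look like:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// clusterChecksum is a hypothetical stand-in for k8s.Checksum: it hashes
// a cluster-unique identifier, so the result is the same no matter where
// the connector runs.
func clusterChecksum(uniqueID string) string {
	sum := sha256.Sum256([]byte(uniqueID))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Whether computed in a pod or locally, the same cluster ID
	// yields the same checksum.
	inPod := clusterChecksum("c2f1f9b0-example-cluster-uid")
	local := clusterChecksum("c2f1f9b0-example-cluster-uid")
	fmt.Println(inPod == local)
}
```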
```diff
@@ -501,6 +487,8 @@ func (k *Kubernetes) Name(chksm string) (string, error) {
 	return name, nil
 }
 
+const podLabelsFilePath = "/etc/podinfo/labels"
```
Moved this constant next to where it is used. This helps make the code more readable by reducing the need to scroll around the file when reading this function, or when looking for functions that use the constant.
I know some styles encourage putting all the constants at the top of the file, but I think often that's not the best choice. It's fine if the constants are for something like an enum that will be used all over the place. For constants like this that are only used by a single function, and are unlikely to ever be used in another place, I think it works much better to keep it close to the function where it is used.
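For context, `/etc/podinfo/labels` is the Downward API format: one `key="value"` pair per line. A sketch of the kind of parsing the function near that constant would need (the function itself isn't shown in this excerpt, so this is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parsePodLabels parses the Downward API labels format written to a file
// like /etc/podinfo/labels: one key="value" pair per line. This is an
// illustrative sketch, not the connector's actual parser.
func parsePodLabels(contents string) map[string]string {
	labels := make(map[string]string)
	for _, line := range strings.Split(contents, "\n") {
		key, value, found := strings.Cut(strings.TrimSpace(line), "=")
		if !found {
			continue // skip blank or malformed lines
		}
		labels[key] = strings.Trim(value, `"`)
	}
	return labels
}

func main() {
	labels := parsePodLabels("app=\"connector\"\npod-template-hash=\"abc123\"\n")
	fmt.Println(labels["app"], labels["pod-template-hash"])
}
```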
```diff
@@ -52,6 +51,15 @@ type ListenerOptions struct {
 	Metrics string
 }
 
+// connector stores all the dependencies for the connector operations.
+type connector struct {
```
I'm never sure what to call these, so I went with the same pattern we used for `Server` and called it `connector`. As the code changes maybe we'll find a better name for this, or find ways to split it into other types.
I'm doing this now to allow us to split up `syncWithServer` and remove the need to pass around 5+ args to the functions.
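The shape of this refactor, in miniature (the field names below are illustrative, not the PR's actual fields):

```go
package main

import "fmt"

// Before: every helper takes the same long argument list, and adding a
// dependency means touching every signature.
func syncBefore(serverURL, token, namespace, caPath string, interval int) string {
	return fmt.Sprintf("sync %s as %s every %ds", serverURL, namespace, interval)
}

// After: a connector struct carries the dependencies, mirroring the
// `type connector struct` extracted in this PR (fields here are
// hypothetical stand-ins).
type connector struct {
	serverURL string
	token     string
	namespace string
	caPath    string
	interval  int
}

func (c connector) sync() string {
	return fmt.Sprintf("sync %s as %s every %ds", c.serverURL, c.namespace, c.interval)
}

func main() {
	con := connector{serverURL: "https://infra.example", namespace: "default", interval: 30}
	fmt.Println(con.sync() == syncBefore("https://infra.example", "", "default", "", 30))
}
```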
```go
func syncDestination(con connector) error {
	host, port, err := con.k8s.Endpoint()
```
The next step for allowing this to run locally will be to add a config setting that allows us to skip the endpoint lookup. So before making that change I'm extracting this logic from `syncWithServer` to give us some space to add that new logic.
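The planned config setting could slot into the extracted function roughly like this; the option and field names are hypothetical, since the follow-up hasn't landed yet:

```go
package main

import "fmt"

// connector is a hypothetical stand-in for the type extracted in this
// PR; skipEndpointLookup models the config setting described above.
type connector struct {
	skipEndpointLookup bool
	configuredHost     string
	configuredPort     int
	lookup             func() (string, int, error) // stands in for con.k8s.Endpoint
}

// destinationEndpoint sketches the planned follow-up: when the option
// is set, the in-cluster endpoint lookup is skipped entirely, so the
// connector can run outside a pod.
func destinationEndpoint(con connector) (string, int, error) {
	if con.skipEndpointLookup {
		return con.configuredHost, con.configuredPort, nil
	}
	return con.lookup()
}

func main() {
	con := connector{skipEndpointLookup: true, configuredHost: "localhost", configuredPort: 8443}
	host, port, _ := destinationEndpoint(con)
	fmt.Printf("%s:%d\n", host, port)
}
```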
internal/kubernetes/kubernetes.go (outdated)
```go
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, nil)
if config, err = cfg.ClientConfig(); err != nil {
	return nil, fmt.Errorf("load kube config %w", err)
}
```
This ended up not being that useful, so I'm going to delete this "read kube API settings from kubectl config" logic. In #3350 I added support for reading the token from connector options. I think that's a better approach for reading the host and CA cert. I'll add those options in the next follow up, which should also add the tests.
Previously we would only look for in-cluster config. In the future we should be able to specify these values from config options, so that we can run a connector in tests. This change reduces the number of places we use the in-cluster calls by relying on the rest.Config that is populated by the NewKubernetes constructor. Also read the CA cert from the existing config, instead of reading the file again. Also remove the SecretReader field; it was never used. Also add more godoc.
This should allow us to split up the connection logic by using this new type as the function parameter.
Summary
Branched from #3220. Best viewed by individual commit, and the third commit benefits from the "Hide Whitespace" view.
This PR adds support for reading the kube API hostname and CA cert from kubectl config when reading the in-cluster files fails. This is the first step to allow us to run the connector outside of kube (primarily for testing).
Also extracts a new `type connector struct` for the many args being passed to `syncWithServer`, and extracts a new `syncDestination` from that function as well. This is the first step to split up the `syncWithServer` goroutine into its two responsibilities: updating the destination, and updating role bindings.

In the next follow up PR I'll be addressing the remaining in-cluster calls, to allow the connector to run outside of a kube pod.
Related to #2500 (but doesn't get us all the way there yet)