
Use port forwarding instead of API server proxying #80

Merged: 2 commits merged into main from mmd/port-forward, May 6, 2021

Conversation

michaelmdresser (Contributor)

Multiple users have reported problems with kubectl cost
when running in more security-conscious environments that
implement network security policies of various kinds. These
policies often prevent the K8s API server from communicating
with the Kubecost service.

This commit introduces new behavior that will cause requests,
by default, to go through a temporarily forwarded port on
localhost that goes to a Kubecost pod. This entirely circumvents
the issue of API server proxying, but is more complex.

The old behavior is now under the flag --use-proxy.

Tested locally against a GKE cluster.
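The difference between the two request paths can be sketched as follows. This is an illustrative sketch, not kubectl-cost's actual code: the function names, the service name, and the port are assumptions, and the real change establishes a live port-forward via the Kubernetes API rather than just building URLs.

```go
package main

import "fmt"

// proxyURL builds the path used when requests go through the K8s API
// server's service proxy (the old behavior, now behind --use-proxy).
// Network policies that block API-server-to-pod traffic break this path.
func proxyURL(apiServer, namespace, service string, port int, path string) string {
	return fmt.Sprintf("%s/api/v1/namespaces/%s/services/%s:%d/proxy%s",
		apiServer, namespace, service, port, path)
}

// portForwardURL builds the localhost URL used once a temporary
// port-forward to a Kubecost pod is established (the new default).
// Only the local tunnel endpoint is contacted, so API-server proxying
// is bypassed entirely.
func portForwardURL(localPort int, path string) string {
	return fmt.Sprintf("http://localhost:%d%s", localPort, path)
}

func main() {
	fmt.Println(proxyURL("https://1.2.3.4", "kubecost", "kubecost-cost-analyzer", 9090, "/model/allocation"))
	fmt.Println(portForwardURL(9090, "/model/allocation"))
}
```

The second URL is why the port-forward approach sidesteps network policies: the only hop that matters is the tunnel kubectl itself opens to the pod, which is the same mechanism kubectl port-forward uses.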

@dwbrown2 (Contributor) commented May 6, 2021

This wouldn't change the default (install) experience, would it? It definitely seems valuable, but I wouldn't want to create friction for most of our users, who don't have network policies enabled.

Another way of asking may be, should --use-proxy be true by default?

@michaelmdresser (Contributor, Author) commented May 6, 2021

This won't cause any install friction.

I think we should avoid --use-proxy=true as the default because I'm pretty confident this new behavior will work for more users than the old method. kubectl port-forward pretty much always works, and this is essentially the same thing.


@nikovacevic left a comment

LGTM. Nothing blocking: just that one typo and a design question, which we can discuss later.

README.md (review comment outdated, resolved)
Comment on lines +86 to +95
if no.useProxy {
	allocations, err = query.QueryAllocation(ko.clientset, *ko.configFlags.Namespace, no.serviceName, no.window, "namespace", context.Background())
	if err != nil {
		return fmt.Errorf("failed to query allocation API: %s", err)
	}
} else {
	allocations, err = query.QueryAllocationFwd(ko.restConfig, *ko.configFlags.Namespace, no.serviceName, no.window, "namespace", context.Background())
	if err != nil {
		return fmt.Errorf("failed to query allocation API: %s", err)
	}
}

Did you consider a design that would allow for a single QueryAllocation function? Like, perhaps setting useProxy as a value in the context and making the decision internally to the function? I wouldn't say I feel strongly about this, but having n different entire versions of the API based on configuration flags feels potentially cumbersome.

@michaelmdresser (Contributor, Author) replied:

It's a good thought. The only possible future extension I see to query paradigms is a custom URL that Kubecost has been exposed at. That's really very different behavior and requires a different set of parameters, so I wouldn't put it under the same function. When that day comes, the most important duplicated data is imo the construction of the query parameters within QueryAllocation(Fwd). I'd say those can get refactored into a single initialization then.

I'm really tempted to therefore just put a boolean flag on QueryAllocation and QueryAggCostModel for useProxy, but I'm trying to adhere to a "no boolean behavior flags on exported functions" thing so that's out.

It's a little boilerplate-y right now, but I don't mind it too much. I think there's a lot of potential for refactoring and for generating many of these subcommands automatically instead of the intense duplication going on right now: something like "generate a subcommand for the allocation API with a pod aggregation" instead of making a new file pod.go and just changing some names. Well, anyway, that's neither here nor there. When I get around to adding the assets subcommands, I think that refactor will happen.

@michaelmdresser (Contributor, Author) replied:

Well, I changed my mind in #82

@michaelmdresser michaelmdresser merged commit 17490c3 into main May 6, 2021
@michaelmdresser michaelmdresser deleted the mmd/port-forward branch May 6, 2021 18:09