Use port forwarding instead of API server proxying #80
Conversation
This wouldn't change the default (install) experience, would it? It definitely seems valuable, but I wouldn't want to create friction for most of our users who don't have network policies enabled. Another way of asking may be: should
This won't cause any install friction. I think we should avoid
Multiple users have reported problems with kubectl cost when running in a more security conscious environment that implements network security policies of various kinds. These policies often prevent the K8s API server from communicating with the Kubecost service. This commit introduces new behavior that will cause requests, by default, to go through a temporarily forwarded port on localhost that goes to a Kubecost pod. This entirely circumvents the issue of API server proxying, but is more complex. The old behavior is now under the flag --use-proxy. Tested locally against a GKE cluster.
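The routing difference the commit describes can be sketched as a choice between two request URLs: the API-server proxy path versus a locally forwarded port. This is a minimal, hypothetical illustration — the helper name, port 9090, and the `model/allocation` path are assumptions for illustration, not code from this PR:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// allocationURL builds the URL a Kubecost allocation request would hit.
// Hypothetical helper: names, port, and paths are illustrative.
func allocationURL(apiServer, namespace, service string, localPort int, useProxy bool) string {
	if useProxy {
		// API-server proxy route: the control plane relays the request to
		// the service, which restrictive network policies may block.
		u, err := url.Parse(apiServer)
		if err != nil {
			return ""
		}
		u.Path = path.Join("/api/v1/namespaces", namespace,
			"services", service+":9090", "proxy", "model", "allocation")
		return u.String()
	}
	// Port-forward route: talk to a temporarily forwarded local port,
	// bypassing the API server entirely.
	return fmt.Sprintf("http://localhost:%d/model/allocation", localPort)
}

func main() {
	fmt.Println(allocationURL("https://203.0.113.10:443", "kubecost", "kubecost-cost-analyzer", 9090, true))
	fmt.Println(allocationURL("", "kubecost", "kubecost-cost-analyzer", 9090, false))
}
```

The second form never touches the API server, which is why it sidesteps the network-policy problem at the cost of managing a port-forward session.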
Force-pushed from d7db54d to a77b6ed.
LGTM. Nothing blocking: just that one typo and a design question, which we can discuss later.
```go
if no.useProxy {
	allocations, err = query.QueryAllocation(ko.clientset, *ko.configFlags.Namespace, no.serviceName, no.window, "namespace", context.Background())
	if err != nil {
		return fmt.Errorf("failed to query allocation API: %s", err)
	}
} else {
	allocations, err = query.QueryAllocationFwd(ko.restConfig, *ko.configFlags.Namespace, no.serviceName, no.window, "namespace", context.Background())
	if err != nil {
		return fmt.Errorf("failed to query allocation API: %s", err)
	}
}
```
Did you consider a design that would allow for a single QueryAllocation function? Like, perhaps setting useProxy as a value in the context and making the decision internal to the function? I wouldn't say I feel strongly about this, but having n different entire versions of the API based on configuration flags feels potentially cumbersome.
It's a good thought. The only possible future extension I see to the query paradigms is a custom URL that Kubecost has been exposed at. That's really very different behavior and requires a different set of parameters, so I wouldn't put it under the same function. When that day comes, the most important duplicated data is imo the construction of the query parameters within QueryAllocation(Fwd). I'd say those can get refactored into a single initialization then.

I'm really tempted to therefore just put a boolean flag on QueryAllocation and QueryAggCostModel for useProxy, but I'm trying to adhere to a "no boolean behavior flags on exported functions" rule, so that's out.

It's a little boilerplate-y right now, but I don't mind it too much. I think there's a lot of potential for refactoring and generating a lot of these subcommands automatically instead of the intense duplication going on right now. Something like "generate a subcommand for the allocation API with a pod aggregation" instead of making a new file pod.go and just changing some names. Well, anyway, that's neither here nor there. When I get around to adding the assets subcommands, I think that refactor will happen.
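The shared query-parameter construction mentioned above could eventually be pulled into a single helper along these lines. This is a sketch under assumptions: allocationParams is a hypothetical name, and window/aggregate follow the Kubecost allocation API's parameter names:

```go
package main

import (
	"fmt"
	"net/url"
)

// allocationParams builds the query string that both QueryAllocation and
// QueryAllocationFwd would share, so the parameters are constructed in
// exactly one place. Hypothetical helper for illustration.
func allocationParams(window, aggregate string) url.Values {
	params := url.Values{}
	params.Set("window", window)       // e.g. "7d"
	params.Set("aggregate", aggregate) // e.g. "namespace"
	return params
}

func main() {
	// Encode sorts keys alphabetically.
	fmt.Println(allocationParams("7d", "namespace").Encode())
	// → aggregate=namespace&window=7d
}
```

Each transport-specific function would then only differ in where it sends the request, not in how it builds it.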
Well, I changed my mind in #82