[WIP] Rebase to the latest Kubernetes version #1483
Conversation
cc @deads2k I have to make sure I didn't actually break anything, but the tests pass, etc. Let me know if you see anything that could be improved re: the split repositories, generic API server, etc.
"please look through 1 million changed lines to find something funny looking" sniff
@@ -48,13 +49,18 @@ func (sink *gclSink) ExportEvents(eventBatch *core.EventBatch) {
	glog.V(4).Info("Exporting events")
	entries := make([]*gcl.LogEntry, len(eventBatch.Events))
	for i, event := range eventBatch.Events {
		evtJson, err := json.Marshal(event)
This doesn't look right. What kind of event is this? Normally you'd use something like runtime.Encode.
The library updated from taking an interface{} type to taking a pre-encoded blob of JSON. Presumably, it was just calling json.Marshal before, but the library was doing it internally. I can switch it over to runtime.Encode.
> The library updated from taking an interface{} type to taking a pre-encoded blob of json. Presumably, it was just calling json.Marshal before, but the library was doing it internally. I can switch it over to runtime.Encode
I won't block on it, but I think that would be better.
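A minimal, self-contained sketch of the change being discussed: the caller now pre-encodes each event to JSON before handing it to the logging library, instead of passing an interface{} value and letting the library marshal internally. The Event and LogEntry types here are hypothetical stand-ins for the real Heapster and gcl types, not their actual definitions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event is a stand-in for a Heapster event (hypothetical simplified shape).
type Event struct {
	Message string `json:"message"`
}

// LogEntry mirrors the updated library type, which now takes a
// pre-encoded JSON blob rather than an interface{} payload.
type LogEntry struct {
	Payload json.RawMessage
}

func main() {
	events := []Event{{Message: "node ready"}}
	entries := make([]*LogEntry, len(events))
	for i, event := range events {
		// The caller now does the encoding the library used to do internally.
		evtJSON, err := json.Marshal(event)
		if err != nil {
			panic(err)
		}
		entries[i] = &LogEntry{Payload: json.RawMessage(evtJSON)}
	}
	fmt.Println(string(entries[0].Payload))
}
```

Swapping json.Marshal for runtime.Encode, as suggested, would route the encoding through the registered scheme instead of plain JSON tags.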
[]unversioned.GroupVersionResource{}, resourceConfig, s.RuntimeConfig)
if err != nil {
	glog.Fatalf("error in initializing storage factory: %s", err)
func newAPIServer(s *options.HeapsterRunOptions) (*genericapiserver.GenericAPIServer, error) {
break the rewiring in this file into its own commit for us? Looks more involved than the rest.
ack, will do.
Done, the rewiring stuff having to do with the generic API server code is now in its own commit.
return genericConfig.New()
return genericAPIServerConfig.Complete().New()
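For context, the Complete()-before-New() ordering in the diff follows a builder convention in which New() is only reachable from a completed config, so defaults are guaranteed to be filled in. A stdlib-only sketch of that pattern; the type names, fields, and default value below are hypothetical, not the actual genericapiserver types:

```go
package main

import "fmt"

// Config mirrors the shape of a server config whose New() is only
// available after Complete() fills in defaults (hypothetical sketch).
type Config struct {
	BindPort int
}

type completedConfig struct {
	*Config
}

// Complete fills in any fields not set by the caller and returns a
// wrapper type whose methods can assume all defaults are in place.
func (c *Config) Complete() completedConfig {
	if c.BindPort == 0 {
		c.BindPort = 8443 // hypothetical default
	}
	return completedConfig{c}
}

// New can only be called on a completed config, so the type system
// enforces the Complete()-before-New() ordering seen in the diff.
func (c completedConfig) New() (string, error) {
	return fmt.Sprintf("server on :%d", c.BindPort), nil
}

func main() {
	srv, err := (&Config{}).Complete().New()
	if err != nil {
		panic(err)
	}
	fmt.Println(srv)
}
```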
Got to the end here. At a glance, this looks good. Did you support general Heapster webhook authentication before?
I think it just tried to support "all" auth before, but I'm also not entirely certain that anyone was actually using it -- I think the HPA is normally deployed to point to the non-generic-API-server variant of the API, and the Kube cluster monitoring addon doesn't enable the generic API server code in Heapster.
2b9e4b0 to 8bb5d38 (compare)
Heh, sorry. The "important" changes are now in their own commit, separate from the "find-replace" commit on the API.
cc @piosz
8bb5d38 to 6ca8ac5 (compare)
@piosz this looks like it's failing to build because it's missing the
// given binary target.
import (
	// Cloud providers
	_ "k8s.io/kubernetes/pkg/cloudprovider/providers"
What happened to these? Unnecessary?
@@ -121,14 +121,13 @@ func createAndRunAPIServer(opt *options.HeapsterRunOptions, metricSink *metricsi
	glog.Errorf("Could not create the API server: %v", err)
	return
}
healthz.InstallHandler(server.Mux, healthzChecker(metricSink))
There's a way to add health checks. Did you accidentally lose this entirely?
Also, your main looks to be fighting with some of the API server endpoints like health
I thought when I was looking that there was something in the PrepareRun() code of the generic API server that did this for us.
> Also, your main looks to be fighting with some of the API server endpoints like health
That shouldn't be the case, since we're on a different port
> There's a way to add health checks. Did you accidentally lose this entirely?
Whoops, yeah, looks like this was installing the HTTP handler with a custom checker. I'll add the custom checker back in.
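A rough stdlib-only sketch of what installing a health handler with a custom checker looks like. Here installHandler and checker are hypothetical stand-ins for healthz.InstallHandler and its checker type, not the actual k8s.io/apiserver API:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// checker is a stand-in for a healthz check: a function that reports
// an error when the component is unhealthy (hypothetical simplified form).
type checker func() error

// installHandler mimics the idea of healthz.InstallHandler: it registers
// /healthz on the mux and runs every checker on each request.
func installHandler(mux *http.ServeMux, checks ...checker) {
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		for _, c := range checks {
			if err := c(); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
		}
		fmt.Fprint(w, "ok")
	})
}

func main() {
	mux := http.NewServeMux()
	// A custom checker analogous to healthzChecker(metricSink);
	// this toy version always passes.
	installHandler(mux, func() error { return nil })

	srv := httptest.NewServer(mux)
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode)
}
```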
@@ -213,12 +212,12 @@ func getListersOrDie(kubernetesUrl *url.URL) (*cache.StoreToPodLister, *cache.St
	return podLister, nodeLister
}

func createKubeClientOrDie(kubernetesUrl *url.URL) *kube_client.Client {
func createKubeClientOrDie(kubernetesUrl *url.URL) *kube_client.Clientset {
This looks dirty. It doesn't have to be fixed here, but why do you have a side-channel to load the server location instead of a kubeconfig file?
It's a legacy thing -- you run --source=kubernetes.summary_api=$KUBE_CONFIGURATION_IN_A_URL, but you can specify the inClusterConfig option, which should just pull from the injected kubeconfig, IIRC.
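To illustrate the URL-based configuration being described, here is a small sketch of parsing such a source URL and honoring an inClusterConfig query option. The URL value and option spelling below are made-up examples for illustration, not the exact Heapster flag syntax:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The --source flag packs Kubernetes connection options into a URL
	// (hypothetical example value).
	src := "https://kubernetes.default?inClusterConfig=true"

	u, err := url.Parse(src)
	if err != nil {
		panic(err)
	}
	// When inClusterConfig is set, the client should fall back to the
	// service-account credentials injected into the pod instead of
	// dialing the host named in the URL.
	if u.Query().Get("inClusterConfig") == "true" {
		fmt.Println("using in-cluster config")
	} else {
		fmt.Printf("using master at %s\n", u.Host)
	}
}
```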
testAPIGroup(t, serverIP)
testAPIResourceList(t, serverIP)
}

func TestRunSecureServer(t *testing.T) {
wait, does this test not even attempt to close the server?
Looks like it's preexisting, but you should file an issue.
opt.CertDirectory = "/tmp"
opt.SecureServing.ServingOptions.BindPort = testSecurePort
opt.SecureServing.ServerCert.CertDirectory = "/tmp"
opt.DisableAuthForTesting = true
You're starting a secure server without security...
> you're starting a secure server without security...
Yeah, given a loopback identity which you have access to, you should be using that. Follow-up, I guess, since it's preexisting.
A few comments on wiring, but it looks largely OK. I'll leave detailed review to a Heapster specialist. @DirectXMan12 nice job, this separation looks a lot cleaner.
6ca8ac5 to 4a7fab2 (compare)
This commit contains the Heapster code changes required to rebase Heapster onto the latest Kubernetes. This begins to move away from making use of `k8s.io/kubernetes` proper, instead favoring `k8s.io/client-go`, `k8s.io/apimachinery`, and `k8s.io/apiserver`. This should make future updates easier.
This commit continues the work of the previous commit, rebasing the generic-API-server-based code in Heapster to use the latest copy of generic API server, and to follow the proper conventions, including switching over to just use webhook authentication.
4a7fab2 to 3332644 (compare)
@kubernetes/test-infra-maintainers do you have any idea what's going on here? Nothing useful seems to be in the logs, but symptoms seem to point to this test being run under Go 1.6, AFAICT.
Seems the Heapster test needs to be dockerized and use a Go 1.7 image. Currently it's using Go in Jenkins, which is 1.6. I'll take a look later.
FYI, I discovered an issue in this PR that only appears at runtime with the API server "storage". It's fixed in the update PR that I posted ( #1537 ), which may end up superseding this PR.
@k8s-bot test this
@DirectXMan12: The following test(s) failed:
Superseded by #1537.
This commit rebases onto the latest version of Kubernetes and associated libraries.
It also simplifies the API server code to make better use of the genericapiserver code, and to rely on delegated authn/authz.