replace kubeclient with kubeclientset in scheduler factory #34084
@@ -28,7 +28,7 @@ import (
 	"k8s.io/kubernetes/pkg/api"
 	"k8s.io/kubernetes/pkg/api/errors"
 	"k8s.io/kubernetes/pkg/client/cache"
-	client "k8s.io/kubernetes/pkg/client/unversioned"
+	clientset "k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
 	"k8s.io/kubernetes/pkg/fields"
 	"k8s.io/kubernetes/pkg/types"
 	"k8s.io/kubernetes/pkg/util/runtime"

@@ -53,7 +53,7 @@ const (
 // ConfigFactory knows how to fill out a scheduler config with its support functions.
 type ConfigFactory struct {
-	Client *client.Client
+	Client clientset.Interface
 	// queue for pods that need scheduling
 	PodQueue *cache.FIFO
 	// a means to list all known scheduled pods.

@@ -96,7 +96,7 @@ type ConfigFactory struct {
 }

 // Initializes the factory.
-func NewConfigFactory(client *client.Client, schedulerName string, hardPodAffinitySymmetricWeight int, failureDomains string) *ConfigFactory {
+func NewConfigFactory(client clientset.Interface, schedulerName string, hardPodAffinitySymmetricWeight int, failureDomains string) *ConfigFactory {
 	stopEverything := make(chan struct{})
 	schedulerCache := schedulercache.New(30*time.Second, stopEverything)
@@ -478,48 +478,48 @@ func getNodeConditionPredicate() cache.NodeConditionPredicate {
 // scheduled.
 func (factory *ConfigFactory) createUnassignedNonTerminatedPodLW() *cache.ListWatch {
 	selector := fields.ParseSelectorOrDie("spec.nodeName==" + "" + ",status.phase!=" + string(api.PodSucceeded) + ",status.phase!=" + string(api.PodFailed))
-	return cache.NewListWatchFromClient(factory.Client, "pods", api.NamespaceAll, selector)
+	return cache.NewListWatchFromClient(factory.Client.Core().GetRESTClient(), "pods", api.NamespaceAll, selector)

Review thread on this line:

- Why not just build a ListWatch function with …
- I agree with that. Though, the scheduler is not the only occurrence of the phenomenon. There are other places in the kubelet, apiserver, proxy, etc. that are subject to the replacement. I would suggest opening a separate PR for it.
- Ok. That's fair. Could you open an issue to track it?
- I see you are calling …
- The reason not to build a function like that is that the Go type system makes it impossible to make a general interface, since the specific X/XList types are different for each object. :(
- @caesarxuchao I think the best option, if you don't want people to break the glass, is to add a NewListWatch function to each client.
- Right. I missed it. @ingvagabund mostly looks good. I left a comment regarding the …
- I like this approach. That could be done as another PR on top or as part of #34138.
 }

 // Returns a cache.ListWatch that finds all pods that are
 // already scheduled.
 // TODO: return a ListerWatcher interface instead?
 func (factory *ConfigFactory) createAssignedNonTerminatedPodLW() *cache.ListWatch {
 	selector := fields.ParseSelectorOrDie("spec.nodeName!=" + "" + ",status.phase!=" + string(api.PodSucceeded) + ",status.phase!=" + string(api.PodFailed))
-	return cache.NewListWatchFromClient(factory.Client, "pods", api.NamespaceAll, selector)
+	return cache.NewListWatchFromClient(factory.Client.Core().GetRESTClient(), "pods", api.NamespaceAll, selector)
 }

 // createNodeLW returns a cache.ListWatch that gets all changes to nodes.
 func (factory *ConfigFactory) createNodeLW() *cache.ListWatch {
 	// all nodes are considered to ensure that the scheduler cache has access to all nodes for lookups
 	// the NodeCondition is used to filter out the nodes that are not ready or unschedulable
 	// the filtered list is used as the super set of nodes to consider for scheduling
-	return cache.NewListWatchFromClient(factory.Client, "nodes", api.NamespaceAll, fields.ParseSelectorOrDie(""))
+	return cache.NewListWatchFromClient(factory.Client.Core().GetRESTClient(), "nodes", api.NamespaceAll, fields.ParseSelectorOrDie(""))
 }

 // createPersistentVolumeLW returns a cache.ListWatch that gets all changes to persistentVolumes.
 func (factory *ConfigFactory) createPersistentVolumeLW() *cache.ListWatch {
-	return cache.NewListWatchFromClient(factory.Client, "persistentVolumes", api.NamespaceAll, fields.ParseSelectorOrDie(""))
+	return cache.NewListWatchFromClient(factory.Client.Core().GetRESTClient(), "persistentVolumes", api.NamespaceAll, fields.ParseSelectorOrDie(""))
 }

 // createPersistentVolumeClaimLW returns a cache.ListWatch that gets all changes to persistentVolumeClaims.
 func (factory *ConfigFactory) createPersistentVolumeClaimLW() *cache.ListWatch {
-	return cache.NewListWatchFromClient(factory.Client, "persistentVolumeClaims", api.NamespaceAll, fields.ParseSelectorOrDie(""))
+	return cache.NewListWatchFromClient(factory.Client.Core().GetRESTClient(), "persistentVolumeClaims", api.NamespaceAll, fields.ParseSelectorOrDie(""))
 }

 // Returns a cache.ListWatch that gets all changes to services.
 func (factory *ConfigFactory) createServiceLW() *cache.ListWatch {
-	return cache.NewListWatchFromClient(factory.Client, "services", api.NamespaceAll, fields.ParseSelectorOrDie(""))
+	return cache.NewListWatchFromClient(factory.Client.Core().GetRESTClient(), "services", api.NamespaceAll, fields.ParseSelectorOrDie(""))
 }

 // Returns a cache.ListWatch that gets all changes to controllers.
 func (factory *ConfigFactory) createControllerLW() *cache.ListWatch {
-	return cache.NewListWatchFromClient(factory.Client, "replicationControllers", api.NamespaceAll, fields.ParseSelectorOrDie(""))
+	return cache.NewListWatchFromClient(factory.Client.Core().GetRESTClient(), "replicationControllers", api.NamespaceAll, fields.ParseSelectorOrDie(""))
 }

 // Returns a cache.ListWatch that gets all changes to replicasets.
 func (factory *ConfigFactory) createReplicaSetLW() *cache.ListWatch {
-	return cache.NewListWatchFromClient(factory.Client.ExtensionsClient, "replicasets", api.NamespaceAll, fields.ParseSelectorOrDie(""))
+	return cache.NewListWatchFromClient(factory.Client.Extensions().GetRESTClient(), "replicasets", api.NamespaceAll, fields.ParseSelectorOrDie(""))
 }

 func (factory *ConfigFactory) makeDefaultErrorFunc(backoff *podBackoff, podQueue *cache.FIFO) func(pod *api.Pod, err error) {
@@ -547,7 +547,7 @@ func (factory *ConfigFactory) makeDefaultErrorFunc(backoff *podBackoff, podQueue
 	// Get the pod again; it may have changed/been scheduled already.
 	getBackoff := initialGetBackoff
 	for {
-		pod, err := factory.Client.Pods(podID.Namespace).Get(podID.Name)
+		pod, err := factory.Client.Core().Pods(podID.Namespace).Get(podID.Name)
 		if err == nil {
 			if len(pod.Spec.NodeName) == 0 {
 				podQueue.AddIfNotPresent(pod)
@@ -587,26 +587,26 @@ func (ne *nodeEnumerator) Get(index int) interface{} {
 }

 type binder struct {
-	*client.Client
+	Client clientset.Interface
 }

 // Bind just does a POST binding RPC.
 func (b *binder) Bind(binding *api.Binding) error {
 	glog.V(3).Infof("Attempting to bind %v to %v", binding.Name, binding.Target.Name)
 	ctx := api.WithNamespace(api.NewContext(), binding.Namespace)
-	return b.Post().Namespace(api.NamespaceValue(ctx)).Resource("bindings").Body(binding).Do().Error()
+	return b.Client.Core().GetRESTClient().Post().Namespace(api.NamespaceValue(ctx)).Resource("bindings").Body(binding).Do().Error()
 	// TODO: use Pods interface for binding once clusters are upgraded
 	// return b.Pods(binding.Namespace).Bind(binding)
 }

 type podConditionUpdater struct {
-	*client.Client
+	Client clientset.Interface
 }

 func (p *podConditionUpdater) Update(pod *api.Pod, condition *api.PodCondition) error {
 	glog.V(2).Infof("Updating pod condition for %s/%s to (%s==%s)", pod.Namespace, pod.Name, condition.Type, condition.Status)
 	if api.UpdatePodCondition(&pod.Status, condition) {
-		_, err := p.Pods(pod.Namespace).UpdateStatus(pod)
+		_, err := p.Client.Core().Pods(pod.Namespace).UpdateStatus(pod)
 		return err
 	}
 	return nil
Review thread on the event client:

- Is it a typo? Do you want to remove client on https://github.com/kubernetes/kubernetes/pull/34084/files#diff-c2f9e998f6bc0386be67c0fa4a9db7d5R91?
- I do, I just have not spent much time on removing it completely yet. Once all tests pass I can refactor the code more.
- Looks like the core clientset provides the Events functions, so the client can be removed completely.
- Nope, the unversioned client (1) differs from the clientset core client (2). Reverting the change.
  1. EventInterface.Patch(event *api.Event, data []byte) (*api.Event, error)
  2. EventInterface.Patch(name string, pt api.PatchType, data []byte, subresources ...string) (result *api.Event, err error)
- How about this method: https://github.com/kubernetes/kubernetes/blob/master/pkg/client/clientset_generated/internalclientset/typed/core/unversioned/event_expansion.go#L33
  And there is an adapter here: https://github.com/kubernetes/kubernetes/blob/master/pkg/client/clientset_generated/internalclientset/typed/core/unversioned/event_expansion.go#L147-L161
- Example use: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/replication/replication_controller.go#L125
- @caesarxuchao looks exactly like the solution I am looking for. Thanks!!! Will update the PR and remove the unversioned.Client.