
manifests: switch network operator to deployment #36

Merged
merged 1 commit into openshift:master on Jan 29, 2019

Conversation

abhinavdahiya
Contributor

openshift/cluster-version-operator#38 (comment)

@deads2k suggests that a deployment should be able to schedule even when a node is not ready.

openshift-ci-robot added the size/XS label (denotes a PR that changes 0-9 lines, ignoring generated files) on Nov 13, 2018
@abhinavdahiya
Contributor Author

@deads2k

sh-4.2$ oc -n openshift-cluster-network-operator get pods
NAME                                       READY     STATUS    RESTARTS   AGE
cluster-network-operator-5cb656c65-s2s5n   0/1       Pending   0          30s
sh-4.2$ oc -n openshift-cluster-network-operator get pods cluster-network-operator-5cb656c65-s2s5n
NAME                                       READY     STATUS    RESTARTS   AGE
cluster-network-operator-5cb656c65-s2s5n   0/1       Pending   0          45s
sh-4.2$ oc -n openshift-cluster-network-operator describe pods cluster-network-operator-5cb656c65-s2s5n
Name:               cluster-network-operator-5cb656c65-s2s5n
Namespace:          openshift-cluster-network-operator
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=cluster-network-operator
                    pod-template-hash=176212721
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/cluster-network-operator-5cb656c65
Containers:
  cluster-network-operator:
    Image:      registry.svc.ci.openshift.org/ci-op-tl6yk2r1/stable@sha256:7210108e0a9bddbe8773dc23d53f92d3ff6ebb2b8e158a1ebc1fa1ba74dcc6bd
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/cluster-network-operator
      --url-only-kubeconfig=/etc/kubernetes/kubeconfig
    Limits:
      cpu:     20m
      memory:  50Mi
    Requests:
      cpu:     20m
      memory:  50Mi
    Environment:
      NODE_IMAGE:        registry.svc.ci.openshift.org/ci-op-tl6yk2r1/stable@sha256:2915dc4741e62def2bf15886012e2d0ad3cb5dea0db2f4b516275c3537220000
      HYPERSHIFT_IMAGE:  registry.svc.ci.openshift.org/ci-op-tl6yk2r1/stable@sha256:d8c99f7fc22bd8e387ef5cbf85d69c4f1ed59bb17da0944beb33a333d963add7
      POD_NAME:          cluster-network-operator-5cb656c65-s2s5n (v1:metadata.name)
    Mounts:
      /etc/kubernetes/kubeconfig from host-kubeconfig (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s29cp (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  host-kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/kubeconfig
    HostPathType:
  default-token-s29cp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s29cp
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoSchedule
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  4s (x15 over 53s)  default-scheduler  0/3 nodes are available: 3 node(s) were not ready.

@squeed
Contributor

squeed commented Nov 14, 2018

I assume we'll have to add some tolerations too...

@deads2k
Contributor

deads2k commented Nov 14, 2018

@aveshagarwal @sjenning I thought it may have been because the scheduler wasn't starting on the bootstrap node, but if it is, then it appears something is still wrong :(

@aveshagarwal

@deads2k suggests that a deployment should be able to schedule even when a node is not ready.

I think that is only applicable to daemonsets, not to deployments. If we want deployments scheduled onto not-ready nodes, we would need to add not-ready tolerations to them.
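
For illustration, a minimal sketch of what such tolerations could look like on the operator's Deployment pod template (the name, namespace structure, and image below are placeholders, not the actual manifest from this PR):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-network-operator                 # placeholder name
  namespace: openshift-cluster-network-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: cluster-network-operator
  template:
    metadata:
      labels:
        k8s-app: cluster-network-operator
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      # Allow scheduling onto masters and onto nodes still reporting NotReady.
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoSchedule
      containers:
      - name: cluster-network-operator
        image: example.com/cluster-network-operator:latest   # placeholder image
```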

@aveshagarwal

Though it does not seem like a normal pattern to have deployments forced to schedule on not-ready nodes.

@aveshagarwal

Also, can you share the output of "oc describe nodes" for the master nodes, so we can see what is going on with them and what they have w.r.t. taints?

@deads2k
Contributor

deads2k commented Nov 14, 2018

I think that is only applicable to daemonsets, not to deployments. If we want deployments scheduled onto not-ready nodes, we would need to add not-ready tolerations to them.

Isn't that in the output above:

Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoSchedule

@aveshagarwal

Isn't that in the output above:

           node.kubernetes.io/not-ready:NoSchedule

OK, that seems fine, but let's see what all the taints on the nodes are, to check whether the pods have all the required tolerations.

@aveshagarwal

@deads2k what is the reason we are using deployments and not daemonsets, which are intended (at least in part) for bootstrapping purposes?

@aveshagarwal

@deads2k also, in 1.12, DS are scheduled by the default scheduler, so they should work as long as the scheduler is running.

@aveshagarwal

node.kubernetes.io/not-ready:NoSchedule

Or, at a minimum, try adding a new toleration for node.kubernetes.io/not-ready:NoExecute and see if that helps.
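
In manifest terms, that suggestion amounts to entries like these in the pod spec's tolerations (a sketch; per the describe output above, the manifest already appears to carry the NoSchedule entry):

```yaml
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoSchedule
# Additionally tolerate the NoExecute variant of the not-ready taint.
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
```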

@ravisantoshgudimetla
Contributor

ravisantoshgudimetla commented Nov 14, 2018

What version of kube are we on? This could be related to openshift/cluster-version-operator#35 (comment), where we have not enabled TaintNodesByCondition, so pods created from the deployment instead go through the CheckNodeConditions predicate.

@deads2k
Contributor

deads2k commented Nov 14, 2018

@aveshagarwal if you have a daemonset for an operator, it runs on every node. That's fine with 5 nodes. It's not fine with 500 nodes. In addition, you cannot scale them. The rollout functionality on updates isn't the same, so lease acquisition is weird.

Basically, deployments are the right resource. Let's figure out what's wrong in the scheduler or deployment that prevents it from scheduling.

@squeed
Contributor

squeed commented Nov 14, 2018

fwiw, the network operator daemonset only runs on the masters, so it's not so bad.

@aveshagarwal

aveshagarwal commented Nov 14, 2018

@aveshagarwal if you have a daemonset for an operator, it runs on every node. That's fine with 5 nodes. It's not fine with 500 nodes. In addition, you cannot scale them. The rollout functionality on updates isn't the same, so lease acquisition is weird.

Basically, deployments are the right resource. Let's figure out what's wrong in the scheduler or deployment that prevents it from scheduling.

I think there is nothing wrong in the scheduler, as that is how the scheduler has behaved so far (until 1.12). By default, the scheduler eliminates unschedulable nodes (including not-ready ones), and since deployments are scheduled by the default scheduler, their pods cannot be placed on unschedulable nodes. This behavior is different for DS, because the DS controller does the actual scheduling of DS pods (up to 1.11), so it places pods on unschedulable nodes (including not-ready ones).

I think the behavior you are looking for with deployments should be achievable in 1.12, because in 1.12 TaintNodesByCondition is enabled by default and node conditions are managed by using taints. That means that no matter what the conditions on the nodes are, pods (whether created via deployments, DS, jobs, cronjobs, statefulsets, etc.) can be scheduled on any node as long as they are assigned the right tolerations.
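
For reference, with TaintNodesByCondition enabled, a NotReady node carries that condition as a taint in its spec, roughly like this (a sketch with an illustrative node name; other conditions such as memory pressure are mirrored similarly):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: master-0                     # illustrative node name
spec:
  taints:
  # Added automatically while the node's Ready condition is False.
  - key: node.kubernetes.io/not-ready
    effect: NoSchedule
status:
  conditions:
  - type: Ready
    status: "False"
```

A pod then lands on such a node only if it tolerates that taint, regardless of whether it was created by a Deployment, DaemonSet, or another controller.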

@deads2k
Contributor

deads2k commented Nov 14, 2018

Because in 1.12, TaintNodesByCondition is enabled by default and node conditions are managed by using taints

Can you enable it in 1.11? Is there a reason not to?

@aveshagarwal

Because in 1.12, TaintNodesByCondition is enabled by default and node conditions are managed by using taints

Can you enable it in 1.11? Is there a reason not to?

I would not be comfortable enabling an alpha feature in 1.11 because:

  1. In 1.12, the feature has seen some testing, so there is a bit of confidence about its stability.
  2. That would also mean backporting the patches that have gone into kube to make it more stable. I don't remember off the top of my head what those are.

Also, if the operator stuff is for 4.0, that means 1.12, so why would we want to enable the alpha feature in 1.11?
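
For context only, enabling the gate in 1.11 would mean passing --feature-gates to the control-plane components; a hedged sketch of what that could look like in a kube-scheduler static-pod manifest (the image tag, paths, and surrounding flags are illustrative, not taken from this cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.11.0     # illustrative tag
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    # Alpha gate that turns node conditions into taints.
    - --feature-gates=TaintNodesByCondition=true
```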

@smarterclayton
Contributor

I'm confused - why can't we enable this in origin master so that we can verify that this works before we land the rebase and then find out it doesn't?

@squeed
Contributor

squeed commented Nov 20, 2018

I've already run the operator as a deployment against a vanilla 1.13 cluster. I can test 1.12.

@aveshagarwal

I've already run the operator as a deployment against a vanilla 1.13 cluster. I can test 1.12.

Yes please.

@aveshagarwal

I've already run the operator as a deployment against a vanilla 1.13 cluster. I can test 1.12.

Did you try with nodes with not-ready conditions?

@squeed
Contributor

squeed commented Nov 27, 2018

Whoops, the CVO still had an old override for a cluster-network-operator Deployment from ages ago. PR to remove it is openshift/installer#735

@abhinavdahiya
Contributor Author

Only 2 tests failed, woohoo! With the 1.12 rebase in, this seems to be ready for review.
cc @squeed

failures from e2e-aws

Failing tests:

[sig-storage] Dynamic Provisioning DynamicProvisioner should provision storage with different parameters [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] Volume limits should verify that all nodes have volume limits [Suite:openshift/conformance/parallel] [Suite:k8s]

/retest

@danwinship
Contributor

/retest
(Note, since I already confused myself about this: this does not conflict with #72, since that only assumes that the SDN pods are deployed via DaemonSet. It doesn't care how the operator itself is deployed.)

@squeed
Contributor

squeed commented Jan 28, 2019

/lgtm
/approve

@squeed
Contributor

squeed commented Jan 28, 2019

We should switch the openshift-sdn controller to a Deployment now too.

openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Jan 28, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, squeed

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jan 28, 2019
@abhinavdahiya
Contributor Author

/retest

@squeed
Contributor

squeed commented Jan 28, 2019

I can see from the e2e run that the network operator succeeded, but bootstrap didn't succeed. So that's a flake (albeit a troubling one).

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.


openshift-merge-robot merged commit e029709 into openshift:master on Jan 29, 2019
@sjenning
Contributor

The file needs to be renamed as well.

wking added a commit to wking/cluster-network-operator that referenced this pull request Oct 30, 2019
…ployment

This was originally renamed from deployment to daemonset in 8ecd1ba
(Refactor the operator for operator-sdk v0.1.0, 2018-11-02, openshift#25).  The
content was changed back to a Deployment in 19de4ca (manifests:
switch network operator to deployment, 2018-11-13, openshift#36), and this
commit catches the manifest name back up.