
how to change kube-scheduler entrypoint and config? #1841

Closed
pan87232494 opened this issue Dec 19, 2019 · 7 comments
@pan87232494

pan87232494 commented Dec 19, 2019

I just want to add one more command-line argument (--policy-config-file) to kube-scheduler. How do I do that, and does the Rancher kube-scheduler support it?

```yaml
command:
  - kube-scheduler
  - --address=127.0.0.1
  - --kubeconfig=/etc/kubernetes/scheduler.conf
  - --policy-config-file=/etc/kubernetes/scheduler-policy-config.json
  - --leader-elect=true
```

RKE version:
rke version v0.3.2
Docker version: (docker version,docker info preferred)
18.09.2
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
redhat 7.7
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
Bare-metal
cluster.yml file:

nodes:
  - address: 192.168.2.229
    port: "22"
    internal_address: ""
    role:
      - controlplane
      - worker
      - etcd
    hostname_override: ""
    user: xjera
    docker_socket: /var/run/docker.sock
    ssh_key: ""
    ssh_key_path: ~/.ssh/id_rsa
    ssh_cert: ""
    ssh_cert_path: ""
    labels:
      app: ingress
    taints: []
  - address: 192.168.2.140
    port: "22"
    internal_address: ""
    role:
      - worker
      - etcd
    hostname_override: ""
    user: xjera
    docker_socket: /var/run/docker.sock
    ssh_key: ""
    ssh_key_path: ~/.ssh/id_rsa
    ssh_cert: ""
    ssh_cert_path: ""
    labels:
      app: ingress
    taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config:
      enabled: true
      interval_hours: 1
      retention: 30
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: "2999-32767"
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: 
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries:
- url: docker.domain.com:5000
  user: rancher
  password: "rancher"
  is_default: false
ingress:
  provider: "nginx"
  options: 
    map-hash-bucket-size: "128"
    ssl-protocols: SSLv2
  node_selector:
    app: ingress
  extra_args: 
    enable-ssl-passthrough: ""
  dns_policy: ""
cluster_name: "demo"
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns:
  provider: coredns

Steps to Reproduce:
I want to add AliyunContainerService/gpushare-scheduler-extender to the existing kube-scheduler. How can I include the policy JSON in the existing scheduler?
Results:
Not successful.
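For context, a scheduler policy file for the gpushare extender typically looks like the sketch below. This is modeled on the gpushare-scheduler-extender README; the urlPrefix, port, and resource name are assumptions that depend on how the extender service is exposed in your cluster:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "http://127.0.0.1:32766/gpushare-scheduler",
      "filterVerb": "filter",
      "bindVerb": "bind",
      "enableHttps": false,
      "nodeCacheCapable": true,
      "managedResources": [
        { "name": "aliyun.com/gpu-mem", "ignoredByScheduler": false }
      ],
      "ignorable": false
    }
  ]
}
```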

@pan87232494
Author

pan87232494 commented Dec 19, 2019

I can see the kube-scheduler config YAML file and the kube-scheduler container (including its entrypoint), but I don't know how RKE brings up the kube-scheduler container, so I have no idea how to customize the kube-scheduler configuration.

I have gpushare-schd-extender-886d94bf6-fl5mf running, but as you can see, when using RKE to bring up Kubernetes there is no scheduler config I can change for the scheduler container. So how could I include this JSON?

[root@k8s-demo-slave1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-demo-slave1 kubernetes]# ls
scheduler-policy-config.json  ssl
[root@k8s-demo-slave1 kubernetes]# cd ssl
[root@k8s-demo-slave1 ssl]# ls
kube-apiserver-key.pem                   kube-apiserver-requestheader-ca.pem           kubecfg-kube-controller-manager.yaml  kube-controller-manager-key.pem  kube-etcd-192-168-2-229.pem  kube-scheduler-key.pem
kube-apiserver.pem                       kube-ca-key.pem                               kubecfg-kube-node.yaml                kube-controller-manager.pem      kube-node-key.pem            kube-scheduler.pem
kube-apiserver-proxy-client-key.pem      kube-ca.pem                                   kubecfg-kube-proxy.yaml               kube-etcd-192-168-2-140-key.pem  kube-node.pem                kube-service-account-token-key.pem
kube-apiserver-proxy-client.pem          kubecfg-kube-apiserver-proxy-client.yaml      kubecfg-kube-scheduler.yaml           kube-etcd-192-168-2-140.pem      kube-proxy-key.pem           kube-service-account-token.pem
kube-apiserver-requestheader-ca-key.pem  kubecfg-kube-apiserver-requestheader-ca.yaml  kubecfg-kube-scheduler.yaml.bak       kube-etcd-192-168-2-229-key.pem  kube-proxy.pem

[root@k8s-demo-slave1 ssl]# cat kubecfg-kube-scheduler.yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
    server: "https://127.0.0.1:6443"
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-scheduler-local"
  name: "local"
current-context: "local"
users:
- name: "kube-scheduler-local"
  user:
    client-certificate: /etc/kubernetes/ssl/kube-scheduler.pem
    client-key: /etc/kubernetes/ssl/kube-scheduler-key.pem


[root@k8s-demo-slave1 ssl]# kubectl get pods -A

NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-5bcc9fd598-8ggs8     0/1     Evicted     0          17d
ingress-nginx   default-http-backend-5bcc9fd598-ch87f     1/1     Running     0          17d
ingress-nginx   default-http-backend-5bcc9fd598-jbw26     0/1     Evicted     0          21d
ingress-nginx   nginx-ingress-controller-df7sh            1/1     Running     0          21d
ingress-nginx   nginx-ingress-controller-mr89d            1/1     Running     0          17d
kube-system     canal-2bflt                               2/2     Running     0          16d
kube-system     canal-h5sjc                               2/2     Running     0          16d
kube-system     coredns-799dffd9c4-vzvrw                  1/1     Running     0          21d
kube-system     coredns-autoscaler-84766fbb4-5xpk8        1/1     Running     0          21d
kube-system     gpushare-schd-extender-886d94bf6-fl5mf    1/1     Running     0          28m
kube-system     metrics-server-59c6fd6767-2ct2h           1/1     Running     0          17d
kube-system     metrics-server-59c6fd6767-tphk6           0/1     Evicted     0          17d
kube-system     metrics-server-59c6fd6767-vlbgp           0/1     Evicted     0          21d
kube-system     rke-coredns-addon-deploy-job-8dbml        0/1     Completed   0          21d
kube-system     rke-ingress-controller-deploy-job-7h6vd   0/1     Completed   0          21d
kube-system     rke-metrics-addon-deploy-job-sbrlp        0/1     Completed   0          21d
kube-system     rke-network-plugin-deploy-job-5r7d6       0/1     Completed   0          21d

[root@k8s-demo-slave1 _data]# docker ps | grep scheduler
588ff515bd00        rancher/hyperkube:v1.15.5-rancher1                                 "/opt/rke-tools/entr…"   3 weeks ago         Up About an hour                        kube-scheduler

[root@k8s-demo-slave1 _data]# docker inspect 588ff515bd00 
[
    {
        "Id": "588ff515bd00a5a2843f07d9ac981cb3ad701e9e16299d6942c9cec34ec7e76e",
        "Created": "2019-11-27T03:23:27.062736987Z",
        "Path": "/opt/rke-tools/entrypoint.sh",
        "Args": [
            "kube-scheduler",
            "--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml",
            "--leader-elect=true",
            "--v=2",
            "--address=0.0.0.0",
            "--profiling=false"
        ],

@pan87232494
Author

Found a way to add it.

@jkw552403

Could you share how to change the kube-scheduler settings in RKE? Thank you!

@gl0wa

gl0wa commented Feb 19, 2020

@jkw552403 You can add extra_args under services.scheduler in the RKE cluster YAML config, e.g.:

```yaml
services:
  scheduler:
    extra_args:
      policy-config-file: "/etc/kubernetes/scheduler-policy-config.json"
```

This will add `--policy-config-file=/etc/kubernetes/scheduler-policy-config.json` to the kube-scheduler command.
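One caveat: the scheduler runs as a container, so the policy file must be visible inside it. If /etc/kubernetes is not already mounted into the kube-scheduler container in your RKE version, an extra_binds entry along these lines (the host path is an assumption; adjust to where you placed the file) should make it available:

```yaml
services:
  scheduler:
    extra_args:
      policy-config-file: "/etc/kubernetes/scheduler-policy-config.json"
    extra_binds:
      # bind-mount the host's policy file into the scheduler container
      - "/etc/kubernetes/scheduler-policy-config.json:/etc/kubernetes/scheduler-policy-config.json"
```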

@201508876PMH

201508876PMH commented Nov 5, 2021

@gl0wa Where would you find the RKE cluster YAML config file? :) I've tried following this thread, but I end up stuck. Can you point me in the right direction?

@631068264

Error: unknown flag: --policy-config-file

@imcom

imcom commented Aug 2, 2022

Error: unknown flag: --policy-config-file

This is because Kubernetes 1.23 introduced changes in the scheduler itself: the scheduler policy API was removed. Use a scheduler configuration file passed via --config instead. @631068264
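On Kubernetes 1.23+, the policy file translates into a KubeSchedulerConfiguration. A minimal sketch for an extender like gpushare is below; the urlPrefix and resource name are assumptions carried over from the policy-file example, and the exact API version (v1beta2/v1beta3/v1) depends on your Kubernetes release:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
extenders:
  - urlPrefix: "http://127.0.0.1:32766/gpushare-scheduler"
    filterVerb: filter
    bindVerb: bind
    enableHTTPS: false
    nodeCacheCapable: true
    managedResources:
      - name: aliyun.com/gpu-mem
        ignoredByScheduler: false
```

You would then point the scheduler at this file with `extra_args: {config: "<path inside the container>"}` and mount it via extra_binds, in the same way as the policy file above.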
