How to change kube-scheduler entrypoint and config? #1841
Comments
I can see the kube-scheduler config YAML file and the kube-scheduler container (and its entrypoint), but I don't know how RKE brings the kube-scheduler container up, so I have no idea how to customize the configuration for kube-scheduler. I have gpushare-schd-extender-886d94bf6-fl5mf running, but as you can see, when using RKE to bring up Kubernetes there is no scheduler config I can change for the scheduler container. So how can I include this JSON?

[root@k8s-demo-slave1 kubernetes]# pwd
/etc/kubernetes
[root@k8s-demo-slave1 kubernetes]# ls
scheduler-policy-config.json ssl
[root@k8s-demo-slave1 kubernetes]# cd ssl
[root@k8s-demo-slave1 ssl]# ls
kube-apiserver-key.pem kube-apiserver-requestheader-ca.pem kubecfg-kube-controller-manager.yaml kube-controller-manager-key.pem kube-etcd-192-168-2-229.pem kube-scheduler-key.pem
kube-apiserver.pem kube-ca-key.pem kubecfg-kube-node.yaml kube-controller-manager.pem kube-node-key.pem kube-scheduler.pem
kube-apiserver-proxy-client-key.pem kube-ca.pem kubecfg-kube-proxy.yaml kube-etcd-192-168-2-140-key.pem kube-node.pem kube-service-account-token-key.pem
kube-apiserver-proxy-client.pem kubecfg-kube-apiserver-proxy-client.yaml kubecfg-kube-scheduler.yaml kube-etcd-192-168-2-140.pem kube-proxy-key.pem kube-service-account-token.pem
kube-apiserver-requestheader-ca-key.pem kubecfg-kube-apiserver-requestheader-ca.yaml kubecfg-kube-scheduler.yaml.bak kube-etcd-192-168-2-229-key.pem kube-proxy.pem
[root@k8s-demo-slave1 ssl]# cat kubecfg-kube-scheduler.yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
    server: "https://127.0.0.1:6443"
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-scheduler-local"
  name: "local"
current-context: "local"
users:
- name: "kube-scheduler-local"
  user:
    client-certificate: /etc/kubernetes/ssl/kube-scheduler.pem
    client-key: /etc/kubernetes/ssl/kube-scheduler-key.pem
[root@k8s-demo-slave1 ssl]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx default-http-backend-5bcc9fd598-8ggs8 0/1 Evicted 0 17d
ingress-nginx default-http-backend-5bcc9fd598-ch87f 1/1 Running 0 17d
ingress-nginx default-http-backend-5bcc9fd598-jbw26 0/1 Evicted 0 21d
ingress-nginx nginx-ingress-controller-df7sh 1/1 Running 0 21d
ingress-nginx nginx-ingress-controller-mr89d 1/1 Running 0 17d
kube-system canal-2bflt 2/2 Running 0 16d
kube-system canal-h5sjc 2/2 Running 0 16d
kube-system coredns-799dffd9c4-vzvrw 1/1 Running 0 21d
kube-system coredns-autoscaler-84766fbb4-5xpk8 1/1 Running 0 21d
kube-system gpushare-schd-extender-886d94bf6-fl5mf 1/1 Running 0 28m
kube-system metrics-server-59c6fd6767-2ct2h 1/1 Running 0 17d
kube-system metrics-server-59c6fd6767-tphk6 0/1 Evicted 0 17d
kube-system metrics-server-59c6fd6767-vlbgp 0/1 Evicted 0 21d
kube-system rke-coredns-addon-deploy-job-8dbml 0/1 Completed 0 21d
kube-system rke-ingress-controller-deploy-job-7h6vd 0/1 Completed 0 21d
kube-system rke-metrics-addon-deploy-job-sbrlp 0/1 Completed 0 21d
kube-system rke-network-plugin-deploy-job-5r7d6 0/1 Completed 0 21d
[root@k8s-demo-slave1 _data]# docker ps | grep scheduler
588ff515bd00 rancher/hyperkube:v1.15.5-rancher1 "/opt/rke-tools/entr…" 3 weeks ago Up About an hour kube-scheduler
[root@k8s-demo-slave1 _data]# docker inspect 588ff515bd00
[
    {
        "Id": "588ff515bd00a5a2843f07d9ac981cb3ad701e9e16299d6942c9cec34ec7e76e",
        "Created": "2019-11-27T03:23:27.062736987Z",
        "Path": "/opt/rke-tools/entrypoint.sh",
        "Args": [
            "kube-scheduler",
            "--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml",
            "--leader-elect=true",
            "--v=2",
            "--address=0.0.0.0",
            "--profiling=false"
        ],
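For context, a scheduler Policy file for an extender like gpushare-scheduler-extender generally looks like the sketch below. The urlPrefix, port, and managed resource name are assumptions taken from the extender's documentation and have to match how the extender is actually deployed:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "http://127.0.0.1:32766/gpushare-scheduler",
      "filterVerb": "filter",
      "bindVerb": "bind",
      "enableHttps": false,
      "nodeCacheCapable": true,
      "managedResources": [
        { "name": "aliyun.com/gpu-mem", "ignoredByScheduler": false }
      ],
      "ignorable": false
    }
  ]
}
```

On its own the file does nothing: kube-scheduler only reads it when started with --policy-config-file, which is exactly the flag missing from the Args shown above.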
Trying to find a way to add it as well.
Could you share how to change it?
@jkw552403 You can add extra_args under services.scheduler in the RKE cluster YAML config, e.g.:

services:
  scheduler:
    extra_args:
      policy-config-file: "/etc/kubernetes/scheduler-policy-config.json"

This will add the flag to the kube-scheduler command line.
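One practical detail worth adding (an assumption to verify, not something from the thread): the policy file has to be readable inside the kube-scheduler container, not just on the host. If the path is not already bind-mounted into the container by your RKE version, an extra_binds entry next to the extra_args should make it visible; the paths below are illustrative:

```yaml
# cluster.yml fragment - a sketch, not a complete config
services:
  scheduler:
    extra_args:
      # rendered as --policy-config-file=... on the kube-scheduler command line
      policy-config-file: "/etc/kubernetes/scheduler-policy-config.json"
    # assumption: only needed if the path is not already mounted into the
    # kube-scheduler container (check with `docker inspect kube-scheduler`)
    extra_binds:
      - "/etc/kubernetes/scheduler-policy-config.json:/etc/kubernetes/scheduler-policy-config.json"
```

Running `rke up` with the updated cluster.yml should recreate the kube-scheduler container with the new flag in its Args.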
@gl0wa Where would you find the RKE cluster yaml config file? :) I've tried following the thread here, but i end up stuck. Can you point me in the right direction? |
Error: unknown flag: --policy-config-file
This is because Kubernetes 1.23 introduced changes in the scheduler itself: the Policy API behind --policy-config-file was removed. Use a KubeSchedulerConfiguration file passed via --config instead.
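For reference, a rough sketch of the 1.23+ equivalent, assuming the gpushare extender from earlier in the thread; the apiVersion depends on the Kubernetes version (v1beta3 around 1.23, v1 from 1.25), and the endpoint and resource name are assumptions:

```yaml
# scheduler-config.yaml - KubeSchedulerConfiguration sketch (replaces the old Policy JSON)
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/ssl/kubecfg-kube-scheduler.yaml
extenders:
  - urlPrefix: "http://127.0.0.1:32766/gpushare-scheduler"  # assumption: extender endpoint
    filterVerb: filter
    bindVerb: bind
    nodeCacheCapable: true
    managedResources:
      - name: aliyun.com/gpu-mem   # assumption: resource exposed by the extender
        ignoredByScheduler: false
```

The file is then passed with `config: "/path/to/scheduler-config.yaml"` under services.scheduler.extra_args (instead of policy-config-file), again making sure the path is mounted into the container.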
I just want to add one more command-line argument to kube-scheduler (policy-config-file). How can I do that? Does the RKE kube-scheduler support this?
RKE version:
rke version v0.3.2
Docker version: (docker version, docker info preferred)
18.09.2
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
RedHat 7.7
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
Bare-metal
cluster.yml file:
Steps to Reproduce:
Want to add gpushare-scheduler-extender to the existing kube-scheduler; how to include its policy JSON in the existing scheduler? See AliyunContainerService/gpushare-scheduler-extender.
Results:
Not successful.