CKS Exam

1. Requirements

Exam format: online

Duration: 2 hours

Certification validity: 2 years

Software version: Kubernetes v1.19

OS: Ubuntu 18.04

Eligibility: the exam must be taken within 12 months of purchase

Retake policy: 1 retake allowed

Experience level: intermediate

Registration: https://training.linuxfoundation.cn/certificates/16


2. Exam Topics

2.1 Cluster Setup: 10%

  • Use network security policies to restrict cluster-level access
  • Use CIS benchmarks to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi)
  • Properly set up Ingress objects with security controls
  • Protect node metadata and endpoints
  • Minimize use of, and access to, GUI elements
  • Verify platform binaries before deploying

2.2 Cluster Hardening: 15%

  • Restrict access to the Kubernetes API
  • Use role-based access control to minimize exposure
  • Exercise caution with service accounts, e.g. disable defaults, minimize permissions on newly created accounts
  • Update Kubernetes frequently

2.3 System Hardening: 15%

  • Minimize host OS footprint (reduce attack surface)
  • Minimize IAM roles
  • Minimize external access to the network
  • Appropriately use kernel hardening tools such as AppArmor and seccomp

2.4 Minimize Microservice Vulnerabilities: 20%

  • Set up appropriate OS-level security domains, e.g. using PSPs, OPA, security contexts
  • Manage Kubernetes secrets
  • Use container runtime sandboxes in multi-tenant environments (e.g. gVisor, Kata Containers)
  • Implement pod-to-pod encryption with mTLS

2.5 Supply Chain Security: 20%

  • Minimize base image footprint
  • Secure your supply chain: whitelist allowed registries, sign and validate images
  • Use static analysis of user workloads (e.g. Kubernetes resources, Dockerfiles)
  • Scan images for known vulnerabilities

2.6 Monitoring, Logging and Runtime Security: 20%

  • Perform behavioral analytics of syscall process and file activities at the host and container level to detect malicious activities
  • Detect threats within physical infrastructure, apps, networks, data, users and workloads
  • Detect all phases of attack regardless of where it occurs and how it spreads
  • Perform deep analytical investigation and identification of bad actors within the environment
  • Ensure immutability of containers at runtime
  • Use audit logs to monitor access


3. Skills to Master

3.1 Cluster Setup 10%

1. Use network security policies to restrict cluster-level access

2. Use CIS benchmarks to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi)

3. Configure security settings for Ingress

4. Protect node metadata

5. Minimize use of, and access to, the dashboard

6. Verify Kubernetes binaries before deployment

3.2 Cluster Hardening 15%

1. Restrict access to the Kubernetes API

2. Use RBAC to minimize resource exposure

3. Secure ServiceAccounts, e.g. disable defaults, minimize permissions on newly created SAs

4. Keep Kubernetes updated

3.3 System Hardening 15%

1. Secure server settings

2. Minimize IAM roles

3. Minimize external network access

4. Appropriately use kernel hardening tools, e.g. AppArmor, seccomp

3.4 Minimize Microservice Vulnerabilities 20%

1. Improve security with PSPs, OPA and security contexts

2. Manage Kubernetes secrets

3. Run containers in sandboxes in multi-tenant environments (e.g. gVisor, Kata Containers)

4. Implement pod-to-pod encryption with mTLS

3.5 Supply Chain Security 20%

1. Minimize image size

2. Secure the supply chain: whitelist allowed registries, sign and validate images

3. Analyze files and images for security risks (e.g. Kubernetes YAML files, Dockerfiles)

4. Scan images for known vulnerabilities

3.6 Monitoring, Auditing and Runtime Security 20%

1. Analyze container syscalls to detect malicious processes

2. Detect threats within physical infrastructure, apps, networks, data, users and workloads

3. Detect all phases of attack regardless of where it occurs and how it spreads

4. Kubernetes auditing

4. Exam Resources

Related reading:

5. Exam Commands

NetworkPolicy

k run frontend --image=nginx
k run backend --image=nginx
k expose pod frontend --port 80
k expose pod backend --port 80
k get pods,svc
k exec frontend -- curl backend   # works, no policies yet
k exec backend -- curl frontend   # works, no policies yet

vim default-deny.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  - Ingress
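
Apply the policy and confirm traffic is now blocked in both directions (both curls should now fail):

k apply -f default-deny.yaml
k exec frontend -- curl --max-time 5 backend
k exec backend -- curl --max-time 5 frontend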


vim frontend.yaml
# allows frontend pods to communicate with backend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: backend


vim backend.yaml
# allows backend pods to have incoming traffic from frontend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend
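
Note that the default-deny policy also blocks DNS, which is why the tests below curl pod IPs instead of service names. If name resolution is needed, a minimal sketch of a policy that re-allows DNS egress (port 53) for all pods in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP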


k exec frontend -- curl 192.168.104.27    # backend pod IP: works, allowed by the two policies
k exec backend -- curl 192.168.166.179    # frontend pod IP: fails, backend has no egress rule


kubectl create ns cassandra
kubectl edit ns cassandra
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-04-20T07:19:22Z"
  name: cassandra
  resourceVersion: "533198"
  uid: 766ae069-4dc9-4acd-a4db-ce852c293cc6
  labels:         # add this line
    ns: cassandra # add this line
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
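
The same label can also be added without an interactive edit:

k label ns cassandra ns=cassandra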


k -n cassandra run cassandra --image=nginx
k -n cassandra get pod -owide
k exec backend -- curl 192.168.104.26   # cassandra pod IP: fails, backend has no egress rule to cassandra yet
vim backend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
      - podSelector:
          matchLabels:
            run: frontend
  egress:
    - to:
      - namespaceSelector:
          matchLabels:
            ns: cassandra

k exec backend -- curl 192.168.104.26   # works, egress to namespace cassandra is now allowed

cat cassandra-deny.yaml
# deny all incoming and outgoing traffic from all pods in namespace cassandra
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cassandra-deny
  namespace: cassandra
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

k exec backend -- curl 192.168.104.26   # still works, the policy has not been created yet

k create -f cassandra-deny.yaml

k exec backend -- curl 192.168.104.26   # refused

vim  cassandra.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cassandra
  namespace: cassandra
spec:
  podSelector:
    matchLabels:
      run: cassandra
  policyTypes:
    - Ingress
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            ns: default


k edit ns default
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2021-01-19T03:27:58Z"
  labels:       # add this line
    ns: default # add this line
  name: default
  resourceVersion: "541475"
  uid: 2d566715-f0a4-49b3-b590-dfa7df30d0ba
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
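
Or, as a one-liner:

k label ns default ns=default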


k exec backend -- curl 192.168.104.26   # works, cassandra now accepts ingress from namespace default

Dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created

k -n kubernetes-dashboard get pod,svc

k -n kubernetes-dashboard edit deploy kubernetes-dashboard
.....
      containers:
      - args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        image: kubernetesui/dashboard:v2.1.0
        imagePullPolicy: Always
......
Change to:
    spec:
      containers:
      - args:
        - --namespace=kubernetes-dashboard
        - --insecure-port=9090
        image: kubernetesui/dashboard:v2.1.0


k -n kubernetes-dashboard get pod,svc

k -n kubernetes-dashboard edit svc kubernetes-dashboard
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-04-21T02:55:03Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "557996"
  uid: bd515d85-4dc6-4ac0-9890-ca2a711a7b26
spec:
  clusterIP: 10.99.150.161
  clusterIPs:
  - 10.99.150.161
  ports:
  - port: 9090             # was 443
    protocol: TCP
    targetPort: 9090       # was 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort           # was ClusterIP
status:
  loadBalancer: {}

k -n kubernetes-dashboard get svc

#RBAC for the Dashboard

k -n kubernetes-dashboard get sa
k get clusterroles  |grep view
k -n kubernetes-dashboard create rolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard --clusterrole view   # view access within the namespace only

k -n kubernetes-dashboard create clusterrolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard --clusterrole view   # view access cluster-wide
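
The RoleBinding grants view only inside kubernetes-dashboard, while the ClusterRoleBinding grants it cluster-wide; verify with kubectl auth can-i, e.g.:

k auth can-i list pods -A --as system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard
k auth can-i delete pods -A --as system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard   # view is read-only, expect no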

Secure Ingress

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/baremetal/deploy.yaml


k get pod,svc -n ingress-nginx

cat secure-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80


k create -f secure-ingress.yaml 
k get ing
k run pod1 --image=nginx
k run pod2 --image=httpd
k expose pod pod1 --port 80 --name service1
k expose pod pod2 --port 80 --name service2
curl  http://192.168.211.40:31459/service1
curl  http://192.168.211.40:31459/service2


curl  https://192.168.211.40:32300/service1 -kv
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
k create secret tls secure-ingress --cert=cert.pem --key=key.pem
k get secret

vim secure-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
      - secure-ingress.com
    secretName: secure-ingress
  rules:
  - host: secure-ingress.com  
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80

      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80

k apply -f secure-ingress.yaml
curl https://secure-ingress.com:32300/service2 -kv --resolve secure-ingress.com:32300:192.168.211.41
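
To check which certificate the Ingress actually serves (the subject should match what was entered at the openssl prompt), something like:

echo | openssl s_client -connect 192.168.211.41:32300 -servername secure-ingress.com 2>/dev/null | openssl x509 -noout -subject -issuer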

Node Metadata

curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/0/" -H "Metadata-Flavor: Google"
k run nginx --image=nginx
k get pods
k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"
cat deny.yaml
# all pods in namespace cannot access metadata endpoint
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32

k create -f deny.yaml
k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"         # hangs, egress to 169.254.169.254 is blocked

cat allow.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-allow
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32


k create -f allow.yaml 
k label pod nginx role=metadata-accessor
k get pods nginx --show-labels
k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"  # works again


k edit pod nginx
metadata:
  annotations:
    cni.projectcalico.org/podIP: 192.168.104.31/32
  creationTimestamp: "2021-04-22T03:17:45Z"
  labels:
    role: metadata-accessor   # delete this line
    run: nginx
  name: nginx
  namespace: default


k exec -ti nginx -- bash
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"  # hangs; with the label removed the allow policy no longer matches

CIS Benchmarks

kubectl get nodes
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.20

useradd etcd                    # remediate a failed check: the etcd data directory should be owned by etcd:etcd
chown etcd:etcd /var/lib/etcd
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest master --version 1.20   # re-run to confirm the fix

docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest node --version 1.20


./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml master
kube-bench --config-dir /data/software/kube-bench/cfg --config /data/software/kube-bench/cfg/config.yaml node
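
kube-bench can also run a single check when remediating one finding; a sketch, assuming the --check flag of your kube-bench version:

./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml master --check 1.2.16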

Verify Platform Binaries

sha512sum kubernetes-server-linux-arm64.tar.gz > compare   # hash of the downloaded release archive
sha512sum kubernetes/server/bin/kube-apiserver             # hash of the kube-apiserver binary from the release
k -n kube-system get pod | grep api
k -n kube-system get pod kube-apiserver-master -o yaml | grep image
docker cp 0fb5321dfd57:/ container-fs                      # copy the running apiserver container's filesystem
find container-fs/ | grep kube-apiserver
sha512sum container-fs/usr/local/bin/kube-apiserver        # compare with the hash above
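
If the last hash matches the one from the release archive, the image ships the official binary; a quick side-by-side:

sha512sum kubernetes/server/bin/kube-apiserver container-fs/usr/local/bin/kube-apiserver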

Restrict API Access

curl https://localhost:6443        # fails, the API server certificate is not trusted
curl https://localhost:6443 -k     # 403, the request is treated as anonymous
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    - kube-apiserver
    - --advertise-address=192.168.211.40
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=8080  # change 0 to 8080 (opens the unauthenticated HTTP port; demo only, never do this in production)
  .....

curl http://localhost:8080    # now answers without any authentication
curl https://192.168.211.40:6443 --cacert ca --cert  ca.crt --key ca.key

ServiceAccounts

k get sa,secrets
k describe sa default
k create sa accessor
k describe secret accessor-token-bnd4s
k run accessor --image=nginx --dry-run=client -oyaml > accessor.yaml
cat accessor.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: accessor
  name: accessor
spec:
  serviceAccountName: accessor  # add this line
  containers:
  - image: nginx
    name: accessor
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}


k create -f accessor.yaml
k exec -ti accessor -- bash
mount |grep sec
cd /run/secrets/kubernetes.io/serviceaccount
cat token 
curl https://kubernetes
curl https://kubernetes -k
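
The -k call above still only gets an anonymous 403. Passing the mounted token authenticates the request as the ServiceAccount; a sketch, run inside the pod:

TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes/api/v1/namespaces/default/pods -H "Authorization: Bearer $TOKEN"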


cat accessor.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: accessor
  name: accessor
spec:
  serviceAccountName: accessor
  automountServiceAccountToken: false   # add this line
  containers:
  - image: nginx
    name: accessor
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k replace -f accessor.yaml --force
k exec -ti accessor -- bash
mount |grep ser
k get pod
k auth can-i delete secrets --as system:serviceaccount:default:accessor   # no
k create clusterrolebinding accessor --clusterrole edit --serviceaccount default:accessor
k auth can-i delete secrets --as system:serviceaccount:default:accessor   # yes

RBAC

k create ns red
k create ns blue
k -n red create role secret-manager --verb=get --resource=secrets -oyaml --dry-run=client
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: secret-manager
  namespace: red
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
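
The command above only printed the YAML (--dry-run=client); create the Role for real before binding it:

k -n red create role secret-manager --verb=get --resource=secrets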


k -n red create rolebinding secret-manager --role=secret-manager --user=jane

k -n red auth can-i get secrets --as jane      # yes
k -n red auth can-i get secrets --as tom       # no
k -n red auth can-i delete secrets --as jane   # no
k -n red auth can-i list secrets --as jane     # no
k -n blue auth can-i list secrets --as jane    # no
k -n blue auth can-i get secrets --as jane     # no
k -n blue auth can-i get pods --as jane        # no

k create clusterrole deploy-deleter --verb delete --resource deployments

k create clusterrolebinding deploy-deleter --user jane --clusterrole deploy-deleter

k -n red create rolebinding deploy-deleter  --user jim --clusterrole deploy-deleter
k auth can-i delete deployments --as jane                # yes
k auth can-i delete deployments --as jane -n default     # yes
k auth can-i delete deployments --as jane -n red         # yes
k auth can-i delete pods --as jane -n red                # no
k auth can-i delete deployments --as jim -n default      # no
k auth can-i delete deployments --as jim -A              # no
k auth can-i delete deployments --as jim -n red          # yes

Upgrade Kubernetes

k drain master --ignore-daemonsets
k get nodes
apt-cache show kubeadm |grep 1.20
apt-get install kubeadm=1.20.2-00 kubectl=1.20.2-00 kubelet=1.20.2-00
kubeadm upgrade plan
kubeadm upgrade apply v1.20.6
k get nodes
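
After the control-plane upgrade succeeds, make the master schedulable again:

k uncordon master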

k drain node1 --ignore-daemonsets
kubeadm version
apt-cache show kubeadm | grep -e '1.20'
apt-get install kubeadm=1.20.2-00 kubectl=1.20.2-00 kubelet=1.20.2-00
kubeadm version
kubectl version
kubelet --version
k uncordon node1
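
On the worker, kubeadm's node upgrade step and a kubelet restart belong in the usual flow as well; a sketch, run on node1 after installing the new packages:

kubeadm upgrade node
systemctl daemon-reload
systemctl restart kubelet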

securityContext and PodSecurityPolicies

k run pod --image=busybox --command -oyaml --dry-run=client > pod.yaml -- sh -c 'sleep 1d'
cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod
  name: pod
spec:
  securityContext:    # add this block; kubectl run does not generate it
    runAsUser: 1000
    runAsGroup: 3000
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: busybox
    name: pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k exec -ti pod -- sh
/ $ id
uid=1000 gid=3000

/ $ touch test
touch: test: Permission denied
/ $ cd /tmp
/tmp $ touch test
/tmp $ ls -lh
total 0      
-rw-r--r--    1 1000     3000           0 May 15 15:00 test
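
The heading also mentions PodSecurityPolicies; a minimal sketch of a PSP that forbids privileged pods and root users (PSP is available in v1.19 but requires the PodSecurityPolicy admission plugin, and was removed in v1.25):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'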

Seccomp and AppArmor

$ cat /etc/apparmor.d/docker-nginx
$ apparmor_parser /etc/apparmor.d/docker-nginx 
$ aa-status 
$ docker run nginx                                          # implicitly uses the docker-default profile
$ docker run --security-opt apparmor=docker-default nginx   # same profile, set explicitly
$ docker run --security-opt apparmor=docker-nginx nginx     # custom profile; the entrypoint now fails:
/docker-entrypoint.sh: 13: /docker-entrypoint.sh: cannot create /dev/null: Permission denied
/docker-entrypoint.sh: No files found in /docker-entrypoint.d/, skipping configuration

$ docker run --security-opt apparmor=docker-nginx -d  nginx
$ docker exec -ti f608a4a126e2e2b145dcf094b41c29bea1f7b8beeb38871178e0ea0ae8eab061 bash
$ touch /root/test
touch: cannot touch '/root/test': Permission denied
$ sh
bash: /bin/sh: Permission denied
$ touch /test


$ apparmor_parser /etc/apparmor.d/docker-nginx   # load the profile on the node where the pod will run
$ aa-status 
$ k run secure --image=nginx -oyaml --dry-run=client > pod.yaml
$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:    # add this line
    container.apparmor.security.beta.kubernetes.io/secure: localhost/hello  # add this line
  labels:
    run: secure
  name: secure
spec:
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

$ k create  -f pod.yaml 
$ k get pods secure
NAME     READY   STATUS    RESTARTS   AGE
secure   0/1     Blocked   0          6s
$ k describe pod secure
Annotations:  container.apparmor.security.beta.kubernetes.io/secure: localhost/hello
Status:       Pending
Reason:       AppArmor
Message:      Cannot enforce AppArmor: profile "hello" is not loaded



$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations: 
    container.apparmor.security.beta.kubernetes.io/secure: localhost/docker-nginx  # change this line
  labels:
    run: secure
  name: secure
spec:
  containers:
  - image: nginx
    name: secure
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}


$ k create -f pod.yaml 
$ k get pod secure
NAME     READY   STATUS    RESTARTS   AGE
secure   1/1     Running   0          10s
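
The heading also names seccomp; since v1.19 a pod can opt into the container runtime's default seccomp profile via securityContext. A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - image: nginx
    name: seccomp-pod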