Unable to launch consul client with podsecuritypolicy. #1037

Closed
kevinlmadison opened this issue Feb 17, 2022 · 9 comments · Fixed by #1090
Labels
type/bug Something isn't working

Comments

@kevinlmadison

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

We are running in a secure environment with RKE2 and therefore need to set permissions manually with PodSecurityPolicies. Enabling pod security policies through the Helm chart allows host ports [8500,8502,8301] but not 8600, so the daemonset fails to deploy. In the client PSP I don't see any option to allow port 8600; does this need to be added to the template?

Logs

  Warning  FailedCreate  1s (x10 over 22s)  daemonset-controller  Error creating: pods "consul-client-" is forbidden: PodSecurityPolicy: unable to admit pod: [
    spec.containers[0].hostPort: Invalid value: 8600: Host port 8600 is not allowed to be used. Allowed ports: [8500,8502,8301]
    spec.containers[0].hostPort: Invalid value: 8600: Host port 8600 is not allowed to be used. Allowed ports: [8500,8502,8301]
    spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used
    spec.containers[0].hostPort: Invalid value: 8500: Host port 8500 is not allowed to be used. Allowed ports: []
    spec.containers[0].hostPort: Invalid value: 8502: Host port 8502 is not allowed to be used. Allowed ports: []
    spec.containers[0].hostPort: Invalid value: 8301: Host port 8301 is not allowed to be used. Allowed ports: []
    spec.containers[0].hostPort: Invalid value: 8301: Host port 8301 is not allowed to be used. Allowed ports: []
    spec.containers[0].hostPort: Invalid value: 8600: Host port 8600 is not allowed to be used. Allowed ports: []
    spec.containers[0].hostPort: Invalid value: 8600: Host port 8600 is not allowed to be used. Allowed ports: []
  ]

This goes away if I manually add port 8600 to the PSP.
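
For anyone hitting the same admission error before a chart fix lands, this is roughly what that manual workaround looks like. It is only an excerpt of the PodSecurityPolicy spec; the policy name and every field other than the extra hostPorts entry are assumptions, not copied from the chart:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: consul-client   # assumed name; use whatever PSP the chart created for the client
spec:
  # ...existing rules unchanged...
  hostNetwork: true
  hostPorts:
    - min: 8500
      max: 8500
    - min: 8502
      max: 8502
    - min: 8301
      max: 8301
    # added manually so the client DaemonSet pods are admitted
    - min: 8600
      max: 8600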

Expected behavior

Environment details

If not already included, please provide the following:

  • consul-k8s version: 1.11.2
  • values.yaml:
global:
  name: consul
  enablePodSecurityPolicies: true
#  image: "consul:1.9.4"
#  domain: consul
#  datacenter: primarydc
  # Bootstrap ACLs within Consul. This is highly recommended.
# Configure your Consul servers in this section.
server:
  # Specify three servers that wait until all are healthy to bootstrap the Consul cluster.
  storageClass: "rook-ceph-block"
  replicas: 3
  bootstrapExpect: 3
  # Specify the resources that servers request for placement. These values will serve a large environment.
  resources: |
    requests:
      memory: "5Gi"
      #cpu: "4"
    limits:
      memory: "5Gi"
      #cpu: "4"
  # Prevent Consul servers from co-location on Kubernetes nodes.
  #affinity: |
  #  podAntiAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     - labelSelector:
  #         matchLabels:
  #           app: {{ template "consul.name" . }}
  #           release: "{{ .Release.Name }}"
  #           component: server
  #     topologyKey: kubernetes.io/hostname
# Configure Consul clients in this section
client:
  exposeGossipPorts: true
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  # Specify the resources that clients request for deployment.
  resources: |
    requests:
      memory: "5Gi"
      #cpu: "2"
    limits:
      memory: "5Gi"
      #cpu: "2"
# Enable and configure the Consul UI.
ui:
  enabled: true

Additionally, please provide details regarding the Kubernetes Infrastructure, as shown below:

  • Kubernetes version: v1.22.x
  • Cloud Provider: AWS
  • Networking CNI plugin in use: Calico
@kevinlmadison kevinlmadison added the type/bug Something isn't working label Feb 17, 2022
@ishustava
Contributor

Hey @kevinlmadison

I'm a bit confused. We don't expose port 8600 as hostPort:

- containerPort: 8600
  name: dns-tcp
  protocol: "TCP"
- containerPort: 8600
  name: dns-udp
  protocol: "UDP"

I don't know how this error would be possible. We also have end-to-end tests for our Helm chart with pod security policies and have not seen any failures.

@ishustava ishustava added the waiting-reply Waiting on the issue creator for a response before taking further action label Feb 17, 2022
@kevinlmadison
Author

That is odd; I'm definitely seeing a hostPort set when I describe the daemonset.

╰$ k describe ds -n admin consul-client
Name:           consul-client
Selector:       app=consul,chart=consul-helm,component=client,hasDNS=true,release=consul
Node-Selector:  <none>
Labels:         app=consul
                app.kubernetes.io/managed-by=Helm
                chart=consul-helm
                component=client
                heritage=Helm
                release=consul
Annotations:    deprecated.daemonset.template.generation: 1
                meta.helm.sh/release-name: consul
                meta.helm.sh/release-namespace: admin
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=consul
                    chart=consul-helm
                    component=client
                    hasDNS=true
                    release=consul
  Annotations:      consul.hashicorp.com/config-checksum: 5d0b0afa8bab03edcdb632ebc13a122af5883c4abea4e4595b638096cec0e9e3
                    consul.hashicorp.com/connect-inject: false
  Service Account:  consul-client
  Containers:
   consul:
    Image:       hashicorp/consul:1.11.2
    Ports:       8500/TCP, 8502/TCP, 8301/TCP, 8301/UDP, 8600/TCP, 8600/UDP
    Host Ports:  8500/TCP, 8502/TCP, 8301/TCP, 8301/UDP, 8600/TCP, 8600/UDP

Is there any info from my cluster that might be helpful? I'm deploying consul using this command and that exact values.yaml file I pasted above.
helm upgrade --install consul hashicorp/consul -f ../../manifests/consul/config.yaml --namespace=admin

@kevinlmadison
Author

k get ds -n admin consul-client -o yaml
gives me the following.

        ports:
        - containerPort: 8500
          hostPort: 8500
          name: http
          protocol: TCP
        - containerPort: 8502
          hostPort: 8502
          name: grpc
          protocol: TCP
        - containerPort: 8301
          hostPort: 8301
          name: serflan-tcp
          protocol: TCP
        - containerPort: 8301
          hostPort: 8301
          name: serflan-udp
          protocol: UDP
        - containerPort: 8600
          hostPort: 8600
          name: dns-tcp
          protocol: TCP
        - containerPort: 8600
          hostPort: 8600
          name: dns-udp
          protocol: UDP

@ishustava
Contributor

That is very strange.

Could you try to run helm template with your config and check if the rendered consul client daemonset YAML also has host ports:

helm template consul hashicorp/consul -f ../../manifests/consul/config.yaml --namespace=admin
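
One quick way to compare (assuming the same values file is used for both commands) is to grep the DNS port out of the rendered output and out of the live object:

helm template consul hashicorp/consul -f ../../manifests/consul/config.yaml --namespace=admin | grep -A 2 'containerPort: 8600'
kubectl get ds -n admin consul-client -o yaml | grep -A 2 'containerPort: 8600'

If hostPort: 8600 only shows up in the second command, the ports are being added after the chart renders.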

@kevinlmadison
Author

It does not....

          ports:
            - containerPort: 8500
              hostPort: 8500
              name: http
            - containerPort: 8502
              hostPort: 8502
              name: grpc
            - containerPort: 8301
              hostPort: 8301
              protocol: "TCP"
              name: serflan-tcp
            - containerPort: 8301
              hostPort: 8301
              protocol: "UDP"
              name: serflan-udp
            - containerPort: 8600
              name: dns-tcp
              protocol: "TCP"
            - containerPort: 8600
              name: dns-udp
              protocol: "UDP"

@ishustava
Contributor

🤔 Could it be something in your cluster that adds those? I've never seen anything like this before!

@kevinlmadison
Author

Yeah, I'm not sure. I'm going to investigate and get back to you, but at least I know it's not the pod security policy, haha.

@luanaBanana

Hi there, did you manage to find the issue? I'm experiencing the same thing.

consul-k8s version: 1.11.3

global:
  name: consul
  acls:
    manageSystemACLs: false
    createReplicationToken: false
  gossipEncryption:
    autoGenerate: true
  tls:
    enabled: true
    verify: true
    enableAutoEncrypt: true
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled:  false
server:
  storageClass: nfs
  affinity: null
  extraConfig: |
    {
      "log_level": "DEBUG"
    }
meshGateway:
  enabled: true
  service:
    type: NodePort
    nodePort: **
tests:
  enabled: false
client:
  enabled: true
  grpc: true
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  extraConfig: |
    {
      "log_level": "DEBUG"
    }
ui:
  enabled: true
  #grpc: true
  service:
    type: ClusterIP

Daemonset Yaml:

            - containerPort: 8501
              hostPort: 8501
              name: https
            - containerPort: 8502
              hostPort: 8502
              name: grpc
            - containerPort: 8301
              protocol: "TCP"
              name: serflan-tcp
            - containerPort: 8301
              protocol: "UDP"
              name: serflan-udp
            - containerPort: 8600
              name: dns-tcp
              protocol: "TCP"
            - containerPort: 8600
              name: dns-udp
              protocol: "UDP"

Applied daemonset:

0$> kubectl get ds consul-client -o yaml
apiVersion: apps/v1
kind: DaemonSet
        .......
        ports:
        - containerPort: 8501
          hostPort: 8501
          name: https
          protocol: TCP
        - containerPort: 8502
          hostPort: 8502
          name: grpc
          protocol: TCP
        - containerPort: 8301
          hostPort: 8301
          name: serflan-tcp
          protocol: TCP
        - containerPort: 8301
          hostPort: 8301
          name: serflan-udp
          protocol: UDP
        - containerPort: 8600
          hostPort: 8600
          name: dns-tcp
          protocol: TCP
        - containerPort: 8600
          hostPort: 8600
          name: dns-udp
          protocol: UDP

Kubernetes version: v1.23.4
Cloud Provider: - (self hosted/managed)
Networking CNI plugin in use: Cilium

@lkysow
Member

lkysow commented Mar 9, 2022

I think when hostNetwork: true is set, Kubernetes sets the container ports as hostPorts (probably to reserve them). This looks like a bug we should fix by adding a conditional to our PSPs.
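
For context, this matches Kubernetes defaulting behavior: when a pod has hostNetwork: true, the API server fills in hostPort with the containerPort value for any port that leaves it unset, which is why the applied DaemonSet shows hostPort: 8600 even though the rendered template does not. A minimal sketch of the kind of conditional described here, assuming it would sit next to the existing host port allowances in the chart's client PSP template (the surrounding template structure is an assumption):

{{- if .Values.client.hostNetwork }}
# With hostNetwork enabled, the DNS container ports are defaulted to hostPorts,
# so the PSP also has to allow 8600.
- min: 8600
  max: 8600
{{- end }}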

@lkysow lkysow removed the waiting-reply Waiting on the issue creator for a response before taking further action label Mar 9, 2022