
Headless Service for StatefulSet hangs on create/update "Finding Pods to direct traffic to" #248

Closed
protometa opened this Issue Oct 23, 2018 · 26 comments

protometa commented Oct 23, 2018

A headless Service (required for StatefulSets) with appropriate label selectors and clusterIP set to "None" hangs waiting for Pods. The workaround is to abort the create and do a refresh; subsequent updates then succeed.

@hausdorff hausdorff self-assigned this Oct 23, 2018

@hausdorff hausdorff added this to the 0.19 milestone Oct 23, 2018

hausdorff added a commit that referenced this issue Oct 23, 2018

Add support for headless `Service`
The await logic for `Service` hangs forever if the user supplies a
headless `Service` that has an empty selector (i.e., targets no `Pod`s).
This commit re-uses the logic we implemented for `Service`s of type
`ExternalName` (which also target 0 `Pod`s) to fix this case.

Background: Kubernetes exposes a notion of a "headless" `Service`, which
is a `Service` that's not associated with any IP (cluster-internal or
external). If a `.spec.selector` is not supplied, the `Service` targets
0 `Pod`s; if it is, it should target 1 or more. This commit essentially
refines the await logic to expect to target 0 `Pod`s if the service is
of type `ExternalName` OR if the `Service` is a headless service with an
empty selector.

Fixes #248.
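The rule this commit message describes can be sketched as a small predicate. This is an illustrative TypeScript sketch only, not the provider's actual awaiter code; the interface and function names here are hypothetical:

```typescript
interface ServiceSpec {
    type?: string;
    clusterIP?: string;
    selector?: Record<string, string>;
}

// Returns true when the awaiter should expect the Service to target 0 Pods
// (and therefore should not wait for Endpoints): either the Service is of
// type ExternalName, or it is headless (clusterIP: None) with an empty
// selector.
function expectsZeroPods(spec: ServiceSpec): boolean {
    const isExternalName = spec.type === "ExternalName";
    const isHeadless = spec.clusterIP === "None";
    const emptySelector =
        spec.selector === undefined || Object.keys(spec.selector).length === 0;
    return isExternalName || (isHeadless && emptySelector);
}
```

Note that a headless Service *with* a selector still falls through to the normal wait-for-Endpoints path under this rule, which is exactly the case the later comments in this thread revisit.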

hausdorff added a commit that referenced this issue Oct 25, 2018

Add support for headless `Service`

hausdorff added a commit that referenced this issue Oct 25, 2018

Add support for headless `Service`
protometa commented Dec 19, 2018

I'm still having this issue with the latest Pulumi. The StatefulSet's headless service still has a pod selector; I don't think this case was addressed in the merge.

@lukehoban lukehoban reopened this Dec 19, 2018

pbzdyl commented Jan 8, 2019

I'm also having this issue. I think for a StatefulSet's headless Service, Pulumi shouldn't wait for any Pods, even if the Service spec includes a selector.

@joeduffy joeduffy modified the milestones: 0.19, 0.20 Jan 8, 2019

@joeduffy joeduffy added the priority/P1 label Jan 8, 2019

@hausdorff hausdorff assigned lblackstone and unassigned hausdorff Jan 8, 2019

hausdorff commented Jan 8, 2019

This seems to have been closed erroneously... I must have searched for issues with "headless" in them and selected this instead of a headless service issue. We'll fix it soon.

lblackstone commented Jan 8, 2019

If I understand the problem correctly, I believe this issue is fixed in master already by #307

Prior to that change, the await logic didn't handle StatefulSet, so Pulumi would show it as ready as soon as it was created.

I took another look at the Service awaiter code, and it seems to be valid according to the current semantics; as you noted, the Service will not show ready until all selected Pods are ready, even for a headless Service.

We'll cut a new release soon, and if that doesn't fix your problem, I'll need some more detail on expected vs actual results you're seeing.

hausdorff commented Jan 9, 2019

@lblackstone sounds like we should close the issue and let @protometa and @pbzdyl re-open if it persists?

pbzdyl commented Jan 14, 2019

@hausdorff I would like to test, but has the fix been released? I updated all npm packages and the issue is still there.

hausdorff commented Jan 14, 2019

@pbzdyl we're releasing today. I had hoped to have you try a dev release first, but either way works.

pbzdyl commented Jan 14, 2019

@hausdorff I can try a dev release but I would need some guidance how to do it :)

hausdorff commented Jan 14, 2019

@pbzdyl you can just put "@pulumi/kubernetes": "dev" as the version in package.json

pbzdyl commented Jan 15, 2019

@hausdorff I tested and it seems it still doesn't work. I changed the version to dev as you suggested and removed node_modules and package-lock.json:


> grpc@1.17.0 install <redacted>/node_modules/grpc
> node-pre-gyp install --fallback-to-build --library=static_library

node-pre-gyp WARN Using needle for node-pre-gyp https download 
[grpc] Success: "<redacted>/node_modules/grpc/src/node/extension_binary/node-v67-darwin-x64-unknown/grpc_node.node" is installed via remote

> @pulumi/gcp@0.16.4 install <redacted>/node_modules/@pulumi/gcp
> node scripts/install-pulumi-plugin.js resource gcp v0.16.4

[resource plugin gcp-0.16.4] installing

> @pulumi/kubernetes@0.18.1-dev.1546907245 install <redacted>/node_modules/@pulumi/kubernetes
> node scripts/install-pulumi-plugin.js resource kubernetes v0.18.1-dev.1546907245+g69d6a81

[resource plugin kubernetes-0.18.1-dev.1546907245+g69d6a81] installing
Downloading plugin:  16.12 MiB / 16.12 MiB [=======================] 100.00% 11s

> protobufjs@6.8.8 postinstall <redacted>/node_modules/protobufjs
> node scripts/postinstall

npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN services@ No description
npm WARN services@ No repository field.
npm WARN services@ No license field.

added 221 packages from 623 contributors and audited 897 packages in 23.819s
found 0 vulnerabilities

$ pulumi up
Previewing update (<redacted>):

     Type                              Name                                   Plan       
     pulumi:pulumi:Stack               <redacted>             
 >-  ├─ pulumi:pulumi:StackReference   <redacted>                      read       
     └─ <redacted ComponentResource>                  <redacted>                                        
        └─ <redacted ComponentResource>              <redacted>                                        
 +         └─ kubernetes:core:Service  <redacted>                    create     
 
Resources:
    + 1 to create
    14 unchanged

Do you want to perform this update? yes
Updating (<redacted>):

     Type                              Name                                   Status       Info
     pulumi:pulumi:Stack               <redacted>               
 >-  ├─ pulumi:pulumi:StackReference   <redacted>                      read         
     └─ <redacted ComponentResource>                  <redacted>                                          
        └─ <redacted ComponentResource>              <redacted>                                          
 +         └─ kubernetes:core:Service  <redacted>                    **creating failed**     1 error

Diagnostics:
  kubernetes:core:Service (<redacted>):
    error: Plan apply failed: 2 errors occurred:
    
    * Timeout occurred for '<redacted>'
    * Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods

Headless service:

Name:              <redacted>
Namespace:         default
Labels:            app=example
                   deployment=example
Annotations:       <none>
Selector:          app=example,deployment=example
Type:              ClusterIP
IP:                None
Session Affinity:  None
Events:            <none>

Pods:

NAME           READY   STATUS    RESTARTS   AGE
<redacted>-0   1/1     Running   0          22h

I can provide more information if needed.

lblackstone commented Jan 15, 2019

@pbzdyl Sorry for the delay. I just released v0.19.0

Can you give it one more shot with that release, and then I'll follow up if you're still having problems?

pbzdyl commented Jan 16, 2019

@lblackstone I have just tested v0.19.0 and I got the same issue and output as in my previous comment.

pbzdyl commented Jan 16, 2019

Let me also add .status info for the impacted StatefulSet and its pod:

StatefulSet.status:

status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: <redacted>-f66f69fc
  observedGeneration: 4
  readyReplicas: 1
  replicas: 1
  updateRevision: <redacted>-f66f69fc
  updatedReplicas: 1

Pod.status:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-14T10:49:39Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-14T10:49:58Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2019-01-14T10:49:39Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://4a4d134aa8b92e4bbdaa9c7325c5df01690d523416afae646529457907a39d24
    image: <redacted>
    imageID: <redacted>
    lastState: {}
    name: <redacted>
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-01-14T10:49:58Z
  hostIP: <redacted>
  phase: Running
  podIP: <redacted>
  qosClass: Burstable
  startTime: 2019-01-14T10:49:39Z
lblackstone commented Jan 16, 2019

@pbzdyl What labels do you have set for the StatefulSet/Pod? Your Service is selecting for app=example,deployment=example, so I would expect that to match.

All of the following should be identical:

  1. Service .spec.selector.<label>
  2. StatefulSet .spec.selector.matchLabels.<label>
  3. StatefulSet .spec.template.metadata.labels.<label>

Reference: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components
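The identity requirement in the checklist above can be expressed as a tiny helper. This is a hypothetical sketch for illustration only, not part of Pulumi or Kubernetes:

```typescript
type Labels = Record<string, string>;

// True when the Service selector, the StatefulSet matchLabels, and the Pod
// template labels are all identical, per the checklist above.
function labelsAligned(
    serviceSelector: Labels,
    matchLabels: Labels,
    templateLabels: Labels
): boolean {
    const sets = [serviceSelector, matchLabels, templateLabels];
    const allKeys = new Set(sets.flatMap(s => Object.keys(s)));
    return Array.from(allKeys).every(
        k => serviceSelector[k] === matchLabels[k] && matchLabels[k] === templateLabels[k]
    );
}
```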

pbzdyl commented Jan 16, 2019

@lblackstone here it is:

StatefulSet .spec.selector.matchLabels = { app: example, deployment: example }
StatefulSet .spec.template.metadata.labels = { app: example, deployment: example }
Service .spec.selector = { app: example, deployment: example }

Also:

$ kubectl get pods --selector app=example,deployment=example
NAME           READY   STATUS    RESTARTS   AGE
<redacted>-0   1/1     Running   0          2d

which shows that all the metadata and selectors are configured correctly.

lblackstone commented Jan 16, 2019

@pbzdyl Alright, just wanted to confirm. I'll take another look at the provider code today.

pbzdyl commented Jan 16, 2019

I will be happy to provide any other information you might need to troubleshoot this issue.

lblackstone commented Jan 17, 2019

@pbzdyl I haven't been able to reproduce this behavior. I'm able to create a StatefulSet + headless Service on GKE like this:

Pulumi snippet

const ssLabels = {app: `ss-test`, deployment: `example`};
const ssName = "ss-service";
const ssService = new k8s.core.v1.Service("statefulservice", {
    metadata: {name: ssName},
    spec: {
        ports: [{port: 80, name: "web"}],
        clusterIP: "None",
        selector: ssLabels,
    }
}, {provider: k8sProvider});

const ss = new k8s.apps.v1.StatefulSet("statefulset", {
    metadata: {labels: ssLabels, name: "foo"},
    spec: {
        selector: {matchLabels: ssLabels},
        serviceName: ssName,
        replicas: 3,
        template: {
            metadata: {labels: ssLabels},
            spec: {
                terminationGracePeriodSeconds: 10,
                containers: [
                    {
                        name: "nginx",
                        image: "nginx:stable",
                        ports: [
                            {
                                containerPort: 80,
                                name: "web"
                            }
                        ],
                        volumeMounts: [
                            {
                                name: "www",
                                mountPath: "/usr/share/nginx/html"
                            }
                        ]
                    }
                ],
            },
        },
        volumeClaimTemplates: [
            {
                metadata: {name: "www"},
                spec: {
                    accessModes: ["ReadWriteOnce"],
                    resources: {requests: {storage: "1Gi"}}
                }
            }
        ]
    },
}, {provider: k8sProvider});

StatefulSet info

{
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {
        "creationTimestamp": "2019-01-17T20:23:37Z",
        "generation": 1,
        "labels": {
            "app": "ss-test",
            "deployment": "example"
        },
        "name": "foo",
        "namespace": "default",
        "resourceVersion": "20781",
        "selfLink": "/apis/apps/v1/namespaces/default/statefulsets/foo",
        "uid": "c595cfde-1a95-11e9-9fc4-42010a8a0120"
    },
    "spec": {
        "podManagementPolicy": "OrderedReady",
        "replicas": 3,
        "revisionHistoryLimit": 10,
        "selector": {
            "matchLabels": {
                "app": "ss-test",
                "deployment": "example"
            }
        },
        "serviceName": "ss-service",
        "template": {
            "metadata": {
                "creationTimestamp": null,
                "labels": {
                    "app": "ss-test",
                    "deployment": "example"
                }
            },
            "spec": {
                "containers": [
                    {
                        "image": "nginx:stable",
                        "imagePullPolicy": "IfNotPresent",
                        "name": "nginx",
                        "ports": [
                            {
                                "containerPort": 80,
                                "name": "web",
                                "protocol": "TCP"
                            }
                        ],
                        "resources": {},
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "volumeMounts": [
                            {
                                "mountPath": "/usr/share/nginx/html",
                                "name": "www"
                            }
                        ]
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "terminationGracePeriodSeconds": 10
            }
        },
        "updateStrategy": {
            "rollingUpdate": {
                "partition": 0
            },
            "type": "RollingUpdate"
        },
        "volumeClaimTemplates": [
            {
                "metadata": {
                    "creationTimestamp": null,
                    "name": "www"
                },
                "spec": {
                    "accessModes": [
                        "ReadWriteOnce"
                    ],
                    "resources": {
                        "requests": {
                            "storage": "1Gi"
                        }
                    }
                },
                "status": {
                    "phase": "Pending"
                }
            }
        ]
    },
    "status": {
        "collisionCount": 0,
        "currentReplicas": 3,
        "currentRevision": "foo-778b5999d8",
        "observedGeneration": 1,
        "readyReplicas": 3,
        "replicas": 3,
        "updateRevision": "foo-778b5999d8",
        "updatedReplicas": 3
    }
}

Service info

Name:              ss-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=ss-test,deployment=example
Type:              ClusterIP
IP:                None
Port:              web  80/TCP
TargetPort:        80/TCP
Endpoints:         10.40.1.10:80,10.40.1.9:80,10.40.2.8:80
Session Affinity:  None
Events:            <none>

The main difference I noticed from the output you provided is that your Service doesn't include any Endpoints; I'm not sure if you omitted that, or if they are missing. The await logic is based on the Endpoints being present, so this may be a configuration problem in your program.

Happy to help troubleshoot if you can provide the relevant code snippet, either here or DM on our community slack channel.

pbzdyl commented Jan 18, 2019

Thank you @lblackstone. That's it: I have a headless Service without any ports, which is a valid use case for StatefulSets: such a Service doesn't publish any ports, but it manages a DNS subdomain for the DNS names of the StatefulSet's Pods. In this scenario no Endpoints are created for the Service, and ...svc.cluster.local DNS entries are created.

Thus I think the awaiter logic must be different for headless Services without any ports.
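The suggested adjustment could look roughly like the following. This is a hedged TypeScript sketch of the proposed rule, not the shipped awaiter; the names are made up for illustration:

```typescript
interface ServiceSpecWithPorts {
    type?: string;
    clusterIP?: string;
    selector?: Record<string, string>;
    ports?: { port: number }[];
}

// Proposed rule: skip the Endpoints wait for ExternalName Services, for
// headless Services with no selector, and additionally for headless
// Services that expose no ports, since those create no Endpoints.
function shouldWaitForEndpoints(spec: ServiceSpecWithPorts): boolean {
    if (spec.type === "ExternalName") return false;
    const headless = spec.clusterIP === "None";
    const noSelector =
        spec.selector === undefined || Object.keys(spec.selector).length === 0;
    const noPorts = spec.ports === undefined || spec.ports.length === 0;
    return !(headless && (noSelector || noPorts));
}
```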

lblackstone commented Jan 18, 2019

@pbzdyl Can you also verify that your StatefulSet's .spec.serviceName matches the Service's .metadata.name?

I still don't think we're on the same page. A headless Service will not have an external IP, but my understanding is that it should still have Endpoint(s) that point to the internal Pod IPs from your StatefulSet.

It would be really helpful to see the pulumi snippets that you're using to create the Service and StatefulSet (with values redacted is fine).

pbzdyl commented Jan 19, 2019

Hi @lblackstone. Let me include all the info I have:

statefulApp.ts:

import * as pulumi from '@pulumi/pulumi'
import * as k8s from '@pulumi/kubernetes'

export interface StatefulAppArgs {
    readonly namespace: pulumi.Input<string>
    readonly name: pulumi.Input<string>
    readonly dataVolumeSize: pulumi.Input<string>
    readonly logsVolumeSize: pulumi.Input<string>
}

export class StatefulApp extends pulumi.ComponentResource {
    readonly selector: pulumi.Input<{ [key: string]: pulumi.Input<string> }>

    constructor(resourceName: string, args: StatefulAppArgs, opts: any) {
        super('StatefulApp', resourceName, {}, opts)
        this.selector = {
            app: args.name
        }
        const headlessServiceName = `${args.name}-statefulset`
        const metadata = {
            namespace: args.namespace,
            labels: this.selector
        }

        new k8s.apps.v1.StatefulSet(resourceName, {
            metadata: {
                ...metadata,
                name: args.name
            },
            spec: {
                serviceName: headlessServiceName,
                selector: { matchLabels: metadata.labels },
                template: {
                    metadata: {
                        ...metadata,
                        name: args.name
                    },
                    spec: {
                        securityContext: { fsGroup: 100 },
                        containers: [{
                            name: args.name,
                            image: 'busybox',
                            command: ['sleep', '3600000'],
                            volumeMounts: [
                                { name: 'data', mountPath: '/app/data' },
                                { name: 'logs', mountPath: '/ap/logs' }
                            ]
                        }]
                    }
                },
                volumeClaimTemplates: [
                    {
                        metadata: { name: 'data' },
                        spec: {
                            accessModes: ['ReadWriteOnce'],
                            storageClassName: 'standard',
                            resources: { requests: { storage: args.dataVolumeSize } }
                        }
                    },
                    {
                        metadata: { name: 'logs' },
                        spec: {
                            accessModes: ['ReadWriteOnce'],
                            storageClassName: 'standard',
                            resources: { requests: { storage: args.logsVolumeSize } }
                        }
                    }
                ]
            }
        }, {
                parent: this,
                provider: opts.providers.kubernetes
            })

        new k8s.core.v1.Service(`${resourceName}-headless`, {
            apiVersion: 'v1',
            kind: 'Service',
            metadata: {
                ...metadata,
                name: headlessServiceName
            },
            spec: {
                clusterIP: 'None',
                selector: metadata.labels
            }
        }, {
                parent: this,
                provider: opts.providers.kubernetes
            })
    }
}

Used in index.ts like that:

import { StatefulApp } from './statefulApp'
import { config } from './config'

new StatefulApp('example', {
    namespace: 'default',
    name: 'example',
    dataVolumeSize: '1Gi',
    logsVolumeSize: '1Gi',
}, {
    providers: {
        kubernetes: config.k8sProvider
    }
})
$ pulumi up
Previewing update (<redacted>):

     Type                               Name                                   Plan       
     pulumi:pulumi:Stack                <redacted>                                        
 +   ├─ StatefulApp                     example                                create     
 +   │  ├─ kubernetes:core:Service      example-headless                       create     
 +   │  └─ kubernetes:apps:StatefulSet  example                                create     
 >-  └─ pulumi:pulumi:StackReference    <redacted>                             read       
 
Resources:
    + 3 to create
    15 unchanged

Do you want to perform this update? yes
Updating (<redacted>):

     Type                               Name                                   Status                  Info
     pulumi:pulumi:Stack                <redacted>                                                     
 +   ├─ StatefulApp                     example                                created                 
 +   │  ├─ kubernetes:core:Service      example-headless                       **creating failed**     1 error
 +   │  └─ kubernetes:apps:StatefulSet  example                                created                 
 >-  └─ pulumi:pulumi:StackReference    <redacted>                             read                    
 
Diagnostics:
  kubernetes:core:Service (example-headless):
    error: Plan apply failed: 2 errors occurred:
    
    * Timeout occurred for 'example-statefulset'
    * Service does not target any Pods. Selected Pods may not be ready, or field '.spec.selector' may not match labels on any Pods
 
Resources:
    + 2 created
    15 unchanged

Duration: 11m3s

Permalink: https://app.pulumi.com/<redacted>/<redacted>/updates/44
error: update failed
$ kubectl get statefulset -o yaml example
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: 2019-01-19T09:41:03Z
  generation: 1
  labels:
    app: example
  name: example
  namespace: default
  resourceVersion: "408683"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/example
  uid: 5640bbc7-1bce-11e9-99cd-4201ac100387
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: example
  serviceName: example-statefulset
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: example
      name: example
      namespace: default
    spec:
      containers:
      - command:
        - sleep
        - "3600000"
        image: busybox
        imagePullPolicy: Always
        name: example
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /app/data
          name: data
        - mountPath: /ap/logs
          name: logs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 100
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: standard
      volumeMode: Filesystem
    status:
      phase: Pending
  - metadata:
      creationTimestamp: null
      name: logs
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: standard
      volumeMode: Filesystem
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: example-68477db846
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updateRevision: example-68477db846
  updatedReplicas: 1
kubectl get pod -o yaml example-0
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      example'
  creationTimestamp: 2019-01-19T09:41:03Z
  generateName: example-
  labels:
    app: example
    controller-revision-hash: example-68477db846
    statefulset.kubernetes.io/pod-name: example-0
  name: example-0
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: example
    uid: 5640bbc7-1bce-11e9-99cd-4201ac100387
  resourceVersion: "408682"
  selfLink: /api/v1/namespaces/default/pods/example-0
  uid: 5644c264-1bce-11e9-99cd-4201ac100387
spec:
  containers:
  - command:
    - sleep
    - "3600000"
    image: busybox
    imagePullPolicy: Always
    name: example
    resources:
      requests:
        cpu: 100m
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /app/data
      name: data
    - mountPath: /ap/logs
      name: logs
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-5d6lx
      readOnly: true
  dnsPolicy: ClusterFirst
  hostname: example-0
  nodeName: gke-default-pool-1-5f32c1bf-8hfl
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 100
  serviceAccount: default
  serviceAccountName: default
  subdomain: example-statefulset
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-example-0
  - name: logs
    persistentVolumeClaim:
      claimName: logs-example-0
  - name: default-token-5d6lx
    secret:
      defaultMode: 420
      secretName: default-token-5d6lx
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-01-19T09:42:04Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-01-19T09:44:36Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: 2019-01-19T09:42:04Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://1237cec318b10ca01c35693cd30bd2409f1832d0d54b430629002db774c9c216
    image: docker.io/library/busybox:latest
    imageID: docker.io/library/busybox@sha256:bbb143159af9eabdf45511fd5aab4fd2475d4c0e7fd4a5e154b98e838488e510
    lastState: {}
    name: example
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2019-01-19T09:44:35Z
  hostIP: 10.88.0.22
  phase: Running
  podIP: 10.89.2.3
  qosClass: Burstable
  startTime: 2019-01-19T09:42:04Z
kubectl get svc -o yaml example-statefulset
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-01-19T09:40:56Z
  labels:
    app: example
  name: example-statefulset
  namespace: default
  resourceVersion: "407942"
  selfLink: /api/v1/namespaces/default/services/example-statefulset
  uid: 51ed105a-1bce-11e9-99cd-4201ac100387
spec:
  clusterIP: None
  selector:
    app: example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Dependencies:

        "@pulumi/gcp": "^0.16.5"
        "@pulumi/kubernetes": "^0.19.0"
        "@pulumi/pulumi": "^0.16.11"
pbzdyl commented Jan 19, 2019

I think this might be caused by a bug in the Kubernetes API server (<1.12) that was fixed in 1.12 by this PR.

I guess it will take a while until Kubernetes 1.12 is available on GKE, so it would be very helpful to change the awaiter logic for Kubernetes API versions <1.12 so that it doesn't block pulumi update for so long when there is a headless Service without ports.
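A version gate like the one suggested could be sketched as follows; this is illustrative only, and the regex and function are assumptions rather than provider code:

```typescript
// Parse a server gitVersion such as "v1.11.5-gke.3" and report whether the
// cluster predates the 1.12 fix mentioned above.
function serverOlderThan112(gitVersion: string): boolean {
    const m = /^v?(\d+)\.(\d+)/.exec(gitVersion);
    if (m === null) return false; // unknown format: assume new enough
    const major = Number(m[1]);
    const minor = Number(m[2]);
    return major < 1 || (major === 1 && minor < 12);
}
```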

lblackstone commented Jan 22, 2019

@pbzdyl Interesting! Thanks for the updated info. I'll take another look today.

hausdorff commented Jan 23, 2019

@pbzdyl I just wanted to say, thank you so much for being so persistent! If you'd like a shirt or a beanie, let me know and I'll send it your way. Drop me a line @ alex@pulumi.com

lblackstone commented Jan 25, 2019

@pbzdyl You should be able to test out this fix in the latest dev release (@pulumi/kubernetes@0.19.1-dev.1548452037).

pbzdyl commented Jan 26, 2019

It worked - thank you @lblackstone.
