Application redeployment fails on OpenShift v3.7 #17705

Closed
hrishin opened this issue Dec 10, 2017 · 5 comments
Labels: component/apps, kind/bug, lifecycle/rotten, priority/P2


hrishin commented Dec 10, 2017

When redeploying an application on OpenShift v3.7 / minishift v1.7.0, the Deployment/DeploymentConfig does not appear to roll out correctly.
The pod is unable to pull the image from OpenShift's internal registry: its status keeps flipping between ErrImagePull and ImagePullBackOff for a long time before it eventually comes up, even though the image stream build appears to complete correctly.

fabric8io/fabric8-maven-plugin#1130

Version

oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.42.170:8443
openshift v3.7.0+7ed6862
kubernetes v1.7.6+a08f5eeb62

Steps To Reproduce

1. Deploy the application.
2. Redeploy the same application, with or without any changes (see the sketch below).
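
For reference, these boosters are typically built and redeployed with the fabric8-maven-plugin (per the linked fabric8io/fabric8-maven-plugin#1130). A redeploy under that assumption looks roughly like this; the goal name is real, but the exact profile/goal wiring may differ per booster:

mvn clean fabric8:deploy    # initial deployment: image build plus DC/Service/Route
mvn clean fabric8:deploy    # redeploy: a new image build should trigger an ImageChange rollout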

Current Result

The pod keeps failing with ErrImagePull/ImagePullBackOff.

Expected Result

The application pod should come up on the first attempt.


pweil- commented Dec 11, 2017

Are you saying that it stays in backoff for some time and then, all of a sudden, it can pull the image again?
What errors is it getting from the registry during the fail-to-pull scenario? Is the deployment being launched by a trigger? If you can link to the DC, that will help folks debug. Thanks!
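
That information can usually be collected with standard oc commands; a sketch, nothing specific to this issue (resource names below are assumptions):

oc describe pod <failing-pod> -n myproject    # pull errors reported in the pod events
oc logs dc/docker-registry -n default         # registry-side errors during the failed pull
oc get dc http-vertx -n myproject -o yaml     # the DC, including its triggers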


hrishin commented Dec 14, 2017

@pweil-

Are you saying that it stays in backoff for some time and then it can all of a sudden pull the image again?

Yes

What errors is it getting from the registry during the fail-to-pull scenario?

How can I find that? Here is the event log:

Time         Severity   Reason                    Message
12:24:14 PM  Warning    Failed                    Error: ImagePullBackOff (6 times in the last 4 minutes)
12:24:14 PM  Normal     Back-off                  Back-off pulling image "http-vertx:latest" (6 times in the last 4 minutes)
12:23:48 PM  Warning    Failed                    Error: ErrImagePull (4 times in the last 4 minutes)
12:23:48 PM  Warning    Failed                    Failed to pull image "http-vertx:latest": rpc error: code = 2 desc = Error: image library/http-vertx:latest not found (4 times in the last 4 minutes)
12:23:33 PM  Normal     Pulling                   pulling image "http-vertx:latest" (4 times in the last 4 minutes)
12:21:54 PM  Normal     Successful Mount Volume   MountVolume.SetUp succeeded for volume "default-token-8fbmt"
12:21:53 PM  Normal     Scheduled                 Successfully assigned http-vertx-3-cn8qc to localhost
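
Note that the failing pull references the unqualified name "http-vertx:latest" (resolved as library/http-vertx:latest), not the full internal-registry reference shown in the DC below. One way to confirm which image the failing pod actually requested (sketch only; the pod and RC names are taken from the events above, so treat them as assumptions):

oc get pod http-vertx-3-cn8qc -n myproject -o jsonpath='{.spec.containers[0].image}'
oc get rc http-vertx-3 -n myproject -o jsonpath='{.spec.template.spec.containers[0].image}'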

Is the deployment being launched by a trigger? If you can link to the DC that will help folks debug.

Yes

Deployment    Status               Created             Trigger
#3 (latest)   Running, 1 replica   a few seconds ago   Image change
#2            Cancelled            a few seconds ago   Image change
#1            Active, 1 replica    a minute ago        Config change

DeploymentConfig:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  annotations:
    fabric8.io/git-branch: master
    fabric8.io/git-commit: 91bf222de7a18a712e0aef03cf77cd2656fed930
    fabric8.io/iconUrl: img/icons/vertx.svg
    fabric8.io/metrics-path: >-
      dashboard/file/kubernetes-pods.json/?var-project=http-vertx&var-version=18-SNAPSHOT
    fabric8.io/scm-con-url: 'scm:git:https://github.com/openshiftio/booster-parent.git/http-vertx'
    fabric8.io/scm-devcon-url: 'scm:git:git:@github.com:openshiftio/booster-parent.git/http-vertx'
    fabric8.io/scm-tag: booster-parent-13
    fabric8.io/scm-url: 'https://github.com/openshiftio/http-vertx'
    vertx-testKey: vertx-testValue
  creationTimestamp: '2017-12-14T06:50:38Z'
  generation: 5
  labels:
    app: http-vertx
    group: io.openshift.booster
    provider: fabric8
    version: 18-SNAPSHOT
    vertx-testKey: vertx-testValue
  name: http-vertx
  namespace: myproject
  resourceVersion: '58749'
  selfLink: /apis/apps.openshift.io/v1/namespaces/myproject/deploymentconfigs/http-vertx
  uid: 17fd8653-e09b-11e7-a6fa-fe39fb55aa50
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    app: http-vertx
    group: io.openshift.booster
    provider: fabric8
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 3600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        fabric8.io/git-branch: master
        fabric8.io/git-commit: 91bf222de7a18a712e0aef03cf77cd2656fed930
        fabric8.io/iconUrl: img/icons/vertx.svg
        fabric8.io/metrics-path: >-
          dashboard/file/kubernetes-pods.json/?var-project=http-vertx&var-version=18-SNAPSHOT
        fabric8.io/scm-con-url: 'scm:git:https://github.com/openshiftio/booster-parent.git/http-vertx'
        fabric8.io/scm-devcon-url: 'scm:git:git:@github.com:openshiftio/booster-parent.git/http-vertx'
        fabric8.io/scm-tag: booster-parent-13
        fabric8.io/scm-url: 'https://github.com/openshiftio/http-vertx'
        vertx-testKey: vertx-testValue
      creationTimestamp: null
      labels:
        app: http-vertx
        group: io.openshift.booster
        provider: fabric8
        version: 18-SNAPSHOT
        vertx-testKey: vertx-testValue
    spec:
      containers:
        - env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: >-
            172.30.1.1:5000/myproject/http-vertx@sha256:ac60a2ecbb577b952b09432359a63f32e8611968bd43f25347c757b7d633f680
          imagePullPolicy: IfNotPresent
          name: vertx
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
            - containerPort: 9779
              name: prometheus
              protocol: TCP
            - containerPort: 8778
              name: jolokia
              protocol: TCP
          resources: {}
          securityContext:
            privileged: false
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - vertx
        from:
          kind: ImageStreamTag
          name: 'http-vertx:latest'
          namespace: myproject
        lastTriggeredImage: >-
          172.30.1.1:5000/myproject/http-vertx@sha256:ac60a2ecbb577b952b09432359a63f32e8611968bd43f25347c757b7d633f680
      type: ImageChange
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: '2017-12-14T06:51:01Z'
      lastUpdateTime: '2017-12-14T06:51:01Z'
      message: Deployment config has minimum availability.
      status: 'True'
      type: Available
    - lastTransitionTime: '2017-12-14T06:59:46Z'
      lastUpdateTime: '2017-12-14T06:59:48Z'
      message: replication controller "http-vertx-4" successfully rolled out
      reason: NewReplicationControllerAvailable
      status: 'True'
      type: Progressing
  details:
    causes:
      - imageTrigger:
          from:
            kind: DockerImage
            name: >-
              172.30.1.1:5000/myproject/http-vertx@sha256:ac60a2ecbb577b952b09432359a63f32e8611968bd43f25347c757b7d633f680
        type: ImageChange
    message: image change
  latestVersion: 4
  observedGeneration: 5
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
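
Since the rollout is driven by an ImageChange trigger on http-vertx:latest, one way to verify what the tag currently resolves to in the internal registry is sketched below; these are standard oc commands, not anything specific to this setup:

oc get istag http-vertx:latest -n myproject -o jsonpath='{.image.dockerImageReference}'
oc describe is http-vertx -n myproject    # tag history and the images each tag points at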

 

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Mar 15, 2018
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 14, 2018
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
