Build fails but Build pod completes successfully #21154

Open
kalgary opened this Issue Oct 3, 2018 · 5 comments


kalgary commented Oct 3, 2018

Our Jenkins pipeline fails to build the application from time to time. The build exits after 0 sec with an Error, even though the build pod completes successfully.

The build is configured to run on 3 of our worker nodes. There is no specific information about the reason for the failure, neither in the node logs nor in the build logs.

Version

oc v3.9.0+191fece
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://master.test.cloud.cp.local:8443
openshift v3.7.1+c2ce2c0-1
kubernetes v1.7.6+a08f5eeb62

Steps To Reproduce
  1. Run the jenkins pipeline

$ oc start-build bc/XXX-pipeline

Current Result

Pipeline fails with an Error state despite the build pod completed successfully.

$ oc get pods

XXX-34-build 0/1 Completed 0 34m

$ oc get builds

XXX-34 Docker Git@5db6904 Error
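
The mismatch above (Build object in Error while its pod shows Completed) can be sketched as a quick check. This is a hypothetical diagnostic, not part of the original report; on a live cluster the two phases would be fetched with `oc get ... -o jsonpath='{.status.phase}'`, but here they are hard-coded to the values seen in this issue.

```shell
#!/bin/sh
# Hypothetical sketch: flag the build/pod phase mismatch seen in this issue.
# On a live cluster these would be fetched with:
#   BUILD_PHASE=$(oc get build XXX-34 -o jsonpath='{.status.phase}')
#   POD_PHASE=$(oc get pod XXX-34-build -o jsonpath='{.status.phase}')
BUILD_PHASE="Error"      # value reported by `oc get builds`
POD_PHASE="Succeeded"    # pods shown as Completed have phase Succeeded
if [ "$BUILD_PHASE" != "Complete" ] && [ "$POD_PHASE" = "Succeeded" ]; then
  echo "mismatch: build phase=$BUILD_PHASE, pod phase=$POD_PHASE"
fi
```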

Expected Result

Pipeline runs successfully.

Additional Information

$ oc logs build/XXX-34 --loglevel=10

I1003 04:31:30.341225 4746 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"build XXX-34 is in an error state. No logs are available.","reason":"BadRequest","code":400}
I1003 04:31:30.341450 4746 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "build XXX-34 is in an error state. No logs are available.",
"reason": "BadRequest",
"code": 400
}]
F1003 04:31:30.341466 4746 helpers.go:119] Error from server (BadRequest): build XXX-34 is in an error state. No logs are available.

Jenkins Stages:

stage('Maven') {
  steps {
    echo 'Start MVN build'
    sh 'mvn --update-snapshots clean verify'
    echo 'archiving artifacts to Jenkins master'
    archiveArtifacts artifacts: '**/target/*.war', fingerprint: true
  }
}

stage('Image') {
  steps {
    sh 'printenv | sort'
    echo 'trigger openshift docker build ...'
    // Jenkins 2.73:
    openshiftBuild bldCfg: "${env.APP_NAME}", namespace: "${env.PROJECT_NAME}", env: [[ name: "ARTIFACT_URL", value: "${JENKINS_URL}job/${JOB_NAME}/lastBuild/artifact/target/quotation.war" ]]

    sleep time: 10, unit: "SECONDS"

    echo 'wait for docker build to complete ... '
    openshiftVerifyBuild bldCfg: "${env.APP_NAME}", namespace: "${env.PROJECT_NAME}", waitTime: '20', waitUnit: 'min'
  }
}

$ oc edit bc XXX

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"BuildConfig","metadata":{"annotations":{},"labels":{"app":"XXX"},"name":"XXX","namespace":"----XXX"},"spec":{"output":{"to":{"kind":"ImageStreamTag","name":"XXX:latest"}},"postCommit":{},"resources":{},"runPolicy":"Serial","source":{"contextDir":"src/main/setup/kubernetes/image","git":{"ref":"develop","uri":"https://git.dev.+++/r/XXX/XXX.git"},"type":"Docker"},"strategy":{"dockerStrategy":{"env":[{"name":"GIT_SSL_NO_VERIFY","value":"true"}]},"type":"Docker"},"triggers":[]}}
  creationTimestamp: 2018-08-01T09:23:13Z
  labels:
    app: XXX
  name: XXX
  namespace: ----XXX
  resourceVersion: "74108076"
  selfLink: /apis/build.openshift.io/v1/namespaces/----XXX/buildconfigs/XXX
  uid: 83bea1d0-956c-11e8-9f01-005056be20f3
spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: XXX:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    contextDir: src/main/setup/kubernetes/image
    git:
      ref: develop
      uri: https://git.dev.+++/r/XXX/XXX.git
    type: Git
  strategy:
    dockerStrategy:
      env:
      - name: GIT_SSL_NO_VERIFY
        value: "true"
      - name: BUILD_LOGLEVEL
        value: "10"
    type: Docker
  triggers: []
status:
  lastVersion: 37

Member

jwforres commented Oct 11, 2018

Contributor

bparees commented Oct 11, 2018

/assign @gabemontero

@bparees bparees removed their assignment Oct 11, 2018

Contributor

gabemontero commented Oct 11, 2018

@kalgary - I need some help with the details you provided.

I could use:

  • the complete YAML (redacted as needed) for the pipeline build object, the docker build object, and the docker pod

  • the Jenkins console logs for the job that failed, and the Jenkins pod log from when the failure occurs

  • regarding the oc logs --loglevel=10 error you noted: that looks more like a server/controller side situation ... is that the only output you got? If not, could you provide the complete output?

Also, @bparees - do you think the build controller or master logs would be beneficial in sorting out

I1003 04:31:30.341225 4746 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"build XXX-34 is in an error state. No logs are available.","reason":"BadRequest","code":400}
I1003 04:31:30.341450 4746 helpers.go:201] server response object: [{
"metadata": {},
"status": "Failure",
"message": "build XXX-34 is in an error state. No logs are available.",
"reason": "BadRequest",
"code": 400
}]
F1003 04:31:30.341466 4746 helpers.go:119] Error from server (BadRequest): build XXX-34 is in an error state. No logs are available.

from the oc logs invocation @kalgary noted?
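
For completeness, the requested objects can be dumped with standard oc commands. A minimal sketch follows; the commands are printed rather than executed here since they require a logged-in cluster, and the object names XXX-34 / XXX-34-build are the ones from this issue.

```shell
# Sketch of the collection commands for the data requested above.
# Printed (not run) because they need a live cluster session; redact
# the resulting YAML as needed before attaching it.
cat <<'EOF'
oc get build XXX-34 -o yaml > build-34.yaml
oc get pod XXX-34-build -o yaml > build-34-pod.yaml
oc logs build/XXX-34 --loglevel=10 > build-34-logs.txt 2>&1
EOF
```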


Contributor

bparees commented Oct 11, 2018

Also, @bparees - you think the build controller or master logs would be beneficial in sorting out

First I think getting the build YAML and build pod YAML would be useful.

Usually builds end up in an error state when the build pod is deleted or terminates abnormally. It would be good to understand what happened to this one, especially since @kalgary reports the build pod completed successfully.


kalgary commented Oct 15, 2018

Hi guys,
thanks, here are the files you requested:

build-34.txt
build-obj.txt
build-pipeline.txt

As already described, the pipeline is started by a build (build-pipeline); the pipeline itself has been provided above. The stage('Image') then runs another build (build-obj.txt), which is the one that, according to OpenShift, has failed. As you can see in the logs (build-34.txt), the build completed successfully.

I cannot provide any Jenkins logs since they were deleted, and I cannot easily reproduce the issue since it occurs randomly. I can tell you that the Jenkins logs reported that the build object failed.

The build object is also marked as failed

build-34 Docker Git@5db6904 Error

Thanks in advance for your help.

