
Jenkins Pipeline fails on the Image build step #17019

Closed
tienhngnguyen opened this issue Oct 24, 2017 · 14 comments

Comments

@tienhngnguyen

tienhngnguyen commented Oct 24, 2017

Description

Jenkins pipeline fails on the Build image stage with OpenShift.

Version

This is my environment:
Jenkins 2.73.2 on OpenShift (Persistent)
Plugins:
OpenShift Pipeline Jenkins Plugin 1.0.52
OpenShift Sync 0.1.31

oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth

Server https://127.0.0.1:8443
openshift v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7

=> I'm running a local OpenShift Origin installation via Docker for Mac. I installed it following this tutorial: https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md with the CLI command: oc cluster up

Steps To Reproduce

Tutorial is found under this link: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/
==> I installed the Jenkins Persistent template instead of the non-persistent one.
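For reference, the reproduction boils down to roughly the following commands (a sketch based on the tutorial; the project name and the use of the default template parameters are assumptions, not verified here):

```shell
# Bring up a local Origin cluster (Docker for Mac)
oc cluster up

# Create a project for the pipeline (name is hypothetical)
oc new-project cicd

# Deploy the persistent Jenkins template instead of the ephemeral one
oc new-app jenkins-persistent
```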

Current Result

Log of the Jenkins build:
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build Image)
[Pipeline] unstash
[Pipeline] sh
[cicd-cart-service-pipeline] Running shell script

oc start-build cart --from-file=target/cart.jar --follow
Uploading file "target/cart.jar" as binary input for the build ...
build "cart-4" started
Receiving source from STDIN as file cart.jar
==================================================================
Starting S2I Java Build .....
S2I source build with plain binaries detected
Copying binaries from /tmp/src to /deployments ...
... done

Pushing image 172.30.1.1:5000/cicd/cart:latest ...
Pushed 5/6 layers, 84% complete
Pushed 6/6 layers, 100% complete
Push successful
Error from server (BadRequest): No field label conversion function found for version: build.openshift.io/v1
[Pipeline] }
[Pipeline] // stage

Expected Result

Jenkins pipeline builds, tests, and deploys the application successfully, without any errors, according to the tutorial.

Additional Information

I have a problem with setting up a Jenkins pipeline while carrying out this tutorial: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/

When I try to start the pipeline, the Build Image stage fails with the following log message in the Jenkins pod:
...
INFO: Waiting for Jenkins to be started

| Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher start
| INFO: Now handling startup build configs!!
| Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.ConfigMapWatcher start
| INFO: Now handling startup config maps!!
| Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.ImageStreamWatcher start
| INFO: Now handling startup image streams!!
| Oct 20, 2017 3:26:02 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
| INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@1538cca: display name [Root WebApplicationContext]; startup date [Fri Oct 20 15:26:02 UTC 2017]; root of context hierarchy
| Oct 20, 2017 3:26:02 PM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
| INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@1538cca]: org.springframework.beans.factory.support.DefaultListableBeanFactory@1dc8eb5
| Oct 20, 2017 3:26:02 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
| INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1dc8eb5: defining beans [filter,legacy]; root of factory hierarchy
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.ConfigMapWatcher$1 doRun
| INFO: creating ConfigMap watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher$1 doRun
| INFO: creating BuildConfig watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.BuildWatcher$1 doRun
| INFO: creating Build watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:03 PM hudson.WebAppMain$3 run
| INFO: Jenkins is fully up and running
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:04 PM io.fabric8.jenkins.openshiftsync.ImageStreamWatcher$1 doRun
| INFO: creating ImageStream watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:05 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:07 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth: provider: OpenShiftProviderInfo: issuer: https://127.0.0.1:8443 auth ep: https://127.0.0.1:8443/oauth/authorize token ep: https://127.0.0.1:8443/oauth/token
| Oct 20, 2017 3:26:07 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth returning true with namespace ci SA dir null default /run/secrets/kubernetes.io/serviceaccount SA name null default jenkins client ID null default system:serviceaccount:ci:jenkins secret null default eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLXhtMWN4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NzE1ODQxNy1iNThjLTExZTctODU3NS0wZTRkZWZjODRiNDMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.TstDfIRNnKuiVV_SPB8kz9-I3t1XslIbjniahULDdmD4Z64v6C1ZY_WviauPqzta8oPLDt4mKm7XSw_6l1UtuFE_dIFaRhDgGR0rYoYsKrH4LOX979nnd0_zJa-COI4-Ew7yVHQwTGicQU9JSNg0cUlQRl4uxUf1IFkcaAiRfWKtGP3XAVHOEKNNHNaqVJ_i-zFHfPauFR4Y0nvxO3x3Qh3hsktt4bMihNoQSlNuEL7B7ktsITF942lRrSoXYGjKqu6hAh7vyZMs_c8ecaL25CVvyn8MunJsae1XmSRRO5Tvz42lwc5qJce3rOi3GWSGfMTwJleX4udMinFoi-7l2Q redirect null default https://127.0.0.1:8443 server null default https://openshift.default.svc
| Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth: provider: OpenShiftProviderInfo: issuer: https://127.0.0.1:8443 auth ep: https://127.0.0.1:8443/oauth/authorize token ep: https://127.0.0.1:8443/oauth/token
| Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth returning true with namespace ci SA dir null default /run/secrets/kubernetes.io/serviceaccount SA name null default jenkins client ID null default system:serviceaccount:ci:jenkins secret null default eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLXhtMWN4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NzE1ODQxNy1iNThjLTExZTctODU3NS0wZTRkZWZjODRiNDMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.TstDfIRNnKuiVV_SPB8kz9-I3t1XslIbjniahULDdmD4Z64v6C1ZY_WviauPqzta8oPLDt4mKm7XSw_6l1UtuFE_dIFaRhDgGR0rYoYsKrH4LOX979nnd0_zJa-COI4-Ew7yVHQwTGicQU9JSNg0cUlQRl4uxUf1IFkcaAiRfWKtGP3XAVHOEKNNHNaqVJ_i-zFHfPauFR4Y0nvxO3x3Qh3hsktt4bMihNoQSlNuEL7B7ktsITF942lRrSoXYGjKqu6hAh7vyZMs_c8ecaL25CVvyn8MunJsae1XmSRRO5Tvz42lwc5qJce3rOi3GWSGfMTwJleX4udMinFoi-7l2Q redirect null default https://127.0.0.1:8443 server null default https://openshift.default.svc
| Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm updateAuthorizationStrategy
| INFO: OpenShift OAuth: user developer, stored in the matrix as developer-admin, based on OpenShift roles [view, edit, admin] already exists in Jenkins
| Oct 20, 2017 3:27:10 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher updateJob
| INFO: Updated job ci-cart-service-pipeline from BuildConfig NamespaceName{ci:cart-service-pipeline} with revision: 8462
| Oct 20, 2017 3:27:10 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onStarted
| INFO: starting polling build job/ci-cart-service-pipeline/7/
| Oct 20, 2017 3:27:42 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
| INFO: Excess workload after pending Spot instances: 1
| Oct 20, 2017 3:27:42 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
| INFO: Template: Kubernetes Pod Template
| Oct 20, 2017 3:27:42 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:27:43 PM hudson.slaves.NodeProvisioner$StandardStrategyImpl apply
| INFO: Started provisioning Kubernetes Pod Template from openshift with 1 executors. Remaining excess workload: 0
| Oct 20, 2017 3:27:43 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:27:43 PM org.csanchez.jenkins.plugins.kubernetes.ProvisioningCallback call
| INFO: Created Pod: maven-xf7cn in namespace ci
| Oct 20, 2017 3:27:43 PM org.csanchez.jenkins.plugins.kubernetes.ProvisioningCallback call
| INFO: Waiting for Pod to be scheduled (0/100): maven-xf7cn
| Oct 20, 2017 3:27:45 PM hudson.TcpSlaveAgentListener$ConnectionHandler run
| INFO: Accepted JNLP4-connect connection #1 from /172.17.0.2:49622
| Oct 20, 2017 3:27:49 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:27:52 PM hudson.slaves.NodeProvisioner$2 run
| INFO: Kubernetes Pod Template provisioning successfully completed. We have now 2 computer(s)
| Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
| INFO: Terminating Kubernetes instance for agent maven-xf7cn
| Oct 20, 2017 3:29:39 PM org.jenkinsci.plugins.workflow.job.WorkflowRun finish
| INFO: ci-cart-service-pipeline #7 completed: FAILURE
| Oct 20, 2017 3:29:39 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
| INFO: Terminated Kubernetes instance for agent ci/maven-xf7cn
| Terminated Kubernetes instance for agent ci/maven-xf7cn
| Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
| INFO: Disconnected computer maven-xf7cn
| Oct 20, 2017 3:29:39 PM jenkins.slaves.DefaultJnlpSlaveReceiver channelClosed
| WARNING: Computer.threadPoolForRemoting [#15] for maven-xf7cn terminated
| java.nio.channels.ClosedChannelException
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
| at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311)
| at hudson.remoting.Channel.close(Channel.java:1403)
| at hudson.remoting.Channel.close(Channel.java:1356)
| at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:708)
| at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:96)
| at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:626)
| at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
| at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
| at java.util.concurrent.FutureTask.run(FutureTask.java:266)
| at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
| at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
| at java.lang.Thread.run(Thread.java:748)
|
| Oct 20, 2017 3:29:39 PM hudson.remoting.Request$2 run
| WARNING: Failed to send back a reply to the request hudson.remoting.Request$2@1078adc
| hudson.remoting.ChannelClosedException: channel is already closed
| at hudson.remoting.Channel.send(Channel.java:667)
| at hudson.remoting.Request$2.run(Request.java:372)
| at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
| at org.jenkinsci.remoting.CallableDecorator.call(CallableDecorator.java:19)
| at hudson.remoting.CallableDecoratorList$1.call(CallableDecoratorList.java:21)
| at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
| at java.util.concurrent.FutureTask.run(FutureTask.java:266)
| at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
| at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
| at java.lang.Thread.run(Thread.java:748)
| Caused by: java.nio.channels.ClosedChannelException
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
| at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311)
| at hudson.remoting.Channel.close(Channel.java:1403)
| at hudson.remoting.Channel.close(Channel.java:1356)
| at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:708)
| at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:96)
| at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:626)
| at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
| at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
| ... 4 more
|
| Oct 20, 2017 3:29:39 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onCompleted
| INFO: onCompleted job/ci-cart-service-pipeline/7/
| Oct 20, 2017 3:29:39 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onFinalized
| INFO: onFinalized job/ci-cart-service-pipeline/7/

When I run the CLI command oc describe nodes, I get the following information:

Name: localhost
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=localhost
Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:
CreationTimestamp: Sat, 21 Oct 2017 10:42:06 +0200
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message

OutOfDisk False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Mon, 23 Oct 2017 23:16:21 +0200 Mon, 23 Oct 2017 22:58:39 +0200 KubeletReady kubelet is posting ready status
Addresses: 192.168.65.2,192.168.65.2,localhost
Capacity:
cpu: 4
memory: 6100352Ki
pods: 40
Allocatable:
cpu: 4
memory: 5997952Ki
pods: 40
System Info:
Machine ID:
System UUID: 69BC2037-4931-F334-95AC-C0CCC0A84389
Boot ID: 6f25d3c7-9564-41aa-90ce-0060181ed1a4
Kernel Version: 4.9.49-moby
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.6.1+5115d708d7
Kube-Proxy Version: v1.6.1+5115d708d7
ExternalID: localhost
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits

ci cart-3-3plz6 200m (5%) 1 (25%) 512Mi (8%) 1Gi (17%)
ci jenkins-1-jq1wd 0 (0%) 0 (0%) 512Mi (8%) 512Mi (8%)
default docker-registry-1-j3pdr 100m (2%) 0 (0%) 256Mi (4%) 0 (0%)
default router-1-zg2cd 100m (2%) 0 (0%) 256Mi (4%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits

400m (10%) 1 (25%) 1536Mi (26%) 1536Mi (26%)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message

2d 1d 13 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
2d 1d 13 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
2d 1d 13 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
2d 1d 18 kubelet, localhost Normal NodeReady Node localhost status is now: NodeReady
1h 1h 1 kubelet, localhost Normal Starting Starting kubelet.
1h 1h 1 kubelet, localhost Warning ImageGCFailed unable to find data for container /
1h 1h 1 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
1h 1h 1 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
1h 1h 1 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
1h 1h 1 kubelet, localhost Warning Rebooted Node localhost has been rebooted, boot id: 16be43b9-bdb6-4048-9949-786989bf572c
18m 18m 1 kubelet, localhost Normal Starting Starting kubelet.
18m 18m 1 kubelet, localhost Warning ImageGCFailed unable to find data for container /
18m 18m 1 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
18m 18m 1 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
18m 18m 1 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
18m 18m 1 kubelet, localhost Warning Rebooted Node localhost has been rebooted, boot id: 6f25d3c7-9564-41aa-90ce-0060181ed1a4
18m 18m 1 kubelet, localhost Normal NodeNotReady Node localhost status is now: NodeNotReady
17m 17m 1 kubelet, localhost Normal NodeReady Node localhost status is now: NodeReady

I would be very thankful if you could help me with this matter.

@xiaods
Contributor

xiaods commented Oct 25, 2017

I am going through the tutorial: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/

The Jenkins pod always raises this error:

Failed Scheduling 0/1 nodes are available: 1 Insufficient memory.

@tienhngnguyen
Author

tienhngnguyen commented Oct 25, 2017 via email

@bparees
Contributor

bparees commented Oct 25, 2017

Error from server (BadRequest): No field label conversion function found for version: build.openshift.io/v1

@gabemontero wasn't this the issue that was fixed by reverting the fabric dependency in one of the plugins?

@xiaods
Contributor

xiaods commented Oct 25, 2017

@tienhngnguyen thanks for your reminder.

@gabemontero
Contributor

@bparees , yes, the field label conversion error was the reason we had to revert the fabric dependency in the sync plugin.

However, the problem in the description is not a sync plugin issue ... we would have seen a java stack trace citing the field label conversion (which we do not). Also, at 0.1.31, they have a version where fabric was reverted to a level that would work with 3.6.0.

But plugins are not the only OpenShift client in this scenario from @tienhngnguyen and @xiaods; you've also got the oc binary. Note in the description that the pipeline is doing a shell invocation of oc start-build, and the error occurs while that is processing.

This implies that the 3.6 oc/openshift version combination employed by @xiaods and @tienhngnguyen has the problem @deads2k pointed us to before.

What is unclear to me is what is occurring between

Push successful

and

Error from server (BadRequest): No field label conversion function found for version: build.openshift.io/v1

If we knew the exact REST invocation from oc start-build, perhaps we could nail down the client / server discrepancy and line back up with @deads2k ... most likely they'll need to bump to the 3.6.x version @deads2k dropped.

@tienhngnguyen @xiaods - could you possibly re-run with --loglevel=10 ?

On the k8s plugin / slave pod stack traces ... I believe those stack traces are simply clean up hiccups after the job fails unexpectedly.

@bparees
Contributor

bparees commented Oct 25, 2017

thanks @gabemontero that makes sense.

another option would be to downgrade to the 3.6.0 Jenkins image, which would have an older (and therefore compatible) oc client binary.

or just patch the oc binary in the image being used.
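A rough sketch of the "patch the oc binary" option (the pod name is taken from the node description above; the local binary path and target path are assumptions, and the change would not survive a pod restart):

```shell
# Check the client/server skew from inside the running Jenkins pod
oc rsh jenkins-1-jq1wd oc version

# Copy a client binary matching the server into the pod
oc cp ./oc ci/jenkins-1-jq1wd:/usr/bin/oc
```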

@gabemontero
Contributor

yep, conceivably those too @bparees, assuming it is the aforementioned issue ... if @tienhngnguyen or @xiaods has the bandwidth to get the oc start-build --loglevel=10 output, I'll look at it to confirm.

as an FYI, the 3.6.x commit from @deads2k was d452cf2

@tienhngnguyen
Author

tienhngnguyen commented Oct 25, 2017

@gabemontero I have executed the pipeline again with the command oc start-build cart --from-file=target/cart.jar --follow --loglevel=10 as you requested.

This is the result of the log:

[...]
/home/jenkins/.kube/172.30.0.1_443/image.openshift.io/v1/serverresources.json
I1025 22:02:30.483947     317 cached_discovery.go:72] returning cached discovery info from /home/jenkins/.kube/172.30.0.1_443/v1/serverresources.json
build "cart-2" started
I1025 22:02:30.485287     317 round_trippers.go:386] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: oc/v1.7.6+a08f5eeb62 (linux/amd64) kubernetes/c84beff" -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLTdzcXN6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyYThhNzQ4Ni1iODMyLTExZTctYWNlZi01MmJjMGJjYWE2NzYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.LBZ0Akk2pFTdpvwqcIgc_QsDG5tCIP1yAV4J9TOJCjfjJ5sRA1PBR2NY-6mq78rCDOtjyE9-7Mv6r0RAo4lw6Af9pks4gcLZZTiMpdg37KBzy76xY92MXTA_R4Hj92xTg6mZX5nywGgI18o_uCHhG3rDTpGVYh2PJgXAgvFYm3Mks3tjZiYooveXsS5CHlmnH0HFpEEeg58hCLlw4DFiMdGHhvh321MpCAHCnjhDwlW7SdoZ5It_6fHH0wBAZzjbH9SESp52OhfteddkSwXG_bdx1C5yotYPnTc6O5844379Bnhs-rzugFyYsLQoqZB-4V6jHmXTLzVBvwX5zxa-Ug" https://172.30.0.1:443/apis/build.openshift.io/v1/namespaces/ci/builds/cart-2/log?follow=true
I1025 22:02:30.499621     317 round_trippers.go:405] GET https://172.30.0.1:443/apis/build.openshift.io/v1/namespaces/ci/builds/cart-2/log?follow=true 200 OK in 14 milliseconds
I1025 22:02:30.499675     317 round_trippers.go:411] Response Headers:
I1025 22:02:30.499684     317 round_trippers.go:414]     Date: Wed, 25 Oct 2017 22:02:30 GMT
I1025 22:02:30.499689     317 round_trippers.go:414]     Cache-Control: no-store
I1025 22:02:30.499693     317 round_trippers.go:414]     Content-Type: text/plain
Receiving source from STDIN as file cart.jar
==================================================================
Starting S2I Java Build .....
S2I source build with plain binaries detected
Copying binaries from /tmp/src to /deployments ...
... done

Pushing image 172.30.1.1:5000/ci/cart:latest ...
Pushed 5/6 layers, 85% complete
Pushed 6/6 layers, 100% complete
Push successful
I1025 22:02:35.504246     317 round_trippers.go:386] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: oc/v1.7.6+a08f5eeb62 (linux/amd64) kubernetes/c84beff" -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLTdzcXN6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyYThhNzQ4Ni1iODMyLTExZTctYWNlZi01MmJjMGJjYWE2NzYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.LBZ0Akk2pFTdpvwqcIgc_QsDG5tCIP1yAV4J9TOJCjfjJ5sRA1PBR2NY-6mq78rCDOtjyE9-7Mv6r0RAo4lw6Af9pks4gcLZZTiMpdg37KBzy76xY92MXTA_R4Hj92xTg6mZX5nywGgI18o_uCHhG3rDTpGVYh2PJgXAgvFYm3Mks3tjZiYooveXsS5CHlmnH0HFpEEeg58hCLlw4DFiMdGHhvh321MpCAHCnjhDwlW7SdoZ5It_6fHH0wBAZzjbH9SESp52OhfteddkSwXG_bdx1C5yotYPnTc6O5844379Bnhs-rzugFyYsLQoqZB-4V6jHmXTLzVBvwX5zxa-Ug" https://172.30.0.1:443/apis/build.openshift.io/v1/namespaces/ci/builds?fieldSelector=metadata.name%3Dcart-2
I1025 22:02:35.506747     317 round_trippers.go:405] GET https://172.30.0.1:443/apis/build.openshift.io/v1/namespaces/ci/builds?fieldSelector=metadata.name%3Dcart-2 400 Bad Request in 2 milliseconds
I1025 22:02:35.506769     317 round_trippers.go:411] Response Headers:
I1025 22:02:35.506775     317 round_trippers.go:414]     Cache-Control: no-store
I1025 22:02:35.506779     317 round_trippers.go:414]     Content-Type: application/json
I1025 22:02:35.506782     317 round_trippers.go:414]     Content-Length: 190
I1025 22:02:35.506786     317 round_trippers.go:414]     Date: Wed, 25 Oct 2017 22:02:35 GMT
I1025 22:02:35.506836     317 request.go:994] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"No field label conversion function found for version: build.openshift.io/v1","reason":"BadRequest","code":400}
I1025 22:02:35.507112     317 helpers.go:206] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "No field label conversion function found for version: build.openshift.io/v1",
  "reason": "BadRequest",
  "code": 400
}]
F1025 22:02:35.507128     317 helpers.go:120] Error from server (BadRequest): No field label conversion function found for version: build.openshift.io/v1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
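For what it's worth, the failing call in the trace can be replayed on its own, independent of the build flow (assuming oc get --raw is available in this client version; the build name is the one from my log):

```shell
# Replay the exact request oc start-build made after the push
oc get --raw "/apis/build.openshift.io/v1/namespaces/ci/builds?fieldSelector=metadata.name%3Dcart-2"
# On this 3.6 server it fails the same way:
# Error from server (BadRequest): No field label conversion function found for version: build.openshift.io/v1
```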

Could you please give me a detailed solution for how to handle this?

@xiaods
Contributor

xiaods commented Oct 26, 2017

I used upstream oc cluster up --version=latest and went through the pipeline sample: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/
I came across this issue; do you have any hint for handling it?

3:42:52 PM Warning Build Config Instantiate Failed gave up on Build for BuildConfig myproject/jenkins-blueocean (0) due to fatal error: the LastVersion(1) on build config myproject/jenkins-blueocean does not match the build request LastVersion(0)

@bparees
Contributor

bparees commented Oct 26, 2017

use upstream oc cluster up --version=latest,
and go through the pipeline sample: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/
came across this issue, do you know any hint to handle it.

Ignore it. It doesn't hurt anything; it's just a race condition between two things trying to start your build: one of them failed, but the other will have started it properly.

@bparees
Contributor

bparees commented Oct 26, 2017

and @xiaods, in the future please create separate issues; asking unrelated questions in an existing issue pollutes the thread and makes it difficult to focus on the original problem.

@xiaods
Contributor

xiaods commented Oct 26, 2017

@bparees Oops, thanks for the reminder. Got it.

@gabemontero
Contributor

And to complete the loop here, yep, with the trace analysis, the original issue is the known 3.6 problem we previously discussed, and as also previously discussed, using --version=latest is a viable workaround.

Closing this out. Thanks.

@gabemontero
Contributor

Sorry ... I meant to clarify: use a different oc cluster or oc client version as the workaround.
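Concretely, the workaround would look something like this (a sketch; which exact version tag to pick is up to you):

```shell
# Option 1: recreate the local cluster at a newer level
oc cluster down
oc cluster up --version=latest

# Option 2: keep the cluster and align the client instead:
# compare the client and server lines and install a matching oc release if they differ
oc version
```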

toschneck pushed a commit to toschneck/openshift-example-bakery-ci-pipeline that referenced this issue Oct 26, 2017

add temporary validation fix to script for error "No field label conversion function found for version: build.openshift.io/v1" as long as openshift/origin#17019 is not fixed

toschneck pushed a commit to toschneck/openshift-example-bakery-ci-pipeline that referenced this issue Oct 27, 2017
toschneck pushed a commit to toschneck/openshift-example-bakery-ci-pipeline that referenced this issue Jun 5, 2018