Jenkins Pipeline fails on the Image build step #17019
Comments
I am going through the tutorial at https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/ and the Jenkins pod always raises an error:
@xiaods Check the memory of your VM or Docker engine. Usually the memory in the VM is too low compared to the memory assigned to the Jenkins pod.
Try assigning 512 MB to the Jenkins pod and restarting.
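As a hedged sketch of that memory bump, assuming the Jenkins deployment config from the default template is named dc/jenkins (check the actual name in your namespace):

```shell
# Inspect the current resource settings on the Jenkins container
oc get dc/jenkins -o jsonpath='{.spec.template.spec.containers[0].resources}'

# Raise the memory request/limit to 512Mi; changing the deployment
# config triggers a redeploy of the Jenkins pod automatically
oc set resources dc/jenkins --requests=memory=512Mi --limits=memory=512Mi
```

This requires a live cluster and an authenticated oc session.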
@gabemontero wasn't this the issue that was fixed by reverting the fabric dependency in one of the plugins?
@tienhngnguyen thanks for your reminder.
@bparees, yes, the field label conversion error was the reason we had to revert the fabric dependency in the sync plugin. However, the problem in the description is not a sync plugin issue ... we would have seen a Java stack trace citing the field label conversion (which we do not). Also, at 0.1.31 they have a version where fabric was reverted to a level that works with 3.6.0. But the plugins are not the only OpenShift client in this scenario from @tienhngnguyen and @xiaods: you've also got the oc binary. Note in the description that the pipeline is doing a shell invocation of oc. It would imply the 3.6 oc/OpenShift versions employed by @xiaods and @tienhngnguyen have the problem @deads2k pointed us to before. What is unclear to me is what is occurring in between.
If we better knew the exact REST invocation ... @tienhngnguyen, @xiaods, could you possibly re-run with --loglevel=10? On the k8s plugin / slave pod stack traces ... I believe those are simply clean-up hiccups after the job fails unexpectedly.
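For reference, the verbose re-run might look like the following; the exact oc start-build arguments here are an assumption based on the tutorial's binary build step, not taken from the thread:

```shell
# --loglevel=10 prints every REST request/response the client issues,
# which should expose the exact API call behind the BadRequest error
oc start-build cart --from-file=target/cart.jar --follow --wait --loglevel=10
```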
thanks @gabemontero, that makes sense. Another option would be to downgrade to the 3.6.0 Jenkins image, which would have an older (and therefore compatible) oc client binary, or just patch the oc binary in the image being used.
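The "patch the oc binary" route could be sketched roughly like this; the base image tag and the release tarball URL are assumptions here and should be checked against the actual image in use:

```shell
# Build a derived Jenkins image that overwrites the bundled oc
# client with the 3.6.0 release binary
cat > Dockerfile <<'EOF'
FROM openshift/jenkins-2-centos7:latest
USER root
RUN curl -fsSL https://github.com/openshift/origin/releases/download/v3.6.0/openshift-origin-client-tools-v3.6.0-c4dd4cf-linux-64bit.tar.gz \
  | tar -xz --strip-components=1 -C /usr/local/bin
USER 1001
EOF
docker build -t jenkins-with-oc-3.6.0 .
```

The Jenkins deployment would then be pointed at the derived image instead of the stock one.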
yep, conceivably those too, @bparees, assuming it is the aforementioned issue ... if @tienhngnguyen or @xiaods has the bandwidth to get the ...
@gabemontero I have executed the pipeline again with --loglevel=10. This is the result of the log:
Please, could you give me a detailed solution for how to handle this?
use upstream oc cluster up --version=latest
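Concretely, that suggestion amounts to recreating the local cluster with matching upstream images, something like the following sketch (oc cluster up flags vary between origin releases):

```shell
# Tear down the existing local cluster, then bring it back up with
# the latest upstream origin images, so the server-side field-label
# conversion matches what the newer oc client sends
oc cluster down
oc cluster up --version=latest
```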
ignore it. It doesn't hurt anything; it's just a race condition between two things trying to start your build: one of them failed, but the other will have started it properly.
and @xiaods, in the future please create unique issues; asking unrelated questions in an issue pollutes the thread and makes it difficult to focus on the original issue.
@bparees Oops, thanks for the reminder. Got it.
And to complete the loop here: yep, with the trace analysis, the original issue is the known 3.6 problem we previously discussed, and, as also previously discussed, using ... Closing this out. Thanks.
sorry ... meant to clarify: use a different oc cluster or oc client version as the workaround.
Referenced commit: add temporary validation fix to script for error "No field label conversion function found for version: build.openshift.io/v1" as long as openshift/origin#17019 is not fixed
Description
Jenkins pipeline fails on the Build image stage with OpenShift.
Version
This is my environment:
Jenkins 2.73.2 on OpenShift (Persistent)
Plugins:
OpenShift Pipeline Jenkins Plugin 1.0.52
OpenShift Sync 0.1.31
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server https://127.0.0.1:8443
openshift v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
=> I'm running a local OpenShift Origin installation via Docker for Mac. I installed it following this tutorial: https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md with the CLI command: oc cluster up
Steps To Reproduce
Tutorial is found under this link: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/
==> I installed the Jenkins Persistent template instead of the non-persistent.
Current Result
Log of the Jenkins build:
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build Image)
[Pipeline] unstash
[Pipeline] sh
[cicd-cart-service-pipeline] Running shell script
Uploading file "target/cart.jar" as binary input for the build ...
build "cart-4" started
Receiving source from STDIN as file cart.jar
==================================================================
Starting S2I Java Build .....
S2I source build with plain binaries detected
Copying binaries from /tmp/src to /deployments ...
... done
Pushing image 172.30.1.1:5000/cicd/cart:latest ...
Pushed 5/6 layers, 84% complete
Pushed 6/6 layers, 100% complete
Push successful
Error from server (BadRequest): No field label conversion function found for version: build.openshift.io/v1
[Pipeline] }
[Pipeline] // stage
Expected Result
The Jenkins pipeline builds, tests, and deploys the application successfully, without any errors, according to the tutorial.
Additional Information
I have a problem with setting up a Jenkins pipeline while carrying out this tutorial: https://blog.openshift.com/openshift-pipelines-jenkins-blue-ocean/
When I try to start the pipeline, the Build Image stage fails with the following log message in the Jenkins pod:
...
INFO: Waiting for Jenkins to be started
| Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher start
| INFO: Now handling startup build configs!!
| Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.ConfigMapWatcher start
| INFO: Now handling startup config maps!!
| Oct 20, 2017 3:26:02 PM io.fabric8.jenkins.openshiftsync.ImageStreamWatcher start
| INFO: Now handling startup image streams!!
| Oct 20, 2017 3:26:02 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
| INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@1538cca: display name [Root WebApplicationContext]; startup date [Fri Oct 20 15:26:02 UTC 2017]; root of context hierarchy
| Oct 20, 2017 3:26:02 PM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
| INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@1538cca]: org.springframework.beans.factory.support.DefaultListableBeanFactory@1dc8eb5
| Oct 20, 2017 3:26:02 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
| INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1dc8eb5: defining beans [filter,legacy]; root of factory hierarchy
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.ConfigMapWatcher$1 doRun
| INFO: creating ConfigMap watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher$1 doRun
| INFO: creating BuildConfig watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:03 PM io.fabric8.jenkins.openshiftsync.BuildWatcher$1 doRun
| INFO: creating Build watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:03 PM hudson.WebAppMain$3 run
| INFO: Jenkins is fully up and running
| Oct 20, 2017 3:26:03 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:04 PM io.fabric8.jenkins.openshiftsync.ImageStreamWatcher$1 doRun
| INFO: creating ImageStream watch for namespace ci and resource version 8430
| Oct 20, 2017 3:26:05 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:26:07 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth: provider: OpenShiftProviderInfo: issuer: https://127.0.0.1:8443 auth ep: https://127.0.0.1:8443/oauth/authorize token ep: https://127.0.0.1:8443/oauth/token
| Oct 20, 2017 3:26:07 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth returning true with namespace ci SA dir null default /run/secrets/kubernetes.io/serviceaccount SA name null default jenkins client ID null default system:serviceaccount:ci:jenkins secret null default eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLXhtMWN4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NzE1ODQxNy1iNThjLTExZTctODU3NS0wZTRkZWZjODRiNDMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.TstDfIRNnKuiVV_SPB8kz9-I3t1XslIbjniahULDdmD4Z64v6C1ZY_WviauPqzta8oPLDt4mKm7XSw_6l1UtuFE_dIFaRhDgGR0rYoYsKrH4LOX979nnd0_zJa-COI4-Ew7yVHQwTGicQU9JSNg0cUlQRl4uxUf1IFkcaAiRfWKtGP3XAVHOEKNNHNaqVJ_i-zFHfPauFR4Y0nvxO3x3Qh3hsktt4bMihNoQSlNuEL7B7ktsITF942lRrSoXYGjKqu6hAh7vyZMs_c8ecaL25CVvyn8MunJsae1XmSRRO5Tvz42lwc5qJce3rOi3GWSGfMTwJleX4udMinFoi-7l2Q redirect null default https://127.0.0.1:8443 server null default https://openshift.default.svc
| Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth: provider: OpenShiftProviderInfo: issuer: https://127.0.0.1:8443 auth ep: https://127.0.0.1:8443/oauth/authorize token ep: https://127.0.0.1:8443/oauth/token
| Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm populateDefaults
| INFO: OpenShift OAuth returning true with namespace ci SA dir null default /run/secrets/kubernetes.io/serviceaccount SA name null default jenkins client ID null default system:serviceaccount:ci:jenkins secret null default eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjaSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJqZW5raW5zLXRva2VuLXhtMWN4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImplbmtpbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3NzE1ODQxNy1iNThjLTExZTctODU3NS0wZTRkZWZjODRiNDMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y2k6amVua2lucyJ9.TstDfIRNnKuiVV_SPB8kz9-I3t1XslIbjniahULDdmD4Z64v6C1ZY_WviauPqzta8oPLDt4mKm7XSw_6l1UtuFE_dIFaRhDgGR0rYoYsKrH4LOX979nnd0_zJa-COI4-Ew7yVHQwTGicQU9JSNg0cUlQRl4uxUf1IFkcaAiRfWKtGP3XAVHOEKNNHNaqVJ_i-zFHfPauFR4Y0nvxO3x3Qh3hsktt4bMihNoQSlNuEL7B7ktsITF942lRrSoXYGjKqu6hAh7vyZMs_c8ecaL25CVvyn8MunJsae1XmSRRO5Tvz42lwc5qJce3rOi3GWSGfMTwJleX4udMinFoi-7l2Q redirect null default https://127.0.0.1:8443 server null default https://openshift.default.svc
| Oct 20, 2017 3:26:33 PM org.openshift.jenkins.plugins.openshiftlogin.OpenShiftOAuth2SecurityRealm updateAuthorizationStrategy
| INFO: OpenShift OAuth: user developer, stored in the matrix as developer-admin, based on OpenShift roles [view, edit, admin] already exists in Jenkins
| Oct 20, 2017 3:27:10 PM io.fabric8.jenkins.openshiftsync.BuildConfigWatcher updateJob
| INFO: Updated job ci-cart-service-pipeline from BuildConfig NamespaceName{ci:cart-service-pipeline} with revision: 8462
| Oct 20, 2017 3:27:10 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onStarted
| INFO: starting polling build job/ci-cart-service-pipeline/7/
| Oct 20, 2017 3:27:42 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
| INFO: Excess workload after pending Spot instances: 1
| Oct 20, 2017 3:27:42 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
| INFO: Template: Kubernetes Pod Template
| Oct 20, 2017 3:27:42 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:27:43 PM hudson.slaves.NodeProvisioner$StandardStrategyImpl apply
| INFO: Started provisioning Kubernetes Pod Template from openshift with 1 executors. Remaining excess workload: 0
| Oct 20, 2017 3:27:43 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:27:43 PM org.csanchez.jenkins.plugins.kubernetes.ProvisioningCallback call
| INFO: Created Pod: maven-xf7cn in namespace ci
| Oct 20, 2017 3:27:43 PM org.csanchez.jenkins.plugins.kubernetes.ProvisioningCallback call
| INFO: Waiting for Pod to be scheduled (0/100): maven-xf7cn
| Oct 20, 2017 3:27:45 PM hudson.TcpSlaveAgentListener$ConnectionHandler run
| INFO: Accepted JNLP4-connect connection #1 from /172.17.0.2:49622
| Oct 20, 2017 3:27:49 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:27:52 PM hudson.slaves.NodeProvisioner$2 run
| INFO: Kubernetes Pod Template provisioning successfully completed. We have now 2 computer(s)
| Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
| INFO: Terminating Kubernetes instance for agent maven-xf7cn
| Oct 20, 2017 3:29:39 PM org.jenkinsci.plugins.workflow.job.WorkflowRun finish
| INFO: ci-cart-service-pipeline #7 completed: FAILURE
| Oct 20, 2017 3:29:39 PM okhttp3.internal.platform.Platform log
| INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
| Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
| INFO: Terminated Kubernetes instance for agent ci/maven-xf7cn
| Terminated Kubernetes instance for agent ci/maven-xf7cn
| Oct 20, 2017 3:29:39 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
| INFO: Disconnected computer maven-xf7cn
| Oct 20, 2017 3:29:39 PM jenkins.slaves.DefaultJnlpSlaveReceiver channelClosed
| WARNING: Computer.threadPoolForRemoting [#15] for maven-xf7cn terminated
| java.nio.channels.ClosedChannelException
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
| at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311)
| at hudson.remoting.Channel.close(Channel.java:1403)
| at hudson.remoting.Channel.close(Channel.java:1356)
| at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:708)
| at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:96)
| at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:626)
| at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
| at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
| at java.util.concurrent.FutureTask.run(FutureTask.java:266)
| at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
| at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
| at java.lang.Thread.run(Thread.java:748)
|
| Oct 20, 2017 3:29:39 PM hudson.remoting.Request$2 run
| WARNING: Failed to send back a reply to the request hudson.remoting.Request$2@1078adc
| hudson.remoting.ChannelClosedException: channel is already closed
| at hudson.remoting.Channel.send(Channel.java:667)
| at hudson.remoting.Request$2.run(Request.java:372)
| at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
| at org.jenkinsci.remoting.CallableDecorator.call(CallableDecorator.java:19)
| at hudson.remoting.CallableDecoratorList$1.call(CallableDecoratorList.java:21)
| at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
| at java.util.concurrent.FutureTask.run(FutureTask.java:266)
| at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
| at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
| at java.lang.Thread.run(Thread.java:748)
| Caused by: java.nio.channels.ClosedChannelException
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:208)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
| at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
| at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
| at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
| at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
| at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:311)
| at hudson.remoting.Channel.close(Channel.java:1403)
| at hudson.remoting.Channel.close(Channel.java:1356)
| at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:708)
| at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:96)
| at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:626)
| at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
| at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
| ... 4 more
|
| Oct 20, 2017 3:29:39 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onCompleted
| INFO: onCompleted job/ci-cart-service-pipeline/7/
| Oct 20, 2017 3:29:39 PM io.fabric8.jenkins.openshiftsync.BuildSyncRunListener onFinalized
| INFO: onFinalized job/ci-cart-service-pipeline/7/
When I carry out the CLI command oc describe nodes, I get the following information:
Name: localhost
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=localhost
Annotations: volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:
CreationTimestamp: Sat, 21 Oct 2017 10:42:06 +0200
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
OutOfDisk False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 23 Oct 2017 23:16:21 +0200 Sun, 22 Oct 2017 22:27:45 +0200 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Mon, 23 Oct 2017 23:16:21 +0200 Mon, 23 Oct 2017 22:58:39 +0200 KubeletReady kubelet is posting ready status
Addresses: 192.168.65.2,192.168.65.2,localhost
Capacity:
cpu: 4
memory: 6100352Ki
pods: 40
Allocatable:
cpu: 4
memory: 5997952Ki
pods: 40
System Info:
Machine ID:
System UUID: 69BC2037-4931-F334-95AC-C0CCC0A84389
Boot ID: 6f25d3c7-9564-41aa-90ce-0060181ed1a4
Kernel Version: 4.9.49-moby
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.6.1+5115d708d7
Kube-Proxy Version: v1.6.1+5115d708d7
ExternalID: localhost
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
ci cart-3-3plz6 200m (5%) 1 (25%) 512Mi (8%) 1Gi (17%)
ci jenkins-1-jq1wd 0 (0%) 0 (0%) 512Mi (8%) 512Mi (8%)
default docker-registry-1-j3pdr 100m (2%) 0 (0%) 256Mi (4%) 0 (0%)
default router-1-zg2cd 100m (2%) 0 (0%) 256Mi (4%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
400m (10%) 1 (25%) 1536Mi (26%) 1536Mi (26%)
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
2d 1d 13 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
2d 1d 13 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
2d 1d 13 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
2d 1d 18 kubelet, localhost Normal NodeReady Node localhost status is now: NodeReady
1h 1h 1 kubelet, localhost Normal Starting Starting kubelet.
1h 1h 1 kubelet, localhost Warning ImageGCFailed unable to find data for container /
1h 1h 1 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
1h 1h 1 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
1h 1h 1 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
1h 1h 1 kubelet, localhost Warning Rebooted Node localhost has been rebooted, boot id: 16be43b9-bdb6-4048-9949-786989bf572c
18m 18m 1 kubelet, localhost Normal Starting Starting kubelet.
18m 18m 1 kubelet, localhost Warning ImageGCFailed unable to find data for container /
18m 18m 1 kubelet, localhost Normal NodeHasSufficientDisk Node localhost status is now: NodeHasSufficientDisk
18m 18m 1 kubelet, localhost Normal NodeHasSufficientMemory Node localhost status is now: NodeHasSufficientMemory
18m 18m 1 kubelet, localhost Normal NodeHasNoDiskPressure Node localhost status is now: NodeHasNoDiskPressure
18m 18m 1 kubelet, localhost Warning Rebooted Node localhost has been rebooted, boot id: 6f25d3c7-9564-41aa-90ce-0060181ed1a4
18m 18m 1 kubelet, localhost Normal NodeNotReady Node localhost status is now: NodeNotReady
17m 17m 1 kubelet, localhost Normal NodeReady Node localhost status is now: NodeReady
I would be very thankful if you can help me in this matter.