The kaniko build pod failed to start #306

Closed
crystaldust opened this issue Dec 19, 2018 · 7 comments

@crystaldust (Contributor) commented Dec 19, 2018

I installed Camel K with kamel install, and all the language build pods seem to fail, complaining that no Dockerfile is specified. The logs look like this:

$ kubectl logs camel-k-kotlin
Error: please provide a valid path to a Dockerfile within the build context with --dockerfile
Usage:
  executor [flags]
......

I've checked the PersistentVolume and found that the /workspace dir is empty; it seems the context dir ('/workspace/builder-NNNN/package/context') is never created. Any ideas on this?
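
For anyone hitting the same symptom, the empty workspace can also be confirmed from inside the operator pod rather than by mounting the PV manually. A minimal sketch, assuming the default `name=camel-k-operator` label used by the operator deployment (adjust the selector and namespace to your install):

```shell
# Find the operator pod via its deployment label, then list the shared
# build workspace; an empty listing means the kaniko context dir was
# never populated by the builder.
OPERATOR_POD=$(kubectl get pod -l name=camel-k-operator \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$OPERATOR_POD" -- ls -la /workspace
```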

@nicolaferraro (Contributor) commented Dec 19, 2018

Hi @crystaldust, can you tell us which version of Camel K you're using and the cluster type (Minikube?)? We had such problems in the past, but I expected them to be solved in 0.1.0.

@crystaldust (Author) commented Dec 20, 2018

I'm using Camel K 0.1.0 on a k8s cluster I deployed manually. I deployed the Minikube registry addon to make Camel K "think" I'm using Minikube, so the operator gets created. Is there something else I should do, since I'm not using a typical k8s cluster solution? :-)

@nicolaferraro (Contributor) commented Dec 20, 2018

I've tried 0.1.0 on Minikube and it seems to work. It may be that an old version of the operator is running.

Can you dump your configuration?

kubectl get deployment,integration,integrationcontext,integrationplatform -o yaml > dump.yaml

Then share the file so we can check it.

You can also try a full uninstall and reinstall in case there's an issue with versions (if so, we should make sure that kamel install always upgrades to the latest version):

kubectl delete all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd -l 'app=camel-k'

Then

kamel install
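
After reinstalling, a quick sanity check is to confirm which image the operator deployment is actually configured with (the deployment and container names here are taken from the dump later in this thread; adjust if yours differ):

```shell
# Print the operator's container image; it should match the release
# you just installed (e.g. docker.io/apache/camel-k:0.1.0), not an
# older tag left over from a previous install.
kubectl get deployment camel-k-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```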

@crystaldust (Author) commented Dec 21, 2018

Here is the dump:

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: 2018-12-18T02:34:45Z
    generation: 4
    labels:
      app: camel-k
      camel.apache.org/component: operator
    name: camel-k-operator
    namespace: default
    resourceVersion: "35484"
    selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/camel-k-operator
    uid: 7b909d78-026d-11e9-915f-080027c29701
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        name: camel-k-operator
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          camel.apache.org/component: operator
          name: camel-k-operator
      spec:
        containers:
        - command:
          - camel-k
          env:
          - name: WATCH_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: OPERATOR_NAME
            value: camel-k
          image: docker.io/apache/camel-k:0.1.0
          imagePullPolicy: IfNotPresent
          name: camel-k-operator
          ports:
          - containerPort: 60000
            name: metrics
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /workspace
            name: camel-k-builder
        dnsPolicy: ClusterFirst
        initContainers:
        - command:
          - chmod
          - "777"
          - /workspace
          image: busybox
          imagePullPolicy: IfNotPresent
          name: build-volume-permission
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /workspace
            name: camel-k-builder
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: camel-k-operator
        serviceAccountName: camel-k-operator
        terminationGracePeriodSeconds: 30
        volumes:
        - name: camel-k-builder
          persistentVolumeClaim:
            claimName: camel-k-builder
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: 2018-12-18T04:06:54Z
      lastUpdateTime: 2018-12-18T04:06:54Z
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: 2018-12-18T04:06:54Z
      lastUpdateTime: 2018-12-18T04:06:54Z
      message: ReplicaSet "camel-k-operator-fffc47bd4" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 4
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "2"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"pvtest","namespace":"default"},"spec":{"template":{"metadata":{"labels":{"app":"pvtest"}},"spec":{"containers":[{"image":"pvtest","imagePullPolicy":"IfNotPresent","name":"pvtest","volumeMounts":[{"mountPath":"/workspace","name":"camel-k-builder"}]}],"volumes":[{"name":"camel-k-builder","persistentVolumeClaim":{"claimName":"camel-k-builder"}}]}}}}
    creationTimestamp: 2018-12-18T08:58:23Z
    generation: 2
    labels:
      app: pvtest
    name: pvtest
    namespace: default
    resourceVersion: "60358"
    selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/pvtest
    uid: 12ff1a82-02a3-11e9-915f-080027c29701
  spec:
    progressDeadlineSeconds: 2147483647
    replicas: 1
    revisionHistoryLimit: 2147483647
    selector:
      matchLabels:
        app: pvtest
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: pvtest
      spec:
        containers:
        - image: pvtest
          imagePullPolicy: IfNotPresent
          name: pvtest
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /workspace
            name: camel-k-builder
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
        volumes:
        - name: camel-k-builder
          persistentVolumeClaim:
            claimName: camel-k-builder
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: 2018-12-18T08:58:23Z
      lastUpdateTime: 2018-12-18T08:58:23Z
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    observedGeneration: 2
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
- apiVersion: camel.apache.org/v1alpha1
  kind: IntegrationContext
  metadata:
    creationTimestamp: 2018-12-18T04:06:21Z
    generation: 3
    labels:
      app: camel-k
      camel.apache.org/context.created.by.kind: Operator
      camel.apache.org/context.created.by.name: core
      camel.apache.org/context.type: platform
    name: groovy
    namespace: default
    resourceVersion: "38088"
    selfLink: /apis/camel.apache.org/v1alpha1/namespaces/default/integrationcontexts/groovy
    uid: 4738304f-027a-11e9-915f-080027c29701
  spec:
    dependencies:
    - runtime:jvm
    - runtime:groovy
    - camel:core
  status:
    digest: vmW0KEastH2zIKxrBRabWwKbbIy1MwmrR4JE0FgSOf7Q
    phase: Error
- apiVersion: camel.apache.org/v1alpha1
  kind: IntegrationContext
  metadata:
    creationTimestamp: 2018-12-18T04:06:21Z
    generation: 3
    labels:
      app: camel-k
      camel.apache.org/context.created.by.kind: Operator
      camel.apache.org/context.created.by.name: jvm
      camel.apache.org/context.type: platform
    name: jvm
    namespace: default
    resourceVersion: "38087"
    selfLink: /apis/camel.apache.org/v1alpha1/namespaces/default/integrationcontexts/jvm
    uid: 473579b1-027a-11e9-915f-080027c29701
  spec:
    dependencies:
    - runtime:jvm
    - camel:core
  status:
    digest: vZ1wvmwcDpuzL2C-v5leXFp2WIWzQfEIK4RnHm4DnjpU
    phase: Error
- apiVersion: camel.apache.org/v1alpha1
  kind: IntegrationContext
  metadata:
    creationTimestamp: 2018-12-18T04:06:21Z
    generation: 3
    labels:
      app: camel-k
      camel.apache.org/context.created.by.kind: Operator
      camel.apache.org/context.created.by.name: jvm
      camel.apache.org/context.type: platform
    name: kotlin
    namespace: default
    resourceVersion: "38089"
    selfLink: /apis/camel.apache.org/v1alpha1/namespaces/default/integrationcontexts/kotlin
    uid: 473a0b72-027a-11e9-915f-080027c29701
  spec:
    dependencies:
    - runtime:jvm
    - runtime:kotlin
    - camel:core
  status:
    digest: v7jMG6rTOufpRKk6kzoTROYL5Al90eLa9VzJfLAoPmKA
    phase: Error
- apiVersion: camel.apache.org/v1alpha1
  kind: IntegrationContext
  metadata:
    creationTimestamp: 2018-12-18T04:06:21Z
    generation: 3
    labels:
      app: camel-k
      camel.apache.org/context.created.by.kind: Operator
      camel.apache.org/context.created.by.name: jvm
      camel.apache.org/context.type: platform
    name: spring-boot
    namespace: default
    resourceVersion: "38990"
    selfLink: /apis/camel.apache.org/v1alpha1/namespaces/default/integrationcontexts/spring-boot
    uid: 47773917-027a-11e9-915f-080027c29701
  spec:
    dependencies:
    - runtime:jvm
    - runtime:spring-boot
    - camel:core
    traits:
      springboot:
        configuration:
          enabled: "true"
  status:
    digest: vNvTKGa9NgKin_G1pWIGtyC4S6wBoULg5csV1ytXtYyA
    phase: Error
- apiVersion: camel.apache.org/v1alpha1
  kind: IntegrationPlatform
  metadata:
    creationTimestamp: 2018-12-18T02:22:42Z
    generation: 4
    labels:
      app: camel-k
    name: camel-k
    namespace: default
    resourceVersion: "38092"
    selfLink: /apis/camel.apache.org/v1alpha1/namespaces/default/integrationplatforms/camel-k
    uid: cc6f5b2e-026b-11e9-915f-080027c29701
  spec:
    build:
      publishStrategy: Kaniko
      registry: 10.254.230.174
    cluster: Kubernetes
    profile: Kubernetes
  status:
    phase: Error
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@crystaldust (Author) commented Dec 21, 2018

After deleting all the resources and installing camel-k again, everything seems to work. But the Kubernetes Go client prints some errors:

ERROR: logging before flag.Parse: E1221 10:01:09.539561   13167 memcache.go:147] couldn't get resource list for camel.apache.org/v1alpha1: the server could not find the requested resource
ERROR: logging before flag.Parse: E1221 10:01:11.534965   13167 memcache.go:147] couldn't get resource list for camel.apache.org/v1alpha1: the server could not find the requested resource

This doesn't stop camel-k from running, though.

@nicolaferraro (Contributor) commented Dec 21, 2018

The error is printed initially because the cluster does not immediately recognize the new CRDs. I don't know if we can get rid of it.
The configuration seems OK; I don't see why it didn't work at first.
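
One way to sidestep the transient discovery errors is to wait until the CRDs are reported as Established before creating resources. A sketch; the CRD names below are assumptions inferred from the kinds in this thread (IntegrationContext, IntegrationPlatform in group camel.apache.org), so verify them with `kubectl get crd` first:

```shell
# Block until the Camel K CRDs are accepted by the API server, so the
# client's discovery cache no longer misses camel.apache.org/v1alpha1.
kubectl wait --for=condition=Established \
  crd/integrations.camel.apache.org \
  crd/integrationcontexts.camel.apache.org \
  crd/integrationplatforms.camel.apache.org \
  --timeout=60s
```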

@crystaldust (Author) commented Dec 21, 2018

Huh, that's pretty strange and interesting. I'll set up a new cluster and try again. Thanks for the help.
