Simple Task creation not working #760

Closed
pdutta777 opened this issue Apr 15, 2019 · 4 comments


pdutta777 commented Apr 15, 2019

After installing onto an EKS cluster, I tried creating one of the sample Tasks (with a small edit):

$ cat hello-task.yaml 

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
    - name: echo
      image: busybox
      command:
        - echo
      args:
        - "hello world"

The Task does not get created; the apply fails.

Expected Behavior

The task should be created.
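
For reference, what I'd expect on a working cluster is roughly this (a sketch, not actual output from my cluster):

$ kubectl apply -f hello-task.yaml
task.tekton.dev/echo-hello-world created
$ kubectl get tasks.tekton.dev
NAME               AGE
echo-hello-world   5s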

Actual Behavior

$ kubectl apply -f hello-task.yaml
                      
Error from server (InternalError): error when creating "hello-task.yaml": Internal error occurred: failed calling admission webhook "webhook.tekton.dev": Post https://tekton-pipelines-webhook.tekton-pipelines.svc:443/?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Additional Info

The EKS cluster is running Kubernetes 1.11, and I see that from release v0.2.0 there may be a minimum Kubernetes version requirement for the cluster?
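
A couple of checks I can run that might help narrow this down (rough sketch; exact output will differ): confirming the server version, and confirming that the webhook registration object was actually created in the cluster (the error mentions a webhook named webhook.tekton.dev):

$ kubectl version --short
$ kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep -i tekton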

@vdemeester
Member

@pdutta777 thanks for the issue. Looking at the errors, it looks like the admission webhook is not started. Can you add the output of the following command here?

kubectl get all -n tekton-pipelines

@pdutta777
Author

$ kubectl get all -n tekton-pipelines

NAME                                               READY   STATUS    RESTARTS   AGE
pod/tekton-pipelines-controller-6dcbb496b4-tppjm   1/1     Running   0          11h
pod/tekton-pipelines-webhook-597d5cbbbb-zn89c      1/1     Running   0          11h

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/tekton-pipelines-controller   ClusterIP   172.20.133.38   <none>        9090/TCP   11h
service/tekton-pipelines-webhook      ClusterIP   172.20.69.82    <none>        443/TCP    11h

NAME                                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-pipelines-controller   1         1         1            1           11h
deployment.apps/tekton-pipelines-webhook      1         1         1            1           11h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/tekton-pipelines-controller-6dcbb496b4   1         1         1       11h
replicaset.apps/tekton-pipelines-webhook-597d5cbbbb      1         1         1       11h
$ kubectl logs pod/tekton-pipelines-webhook-597d5cbbbb-zn89c -n tekton-pipelines
    
{"level":"info","caller":"logging/config.go:96","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n  \"level\": \"info\",\n  \"development\": false,\n  \"sampling\": {\n    \"initial\": 100,\n    \"thereafter\": 100\n  },\n  \"outputPaths\": [\"stdout\"],\n  \"errorOutputPaths\": [\"stderr\"],\n  \"encoding\": \"json\",\n  \"encoderConfig\": {\n    \"timeKey\": \"\",\n    \"levelKey\": \"level\",\n    \"nameKey\": \"logger\",\n    \"callerKey\": \"caller\",\n    \"messageKey\": \"msg\",\n    \"stacktraceKey\": \"stacktrace\",\n    \"lineEnding\": \"\",\n    \"levelEncoder\": \"\",\n    \"timeEncoder\": \"\",\n    \"durationEncoder\": \"\",\n    \"callerEncoder\": \"\"\n  }\n}\n"}
{"level":"info","caller":"logging/config.go:97","msg":"Logging level set to info"}
{"level":"warn","caller":"logging/config.go:65","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","logger":"webhook","caller":"webhook/main.go:52","msg":"Starting the Configuration Webhook","knative.dev/controller":"webhook"}
{"level":"info","logger":"webhook","caller":"webhook/webhook.go:175","msg":"Did not find existing secret, creating one","knative.dev/controller":"webhook"}
{"level":"info","logger":"webhook","caller":"webhook/webhook.go:306","msg":"Found certificates for webhook...","knative.dev/controller":"webhook"}
{"level":"info","logger":"webhook","caller":"webhook/webhook.go:425","msg":"Created a webhook","knative.dev/controller":"webhook"}
{"level":"info","logger":"webhook","caller":"webhook/webhook.go:318","msg":"Successfully registered webhook","knative.dev/controller":"webhook"}

@bobcatfish
Collaborator

Hm it does look like the webhook is running based on your output 🤔

> The EKS cluster is running Kubernetes 1.11, and I see that from release v0.2.0 there may be a minimum Kubernetes version requirement for the cluster?

If nothing else we should update our install docs with the requirements.

Our dev docs indicate that a minimum of 1.11 should work. The cluster I've been running is 1.11.7.

So taking a look at istio/old_issues_repo#271, it looks like there are some known issues with webhook validation and EKS. https://aws.amazon.com/about-aws/whats-new/2018/10/amazon-eks-enables-support-for-kubernetes-dynamic-admission-cont/ seems to indicate that this should be fixed now, but it seems like something is going wrong here. @pdutta777, maybe there is something you need to do to enable validating webhooks in your EKS cluster?
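
One thing that might be worth checking (a sketch; assumes the AWS CLI is configured, and my-cluster is a placeholder for your cluster name) is whether the cluster is on a platform version recent enough to include the dynamic admission controller support mentioned in that announcement:

$ aws eks describe-cluster --name my-cluster \
    --query '{version: cluster.version, platformVersion: cluster.platformVersion}' \
    --output table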

@pdutta777
Author

After the 0.3.0 release, I no longer have issues on EKS.
