Configuration is waiting for a Revision to become ready #1971

Closed
zrss opened this issue Aug 29, 2018 · 4 comments
Labels
area/networking kind/bug Categorizes issue or PR as related to a bug. kind/doc Something isn't clear

Comments

@zrss
Contributor

zrss commented Aug 29, 2018

Expected Behavior

I bootstrapped a k8s 1.11 cluster in an Ubuntu 18.04.1 LTS VM through local-up-cluster.sh (and also installed a one-master, two-node k8s 1.11.2 cluster with kubeadm), followed the install guide at https://github.com/knative/docs/blob/master/install/Knative-with-any-k8s.md to install Knative, and deployed the demo app from https://github.com/knative/docs/blob/master/install/getting-started-knative-app.md

Finally, after all that effort, I expected to be able to curl the demo app and get the expected response.

Actual Behavior

zrss@kubernetes:~/work/src/k8s.io/kubernetes$ curl -vH "Host: helloworld-go.default.example.com" http://127.0.0.1:32380/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 32380 (#0)
> GET / HTTP/1.1
> Host: helloworld-go.default.example.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< location: http://helloworld-go.default.example.com/
< date: Wed, 29 Aug 2018 18:22:10 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact

Steps to Reproduce the Problem

Follow the guides https://github.com/knative/docs/blob/master/install/Knative-with-any-k8s.md and https://github.com/knative/docs/blob/master/install/getting-started-knative-app.md step by step.

Additional Info

knative serving

v0.1.1

kubectl and cluster version

Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.8-beta.0.11+a888fe400e2a74-dirty", GitCommit:"a888fe400e2a747496cb322c9d0ba13f40b6825d", GitTreeState:"dirty", BuildDate:"2018-08-29T17:13:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.8-beta.0.11+a888fe400e2a74-dirty", GitCommit:"a888fe400e2a747496cb322c9d0ba13f40b6825d", GitTreeState:"dirty", BuildDate:"2018-08-29T17:13:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Pod status (all pods seem to be running well):

zrss@kubernetes:~/work/src/k8s.io/kubernetes$ ./cluster/kubectl.sh  get po --all-namespaces
NAMESPACE         NAME                                        READY     STATUS      RESTARTS   AGE
istio-system      istio-citadel-7bdc7775c7-dc5pq              1/1       Running     0          12m
istio-system      istio-cleanup-old-ca-tmq9z                  0/1       Completed   0          12m
istio-system      istio-egressgateway-795fc9b47-r6bhr         1/1       Running     0          12m
istio-system      istio-ingress-84659cf44c-b5m8s              1/1       Running     0          12m
istio-system      istio-ingressgateway-7d89dbf85f-69brd       1/1       Running     0          12m
istio-system      istio-mixer-post-install-tqtm2              0/1       Completed   0          12m
istio-system      istio-pilot-66f4dd866c-7vn7j                2/2       Running     0          12m
istio-system      istio-policy-76c8896799-nvgnp               2/2       Running     0          12m
istio-system      istio-sidecar-injector-645c89bc64-xfl86     1/1       Running     0          12m
istio-system      istio-statsd-prom-bridge-6677bccfd4-xv96p   1/1       Running     0          11m
istio-system      istio-telemetry-6554768879-c2lft            2/2       Running     0          12m
istio-system      knative-ingressgateway-5f5dc4b4cd-4v64d     1/1       Running     0          10m
istio-system      zipkin-795dcbf84b-hvstt                     1/1       Running     0          10m
knative-build     build-controller-64cd47df77-5jq2t           1/1       Running     0          10m
knative-build     build-webhook-864d45f66f-qbdwm              1/1       Running     0          10m
knative-serving   controller-5d7f46bfd6-jnppv                 1/1       Running     0          10m
knative-serving   webhook-7f8ddf4499-vdf9p                    1/1       Running     0          10m
kube-system       kube-dns-659bc9899c-djfct                   3/3       Running     0          4m
monitoring        grafana-798cf569ff-28rk2                    1/1       Running     0          10m
monitoring        kibana-logging-7d474fbb45-2qndf             1/1       Running     0          10m
monitoring        kube-state-metrics-77597b45f8-fxsh7         4/4       Running     0          10m
monitoring        node-exporter-zn89m                         2/2       Running     0          10m
monitoring        prometheus-system-0                         1/1       Running     0          10m
monitoring        prometheus-system-1                         1/1       Running     0          10m

And here is the status of the relevant CRD resources.

revisions.serving.knative.dev

zrss@kubernetes:~/work/src/k8s.io/kubernetes$ ./cluster/kubectl.sh  get revisions.serving.knative.dev -oyaml
apiVersion: v1
items:
- apiVersion: serving.knative.dev/v1alpha1
  kind: Revision
  metadata:
    annotations:
      serving.knative.dev/configurationGeneration: "1"
    clusterName: ""
    creationTimestamp: 2018-08-29T17:21:23Z
    generation: 1
    labels:
      serving.knative.dev/configuration: helloworld-go
    name: helloworld-go-00001
    namespace: default
    ownerReferences:
    - apiVersion: serving.knative.dev/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: Configuration
      name: helloworld-go
      uid: f3a9d398-abaf-11e8-9def-72fdd9b16985
    resourceVersion: "8700"
    selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/default/revisions/helloworld-go-00001
    uid: f3abfa21-abaf-11e8-9def-72fdd9b16985
  spec:
    concurrencyModel: Multi
    container:
      env:
      - name: TARGET
        value: Go Sample v1
      image: gcr.io/knative-samples/helloworld-go
      name: ""
      resources: {}
    generation: 1
    servingState: Active
  status:
    conditions:
    - lastTransitionTime: 2018-08-29T18:08:51Z
      reason: Updating
      status: Unknown
      type: ContainerHealthy
    - lastTransitionTime: 2018-08-29T18:28:44Z
      reason: Updating
      status: Unknown
      type: ResourcesAvailable
    - lastTransitionTime: 2018-08-29T18:28:44Z
      reason: Updating
      status: Unknown
      type: Ready
    logUrl: |
      http://localhost:8001/api/v1/namespaces/monitoring/services/kibana-logging/proxy/app/kibana#/discover?_a=(query:(match:(kubernetes.labels.knative-dev%2FrevisionUID:(query:'f3abfa21-abaf-11e8-9def-72fdd9b16985',type:phrase))))
    serviceName: helloworld-go-00001-service
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

configurations.serving.knative.dev

zrss@kubernetes:~/work/src/k8s.io/kubernetes$ ./cluster/kubectl.sh  get configurations.serving.knative.dev -oyaml
apiVersion: v1
items:
- apiVersion: serving.knative.dev/v1alpha1
  kind: Configuration
  metadata:
    clusterName: ""
    creationTimestamp: 2018-08-29T17:21:23Z
    generation: 1
    labels:
      serving.knative.dev/route: helloworld-go
      serving.knative.dev/service: helloworld-go
    name: helloworld-go
    namespace: default
    ownerReferences:
    - apiVersion: serving.knative.dev/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: Service
      name: helloworld-go
      uid: f3a6ecb3-abaf-11e8-9def-72fdd9b16985
    resourceVersion: "6183"
    selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/default/configurations/helloworld-go
    uid: f3a9d398-abaf-11e8-9def-72fdd9b16985
  spec:
    generation: 1
    revisionTemplate:
      metadata:
        creationTimestamp: null
      spec:
        concurrencyModel: Multi
        container:
          env:
          - name: TARGET
            value: Go Sample v1
          image: gcr.io/knative-samples/helloworld-go
          name: ""
          resources: {}
  status:
    conditions:
    - lastTransitionTime: 2018-08-29T18:08:51Z
      status: Unknown
      type: Ready
    latestCreatedRevisionName: helloworld-go-00001
    observedGeneration: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

services.serving.knative.dev

hzs@kubernetes:~/work/src/k8s.io/kubernetes$ ./cluster/kubectl.sh  get services.serving.knative.dev helloworld-go  -oyaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serving.knative.dev/v1alpha1","kind":"Service","metadata":{"annotations":{},"name":"helloworld-go","namespace":"default"},"spec":{"runLatest":{"configuration":{"revisionTemplate":{"spec":{"container":{"env":[{"name":"TARGET","value":"Go Sample v1"}],"image":"gcr.io/knative-samples/helloworld-go"}}}}}}}
  clusterName: ""
  creationTimestamp: 2018-08-29T17:21:23Z
  generation: 1
  name: helloworld-go
  namespace: default
  resourceVersion: "8946"
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/default/services/helloworld-go
  uid: f3a6ecb3-abaf-11e8-9def-72fdd9b16985
spec:
  generation: 1
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          creationTimestamp: null
        spec:
          concurrencyModel: Multi
          container:
            env:
            - name: TARGET
              value: Go Sample v1
            image: gcr.io/knative-samples/helloworld-go
            name: ""
            resources: {}
status:
  conditions:
  - lastTransitionTime: 2018-08-29T18:08:51Z
    status: Unknown
    type: ConfigurationsReady
  - lastTransitionTime: 2018-08-29T18:09:14Z
    message: Configuration "helloworld-go" is waiting for a Revision to become ready.
    reason: RevisionMissing
    status: Unknown
    type: RoutesReady
  - lastTransitionTime: 2018-08-29T18:30:44Z
    message: Configuration "helloworld-go" is waiting for a Revision to become ready.
    reason: RevisionMissing
    status: Unknown
    type: Ready
  domain: helloworld-go.default.example.com
  domainInternal: helloworld-go.default.svc.cluster.local
  latestCreatedRevisionName: helloworld-go-00001
  observedGeneration: 1

It seems that services.serving.knative.dev is waiting for a Revision to become ready, but looking at the revisions.serving.knative.dev and configurations.serving.knative.dev CRDs, I can't find anything abnormal in their status, except that every condition's status is Unknown.

By the way, I noticed the "crazy idea" in #638 and its implementation,

and found that there is a go-containerregistry remote call to resolve the image to a digest:

img, err := remote.Image(tag, remote.WithTransport(r.transport), remote.WithAuthFromKeychain(kc))
if err != nil {
    return err
}
digest, err := img.Digest()

Note that this call will fail to resolve gcr.io, because there is no upstream nameserver configured in kube-dns.

I had to add the following config to the kube-dns ConfigMap, and then kill the kube-dns pod,

data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]

to fix the `Get https://gcr.io/v2/: dial tcp: i/o timeout` error in the revisions.serving.knative.dev status.conditions:

  status:
    conditions:
    - lastTransitionTime: 2018-08-29T19:03:43Z
      reason: Deploying
      status: Unknown
      type: ResourcesAvailable
    - lastTransitionTime: 2018-08-29T19:04:13Z
      message: 'Get https://gcr.io/v2/: dial tcp: i/o timeout'
      reason: ContainerMissing
      status: "False"
      type: ContainerHealthy
    - lastTransitionTime: 2018-08-29T19:04:13Z
      message: 'Get https://gcr.io/v2/: dial tcp: i/o timeout'
      reason: ContainerMissing
      status: "False"
      type: Ready
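
For reference, a complete ConfigMap manifest carrying that stanza (assuming the stock kube-dns naming in the kube-system namespace) would look roughly like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```

After applying it, delete the kube-dns pod so the replacement pod picks up the upstream nameservers.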

But it still does not work:

zrss@kubernetes:~/work/src/k8s.io/kubernetes$ curl -vH "Host: helloworld-go.default.example.com" http://127.0.0.1:32380/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 32380 (#0)
> GET / HTTP/1.1
> Host: helloworld-go.default.example.com
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< location: http://helloworld-go.default.example.com/
< date: Wed, 29 Aug 2018 19:10:01 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact
@knative-prow-robot knative-prow-robot added area/networking kind/bug Categorizes issue or PR as related to a bug. kind/doc Something isn't clear labels Aug 29, 2018
@zrss
Contributor Author

zrss commented Aug 30, 2018

Finally, I made it work in a Kubernetes 1.11.2 cluster bootstrapped by kubeadm (the proxy was the major problem):

root@kubeadm-master-01:~# curl -w %{time_namelookup}---%{time_connect}---%{time_starttransfer}---%{time_total}"\n" -H "Host: helloworld-go.default.example.com" http://10.162.238.206:32380
Hello World: Go Sample v1!
0.000---0.001---0.005---0.005

You should set the http_proxy, https_proxy, and no_proxy env vars in the knative-serving controller YAML when you run Knative behind a proxy, because the controller connects to the Docker registry to get the digest of the image.
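To see why all three variables matter, here is a small self-contained sketch of how Go's default HTTP transport (which the controller's registry client ultimately uses) picks a proxy from the environment. The proxy URL and the no_proxy suffixes are placeholders for illustration:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func init() {
	// Hypothetical proxy settings, for illustration only.
	os.Setenv("HTTPS_PROXY", "http://proxy.example.com:3128")
	os.Setenv("NO_PROXY", "localhost,127.0.0.1,.svc,.cluster.local")
}

// proxyFor returns the proxy URL (or "") that Go's default HTTP
// transport would use for the given request URL, based on the
// http_proxy / https_proxy / no_proxy environment variables.
func proxyFor(rawURL string) string {
	req, err := http.NewRequest("GET", rawURL, nil)
	if err != nil {
		return ""
	}
	u, err := http.ProxyFromEnvironment(req)
	if err != nil || u == nil {
		return ""
	}
	return u.String()
}

func main() {
	// The registry digest lookup leaves the cluster, so it must use the proxy.
	fmt.Println(proxyFor("https://gcr.io/v2/")) // http://proxy.example.com:3128
	// In-cluster traffic matches a no_proxy suffix and bypasses the proxy.
	fmt.Println(proxyFor("https://webhook.knative-serving.svc.cluster.local/")) // ""
}
```

This is also why no_proxy needs the cluster-internal suffixes: without them, the controller would try to send in-cluster calls through the external proxy too.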

Back to my problem: it seems the revisions.serving.knative.dev CRD status should look like

  status:
    conditions:
    - lastTransitionTime: 2018-08-30T09:09:18Z
      reason: Updating
      status: Unknown
      type: ResourcesAvailable
    - lastTransitionTime: 2018-08-30T09:09:18Z
      reason: Updating
      status: Unknown
      type: ContainerHealthy
    - lastTransitionTime: 2018-08-30T09:09:18Z
      reason: Inactive
      status: "False"
      type: Ready

and once we send a request, it changes to

  status:
    conditions:
    - lastTransitionTime: 2018-08-30T09:36:46Z
      status: "True"
      type: ResourcesAvailable
    - lastTransitionTime: 2018-08-30T09:36:46Z
      status: "True"
      type: ContainerHealthy
    - lastTransitionTime: 2018-08-30T09:36:46Z
      status: "True"
      type: Ready

rather than what happened in my previous post, where it remained stuck in

  status:
    conditions:
    - lastTransitionTime: 2018-08-29T18:08:51Z
      reason: Updating
      status: Unknown
      type: ContainerHealthy
    - lastTransitionTime: 2018-08-29T18:28:44Z
      reason: Updating
      status: Unknown
      type: ResourcesAvailable
    - lastTransitionTime: 2018-08-29T18:28:44Z
      reason: Updating
      status: Unknown
      type: Ready

I need some time to dive into it.

@zrss
Contributor Author

zrss commented Oct 8, 2018

An old story now; closing it.

@zrss zrss closed this as completed Oct 8, 2018
@indreshmishra

@zrss how did you set the proxy in the serving controller? Can you please elaborate?

@zrss
Contributor Author

zrss commented Oct 12, 2018

@indreshmishra, hope you have already figured it out; I set the proxy in the serving controller this way.

See the deployment.yaml of the serving controller:

containers:
- name: controller
  # This is the Go import path for the binary that is containerized
  # and substituted here.
  image: github.com/knative/serving/cmd/controller
  ports:
  - name: metrics
    containerPort: 9090
  volumeMounts:
  - name: config-logging
    mountPath: /etc/config-logging

and add env vars like these to the deployment.yaml (see the Kubernetes docs on defining an environment variable for a container):

env:
- name: http_proxy
  value: {your_http_proxy}
- name: https_proxy
  value: {your_https_proxy}
- name: no_proxy
  value: {your_no_proxy}
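
Putting the two fragments together, the controller container spec would end up roughly like this (the proxy values are placeholders to replace with your own):

```yaml
containers:
- name: controller
  image: github.com/knative/serving/cmd/controller
  env:
  - name: http_proxy
    value: http://proxy.example.com:3128                 # placeholder
  - name: https_proxy
    value: http://proxy.example.com:3128                 # placeholder
  - name: no_proxy
    value: localhost,127.0.0.1,.svc,.cluster.local       # placeholder
  ports:
  - name: metrics
    containerPort: 9090
  volumeMounts:
  - name: config-logging
    mountPath: /etc/config-logging
```

Remember to include the cluster-internal suffixes in no_proxy, or the controller will send in-cluster calls to the proxy too.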
