Hi! I tried to enable knative on a fresh microk8s install, following the tutorial on the Ubuntu website, and everything seemed to work correctly (see the attached install.log). Unfortunately, when I deploy the most basic example service from service.yaml, a correct service is never created: kubectl get ksvc helloworld-python --output=custom-columns=NAME:.metadata.name,DOMAIN:.status.domain reports DOMAIN as <none>, and the service never gets a URL.

kubectl get all --namespace default
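For reference, service.yaml is essentially the helloworld-python sample from the Knative documentation; a minimal sketch (the image and TARGET value below are the upstream sample's defaults and may differ slightly from my exact file):

```yaml
# Minimal Knative Service, per the upstream helloworld-python sample.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-python
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-python
          env:
            - name: TARGET
              value: "Python Sample v1"
```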
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/helloworld-python-00001-deployment-6fbdcfbccc-557g8   2/2     Running   0          37m

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
service/kubernetes                        ClusterIP   10.152.183.1     <none>        443/TCP                                      63m
service/helloworld-python-00001-private   ClusterIP   10.152.183.209   <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP,8012/TCP   37m
service/helloworld-python-00001           ClusterIP   10.152.183.201   <none>        80/TCP                                       37m

NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helloworld-python-00001-deployment   0/1     0            0           37m

NAME                                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/helloworld-python-00001-deployment-6fbdcfbccc   1         1         1       37m

NAME                                            URL   LATESTCREATED   LATESTREADY   READY   REASON
service.serving.knative.dev/helloworld-python

NAME                                          URL   READY   REASON
route.serving.knative.dev/helloworld-python

NAME                                                     CONFIG NAME         K8S SERVICE NAME   GENERATION   READY   REASON   ACTUAL REPLICAS   DESIRED REPLICAS
revision.serving.knative.dev/helloworld-python-00001     helloworld-python                      1

NAME                                                  LATESTCREATED   LATESTREADY   READY   REASON
configuration.serving.knative.dev/helloworld-python
microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
  enabled:
    dns              # CoreDNS
    ha-cluster       # Configure high availability on the current node
    istio            # Core Istio service mesh services
    knative          # The Knative framework on Kubernetes.
  disabled:
    ambassador       # Ambassador API Gateway and Ingress
    cilium           # SDN, fast with full network policy
    dashboard        # The Kubernetes dashboard
    fluentd          # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu              # Automatic enablement of Nvidia CUDA
    helm             # Helm 2 - the package manager for Kubernetes
    helm3            # Helm 3 - Kubernetes package manager
    host-access      # Allow Pods connecting to Host services smoothly
    ingress          # Ingress controller for external access
    jaeger           # Kubernetes Jaeger operator with its simple config
    kata             # Kata Containers is a secure runtime with lightweight VMS
    keda             # Kubernetes-based Event Driven Autoscaling
    kubeflow         # Kubeflow for easy ML deployments
    linkerd          # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb          # Loadbalancer for your Kubernetes cluster
    metrics-server   # K8s Metrics Server for API access to service metrics
    multus           # Multus CNI enables attaching multiple network interfaces to pods
    openebs          # OpenEBS is the open-source storage solution for Kubernetes
    openfaas         # openfaas serverless framework
    portainer        # Portainer UI for your Kubernetes cluster
    prometheus       # Prometheus operator for monitoring and logging
    rbac             # Role-Based Access Control for authorisation
    registry         # Private image registry exposed on localhost:32000
    storage          # Storage class; allocates storage from host directory
    traefik          # traefik Ingress controller for external access
microk8s inspect
Inspecting Certificates
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-kubelite is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
Inspecting juju
Inspect Juju
Inspecting kubeflow
Inspect Kubeflow

The full inspection report is attached as inspection-report-20211109_172050.tar.gz.

I inspected the logs of the service controller and found the following error:

sudo tail /var/log/pods/knative-serving_controller-788796f49d-rbpg7_00d9cc24-3b9a-4d17-b739-2936283b4498/controller/0.log
2021-11-09T17:17:40.855445652+01:00 stderr F {"severity":"INFO","timestamp":"2021-11-09T16:17:40.85527315Z","logger":"controller","caller":"configuration/configuration.go:104","message":"Revision \"helloworld-python-00001\" of configuration is not ready","commit":"c75484e","knative.dev/pod":"controller-788796f49d-rbpg7","knative.dev/controller":"knative.dev.serving.pkg.reconciler.configuration.Reconciler","knative.dev/kind":"serving.knative.dev.Configuration","knative.dev/traceid":"e99dc1e5-4c38-42b7-ba04-648143508138","knative.dev/key":"default/helloworld-python"}
2021-11-09T17:17:40.865510259+01:00 stderr F {"severity":"WARNING","timestamp":"2021-11-09T16:17:40.865348256Z","logger":"controller","caller":"configuration/reconciler.go:287","message":"Failed to update resource status","commit":"c75484e","knative.dev/pod":"controller-788796f49d-rbpg7","knative.dev/controller":"knative.dev.serving.pkg.reconciler.configuration.Reconciler","knative.dev/kind":"serving.knative.dev.Configuration","knative.dev/traceid":"e99dc1e5-4c38-42b7-ba04-648143508138","knative.dev/key":"default/helloworld-python","targetMethod":"ReconcileKind","error":"admission webhook \"webhook.serving.knative.dev\" denied the request: mutation failed: cannot decode incoming new object: json: unknown field \"subresource\""}
2021-11-09T17:17:40.865554468+01:00 stderr F {"severity":"ERROR","timestamp":"2021-11-09T16:17:40.865436299Z","logger":"controller","caller":"controller/controller.go:549","message":"Reconcile error","commit":"c75484e","knative.dev/pod":"controller-788796f49d-rbpg7","knative.dev/controller":"knative.dev.serving.pkg.reconciler.configuration.Reconciler","knative.dev/kind":"serving.knative.dev.Configuration","duration":"10.356986ms","error":"admission webhook \"webhook.serving.knative.dev\" denied the request: mutation failed: cannot decode incoming new object: json: unknown field \"subresource\"","stacktrace":"knative.dev/pkg/controller.(*Impl).handleErr\n\tknative.dev/pkg@v0.0.0-20210622173328-dd0db4b05c80/controller/controller.go:549\nknative.dev/pkg/controller.(*Impl).processNextWorkItem\n\tknative.dev/pkg@v0.0.0-20210622173328-dd0db4b05c80/controller/controller.go:532\nknative.dev/pkg/controller.(*Impl).RunContext.func3\n\tknative.dev/pkg@v0.0.0-20210622173328-dd0db4b05c80/controller/controller.go:468"}
2021-11-09T17:17:40.866220436+01:00 stderr F {"severity":"INFO","timestamp":"2021-11-09T16:17:40.866103059Z","logger":"controller.event-broadcaster","caller":"record/event.go:282","message":"Event(v1.ObjectReference{Kind:\"Configuration\", Namespace:\"default\", Name:\"helloworld-python\", UID:\"38b11bf5-251e-4a09-af7e-25907f1f0027\", APIVersion:\"serving.knative.dev/v1\", ResourceVersion:\"18330\", FieldPath:\"\"}): type: 'Warning' reason: 'UpdateFailed' Failed to update status for \"helloworld-python\": admission webhook \"webhook.serving.knative.dev\" denied the request: mutation failed: cannot decode incoming new object: json: unknown field \"subresource\"","commit":"c75484e","knative.dev/pod":"controller-788796f49d-rbpg7"}
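As an aside, these controller logs are structured JSON, so the relevant fields are easier to scan once extracted; a small sketch (the sample line below is abbreviated from the full ERROR entry above):

```shell
# Pull the severity, message and error fields out of a structured Knative
# controller log line. The sample line is abbreviated from the log above.
line='{"severity":"ERROR","message":"Reconcile error","error":"admission webhook \"webhook.serving.knative.dev\" denied the request: mutation failed: cannot decode incoming new object: json: unknown field \"subresource\""}'
python3 -c '
import json, sys
entry = json.loads(sys.argv[1])
print(entry["severity"], "-", entry["message"], "->", entry["error"])
' "$line"
```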
Furthermore, I checked the logs of the failed default-domain--1-9n57t pod and found the following:
kubectl -n knative-serving logs default-domain--1-9n57t
W1109 15:28:06.351133 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"level":"fatal","ts":1636472886.6482625,"logger":"fallback.default-domain","caller":"default-domain/main.go:199","msg":"Error finding gateway address","error":"timed out waiting for the condition","stacktrace":"main.main\n\tknative.dev/serving/cmd/default-domain/main.go:199\nruntime.main\n\truntime/proc.go:204"}
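For context, the default-domain job tries to discover the ingress gateway's external address and derive the cluster domain from it; with metallb disabled (see the status output above), the gateway service may never get an external IP, which would explain the "timed out waiting for the condition" error. One workaround is to set the domain by hand in the config-domain ConfigMap; a sketch, where 10.64.140.43.sslip.io is an assumed example value and should be replaced with the actual ingress IP:

```yaml
# knative-serving/config-domain: use a fixed wildcard-DNS domain instead of
# waiting for the default-domain job to discover a gateway address.
# 10.64.140.43.sslip.io is an example; substitute <your-ingress-ip>.sslip.io.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  10.64.140.43.sslip.io: ""
```

If that is the cause, applying this with kubectl apply -f and re-creating the service should at least give the route a URL, independently of the default-domain job.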
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.