k3s kubectl logs return empty logs #50

Closed
defims opened this issue Jan 12, 2023 · 3 comments

defims (Contributor) commented Jan 12, 2023

Running the image directly with ctr gives the correct stdout logs:

sudo k3s ctr image import --all-platforms target/wasm32-wasi/debug/img.tar # img.tar is built by: cd crates/wasi-demo-app && cargo build && cargo build --features oci-v1-tar && cd ../../
sudo k3s ctr run --rm --runtime=io.containerd.wasmtime.v1 ghcr.io/containerd/runwasi/wasi-demo-app:latest wasi-demo-app # needs containerd-shim-wasmtime-v1 available, installed by running make && make install

(screenshot: ctr run stdout output)
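
A quick sanity check (a hedged aside, not from the original report; the path comes from the note below that the shims live in /usr/local/bin):

ls -l /usr/local/bin/containerd-shim-wasmtime-v1 # should exist and be executable after make && make install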

But when I run it with kubectl, the pod logs are empty:

sudo k3s kubectl apply -f wasm.yml # containerd needs to be configured with the config file shown below
sudo k3s kubectl get pods
sudo k3s kubectl logs wasi-demo-xxx

(screenshot: empty kubectl logs output)
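
Some hedged troubleshooting steps (assumptions, not from the original report) to narrow down where the logs get lost:

sudo k3s kubectl describe pod wasi-demo-xxx # check events and the runtime class the pod was scheduled with
sudo k3s crictl ps -a # confirm the container actually started
sudo k3s crictl logs <container-id> # check whether the CRI layer has logs that kubectl does not show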

The wasm.yml is:

---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime
handler: wasmtime
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasi-demo
  labels:
    app: wasi-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasi-demo
  template:
    metadata:
      labels:
        app: wasi-demo
    spec:
      runtimeClassName: wasmtime
      containers:
      - name: demo
        image: ghcr.io/containerd/runwasi/wasi-demo-app:latest
        imagePullPolicy: Never

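Before applying the manifest, a couple of hedged checks (assuming the CRI plugin uses the k8s.io containerd namespace, which matters because imagePullPolicy: Never means the image must already be imported there):

sudo k3s ctr -n k8s.io images ls | grep wasi-demo-app # image must already be present locally
sudo k3s kubectl get runtimeclass wasmtime # RuntimeClass must exist and map to the wasmtime handler
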
/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl is:

version = 2

[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/rancher/k3s/agent/containerd"
[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = false
  enable_unprivileged_ports = true
  enable_unprivileged_icmp = true
  sandbox_image = "rancher/mirrored-pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  disable_snapshot_annotations = true


[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/var/lib/rancher/k3s/data/7c994f47fd344e1637da337b92c51433c255b387d207b30b3e0262779457afe4/bin"
  conf_dir = "/var/lib/rancher/k3s/agent/etc/cni/net.d"


[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmedge]
  runtime_type = "io.containerd.wasmedge.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
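
k3s renders config.toml from config.toml.tmpl on startup, so as a hedged reminder, restart k3s and confirm the rendered file picked up the wasmtime runtime section:

sudo systemctl restart k3s
sudo grep -A1 'runtimes.wasmtime' /var/lib/rancher/k3s/agent/etc/containerd/config.toml # should print runtime_type = "io.containerd.wasmtime.v1"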

The containerd shims are in /usr/local/bin:
(screenshot: shim binaries listed in /usr/local/bin)

Mossaka (Member) commented Feb 6, 2023

Hey!

Could you please try cloning the repo locally and then running make test/k8s/deploy?

This will build the wasmtime shim binary, create a kind cluster, and apply test/k8s/deploy.yaml to the cluster.

When you run kubectl logs <pod-name>, you will see the following:
(screenshot: expected kubectl logs output)
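
A minimal sketch of that suggestion (the repository URL is an assumption; kind and its container runtime must be installed locally):

git clone https://github.com/containerd/runwasi.git
cd runwasi
make test/k8s/deploy # builds the wasmtime shim, creates a kind cluster, and applies test/k8s/deploy.yaml
kubectl get pods # then kubectl logs <pod-name> should show the demo output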

defims (Contributor, Author) commented Feb 7, 2023

kind works, so it seems like a missing configuration or a bug in k3s is causing this.
(screenshot: kubectl logs working under kind)

defims (Contributor, Author) commented Feb 7, 2023

It's a proxy problem, as described in k3s issue 6119.
Add NO_PROXY=192.168.0.0/16 to /etc/systemd/system/k3s.service.env and restart k3s; everything works.
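
A hedged sketch of that fix (if NO_PROXY is already set in the env file, extend the existing line instead of appending a second one):

echo 'NO_PROXY=192.168.0.0/16' | sudo tee -a /etc/systemd/system/k3s.service.env
sudo systemctl restart k3s
sudo k3s kubectl logs wasi-demo-xxx # should now show the demo output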

defims closed this as completed Feb 7, 2023