
Kubernetes support #1

Closed
kdrag0n opened this issue Feb 20, 2023 · 35 comments
Labels
f/containers Affects container users t/feature New feature
Comments

@kdrag0n
Member

kdrag0n commented Feb 20, 2023

This is about first-class support for Kubernetes.

You can already do it yourself with kind, k3s, or k3d: https://docs.orbstack.dev/docker/kubernetes
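
For example (a rough sketch, not official instructions; "mycluster" is just an arbitrary name), any of these works against the Docker engine OrbStack already provides:

# kind: single-node cluster using the current Docker context
kind create cluster

# k3d: k3s running in Docker containers
k3d cluster create mycluster

# k3s: official install script, run inside an OrbStack Linux machine
curl -sfL https://get.k3s.io | sh -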

@kdrag0n kdrag0n added t/feature New feature planned and removed planned labels Feb 20, 2023
@631068264

If k8s is supported, I will install OrbStack.

@kdrag0n
Member Author

kdrag0n commented Mar 27, 2023

For now, you can run Kubernetes yourself with kind, k3s, or k3d: https://docs.orbstack.dev/docker/kubernetes

@631068264

Downloading images is too slow because of #2. It's still a long way from replacing Docker Desktop. By the way, host network support is wonderful.

@killwing

killwing commented Apr 1, 2023

An error happens when installing k3s on an Ubuntu machine:
Process: 27663 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=1/FAILURE)
Process: 27664 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)


@kdrag0n
Member Author

kdrag0n commented Apr 2, 2023

An error happens when installing k3s on an Ubuntu machine: Process: 27663 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=1/FAILURE) Process: 27664 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)

@killwing That's normal because OrbStack doesn't use kernel modules. All the necessary modules are built in so k3s should work anyway.

If there's actually an error preventing k3s from starting, then please share the full output of systemctl status k3s and journalctl -u k3s. Otherwise you can safely ignore it — the service will start as usual.
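
For reference, those two commands verbatim (run inside the Linux machine; --no-pager just makes the full log easy to copy):

systemctl status k3s
journalctl -u k3s --no-pager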

@killwing

killwing commented Apr 2, 2023

An error happens when installing k3s on an Ubuntu machine: Process: 27663 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=1/FAILURE) Process: 27664 ExecStartPre=/sbin/modprobe overlay (code=exited, status=1/FAILURE)

@killwing That's normal because OrbStack doesn't use kernel modules. All the necessary modules are built in so k3s should work anyway.

If there's actually an error preventing k3s from starting, then please share the full output of systemctl status k3s and journalctl -u k3s. Otherwise you can safely ignore it — the service will start as usual.

Thanks. Another error is about /dev/vdb1 not being found. journalctl -u k3s:

Apr 01 11:47:13 ubuntu systemd[1]: Starting Lightweight Kubernetes...
Apr 01 11:47:13 ubuntu sh[766]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Apr 01 11:47:13 ubuntu sh[767]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Apr 01 11:47:13 ubuntu modprobe[768]: modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/6.1.21-orbstack-00098-g7d48b03fef38
Apr 01 11:47:13 ubuntu modprobe[769]: modprobe: FATAL: Module overlay not found in directory /lib/modules/6.1.21-orbstack-00098-g7d48b03fef38
Apr 01 11:47:13 ubuntu k3s[770]: time="2023-04-01T11:47:13+08:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Apr 01 11:47:13 ubuntu k3s[770]: time="2023-04-01T11:47:13+08:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/c0830be39589f4503a78572e92ac1ff62de74be5bc69c98a71ff0aac3cc8f847"
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="Starting k3s v1.22.15+k3s1 (7b69bebd)"
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="Database tables and indexes are up to date"
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="Kine available at unix://kine.sock"
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="generated self-signed CA certificate CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14.910569741 +0000 UTC notAfter=>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC n>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC >
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 U>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=2024>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC no>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="generated self-signed CA certificate CN=k3s-server-ca@1680320834: notBefore=2023-04-01 03:47:14.915324303 +0000 UTC notAfter=>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=2024-03>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="generated self-signed CA certificate CN=k3s-request-header-ca@1680320834: notBefore=2023-04-01 03:47:14.916650723 +0000 UTC n>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAf>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="generated self-signed CA certificate CN=etcd-server-ca@1680320834: notBefore=2023-04-01 03:47:14.917413037 +0000 UTC notAfter>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=2024-03-3>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=2024-03-3>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="generated self-signed CA certificate CN=etcd-peer-ca@1680320834: notBefore=2023-04-01 03:47:14.918935628 +0000 UTC notAfter=2>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=2024-03-31 03>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=2024-03-31 0>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=warning msg="dynamiclistener [::]:6443: no cached certificate available for preload - deferring certificate load until storage initiali>
Apr 01 11:47:14 ubuntu k3s[770]: time="2023-04-01T11:47:14+08:00" level=info msg="Active TLS secret / (ver=) (count 11): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-100.115.93.210:100>
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Tunnel server egress proxy mode: agent"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernete>
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kube>
Apr 01 11:47:15 ubuntu k3s[770]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/contr>
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Waiting for API server to become available"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/clou>
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.008659     770 server.go:581] external host was not specified, using 100.115.93.210
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.008847     770 server.go:175] Version: v1.22.15+k3s1
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="To join server node to cluster: k3s server -s https://100.115.93.210:6443 -t ${SERVER_NODE_TOKEN}"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="To join agent node to cluster: k3s agent -s https://100.115.93.210:6443 -t ${AGENT_NODE_TOKEN}"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Run: k3s kubectl"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="certificate CN=ubuntu signed by CN=k3s-server-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 UTC notAfter=2024-03-31 03:4>
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="certificate CN=system:node:ubuntu,O=system:nodes signed by CN=k3s-client-ca@1680320834: notBefore=2023-04-01 03:47:14 +0000 U>
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Module overlay was already loaded"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Module nf_conntrack was already loaded"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=warning msg="Failed to load kernel module br_netfilter with modprobe"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.205829     770 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 524288"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Apr 01 11:47:15 ubuntu k3s[770]: time="2023-04-01T11:47:15+08:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /r>
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.361479     770 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.362206     770 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,No>
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.362254     770 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priorit>
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.363080     770 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,No>
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.363110     770 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priorit>
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.373387     770 genericapiserver.go:455] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.374346     770 instance.go:278] Using reconciler: lease
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.404090     770 rest.go:130] the default service ipfamily for this cluster is: IPv4
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.754520     770 genericapiserver.go:455] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.755944     770 genericapiserver.go:455] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.785689     770 genericapiserver.go:455] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.791211     770 genericapiserver.go:455] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.812372     770 genericapiserver.go:455] Skipping API networking.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.823618     770 genericapiserver.go:455] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.846636     770 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.846669     770 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.853095     770 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.853169     770 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.865713     770 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.872853     770 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.911239     770 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.911294     770 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.916513     770 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.922582     770 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,No>
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.922634     770 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priorit>
Apr 01 11:47:15 ubuntu k3s[770]: W0401 11:47:15.936992     770 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
Apr 01 11:47:15 ubuntu k3s[770]: I0401 11:47:15.992594     770 trace.go:205] Trace[498682492]: "List etcd3" key:/jobs,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (01-Apr-2023 11:47:15.478) (>
Apr 01 11:47:15 ubuntu k3s[770]: Trace[498682492]: [513.637336ms] [513.637336ms] END
Apr 01 11:47:16 ubuntu k3s[770]: I0401 11:47:16.033564     770 trace.go:205] Trace[1074395984]: "List etcd3" key:/networkpolicies,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (01-Apr-2023 11:>
Apr 01 11:47:16 ubuntu k3s[770]: Trace[1074395984]: [510.233278ms] [510.233278ms] END
Apr 01 11:47:16 ubuntu k3s[770]: I0401 11:47:16.035658     770 trace.go:205] Trace[1432758442]: "List etcd3" key:/poddisruptionbudgets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (01-Apr-202>
Apr 01 11:47:16 ubuntu k3s[770]: Trace[1432758442]: [501.844647ms] [501.844647ms] END
Apr 01 11:47:16 ubuntu k3s[770]: I0401 11:47:16.036709     770 trace.go:205] Trace[1542749609]: "List etcd3" key:/cronjobs,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (01-Apr-2023 11:47:15.5>
Apr 01 11:47:16 ubuntu k3s[770]: Trace[1542749609]: [522.138682ms] [522.138682ms] END
Apr 01 11:47:16 ubuntu k3s[770]: I0401 11:47:16.043259     770 trace.go:205] Trace[1586435829]: "List etcd3" key:/runtimeclasses,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (01-Apr-2023 11:4>
Apr 01 11:47:16 ubuntu k3s[770]: Trace[1586435829]: [512.095383ms] [512.095383ms] END
Apr 01 11:47:16 ubuntu k3s[770]: I0401 11:47:16.044964     770 trace.go:205] Trace[1115019028]: "List etcd3" key:/cronjobs,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (01-Apr-2023 11:47:15.5>
...skipping...
Apr 02 11:31:29 ubuntu k3s[11187]: I0402 11:31:29.155054   11187 node_ipam_controller.go:91] Sending events to api server.
Apr 02 11:31:29 ubuntu k3s[11187]: W0402 11:31:29.163503   11187 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
Apr 02 11:31:29 ubuntu k3s[11187]: I0402 11:31:29.175900   11187 shared_informer.go:247] Caches are synced for tokens
Apr 02 11:31:29 ubuntu k3s[11187]: time="2023-04-02T11:31:29+08:00" level=info msg="Waiting for control-plane node ubuntu startup: nodes \"ubuntu\" not found"
Apr 02 11:31:30 ubuntu k3s[11187]: time="2023-04-02T11:31:30+08:00" level=info msg="Waiting for control-plane node ubuntu startup: nodes \"ubuntu\" not found"
Apr 02 11:31:30 ubuntu k3s[11187]: W0402 11:31:30.476922   11187 fs.go:214] stat failed on /dev/vdb1 with error: no such file or directory
Apr 02 11:31:30 ubuntu k3s[11187]: W0402 11:31:30.478923   11187 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484369   11187 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484526   11187 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484580   11187 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: Ku>
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484603   11187 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484611   11187 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484630   11187 state_mem.go:36] "Initialized new in-memory state store"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484778   11187 kubelet.go:418] "Attempting to sync node with API server"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484792   11187 kubelet.go:279] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484805   11187 kubelet.go:290] "Adding apiserver pod source"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.484814   11187 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.485785   11187 kuberuntime_manager.go:246] "Container runtime initialized" containerRuntime="containerd" version="v1.5.13-k3s1" apiVersion="v1alpha2"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.486107   11187 server.go:1213] "Started kubelet"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.486370   11187 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.486806   11187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 02 11:31:30 ubuntu k3s[11187]: W0402 11:31:30.486906   11187 fs.go:588] stat failed on /dev/vdb1 with error: no such file or directory
Apr 02 11:31:30 ubuntu k3s[11187]: E0402 11:31:30.486933   11187 cri_stats_provider.go:372] "Failed to get the info of the filesystem with mountpoint" err="failed to get device for dir \"/var/lib/rancher/k3s>
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.487011   11187 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 02 11:31:30 ubuntu k3s[11187]: E0402 11:31:30.487013   11187 kubelet.go:1343] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image file>
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.487054   11187 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.487341   11187 server.go:409] "Adding debug handlers to kubelet server"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.497221   11187 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.505481   11187 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.505540   11187 status_manager.go:160] "Starting to sync pod status with apiserver"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.505568   11187 kubelet.go:2018] "Starting kubelet main sync loop"
Apr 02 11:31:30 ubuntu k3s[11187]: E0402 11:31:30.505604   11187 kubelet.go:2042] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has>
Apr 02 11:31:30 ubuntu k3s[11187]: E0402 11:31:30.512005   11187 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ubuntu\" not found" node="ubuntu"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.514251   11187 cpu_manager.go:209] "Starting CPU manager" policy="none"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.514343   11187 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.514358   11187 state_mem.go:36] "Initialized new in-memory state store"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.514533   11187 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.514544   11187 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.514549   11187 policy_none.go:49] "None policy: Start"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.515395   11187 memory_manager.go:168] "Starting memorymanager" policy="None"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.515448   11187 state_mem.go:35] "Initializing new in-memory state store"
Apr 02 11:31:30 ubuntu k3s[11187]: I0402 11:31:30.515537   11187 state_mem.go:75] "Updated machine memory state"
Apr 02 11:31:30 ubuntu k3s[11187]: W0402 11:31:30.515557   11187 fs.go:588] stat failed on /dev/vdb1 with error: no such file or directory
Apr 02 11:31:30 ubuntu k3s[11187]: E0402 11:31:30.515571   11187 kubelet.go:1423] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could n>
Apr 02 11:31:30 ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Apr 02 11:31:30 ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.

@kdrag0n
Member Author

kdrag0n commented Apr 3, 2023

@killwing Can't reproduce, but it should be fixed in the next version.

@kdrag0n
Member Author

kdrag0n commented Apr 3, 2023

@killwing Fix released in v0.6.0.

@BenTheElder

BenTheElder commented Apr 18, 2023

Hi, KIND maintainer here. OrbStack seems to have an unusual iptables setup which is breaking KIND for a user, and it's unclear how to access the VM to inspect this, or indeed exactly how Docker is being managed: kubernetes-sigs/kind#3171

https://docs.orbstack.dev/architecture seems a bit hand-wavy about VMs and how Docker is packaged; from docker info it looks like it may be in a containerized Alpine with dirty git state? 😬

Kernel Version: 6.1.23-orbstack-00113-g6614aaccb205-dirty
Operating System: Alpine Linux edge (containerized)

Would appreciate input from an orbstack maintainer on kubernetes-sigs/kind#3171

EDIT: got here from https://docs.orbstack.dev/docker/kubernetes

Builtin support for Kubernetes is planned, but not yet implemented. Please vote on the feature request if you're interested.

In the meantime, you can use k3d or kind to run a local Kubernetes cluster in Docker. Note that CPU and resource usage will be higher than OrbStack's native Kubernetes support
when it's ready.

(Aside: It seems a bit unreasonable to claim performance improvements on a non-existent feature ...)

@kdrag0n
Member Author

kdrag0n commented Apr 18, 2023

Hey @BenTheElder, really sorry for the trouble. That issue is caused by missing support for CONFIG_NETFILTER_XT_MATCH_STATISTIC in our kernel config, which is more minimal than usual because we're still relatively new and building up a baseline set of options to cover all common use cases. We've already fixed this and enabled the necessary modules in v0.7.0 (released just over a day ago), so updating OrbStack should fix the issue.

I've tested the cluster-v1.25-2nodes.yaml config from kubernetes-sigs/kind#3171 with OrbStack v0.7.1 and it seems to work fine:

kind create cluster --config cluster-v1.25-2nodes.yaml
Creating cluster "cluster-v1.25" ...
 ✓ Ensuring node image (kindest/node:v1.25.8) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-cluster-v1.25"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster-v1.25

Have a nice day! 👋

Our Docker engine runs in an Alpine container, under a custom container manager. We don't plan to expose the underlying VM. Similarly, we don't provide an easy way to enter the Docker engine container, because we've found that most of the time people only want it to work around a missing feature that wouldn't work for other reasons anyway. You can do it manually, however:

docker run -it --rm --privileged --pid=host --net=host alpine
nsenter -m -u -i -n -p -t 1

Also, to clarify: we're not claiming any performance improvements for Kubernetes, but we've prototyped and experimented with Kubernetes support and preliminary measurements show that it uses less CPU in the background due to some tweaks. Let me know if you have any other concerns. Hope this helps!
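
If anyone wants to double-check the fix after updating, a rough sanity check (just a sketch; it assumes a privileged Alpine container and its stock iptables package) is to add and remove a rule that uses the statistic match, which exercises the kernel-side support kind relies on:

docker run --rm --privileged alpine sh -c '
  apk add -q iptables
  # adding a rule with -m statistic fails if kernel support is missing
  iptables -A INPUT -m statistic --mode random --probability 0.5 -j ACCEPT \
    && echo "statistic match OK"
  iptables -D INPUT -m statistic --mode random --probability 0.5 -j ACCEPT
'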

@kdrag0n
Member Author

kdrag0n commented Apr 18, 2023

It's possible that there will still be issues with IPVS, however. We haven't enabled IPVS yet because we found that the module increases background CPU usage even if it's unused. We'll prioritize enabling it and fixing the increased CPU usage for v0.7.2. Will update this issue when that's done.
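
Once that lands, a rough way to confirm IPVS is usable from a container (a sketch; it assumes the Alpine ipvsadm package) would be:

docker run --rm --privileged alpine sh -c 'apk add -q ipvsadm && ipvsadm -Ln'
# An empty IPVS table means ip_vs support is available; an error such as
# "Protocol not available" would mean the kernel still lacks it.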

@BenTheElder

Thanks for the clarification :-)

@prokher

prokher commented Apr 20, 2023

I am not sure this is the relevant ticket to comment on, but it seems quite close. The issue I am facing relates to k3s running inside an OrbStack Docker container, which is naturally a Docker-in-Docker configuration. While it starts quite well and simple tests pass OK, running my main (rather large) project leads to errors about iptables-restore (see below) in the k3s log. I am not sure where to dig, so any help is appreciated. I can only note that the same setup performs quite well in both Docker Desktop and Colima, which hints that this is something related to the OrbStack environment.

E0419 13:45:51.818968    7366 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"appworkerworkflowjobs\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=appworkerworkflowjobs pod=pseven-appworkerworkflowjobs-deploy-d8f7f9c78-f8828_default(6a5a7ae8-5192-4815-ab38-241df7ffd45b)\"" pod="default/pseven-appworkerworkflowjobs-deploy-d8f7f9c78-f8828" podUID=6a5a7ae8-5192-4815-ab38-241df7ffd45b
E0419 13:45:54.467771    7366 proxier.go:1546] "Failed to execute iptables-restore" err=<
        exit status 2: iptables-restore v1.8.4 (legacy): Couldn't load match `recent':No such file or directory

        Error occurred at line: 109
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.
 >
I0419 13:45:54.467802    7366 proxier.go:854] "Sync failed" retryingTime="30s"

@kdrag0n
Member Author

kdrag0n commented Apr 20, 2023

@prokher The recent match has been enabled for the next version, thanks for bringing it up. We enabled everything from WSL, but WSL didn't have that either.

@prokher

prokher commented Apr 20, 2023

@kdrag0n, looking forward to that version. Thank you.

@kdrag0n
Member Author

kdrag0n commented Apr 27, 2023

We've added IPVS support in OrbStack v0.8.0.

@prokher v0.8.0 also includes support for the recent match.

@prokher

prokher commented Apr 27, 2023

@kdrag0n, awesome! Starting testing...

@kdrag0n kdrag0n added the f/containers Affects container users label May 3, 2023
@hossein-bakhtiari-revolut

Excuse me for a repeated, not strictly necessary question, but I wasn't able to resist: do you happen to have any estimate of when out-of-the-box Kubernetes integration will be added to OrbStack?

@fmartingr

As a new OrbStack user, it would be awesome to have the option to select between ARM and x86 when creating the cluster. It would also be useful in VMs, for software that doesn't support ARM.

@james-masson

If you do ship Kubernetes support, please have the option to not do docker-in-docker.

Direct pod access from the Mac is very useful, and it is very difficult to get working with docker-in-docker.

@ysicing

ysicing commented Jun 29, 2023

@kdrag0n k3s fails to start on Ubuntu:

Jun 29 10:58:24 k3s-test k3s[3875]: time="2023-06-29T10:58:24+08:00" level=fatal msg="kubelet exited: failed to run Kubelet: could not detect clock speed from output: \"processor\\t: 0\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 1\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 2\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 3\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 4\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 5\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 6\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\nprocessor\\t: 7\\nBogoMIPS\\t: 48.00\\nFeatures\\t: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp flagm2 frint\\nCPU implementer\\t: 0x00\\nCPU architecture: 8\\nCPU variant\\t: 0x0\\nCPU part\\t: 0x000\\nCPU revision\\t: 0\\n\\n\""
Jun 29 10:58:24 k3s-test systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Jun 29 10:58:24 k3s-test systemd[1]: k3s.service: Failed with result 'exit-code'.
Jun 29 10:58:24 k3s-test systemd[1]: k3s.service: Consumed 20.870s CPU time.

k3s start script

/usr/local/bin/k3s \
    server \
      --tls-san apiserver.cluster.local \
      --cluster-cidr 10.42.0.0/16 \
      --service-cidr 10.43.0.0/16 \
      --cluster-init \
      --disable servicelb,traefik \
      --disable-cloud-controller \
      --disable-network-policy \
      --disable-helm-controller \
      --prefer-bundled-bin \
      --kube-proxy-arg "proxy-mode=ipvs" "masquerade-all=true" \
      --kube-proxy-arg "metrics-bind-address=0.0.0.0"

@danielfinke

If you do ship Kubernetes support, please have the option to not do docker-in-docker.

Direct pod access from the Mac is very useful, and it is very difficult to get working with docker-in-docker.

@kdrag0n will OrbStack support multiple K8s clusters? The Docker-in-Docker implementation of minikube facilitates this, and I find it particularly useful for running one cluster per supported version of our application stack. I don't have to tear down the K8s configuration, pods, etc., and I don't have to worry about the disparate versions using the same database volumes while needing different schemas for their respective app versions. I can also stop all the unused clusters when I am not developing on them.
I don't know how this feature might be supported without DinD, but I can definitely understand the pain points.

@kdrag0n
Member Author

kdrag0n commented Jul 7, 2023

@danielfinke Hmm, the plan is currently to support a single cluster that uses a shared Docker engine for more convenient development wrt. images and debugging. Maybe you could open an issue for potential multi-cluster support in the future once this is implemented.

@james-masson

@danielfinke Hmm, the plan is currently to support a single cluster that uses a shared Docker engine for more convenient development wrt. images and debugging. Maybe you could open an issue for potential multi-cluster support in the future once this is implemented.

Perfect - I think this approach makes it much easier to get Rosetta support too (because you already have it in the mini-VM).

Completely agree with the ease-of-development benefits of sharing the Docker engine; pushing images separately into a Kubernetes-specific containerd is an annoying and unnecessary duplication.

Not having k8s-in-container-in-OrbStack-VM also makes it much easier to get the routing right for direct pod network access. This is a key feature in the industry I work in: most of the software I use will never work with port-forwarding access, and will throw fits at MTU changes if a tunnel approach is used.

Colima gets a lot of this right, but it lacks the polish to be a drop-in replacement for Docker Desktop; OrbStack K8s would be the one I'd choose.

@samzong

samzong commented Jul 27, 2023

#493 seems to be the same issue.

@tangkhaiphuong

tangkhaiphuong commented Aug 11, 2023

https://github.com/tangkhaiphuong/kubernetes-setup/blob/master/orbstack-k3s-cluster.sh <-- just sharing a setup for a K3s cluster with 2 masters + 3 workers on AlmaLinux on OrbStack 0.16.

@Ive4

Ive4 commented Aug 22, 2023

Please support k8s. I really like OrbStack; it's so smart.

@kdrag0n
Member Author

kdrag0n commented Aug 29, 2023

Great news: we've launched first-class Kubernetes support in OrbStack 0.17.0!

  • Seamless networking: Connect to pods, ClusterIPs, LoadBalancers, and NodePorts directly from Mac. Get a wildcard *.k8s.orb.local domain for free to use with Ingress.
  • Battery friendly: Up to 80% less power usage
  • Native macOS UI for pods & services

Docs: https://docs.orbstack.dev/kubernetes
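
A rough quickstart once the cluster is enabled in Settings (a sketch based on the docs linked above; the <cluster-ip> placeholder stands for whatever kubectl prints):

kubectl run nginx --image=nginx
kubectl expose pod nginx --port=80
kubectl get svc nginx        # note the CLUSTER-IP column
curl http://<cluster-ip>     # reachable directly from the Mac thanks to the seamless networking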

Screenshot

@kdrag0n kdrag0n closed this as completed Aug 29, 2023
@kdrag0n kdrag0n added this to the v0.17.0 milestone Aug 29, 2023
@d0zingcat

Well done, Danny! But there is one additional issue to consider: can we expose the k8s services externally? Right now it looks like only the local machine can access them, and I'd like to run OrbStack's k8s on my Mac mini and expose services on the LAN for other devices.

@kdrag0n
Member Author

kdrag0n commented Aug 29, 2023

@d0zingcat What types of services? ClusterIP, NodePort, LoadBalancer? Please open a new issue or discussion.

@d0zingcat

LoadBalancer

I apologize for the simplicity of the information I provided. Here's a detailed explanation:

The simplest example uses the default commands: kubectl run nginx --image=nginx launches an nginx container, and then kubectl expose pod nginx --type=NodePort --port=80 exposes the nginx service on a NodePort on the local machine. Afterward, you can find an nginx service on the Services page, accessible at http://localhost:32574/. Using this address, you can reach the nginx page. However, I'm aiming to access it through my local network IP, like http://10.0.0.2:32574. Unfortunately, this doesn't work in practice (my suspicion is that the program is only listening on 127.0.0.1, and I haven't found a place where this can be configured).
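
One way to confirm that suspicion on the Mac side (a sketch; 32574 is just the NodePort from the example above):

sudo lsof -nP -iTCP:32574 -sTCP:LISTEN
# If the forwarder binds only to loopback, the NAME column shows
# 127.0.0.1:32574 rather than *:32574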

@d0zingcat

When I enable it, how can I disable the k8s cluster?

You can find OrbStack Settings in the top menu bar (alternatively, use Command+, to open settings), then deselect the "Enable Kubernetes Cluster" option and click Apply.

@kdrag0n
Member Author

kdrag0n commented Aug 31, 2023

@d0zingcat That's intentional for security. The next version will have an option to expose services to other devices on your LAN: https://docs.orbstack.dev/kubernetes/#exposing-ports-to-lan

Everyone, please open new issues or discussions for any new feature requests or questions to avoid spamming people subscribed to this issue. Thanks!

@ysicing

ysicing commented Sep 1, 2023

@kdrag0n Does it support multiple nodes?
