knative HelloWorld Serving Code Example #784

Closed
datadot opened this issue Nov 8, 2019 · 48 comments · Fixed by #1579

Comments

@datadot

datadot commented Nov 8, 2019

Install a clean microk8s snap deployment:

snap install microk8s --current

Enable DNS, Istio and Knative:

sudo microk8s.enable dns istio knative

Deploy the HelloWorld Go service example:

apiVersion: serving.knative.dev/v1alpha1 # Current version of Knative
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"

The pods are created but fail with the error below:

Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "fc02dba843b84f907e3054501f078791474de71dce1d68e37734af3ef30fcf22": OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown

Result from kubectl get ksvc is "RevisionMissing".

I'm digging further, but has anyone else experienced this?

@datadot
Author

datadot commented Nov 8, 2019

OK, I have done some further testing: this works on a clean 18.04.3 LTS (Bionic), however the issue exists on 19.10 (Eoan).

@rbt
Contributor

rbt commented Nov 11, 2019

I'm getting a similar error when attempting to exec -ti into a pod, something I did regularly during development under minikube (I'm attempting to switch). The pod has a status of "Running" and has a single container.

/snap/bin/microk8s.kubectl exec -t -i web-6d64dd86f-dzb6q  -- bash
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "78186bf219f66adc86f6255eb98bbbe4683f6875c58022feb6cdced38b30f594": OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown

@ktsakalozos
Member

@rbt can you please attach the tarball from microk8s.inspect? I would like to see the error you are probably seeing in dmesg.

@rbt
Contributor

rbt commented Nov 11, 2019

1.16/stable: inspection-report-20191111_173728.tar.gz

After fiddling with the above, I realized some pods worked. I've narrowed it down to the setting below in the failing deployments. It's possible minikube was incomplete/buggy in allowing access and microk8s is working as expected, albeit with a terrible error message.

       securityContext:
         allowPrivilegeEscalation: false
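
For context, in a Deployment manifest this setting sits under each container's securityContext, roughly like this (a minimal sketch; the container name and image are illustrative):

      containers:
        - name: web                      # illustrative name
          image: nginx                   # illustrative image
          securityContext:
            allowPrivilegeEscalation: false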

@antoineco

antoineco commented Dec 2, 2019

Same issue on a fresh microk8s installation with any exec probe.

$ snap list microk8s
Name      Version  Rev   Tracking  Publisher   Notes
microk8s  v1.16.3  1079  stable    canonical✓  classic
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="19.10 (Eoan Ermine)"
# ------------------------ >8 ------------------------
          readinessProbe:
            exec:
              command:
              - /ko-app/queue
              - -probe-period
              - "0"
# ------------------------ >8 ------------------------
$ kubectl describe pod
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Started    70s                 kubelet, manny     Started container queue-proxy
  Warning  Unhealthy  69s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "66a30c11deb99fd1a8e6929c83e7d9e59512a506a8693766c7e3e873c6808c08": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  68s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "95a2a1c687a12900fa1f9bd0725971ff51418b99d9638ea38673f36240a46b86": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  67s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "8d7e2c2ae8a0e9161b70061a1c7bc2231c47dca2fe3e88a148a481e6379c6a20": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  66s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "9c4658f5c2352f41d335bddadb84341acc328ddfde5adc107160bb497f3fdc7e": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  65s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "a62f586b56f11a04828bb9b97652ce4da36a750cdb07a02d5b1d720288212018": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  64s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "de440bd965a761b7faf1dc17d668be728015cc885ef63c92f4e6bc9b4db6f925": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  63s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "ef93e3317c6cc9c28a133f9d6e3c89709314e740ecdff62bde8acf903f765bde": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  62s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "ed74b405332d2909a9acd5d6f6bd6e292959d23dd6bf3cc3be398e74775e416b": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown
  Warning  Unhealthy  61s                 kubelet, manny     Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "7f86e575600d371b92b14696c0e38b27164ef2730ae54d848e1ef9acd95b0f42": OCI runtime exec failed: exec failed: container_linux.go:345: starting container
 process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown

@ktsakalozos here is the report you asked for.

🗄 inspection-report-20191202_151718.tar.gz

@GavinRay97

I am also having this issue on a fresh microk8s installation.

@rsassPwC

rsassPwC commented Jan 7, 2020

I'm facing exactly the same issue regarding apparmor when starting the hello world demo from Knative (https://knative.dev/docs/serving/getting-started-knative-app/):

Readiness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "7612ded2f1b669ec2610e2f1fddd53fe39cf26c8f05c35c7626676fea1daf021": OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted": unknown

Are there already solutions for this issue, or is a downgrade to 18.04.3 LTS (Bionic) the only option at the moment?

@ktsakalozos
Member

I am seeing the following in the dmesg logs (inspection-report attached above in the apparmor folder):

[ 2640.194427] audit: type=1400 audit(1575281926.659:229): apparmor="ALLOWED" operation="exec" info="no new privs" error=-1 profile="snap.microk8s.daemon-containerd" name="/ko-app/activator" pid=11686 comm="runc:[2:INIT]" requested_mask="x" denied_mask="x" fsuid=65532 ouid=0 target="cri-containerd.apparmor.d"

@joedborg, @jdstrand have you seen this before?

@jdstrand

jdstrand commented Jan 7, 2020

CC @anonymouse64

@antoineco

antoineco commented Jan 7, 2020

@ktsakalozos both the cri-containerd.apparmor.d (/etc/apparmor.d) and snap.microk8s.daemon-kubelet (/var/lib/snapd/apparmor/profiles) AppArmor profiles are in complain mode, yet nothing gets raised in the audit logs.

aa-logprof doesn't suggest any addition either.

@antoineco

antoineco commented Jan 7, 2020

A new finding: if I disable all profiles with aa-teardown (or /lib/apparmor/apparmor.systemd stop) and let containerd reload the cri-containerd.apparmor.d profile (which happens eventually), the container starts without any issue.
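
In terms of commands, that is roughly:

$ sudo aa-teardown    # or: sudo /lib/apparmor/apparmor.systemd stop
# ...then wait until containerd reloads the cri-containerd.apparmor.d profile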

$ sudo aa-status
apparmor module is loaded.
1 profiles are loaded.
1 profiles are in enforce mode.
   cri-containerd.apparmor.d
0 profiles are in complain mode.
2 processes have profiles defined.
2 processes are in enforce mode.
   /go/bin/helloworld (14862) cri-containerd.apparmor.d
   /ko-app/queue (14917) cri-containerd.apparmor.d
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Now comes the interesting part: if I reload all profiles with /lib/apparmor/apparmor.systemd reload and re-create the Deployment (Knative Service) from scratch, the container also starts.
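
That is, roughly (the service.yaml file name is illustrative):

$ sudo /lib/apparmor/apparmor.systemd reload
$ kubectl delete ksvc helloworld-go
$ kubectl apply -f service.yaml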

$ sudo aa-status
apparmor module is loaded.
51 profiles are loaded.
13 profiles are in enforce mode.
   /sbin/dhclient
   /usr/bin/man
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   /usr/sbin/tcpdump
   cri-containerd.apparmor.d
   lsb_release
   man_filter
   man_groff
   nvidia_modprobe
   nvidia_modprobe//kmod
   snap.core.hook.configure
38 profiles are in complain mode.
   /snap/core/8268/usr/lib/snapd/snap-confine
   /snap/core/8268/usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   /usr/lib/snapd/snap-confine
   /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
   snap-update-ns.core
   snap-update-ns.microk8s
   snap.microk8s.add-node
   snap.microk8s.cilium
   snap.microk8s.config
   snap.microk8s.ctr
   snap.microk8s.daemon-apiserver
   snap.microk8s.daemon-apiserver-kicker
   snap.microk8s.daemon-cluster-agent
   snap.microk8s.daemon-containerd
   snap.microk8s.daemon-controller-manager
   snap.microk8s.daemon-etcd
   snap.microk8s.daemon-flanneld
   snap.microk8s.daemon-kubelet
   snap.microk8s.daemon-proxy
   snap.microk8s.daemon-scheduler
   snap.microk8s.disable
   snap.microk8s.enable
   snap.microk8s.helm
   snap.microk8s.hook.configure
   snap.microk8s.hook.install
   snap.microk8s.hook.remove
   snap.microk8s.inspect
   snap.microk8s.istioctl
   snap.microk8s.join
   snap.microk8s.juju
   snap.microk8s.kubectl
   snap.microk8s.leave
   snap.microk8s.linkerd
   snap.microk8s.remove-node
   snap.microk8s.reset
   snap.microk8s.start
   snap.microk8s.status
   snap.microk8s.stop
2 processes have profiles defined.
2 processes are in enforce mode.
   /go/bin/helloworld (2654) cri-containerd.apparmor.d
   /ko-app/queue (2704) cri-containerd.apparmor.d
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

If I do the same procedure after executing snap restart microk8s.daemon-containerd, the container fails to start again.

I believe the order in which AppArmor profiles are loaded matters here somehow.

@antoineco

antoineco commented Jan 7, 2020

@ktsakalozos @joedborg, @jdstrand The culprit is the snap.microk8s.daemon-containerd profile. Containers start shortly after unloading it with sudo apparmor_parser -R /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd.

The same profile can then be reloaded (the same command with the -a flag) without any issue. As long as cri-containerd.apparmor.d is loaded first, containers will start.
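
In full:

$ sudo apparmor_parser -R /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd
$ sudo apparmor_parser -a /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd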

@jdstrand

jdstrand commented Jan 7, 2020

The comment about tearing down the apparmor policy from underneath microk8s could explain things. Put another way: when microk8s is started, it is supposed to be started under an apparmor profile (a lenient one, since it is a classic snap). microk8s starts up and eventually loads cri-containerd.apparmor.d, and containerd is allowed to start containers (via runc) under this profile. If some of these are not in place (e.g. something isn't being set up right on start), then weird things can happen. E.g. if containerd isn't setting up runc right to use it, then you might see the 'no new privs' issue. The 'failed to apply profile' issue might be because cri-containerd.apparmor.d isn't loaded yet.

That said, this comment from @rbt is important:

After fiddling with the above, I realized some pods worked. I've narrowed it down to the below setting in the failing deployments. It's possible minikube was incomplete/buggy by allowing access and microk8s is working as expected; albeit with a terrible error message.

       securityContext:
         allowPrivilegeEscalation: false

https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privilege-escalation tells us: "These options control the allowPrivilegeEscalation container option. This bool directly controls whether the no_new_privs flag gets set on the container process."

When nnp is invoked, it can't be revoked, and it can complicate profile transitions, which may then be blocked. OTOH, I don't know the order of operations, but it sounds like nnp is being applied before the profile transition to cri-containerd.apparmor.d, and the kernel is applying its heuristics to determine whether that is OK and deciding it is not.
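
One generic way to see whether nnp ended up applied to a container process, assuming you can find its PID on the host (this is a plain Linux check, nothing microk8s-specific):

grep NoNewPrivs /proc/<pid>/status    # 1 means no_new_privs is set for that process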

@anonymouse64

Sorry for not seeing this earlier, but I think @jdstrand's analysis is spot-on: nnp is indeed applied by runC before the apparmor profile transition to cri-containerd. I would recommend not setting that pod security setting.

@antoineco

antoineco commented Jan 8, 2020

@jdstrand @anonymouse64 you both explained the case for the "no new privs" error, which is great and I thank you for the lengthy explanations, but it's not what this issue is about. This issue is about "exec" probes not working on microk8s. Please see my findings above.

@anonymouse64

Is it possible there are multiple cri-containerd.apparmor.d profiles? The microk8s snap seems to copy over the cri-containerd.apparmor.d profile and load that when containerd starts:

https://github.com/ubuntu/microk8s/blob/fd296d64b37c72621a2a2de6c0865b4d84cbdb94/microk8s-resources/wrappers/run-containerd-with-args#L14-L22

When you say it works above, perhaps that is because when containerd is already running and it is asked to start a new container with an apparmor profile, it will generate a default one which is different from the one that the snap is copying in.

All that being said, it would be great to see a diff between the profile that is in the snap and the one that containerd generates for itself.

@antoineco also note that the exec probe issue you are having could be due to the no new privs issue, because you are seeing a failure to transition to the profile, which can happen if containerd drops privileges because of that config and the profile it then tries to transition to has more privileges. Are you certain that allowPrivilegeEscalation: false isn't set for your containers?
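
One way to check, assuming a standard kubectl and substituting the pod name:

kubectl get pod <pod-name> -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.securityContext}{"\n"}{end}'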

@antoineco

antoineco commented Jan 8, 2020

@anonymouse64 please ignore the first case where I unload everything, this was only the first step of my troubleshooting. The profile that I unload (and reload) is the one matching the systemd service, not the CRI profile loaded by containerd (the one that gets copied, as you mentioned). At that point containerd is using the profile included with the snap, not its own defaults.

Yes, I'm certain the containers created by Knative Services do not contain any specific security context options, unless set explicitly by the user.
My bad, the queue-proxy sidecar does have "allowPrivilegeEscalation": false defined, but this is controlled by Knative and cannot be changed. Knative is a microk8s addon, so I would expect it to work out-of-the-box.

The diff between the cri-containerd.apparmor.d from the snap and the one included with containerd doesn't contain anything relevant:

33c33
<   deny /sys/firmware/efi/efivars/** rwklx,
---
>   deny /sys/firmware/** rwklx,
36,37d35
<
<   # suppress ptrace denials when using 'docker ps' or using 'ps' inside a container
39,41d36
<
<   signal (receive) peer=snap.microk8s.daemon-kubelet,
<   signal (receive) peer=snap.microk8s.daemon-containerd,

Besides, I'm still puzzled about the fact that simply reloading the snap.microk8s.daemon-containerd profile resolves the issue durably:

$ sudo apparmor_parser -R /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd
$ sudo apparmor_parser -a /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd
# create the Knative Service ...
$ kubectl get pod
NAME                                              READY   STATUS    RESTARTS   AGE
helloworld-go-w88zp-deployment-5977bc5fff-7547v   2/2     Running   0          9s

Does it mean the profile is not re-applied until the snap.microk8s.daemon-containerd systemd service gets restarted?

@anonymouse64

My bad, the queue-proxy sidecar has "allowPrivilegeEscalation": false defined, but this is controlled by Knative and can not be changed. Knative is a microk8s addon, so I would expect it to work out-of-the-box.

As an aside, the way I helped @joedborg handle this in the kubernetes-worker snap temporarily was to patch runC to never drop privileges. Perhaps microk8s needs to do that as well in order to work with knative.

@ktsakalozos
Member

@anonymouse64 where can I find the runC patches you suggested? By not allowing runC to drop privileges, are there any restrictions/limitations on the workloads we can serve?

@jtackaberry

jtackaberry commented Apr 25, 2020

Besides, I'm still puzzled about the fact that simply reloading the snap.microk8s.daemon-containerd profile resolves the issue durably:

$ sudo apparmor_parser -R /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd
$ sudo apparmor_parser -a /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd

Indeed, this immediately solved the problem for me as well.

@jdstrand

$ sudo apparmor_parser -R /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd

Indeed, this immediately solved the problem for me as well.

Please note that apparmor_parser -R removes the profile. To reload it, use -r.

@jtackaberry

jtackaberry commented Apr 27, 2020

Please note that apparmor_parser -R removes the profile. To reload it, use -r.

I was terribly sloppy in my quoting of Antoine above (which I've fixed). I did execute -a after -R to re-add it. Although perhaps -r alone would work too, and more conveniently.

@kfirfer

kfirfer commented Jul 24, 2020

I'm facing the same problem.

$ sudo apparmor_parser -r /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd

Does not work for me.

But as mentioned before, this worked:

$ sudo apparmor_parser -R /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd
$ sudo apparmor_parser -a /var/lib/snapd/apparmor/profiles/snap.microk8s.daemon-containerd

@anonymouse64

@ktsakalozos sorry I forgot to respond to your ping, were you able to sync with @joedborg about the runC patches ?

@ktsakalozos
Member

@ktsakalozos sorry I forgot to respond to your ping, were you able to sync with @joedborg about the runC patches ?

Yes, thank you.

@giner
Contributor

giner commented Aug 9, 2020

The same issue is reproducible on Ubuntu 20.04 (focal) and prevents us from upgrading to the latest LTS version of the OS

@giner
Contributor

giner commented Aug 9, 2020

Apparently something changed between Linux kernel 4.15.0 (works) and 4.18.0 (doesn't work)

@jareks

jareks commented Aug 16, 2020

I am experiencing the same issue after upgrading to 20.04.
Luckily, the fix mentioned earlier (removing and re-adding the apparmor profile) seems to work for me.

@jdstrand

Besides, I'm still puzzled about the fact that simply reloading the snap.microk8s.daemon-containerd profile resolves the issue durably:

This 'works' for you because when you remove the profile (-R), the containerd process is put under the 'unconfined' profile and adding (-a) the profile back does not reattach the existing process to the added profile (ie, it is still unconfined). Whenever you stop/start containerd, I would expect the issue to come back up.
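
A quick way to confirm what the running containerd process is confined by (any way of finding its PID works; the commands are illustrative):

ps -eo pid,args | grep containerd    # find the main containerd PID
cat /proc/<pid>/attr/current         # prints the AppArmor label of the process, or "unconfined"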

@jdstrand

When people see this issue, are there any security violations in the logs? E.g.: 'journalctl --since yesterday | grep audit'.

@antoineco

@jdstrand unfortunately not: #784 (comment)

@jdstrand

jdstrand commented Aug 18, 2020

Ok, I tried to reproduce this with the instructions in the initial report. I see many denials related to no new privs when enabling the various services, and can confirm aa-status for the containers that are started is wrong when containerd is running confined and correct when it isn't. While I didn't see the same errors others did when creating the helloworld-go pod (perhaps I didn't create it properly?), I can say for sure that unless containerd is handling nnp properly, the kernel nnp feature will sometimes prevent containerd from transitioning a container into the cri-containerd.apparmor.d profile (regardless of whether apparmor claims it is ALLOWed). The fact that it works when removing the snap.microk8s.daemon-containerd profile very strongly supports this.

I believe this issue will be resolved when microk8s (classic) is updated to include https://github.com/ubuntu/microk8s/blob/feature/jdb/strict/patches/runc-patches/snap-runc-no-prctl.patch. @ktsakalozos and I discussed this last week actually and I know that it is planned to incorporate this patch, but I don't know about the timelines.

@ktsakalozos
Member

Hi all, thank you for your time, effort and patience. Could anyone provide feedback on the fix available through:

sudo snap install microk8s --classic --channel=latest/edge/runc-nnp

Many thanks.

@giner
Contributor

giner commented Sep 4, 2020

@ktsakalozos, latest/edge/runc-nnp works for me.
It was a bit too early to say that: the original issue seems to be gone, but something else doesn't work now. I'll try to dig in over the weekend.

@ktsakalozos
Member

Will try to dig in over the weekend.

@giner any chance you gave it a run over the weekend? Thank you.

@giner
Contributor

giner commented Sep 7, 2020

@ktsakalozos, I've done some tests. The original issue is gone, however some of my workloads are failing when running on the mentioned version of microk8s, and I haven't been able to find relevant logs or other helpful details so far.

@ktsakalozos
Member

@giner can you give me an example workload I could look at?

@giner
Contributor

giner commented Sep 7, 2020

I'm trying to run this https://github.com/cloudfoundry-community/eirini-on-microk8s

vagrant up works well and all components eventually come up, however cf push is failing because eirini is not able to create containers (I don't remember the exact message).

@giner
Contributor

giner commented Sep 7, 2020

The app-specific error:

   2020-09-07T14:54:19.75+0000 [API/0] ERR Failed to stage build: Container 'opi-task-downloader' in Pod 'hello-myspace-fxm7f' failed: CreateContainerError

k8s events (errors and warnings only):

42s         Warning   Failed             pod/hello-myspace-fxm7f   Error: failed to get sandbox container task: no running task found: task 4e9c20a086b140b86d2836d0d43ac6ddf4fb53f983063b34d4b24ec339e20ecf not found: not found
35s         Warning   Failed             pod/hello-myspace-fxm7f   Error: cannot find volume "certs-volume" to mount into container "opi-task-downloader"

@ktsakalozos
Member

Thank you @giner. Any chance you could attach the microk8s.inspect tarball? I wonder if there are apparmor denials.

@giner
Contributor

giner commented Sep 7, 2020

@ktsakalozos
Member

@giner it is not so easy for me to verify the issue you report. A search on the error pointed to a fix in containerd: containerd/containerd#2392. Is it possible to narrow down the steps needed to reproduce what you see?

@giner
Contributor

giner commented Sep 10, 2020

Install the latest virtualbox and vagrant, then:

git clone https://github.com/cloudfoundry-community/eirini-on-microk8s
cd eirini-on-microk8s
git checkout nnp
vagrant up
... wait for some time ...
There will be a few commands printed; just run them one after another
(wait for all containers to be up before running cf commands)

Note that the above requires at least 16 GB of RAM and a good internet connection. Also, the Vagrantfile is configured to use the bionic64 base image; it can be changed to focal64 and the result will be the same.

@ktsakalozos
Member

Hi @giner, everyone, I just pushed a new snap under latest/edge/runc-nnp. It seems to work over here. Please let me know if it works for you too:

sudo snap install microk8s --classic --channel=latest/edge/runc-nnp

Thank you

@giner
Contributor

giner commented Sep 18, 2020

@ktsakalozos I can confirm, it works for me now too

@giner
Contributor

giner commented Sep 26, 2020

It seems that revision 1710 doesn't have the fix yet and 1722 has it. Is there a generic way to know which revision has a fix without installing it?

  1.19/stable:      v1.19.2         2020-09-26 (1710) 214MB classic
  1.19/candidate:   v1.19.2         2020-09-22 (1710) 214MB classic
  1.19/beta:        v1.19.2         2020-09-22 (1710) 214MB classic
  1.19/edge:        v1.19.2         2020-09-23 (1722) 214MB classic

@ktsakalozos
Member

Is there a generic way to know which revision has a fix without installing it?

Not without downloading and extracting the snap. You should expect the fix to land on the stable channel with v1.19.3
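
For the curious, downloading and extracting a revision for inspection looks roughly like this (the channel and resulting file name are illustrative):

snap download microk8s --channel=1.19/edge
unsquashfs microk8s_1722.snap    # extracts to squashfs-root/, which can then be inspected for the runc patch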

@AlexandreOuellet

Using the command sudo snap install microk8s --classic --channel=latest/edge/runc-nnp fixes the "RevisionMissing" issue that I had, but I now get an error when running microk8s kubectl get ksvc:

NAME                URL                                            LATESTCREATED             LATESTREADY               READY     REASON
helloworld-python   http://helloworld-python.default.example.com   helloworld-python-8sqdh   helloworld-python-8sqdh   Unknown   IngressNotConfigured

Should I open a new issue to track this specific error?
