
error on raspberry pi 3b+ #49

Closed
ghost opened this issue Jun 26, 2021 · 6 comments
Labels
help wanted Extra attention is needed

Comments

@ghost

ghost commented Jun 26, 2021

The image doesn't work on a Raspberry Pi 3B+. I tested the latest and edge tags.
Logs:

s6-svscan: warning: unable to iopause: Operation not permitted
s6-svscan: warning: executing into .s6-svscan/crash
s6-supervise php-fpm8: fatal: unable to iopause: Operation not permitted
s6-svscan panicked! Dropping to a root shell.

s6-supervise nginx: fatal: unable to iopause: Operation not permitted
/bin/sh: can't access tty; job control turned off
/run/s6/services $
@elrido
Contributor

elrido commented Jun 27, 2021

I've retested the 1.3.5 release and edge images on my k3s Pi3B cluster and it does work fine there. It must be a permission issue with the setup of your container runtime. For your reference, the log output on my containers looked like this:

[27-Jun-2021 06:23:57] NOTICE: fpm is running, pid 39
[27-Jun-2021 06:23:57] NOTICE: ready to handle connections
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.

The YAML file applied with kubectl was this (note the lack of persistence - config for testing, not production):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: privatebin
  labels:
    app: privatebin
spec:
  replicas: 3
  selector:
    matchLabels:
      app: privatebin
  template:
    metadata:
      labels:
        app: privatebin
    spec:
#      initContainers:
#      - name: privatebin-volume-permissions
#        image: privatebin/chown
#        args: ['65534:82', '/mnt']
#        securityContext:
#          runAsUser: 0
#          readOnlyRootFilesystem: True
#        volumeMounts:
#        - mountPath: /mnt
#          name: privatebin-data
#          readOnly: False
      containers:
      - name: privatebin
        image: privatebin/nginx-fpm-alpine:1.3.5
        ports:
        - containerPort: 8080
        env:
        - name: TZ
          value: Europe/Zurich
        - name: PHP_TZ
          value: Europe/Zurich
        securityContext:
          runAsUser: 65534
          runAsGroup: 82
          readOnlyRootFilesystem: True
#        volumeMounts:
#        - mountPath: /srv/data
#          name: privatebin-data
#          readOnly: False
---
apiVersion: v1
kind: Service
metadata:
  name: privatebin
  labels:
    app: privatebin
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  sessionAffinity: ClientIP
  ports:
  - name: http
    protocol: TCP
    port: 30088
    targetPort: 8080
  selector:
    app: privatebin
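
The manifest above was applied with the usual command (the file name is arbitrary, just what I saved it as):

kubectl apply -f privatebin.yaml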

The kernel and k8s server versions were these:

5.4.34-0-rpi2
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2-k3s.1", GitCommit:"3d7d34a23ec464c08b81486aeca0b7d1bb6e044c", GitTreeState:"clean", BuildDate:"2020-04-19T05:33:19Z", GoVersion:"go1.13.10", Compiler:"gc", Platform:"linux/arm"}

If you could share some more information on how you're running the container (the command or YAML used to start it), which runtime you use (docker, podman, k8s, ...) and whether the runtime has any non-standard configuration (e.g. user namespace remapping, cgroup version set in the kernel), we might work out what is different about your setup - there may even be a way to work around the issue without changing the configuration.
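
For example, the following commands collect most of those details (assuming Docker; use the podman or kubectl equivalents where appropriate):

uname -a                        # kernel version and architecture
docker version                  # client and daemon versions
docker info                     # cgroup driver/version, seccomp profile, userns remapping
cat /etc/docker/daemon.json     # non-default daemon configuration, if the file exists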

@ghost
Author

ghost commented Jun 27, 2021

I've tried starting the container using this docker-compose configuration:

privatebin:
  image: privatebin/nginx-fpm-alpine:latest
  restart: always
  volumes:
    - ./privatebin/:/srv/data

but also with

docker run --rm privatebin/nginx-fpm-alpine:latest

and the results were the same. I run the containers on Raspberry Pi OS with an account that is in the docker group. The kernel and Docker versions are:

5.10.17-v7+
Docker version 20.10.7, build f0df350

I also run pihole and vaultwarden containers on this Raspberry Pi without issues.

PS: I will not be able to reply for the next week.

@elrido
Contributor

elrido commented Jun 27, 2021

It's most likely an issue with the Docker daemon configuration, the filesystem options, or one of the kernel's cgroup or memory settings. For my Pi cluster I had to recompile the Alpine Linux kernel to get the cgroup and memory settings working for Kubernetes. The issue comes from s6 - it would be interesting to learn whether other s6 users have had issues on Raspberry Pi OS (Raspbian) with Docker 20.10. Unfortunately I don't have such a setup or a spare Pi that I could test this on.

If anyone else has input or further configurations that do or don't work on a Raspberry Pi in 32-bit mode, that would be helpful to know about in order to pinpoint the issue.
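
One quick check - this is just a guess on my part, not a confirmed fix - would be to see whether the default Docker seccomp profile is blocking a syscall that s6 needs, by temporarily disabling it for a single test run:

docker run --rm --security-opt seccomp=unconfined privatebin/nginx-fpm-alpine:latest

If the container starts cleanly that way, the problem is on the host side (e.g. an outdated seccomp/libseccomp or Docker profile) rather than in the image; running unconfined is only meant as a diagnostic, not as a permanent setting.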

elrido added the help wanted label on Jun 27, 2021
@rugk
Member

rugk commented Jun 27, 2021

I do have a Raspberry Pi 4 Model B and with podman everything seems to work fine…

$ podman run --rm privatebin/nginx-fpm-alpine:latest
✔ docker.io/privatebin/nginx-fpm-alpine:latest
Trying to pull docker.io/privatebin/nginx-fpm-alpine:latest...
Getting image source signatures
Copying blob 58ab47519297 done  
Copying blob 5245aa3422fd done  
Copying blob 09faeb710a4a done  
Copying config dc185a08b7 done  
Writing manifest to image destination
Storing signatures
[27-Jun-2021 17:40:27] NOTICE: fpm is running, pid 33
[27-Jun-2021 17:40:27] NOTICE: ready to handle connections
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.

System:

$ podman -v
podman version 3.2.1
$ cat /etc/os-release 
NAME=Fedora
VERSION="34.20210624.0 (IoT Edition)"
ID=fedora
VERSION_ID=34
VERSION_CODENAME=""
PLATFORM_ID="platform:f34"
PRETTY_NAME="Fedora 34.20210624.0 (IoT Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:34"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/34/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=34
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=34
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="IoT Edition"
VARIANT_ID=iot
OSTREE_VERSION='34.20210624.0'
$ lscpu
Architecture:                    aarch64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       ARM
Model:                           3
Model name:                      Cortex-A72
Stepping:                        r0p3
CPU max MHz:                     1500.0000
CPU min MHz:                     600.0000
BogoMIPS:                        108.00
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Vulnerable
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm crc32 cpuid
$ uname -a
Linux testpi 5.12.13-300.fc34.aarch64 #1 SMP Wed Jun 23 16:03:11 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
$ sudo lshw
testpi                      
    description: Desktop Computer
    product: Raspberry Pi 4 Model B Rev 1.4
[…]

As for the image:

$ podman image inspect nginx-fpm-alpine
[
    {
        "Id": "dc185a08b790eb3e913ecd5bff7434bd5865830f2e9ed54599c5c48ed717159e",
        "Digest": "sha256:d0d224f5cc7a92ae686e34dea64e535c07b0180c07fc4e553dd081f07ec049c5",
        "RepoTags": [
            "docker.io/privatebin/nginx-fpm-alpine:latest"
        ],
        "RepoDigests": [
            "docker.io/privatebin/nginx-fpm-alpine@sha256:7d89a264441a10b83c20a3732f087f79cccf1b02f97e81c760ead0033a9c5d45",
            "docker.io/privatebin/nginx-fpm-alpine@sha256:d0d224f5cc7a92ae686e34dea64e535c07b0180c07fc4e553dd081f07ec049c5"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2021-06-27T07:48:03.891772107Z",
 […]

@rugk
Member

rugk commented Jun 27, 2021

Again, @rev1e, we need more details on your system setup (e.g. like I posted above) to see what the problem is here.

@elrido
Contributor

elrido commented Aug 8, 2021

Closing due to inactivity - please do reopen once you are able to collect the additional details on your particular setup or, using the examples provided above, have been able to adjust your setup to work with the image.

elrido closed this as completed on Aug 8, 2021