
Is seaweedfs suitable as the back-end storage for MariaDB and Prometheus? #4241

Open
zhouhuahao opened this issue Feb 22, 2023 · 17 comments

@zhouhuahao

Describe the bug
Using seaweedfs as the storage backend for Kubernetes, MariaDB and Prometheus fail to start.
If NFS is used instead, MariaDB and Prometheus start successfully.

System Setup
List the command line to start "weed master", "weed volume", "weed filer", "weed s3", "weed mount".
3 masters, 4 volume servers
nohup /usr/local/bin/weed master -ip=10.88.206.77 -port=9333 -defaultReplication="001" -mdir=/data1/weedfs/master -peers=10.88.206.78:9333,10.88.206.79:9333 > /var/log/weedfs/master/master.log 2>&1&
nohup /usr/local/bin/weed volume -port=10002 -dir=/data0/seaweedfs/volume/volume01 -max=7 -fileSizeLimitMB=200 -mserver=10.88.206.77:9333,10.88.206.78:9333,10.88.206.79:9333 -dataCenter=dc1 -rack=rack1 > /var/log/seaweedfs/volume2.log 2>&1&
nohup /usr/local/bin/weed filer -port=8888 -master=10.88.206.77:9333,10.88.206.78:9333,10.88.206.79:9333 -ip=10.88.206.77 > /var/log/weedfs/filter/filter.log 2>&1&
nohup /usr/local/bin/weed s3 -config=/etc/seaweedfs/s3-config.json -filer=10.88.206.77:8888 -ip.bind=10.88.206.77 -port=8333 > /var/log/weedfs/s3/s3.log 2>&1&

OS version: Ubuntu 20.04

weed version: 3.42

filer.toml:
[leveldb2]
enabled = true
dir = "/data1/weedfs/filer/data"

Screenshots
This warning does not affect MariaDB startup:
[screenshot]

[screenshot: prometheus error]

@chrislusf
Collaborator

How to reproduce?

@zhouhuahao
Author

zhouhuahao commented Feb 22, 2023

For example, mariadb:

helm upgrade --install -n seaweedfs-test --create-namespace mariadb /opt/deploy/helm/mariadb -f /opt/deploy/helm/mariadb/values.yaml

helm chart:
[screenshot]

values.yaml:
[screenshot]

@chrislusf
Collaborator

Does it work with creating a simple file?

@chrislusf
Collaborator

Seems the number of volumes may be too low.

@zhouhuahao
Author

> Seems the number of volumes may be too low.

[screenshot]

@zhouhuahao
Author

> Does it work with creating a simple file?

It works.

@chrislusf
Collaborator

The command you showed only used `-max=7`, which differs from the screenshot.

nohup /usr/local/bin/weed volume -port=10002 -dir=/data0/seaweedfs/volume/volume01 -max=7 -fileSizeLimitMB=200 -mserver=10.88.206.77:9333,10.88.206.78:9333,10.88.206.79:9333 -dataCenter=dc1 -rack=rack1 > /var/log/seaweedfs/volume2.log 2>&1&

@chrislusf
Collaborator

I cannot reproduce by taking your values.yaml screenshot and turning it into a file.

@zhouhuahao
Author

> The command you showed only used `-max=7`, which differs from the screenshot.
>
> nohup /usr/local/bin/weed volume -port=10002 -dir=/data0/seaweedfs/volume/volume01 -max=7 -fileSizeLimitMB=200 -mserver=10.88.206.77:9333,10.88.206.78:9333,10.88.206.79:9333 -dataCenter=dc1 -rack=rack1 > /var/log/seaweedfs/volume2.log 2>&1&

The screenshot is correct.

@zhouhuahao
Author

zhouhuahao commented Feb 23, 2023

> I cannot reproduce by taking your values.yaml screenshot and turning it into a file.

fullnameOverride: "mariadb-nanhu"
image:
  registry: docker.io
  repository: proxy/bitnami/mariadb
  tag: 10.9.2-debian-11-r2
auth:
  database: cloud
  username: "test"
  existingSecret: "mariadb"
  #initdbScriptsConfigMap: "cloud.sql"
primary:
  configuration: |-
    [mysqld]
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mariadb
    plugin_dir=/opt/bitnami/mariadb/plugin
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    tmpdir=/opt/bitnami/mariadb/tmp
    max_allowed_packet=16M
    bind-address=*
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
    log-error=/opt/bitnami/mariadb/logs/mysqld.log
    character-set-server=UTF8
    collation-server=utf8_general_ci
    slow_query_log=0
    slow_query_log_file=/opt/bitnami/mariadb/logs/mysqld.log
    long_query_time=10.0
    default-time-zone='+08:00'

    [client]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mariadb/plugin

    [manager]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
  service:
    type: ClusterIP
  persistence:
    size: 8Gi
    storageClass: "seaweedfs-storage"

@tnyeanderson
Contributor

I am also experiencing issues using seaweedfs as storage for Prometheus. It looks like Prometheus issues a poll, which seaweedfs does not support.

The following was run on ubuntu 23.10 with kernel 6.2.0-20-generic and seaweedfs 3.55.

Minimal reproduction steps

# Start in whatever test directory you want
# Create backing directories
mkdir data mnt

# Start seaweedfs in the background
weed server -filer -dir=data &

# Mount seaweedfs in the background
weed mount -filer=localhost:8888 -dir=mnt -filer.path=/ &

# Create a directory for prometheus data inside the seaweedfs mount
mkdir mnt/prom
sudo chown 65534:65534 mnt/prom

# Try to start prometheus using seaweedfs for storage
docker run --rm -it -v "$(pwd)/mnt/prom:/prometheus" prom/prometheus:v2.48.1

The final docker run command will recreate the error. The weed mount command outputs the following:

Unimplemented opcode POLL

The prometheus logs:

ts=2023-12-19T21:37:30.834Z caller=main.go:539 level=info msg="No time or size retention was set so using the default time retention" duration=15d
ts=2023-12-19T21:37:30.834Z caller=main.go:583 level=info msg="Starting Prometheus Server" mode=server version="(version=2.48.1, branch=HEAD, revision=63894216648f0d6be310c9d16fb48293c45c9310)"
ts=2023-12-19T21:37:30.834Z caller=main.go:588 level=info build_context="(go=go1.21.5, platform=linux/amd64, user=root@71f108ff5632, date=20231208-23:33:22, tags=netgo,builtinassets,stringlabels)"
ts=2023-12-19T21:37:30.834Z caller=main.go:589 level=info host_details="(Linux 6.2.0-20-generic #20-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr  6 07:48:48 UTC 2023 x86_64 d961e063126d (none))"
ts=2023-12-19T21:37:30.834Z caller=main.go:590 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2023-12-19T21:37:30.834Z caller=main.go:591 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2023-12-19T21:37:30.835Z caller=query_logger.go:74 level=error component=activeQueryTracker msg="Failed to read query log file" err=EOF
unexpected fault address 0x7fe264a21000
fatal error: fault
[signal SIGBUS: bus error code=0x2 addr=0x7fe264a21000 pc=0x4720a2]

goroutine 1 [running]:
runtime.throw({0x356fe87?, 0xc0007f27c0?})
  /usr/local/go/src/runtime/panic.go:1077 +0x5c fp=0xc000b7ed28 sp=0xc000b7ecf8 pc=0x43b25c
runtime.sigpanic()
  /usr/local/go/src/runtime/signal_unix.go:858 +0x116 fp=0xc000b7ed88 sp=0xc000b7ed28 pc=0x451f76
runtime.memmove()
  /usr/local/go/src/runtime/memmove_amd64.s:151 +0x102 fp=0xc000b7ed90 sp=0xc000b7ed88 pc=0x4720a2
github.com/prometheus/prometheus/promql.NewActiveQueryTracker({0x7fff6fc28ef7, 0xb}, 0x14, {0x3e94bc0, 0xc000135630})
  /app/promql/query_logger.go:126 +0x232 fp=0xc000b7ef00 sp=0xc000b7ed90 pc=0x2776c52
main.main()
  /app/cmd/prometheus/main.go:645 +0x7812 fp=0xc000b7ff40 sp=0xc000b7ef00 pc=0x28f3472
runtime.main()
  /usr/local/go/src/runtime/proc.go:267 +0x2bb fp=0xc000b7ffe0 sp=0xc000b7ff40 pc=0x43dc3b
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000b7ffe8 sp=0xc000b7ffe0 pc=0x4711a1

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007cfa8 sp=0xc00007cf88 pc=0x43e0ae
runtime.goparkunlock(...)
  /usr/local/go/src/runtime/proc.go:404
runtime.forcegchelper()
  /usr/local/go/src/runtime/proc.go:322 +0xb3 fp=0xc00007cfe0 sp=0xc00007cfa8 pc=0x43df13
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007cfe8 sp=0xc00007cfe0 pc=0x4711a1
created by runtime.init.6 in goroutine 1
  /usr/local/go/src/runtime/proc.go:310 +0x1a

goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007d778 sp=0xc00007d758 pc=0x43e0ae
runtime.goparkunlock(...)
  /usr/local/go/src/runtime/proc.go:404
runtime.bgsweep(0x0?)
  /usr/local/go/src/runtime/mgcsweep.go:321 +0xdf fp=0xc00007d7c8 sp=0xc00007d778 pc=0x42833f
runtime.gcenable.func1()
  /usr/local/go/src/runtime/mgc.go:200 +0x25 fp=0xc00007d7e0 sp=0xc00007d7c8 pc=0x41d3e5
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007d7e8 sp=0xc00007d7e0 pc=0x4711a1
created by runtime.gcenable in goroutine 1
  /usr/local/go/src/runtime/mgc.go:200 +0x66

goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc00009c000?, 0x3e84e98?, 0x0?, 0x0?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007df70 sp=0xc00007df50 pc=0x43e0ae
runtime.goparkunlock(...)
  /usr/local/go/src/runtime/proc.go:404
runtime.(*scavengerState).park(0x5a1f7c0)
  /usr/local/go/src/runtime/mgcscavenge.go:425 +0x49 fp=0xc00007dfa0 sp=0xc00007df70 pc=0x425bc9
runtime.bgscavenge(0x0?)
  /usr/local/go/src/runtime/mgcscavenge.go:658 +0x59 fp=0xc00007dfc8 sp=0xc00007dfa0 pc=0x426179
runtime.gcenable.func2()
  /usr/local/go/src/runtime/mgc.go:201 +0x25 fp=0xc00007dfe0 sp=0xc00007dfc8 pc=0x41d385
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007dfe8 sp=0xc00007dfe0 pc=0x4711a1
created by runtime.gcenable in goroutine 1
  /usr/local/go/src/runtime/mgc.go:201 +0xa5

goroutine 5 [finalizer wait]:
runtime.gopark(0x40fc5e?, 0x400000?, 0x70?, 0xc6?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007c620 sp=0xc00007c600 pc=0x43e0ae
runtime.runfinq()
  /usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00007c7e0 sp=0xc00007c620 pc=0x41c407
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007c7e8 sp=0xc00007c7e0 pc=0x4711a1
created by runtime.createfing in goroutine 1
  /usr/local/go/src/runtime/mfinal.go:163 +0x3d

goroutine 6 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007e750 sp=0xc00007e730 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc00007e7e0 sp=0xc00007e750 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007e7e8 sp=0xc00007e7e0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 7 [GC worker (idle)]:
runtime.gopark(0x49553089cc49f?, 0x1?, 0x89?, 0x71?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007ef50 sp=0xc00007ef30 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc00007efe0 sp=0xc00007ef50 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007efe8 sp=0xc00007efe0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 8 [GC worker (idle)]:
runtime.gopark(0x49553089c8f97?, 0x3?, 0x16?, 0x8e?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007f750 sp=0xc00007f730 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc00007f7e0 sp=0xc00007f750 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007f7e8 sp=0xc00007f7e0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 9 [GC worker (idle)]:
runtime.gopark(0x49553089e97ae?, 0x1?, 0xa2?, 0x70?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00007ff50 sp=0xc00007ff30 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc00007ffe0 sp=0xc00007ff50 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00007ffe8 sp=0xc00007ffe0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 10 [GC worker (idle)]:
runtime.gopark(0x49553089c8d48?, 0x1?, 0x15?, 0x7b?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000078750 sp=0xc000078730 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0000787e0 sp=0xc000078750 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0000787e8 sp=0xc0000787e0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 17 [GC worker (idle)]:
runtime.gopark(0x4955308311f9d?, 0x3?, 0x68?, 0x56?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000110750 sp=0xc000110730 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0001107e0 sp=0xc000110750 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0001107e8 sp=0xc0001107e0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 18 [GC worker (idle)]:
runtime.gopark(0x49553089c779a?, 0x3?, 0x45?, 0x47?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000110f50 sp=0xc000110f30 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000110fe0 sp=0xc000110f50 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000110fe8 sp=0xc000110fe0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 33 [GC worker (idle)]:
runtime.gopark(0x5a53820?, 0x3?, 0x4a?, 0xc?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00010c750 sp=0xc00010c730 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc00010c7e0 sp=0xc00010c750 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00010c7e8 sp=0xc00010c7e0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 34 [GC worker (idle)]:
runtime.gopark(0x49553089cced6?, 0x1?, 0xdb?, 0x7?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00010cf50 sp=0xc00010cf30 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc00010cfe0 sp=0xc00010cf50 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00010cfe8 sp=0xc00010cfe0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 19 [GC worker (idle)]:
runtime.gopark(0x5a53820?, 0x1?, 0x9e?, 0x17?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000111750 sp=0xc000111730 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc0001117e0 sp=0xc000111750 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc0001117e8 sp=0xc0001117e0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 20 [GC worker (idle)]:
runtime.gopark(0x49553089efcbb?, 0x1?, 0xff?, 0x73?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000111f50 sp=0xc000111f30 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc000111fe0 sp=0xc000111f50 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000111fe8 sp=0xc000111fe0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 35 [GC worker (idle)]:
runtime.gopark(0x49553089cc66c?, 0x3?, 0x26?, 0x7d?, 0x0?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00010d750 sp=0xc00010d730 pc=0x43e0ae
runtime.gcBgMarkWorker()
  /usr/local/go/src/runtime/mgc.go:1295 +0xe5 fp=0xc00010d7e0 sp=0xc00010d750 pc=0x41ef65
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00010d7e8 sp=0xc00010d7e0 pc=0x4711a1
created by runtime.gcBgMarkStartWorkers in goroutine 1
  /usr/local/go/src/runtime/mgc.go:1219 +0x1c

goroutine 11 [select]:
runtime.gopark(0xc00010ff88?, 0x3?, 0x20?, 0xc9?, 0xc00010ff72?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00010fe18 sp=0xc00010fdf8 pc=0x43e0ae
runtime.selectgo(0xc00010ff88, 0xc00010ff6c, 0xc000938b80?, 0x0, 0x0?, 0x1)
  /usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00010ff38 sp=0xc00010fe18 pc=0x44e525
go.opencensus.io/stats/view.(*worker).start(0xc000938b80)
  /go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f fp=0xc00010ffc8 sp=0xc00010ff38 pc=0x13ddbff
go.opencensus.io/stats/view.init.0.func1()
  /go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x25 fp=0xc00010ffe0 sp=0xc00010ffc8 pc=0x13dcf25
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00010ffe8 sp=0xc00010ffe0 pc=0x4711a1
created by go.opencensus.io/stats/view.init.0 in goroutine 1
  /go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 87 [select]:
runtime.gopark(0xc000c41f30?, 0x2?, 0xc5?, 0x3?, 0xc000c41efc?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc000c41d88 sp=0xc000c41d68 pc=0x43e0ae
runtime.selectgo(0xc000c41f30, 0xc000c41ef8, 0xc0000061a0?, 0x0, 0xc0000061a0?, 0x1)
  /usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc000c41ea8 sp=0xc000c41d88 pc=0x44e525
github.com/prometheus/prometheus/util/logging.(*Deduper).run(0xc000c15100)
  /app/util/logging/dedupe.go:75 +0xdc fp=0xc000c41fc8 sp=0xc000c41ea8 pc=0x284089c
github.com/prometheus/prometheus/util/logging.Dedupe.func1()
  /app/util/logging/dedupe.go:61 +0x25 fp=0xc000c41fe0 sp=0xc000c41fc8 pc=0x2840745
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc000c41fe8 sp=0xc000c41fe0 pc=0x4711a1
created by github.com/prometheus/prometheus/util/logging.Dedupe in goroutine 1
  /app/util/logging/dedupe.go:61 +0x10a

goroutine 95 [select]:
runtime.gopark(0xc00010ef90?, 0x2?, 0xc5?, 0x3?, 0xc00010ef74?)
  /usr/local/go/src/runtime/proc.go:398 +0xce fp=0xc00010ee18 sp=0xc00010edf8 pc=0x43e0ae
runtime.selectgo(0xc00010ef90, 0xc00010ef70, 0x408e01?, 0x0, 0x40948b?, 0x1)
  /usr/local/go/src/runtime/select.go:327 +0x725 fp=0xc00010ef38 sp=0xc00010ee18 pc=0x44e525
github.com/prometheus/prometheus/storage/remote.(*WriteStorage).run(0xc00011eb40)
  /app/storage/remote/write.go:115 +0xc5 fp=0xc00010efc8 sp=0xc00010ef38 pc=0x2869085
github.com/prometheus/prometheus/storage/remote.NewWriteStorage.func1()
  /app/storage/remote/write.go:107 +0x25 fp=0xc00010efe0 sp=0xc00010efc8 pc=0x2868f85
runtime.goexit()
  /usr/local/go/src/runtime/asm_amd64.s:1650 +0x1 fp=0xc00010efe8 sp=0xc00010efe0 pc=0x4711a1
created by github.com/prometheus/prometheus/storage/remote.NewWriteStorage in goroutine 1
  /app/storage/remote/write.go:107 +0x439
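As a hedged illustration of the "Unimplemented opcode POLL" message above: any program that polls a regular file on a FUSE mount causes the kernel to forward a POLL request to the FUSE daemon, which go-fuse reports as unimplemented if the filesystem has no handler for it. A minimal Python sketch of that access pattern (run here against a local temp file, not a seaweedfs mount, where it succeeds immediately):

```python
import select
import tempfile

# Polling a regular file -- as event-loop based programs sometimes do --
# makes the kernel ask the filesystem whether the fd is ready. Over FUSE
# this becomes a POLL request to the daemon; on a local filesystem a
# regular file is always reported ready right away.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"data")
    f.flush()
    poller = select.poll()
    poller.register(f.fileno(), select.POLLIN)
    events = poller.poll(1000)    # -> list of (fd, revents) pairs

print(events)
```

Running the same pattern against a file inside the `weed mount` directory should reproduce the "Unimplemented opcode POLL" log line if the poll hypothesis is correct.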

@tnyeanderson
Contributor

I can confirm that using seaweedfs as the storage backend for a MariaDB database fails in a similar way (though without the POLL error message). Using MySQL itself does not seem to have the same issue.

docker run --rm -e 'MARIADB_ROOT_PASSWORD=test' -v "$(pwd)/mnt/mariadb:/var/lib/mysql" mariadb

@tnyeanderson
Contributor

I think this is related: hanwen/go-fuse#501

@chrislusf
Collaborator

chrislusf commented Dec 20, 2023

works on Mac:

$ docker run --rm -it -v "/Users/chris/mm/prom:/prometheus" prom/prometheus:v2.48.1
ts=2023-12-20T06:56:02.745Z caller=main.go:539 level=info msg="No time or size retention was set so using the default time retention" duration=15d
ts=2023-12-20T06:56:02.745Z caller=main.go:583 level=info msg="Starting Prometheus Server" mode=server version="(version=2.48.1, branch=HEAD, revision=63894216648f0d6be310c9d16fb48293c45c9310)"
ts=2023-12-20T06:56:02.745Z caller=main.go:588 level=info build_context="(go=go1.21.5, platform=linux/amd64, user=root@71f108ff5632, date=20231208-23:33:22, tags=netgo,builtinassets,stringlabels)"
ts=2023-12-20T06:56:02.745Z caller=main.go:589 level=info host_details="(Linux 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Wed Oct 25 15:08:07 UTC 2023 x86_64 4ba5449b79e6 (none))"
ts=2023-12-20T06:56:02.745Z caller=main.go:590 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2023-12-20T06:56:02.745Z caller=main.go:591 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2023-12-20T06:56:02.752Z caller=web.go:566 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
ts=2023-12-20T06:56:02.754Z caller=main.go:1024 level=info msg="Starting TSDB ..."
ts=2023-12-20T06:56:02.755Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
ts=2023-12-20T06:56:02.755Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
ts=2023-12-20T06:56:02.766Z caller=head.go:601 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
ts=2023-12-20T06:56:02.766Z caller=head.go:682 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.295µs
ts=2023-12-20T06:56:02.766Z caller=head.go:690 level=info component=tsdb msg="Replaying WAL, this may take a while"
ts=2023-12-20T06:56:02.769Z caller=head.go:761 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
ts=2023-12-20T06:56:02.769Z caller=head.go:798 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=1.866953ms wal_replay_duration=1.478888ms wbl_replay_duration=214ns total_replay_duration=3.383349ms
ts=2023-12-20T06:56:02.775Z caller=main.go:1045 level=info fs_type=6a656a63
ts=2023-12-20T06:56:02.775Z caller=main.go:1048 level=info msg="TSDB started"
ts=2023-12-20T06:56:02.775Z caller=main.go:1230 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
ts=2023-12-20T06:56:02.776Z caller=main.go:1267 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=994.404µs db_storage=961ns remote_storage=1.521µs web_handler=514ns query_engine=1.011µs scrape=411.903µs scrape_sd=23.003µs notify=27.208µs notify_sd=8.315µs rules=1.163µs tracing=5.735µs
ts=2023-12-20T06:56:02.777Z caller=main.go:1009 level=info msg="Server is ready to receive web requests."
ts=2023-12-20T06:56:02.777Z caller=manager.go:1012 level=info component="rule manager" msg="Starting rule manager..."

@tnyeanderson
Contributor

Hi Chris, is this with seaweedfs 3.55? I assume this is APFS; I wonder if it has something to do with the underlying filesystem, or even the kernel.

My original post was using an ext4 filesystem. I went ahead and created an XFS filesystem and ran the same test; it also failed.

Either way, I saw your post in go-fuse and I think the conversation is better continued there. Thanks!

@rfjakob

rfjakob commented Dec 21, 2023

Hi, go-fuse contributor here. This seems to be where things go wrong:

ts=2023-12-19T21:37:30.835Z caller=query_logger.go:74 level=error component=activeQueryTracker msg="Failed to read query log file" err=EOF

I don't see how this relates to POLL. Looks more like a READ is failing. But in both cases, a FUSE debug log would help.
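One possible mechanism, offered as a hypothesis rather than a diagnosis: the fault above is in NewActiveQueryTracker (query_logger.go), which writes to the query log file through a memory mapping, and a SIGBUS inside memmove is the classic symptom of storing to an mmap'd page the filesystem cannot back. A minimal Python sketch of that same access pattern (it succeeds on a local filesystem; the hypothesis is that it faults over the FUSE mount):

```python
import mmap
import os
import tempfile

# Write to a file through a memory mapping, the way Prometheus's active
# query tracker updates its query log. If the backing filesystem cannot
# fault the page in, the process receives SIGBUS instead of an ordinary
# I/O error -- there is no error return path for a failed page fault.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)            # size the file before mapping it

with mmap.mmap(fd, 4096) as mm:
    mm[:5] = b"hello"             # store through the mapping
    mm.flush()                    # push the dirty page back to the file

os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 5)             # read back through normal I/O
os.close(fd)
os.remove(path)
print(data)
```

Running the same snippet with the temp file placed inside the seaweedfs mount would be one way to test whether mmap writes are the trigger.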

@tnyeanderson
Contributor

tnyeanderson commented Dec 22, 2023

It's very possible that the POLL is a red herring... What's the best way for me to get the logs you're requesting?

EDIT: Sorry, meant to continue this conversation in the other thread, feel free to respond there instead.
