It looks like the compaction goroutine removes non-empty fragments. Insert some keys and wait 10 minutes to reproduce the error. You can decrease the compaction interval to see the error sooner, as sketched below.
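To shorten the reproduction loop, the relevant intervals can be lowered in the embedded-member configuration. This is only a minimal sketch: the `CheckEmptyFragmentsInterval` and `TriggerCompactionInterval` field names are assumptions and may differ in your Olric version.

```go
package main

import (
	"time"

	"github.com/buraksezer/olric/config"
)

func main() {
	c := config.New("local")
	// NOTE: assumed field names; run the janitor and the compaction
	// workers every few seconds instead of on the default schedule.
	c.DMaps.CheckEmptyFragmentsInterval = 10 * time.Second
	c.DMaps.TriggerCompactionInterval = 10 * time.Second
	_ = c // pass c to olric.New(c) as usual
}
```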
```
➜ olric git:(master) olricd -c cmd/olricd/olricd-local.yaml
2022/04/23 13:51:23 [INFO] pid: 93542 has been started
2022/04/23 13:51:23 [INFO] Olric 0.5.0-beta.2 on darwin/arm64 go1.18.1 => olric.go:310
2022/04/23 13:51:23 [INFO] Join completed. Synced with 0 initial nodes => discovery.go:59
2022/04/23 13:51:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:51:23 [INFO] The cluster coordinator has been bootstrapped => discovery.go:42
2022/04/23 13:51:23 [INFO] Memberlist bindAddr: 127.0.0.1, bindPort: 3322 => routingtable.go:413
2022/04/23 13:51:23 [INFO] Cluster coordinator: 127.0.0.1:3320 => routingtable.go:414
2022/04/23 13:51:23 [INFO] Node name in the cluster: 127.0.0.1:3320 => olric.go:371
2022/04/23 13:51:23 [INFO] Olric bindAddr: 127.0.0.1, bindPort: 3320 => olric.go:377
2022/04/23 13:51:23 [INFO] Replication count is 1 => olric.go:379
2022/04/23 13:52:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:53:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:54:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:55:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:56:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:57:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:58:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 13:59:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 14:00:23 [INFO] Routing table has been pushed by 127.0.0.1:3320 => operations.go:86
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 0 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 1 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 2 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 3 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 4 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 5 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 6 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 7 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 8 => janitor.go:65
2022/04/23 14:01:23 [INFO] Empty DMap fragment (kind: Primary) has been deleted: dmap.bench on PartID: 9 => janitor.go:65
...
```
I checked every possible component and tried to find a real problem in the DMap implementation and its compaction logic. Everything seems okay.
ERROR: Failed to acquire semaphore: context canceled
This message is quite normal: a semaphore limits the number of concurrent compaction workers, and some worker goroutines wait in the background for their turn. I filtered out the "context canceled" messages.
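To illustrate why shutdown produces this message, here is a standalone sketch of the pattern (not Olric's actual code) using `golang.org/x/sync/semaphore`: a worker that is still waiting for a permit gets the context's error back when the context is canceled.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/semaphore"
)

func main() {
	// A single permit that is already held: the next Acquire must wait its turn.
	sem := semaphore.NewWeighted(1)
	ctx, cancel := context.WithCancel(context.Background())
	_ = sem.Acquire(ctx, 1)

	// Shutdown cancels the context while a worker is still queued...
	cancel()

	// ...so the queued worker receives the context's error, not a permit.
	if err := sem.Acquire(ctx, 1); err != nil {
		fmt.Println("Failed to acquire semaphore:", err) // context canceled
	}
}
```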
I used the following code snippet to create load on a single Olric node. It creates random keys and inserts them into the cluster. I couldn't detect any missing keys. It seems that memtier_benchmark somehow triggers DMap fragment creation without inserting any keys. Everything looks fine.
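The snippet itself is not included above; a minimal sketch of such a load generator, assuming the v0.5 client API (`NewClusterClient`, `NewDMap`, context-aware `Put`/`Get`) and the `bench` DMap and `127.0.0.1:3320` address from the logs, would look roughly like this:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/rand"

	"github.com/buraksezer/olric"
)

func main() {
	ctx := context.Background()

	// Connect to the single node started by olricd above.
	client, err := olric.NewClusterClient([]string{"127.0.0.1:3320"})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close(ctx)

	dm, err := client.NewDMap("bench")
	if err != nil {
		log.Fatal(err)
	}

	// Insert random keys, remember them, then read every key back
	// to check whether compaction dropped anything.
	keys := make([]string, 0, 100000)
	for i := 0; i < 100000; i++ {
		key := fmt.Sprintf("key-%d", rand.Int63())
		if err := dm.Put(ctx, key, i); err != nil {
			log.Fatal(err)
		}
		keys = append(keys, key)
	}
	for _, key := range keys {
		if _, err := dm.Get(ctx, key); err != nil {
			log.Printf("missing key %s: %v", key, err)
		}
	}
}
```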
When you try to stop the server, you see the "Failed to acquire semaphore: context canceled" error described above.