
slice bounds out of range, v 3.15 #3315

Closed
paularlott opened this issue Jul 14, 2022 · 3 comments
Comments

@paularlott


Describe the bug

Running 3.15, with roughly 30 GB of data across the whole system.

Running the following fio command on a SeaweedFS mount causes the mount to fail with panic: runtime error: slice bounds out of range.

fio --filename=/mnt/testing --rw=randrw --blocksize_range=4k-128k --runtime=60 --numjobs=4 --time_based --group_reporting --name=randrw --size=4096M

I0715 00:44:12 03703 dirty_pages_chunked.go:89] /testing/testing saveToStorage 1,5b08fc55c4d0 [2172055552,2172084224)
I0715 00:44:12 03703 upload_pipeline.go:166]  uploaderCount 4 --> 3
I0715 00:44:12 03703 upload_pipeline.go:39] Free sealed chunk:  finished uploading chunk 198
I0715 00:44:12 03703 dirty_pages_chunked.go:89] /testing/testing saveToStorage 2,5b0db0b598a0 [3604611072,3604672512)
I0715 00:44:12 03703 slot_pool.go:60] --> 3
I0715 00:44:12 03703 upload_pipeline.go:166]  uploaderCount 3 --> 2
I0715 00:44:12 03703 upload_pipeline.go:39] Free sealed chunk:  finished uploading chunk 1718
I0715 00:44:12 03703 slot_pool.go:60] --> 4
I0715 00:44:12 03703 upload_pipeline.go:166]  uploaderCount 2 --> 1
I0715 00:44:12 03703 upload_pipeline.go:39] Free sealed chunk:  finished uploading chunk 1035
I0715 00:44:12 03703 slot_pool.go:60] --> 2
I0715 00:44:12 03703 slot_pool.go:60] --> 1
I0715 00:44:12 03703 slot_pool.go:49] ++> 2
I0715 00:44:12 03703 page_writer.go:54] ReadDirtyDataAt 1 [1931128832, 1931243520)
I0715 00:44:12 03703 slot_pool.go:49] ++> 3
I0715 00:44:12 03703 filehandle.go:95] /testing/testing existing 2162 chunks adds 1 more
I0715 00:44:12 03703 page_writer.go:54] ReadDirtyDataAt 1 [274632704, 274640896)
I0715 00:44:12 03703 dirty_pages_chunked.go:89] /testing/testing saveToStorage 1,5b117c9bf387 [2696278016,2696335360)
I0715 00:44:12 03703 slot_pool.go:60] --> 2
I0715 00:44:12 03703 upload_pipeline.go:166]  uploaderCount 1 --> 0
I0715 00:44:12 03703 upload_pipeline.go:39] Free sealed chunk:  finished uploading chunk 1285
I0715 00:44:12 03703 page_writer.go:34] 1 AddPage [2420527104, 2420637696)
I0715 00:44:12 03703 dirty_pages_chunked.go:44] 1 memory AddPage [2420527104, 2420637696)
I0715 00:44:12 03703 upload_pipeline.go:144]  uploaderCount 0 ++> 1
I0715 00:44:12 03703 slot_pool.go:49] ++> 3
I0715 00:44:12 03703 slot_pool.go:60] --> 2
I0715 00:44:12 03703 slot_pool.go:60] --> 1
I0715 00:44:12 03703 slot_pool.go:49] ++> 2
I0715 00:44:12 03703 page_writer.go:54] ReadDirtyDataAt 1 [1739399168, 1739468800)
I0715 00:44:12 03703 slot_pool.go:49] ++> 3
I0715 00:44:12 03703 filehandle.go:95] /testing/testing existing 2163 chunks adds 1 more
panic: runtime error: slice bounds out of range [647168:0]

goroutine 89 [running]:
github.com/chrislusf/seaweedfs/weed/filer.(*SingleChunkCacher).readChunkAt(0xc00058c480, {0xc0006e6000, 0x13000, 0xe?}, 0x9e000)
	/github/workspace/weed/filer/reader_cache.go:197 +0x1f7
github.com/chrislusf/seaweedfs/weed/filer.(*ReaderCache).ReadChunkAt(0xc001670120, {0xc0006e6000, 0x13000, 0x13000}, {0xc00173c800, 0xe}, {0xc00171a4e0, 0x20, 0x20}, 0x0, ...)
	/github/workspace/weed/filer/reader_cache.go:79 +0x2bb
github.com/chrislusf/seaweedfs/weed/filer.(*ChunkReadAt).readChunkSliceAt(0xc001bb0730, {0xc0006e6000?, 0x1?, 0x1?}, 0xc001bae3c0, {0xc001ba5c80, 0x70, 0x270}, 0xc000692601?)
	/github/workspace/weed/filer/reader_at.go:174 +0xbe
github.com/chrislusf/seaweedfs/weed/filer.(*ChunkReadAt).doReadAt(0xc001bb0730, {0xc0006e6000, 0x13000, 0x13000}, 0xf1e9e000)
	/github/workspace/weed/filer/reader_at.go:137 +0x3ca
github.com/chrislusf/seaweedfs/weed/filer.(*ChunkReadAt).ReadAt(0xc000818eb8?, {0xc0006e6000?, 0x348b320?, 0x391dae0?}, 0x1661bc?)
	/github/workspace/weed/filer/reader_at.go:107 +0x10c
github.com/chrislusf/seaweedfs/weed/mount.(*FileHandle).readFromChunks(0xc000818ea0, {0xc0006e6000, 0x13000, 0x13000}, 0xf1e9e000)
	/github/workspace/weed/mount/filehandle_read.go:77 +0x8e5
github.com/chrislusf/seaweedfs/weed/mount.(*WFS).Read(0x20080?, 0xc000324000?, 0xc000324198, {0xc0006e6000, 0x13000, 0x13000})
	/github/workspace/weed/mount/weedfs_file_read.go:44 +0x12e
github.com/hanwen/go-fuse/v2/fuse.doRead(0xc00028e000, 0xc000324000)
	/go/pkg/mod/github.com/hanwen/go-fuse/v2@v2.1.1-0.20220627082937-d01fda7edf17/fuse/opcode.go:374 +0x83
github.com/hanwen/go-fuse/v2/fuse.(*Server).handleRequest(0xc00028e000, 0xc000324000)
	/go/pkg/mod/github.com/hanwen/go-fuse/v2@v2.1.1-0.20220627082937-d01fda7edf17/fuse/server.go:514 +0x1f3
github.com/hanwen/go-fuse/v2/fuse.(*Server).loop(0xc00028e000, 0x0?)
	/go/pkg/mod/github.com/hanwen/go-fuse/v2@v2.1.1-0.20220627082937-d01fda7edf17/fuse/server.go:487 +0x108
created by github.com/hanwen/go-fuse/v2/fuse.(*Server).readRequest
	/go/pkg/mod/github.com/hanwen/go-fuse/v2@v2.1.1-0.20220627082937-d01fda7edf17/fuse/server.go:354 +0x52b
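For context on the panic itself, here is a minimal, hypothetical Go sketch (not SeaweedFS code; `readAt` is made up for illustration) showing how a slice expression whose end index is smaller than its start, as in `[647168:0]` above, triggers this exact class of runtime error:

```go
package main

import "fmt"

// readAt mimics the shape of the failing call: slicing data[offset:end].
// If end < offset (as in the [647168:0] panic in the trace), the Go
// runtime panics with "slice bounds out of range [offset:end]".
func readAt(data []byte, offset, end int) (out []byte, err error) {
	defer func() {
		// Convert the runtime panic into an ordinary error.
		if r := recover(); r != nil {
			err = fmt.Errorf("%v", r)
		}
	}()
	return data[offset:end], nil
}

func main() {
	buf := make([]byte, 647168)
	// End index (0) is smaller than the start index (647168),
	// matching the bounds reported in the stack trace.
	if _, err := readAt(buf, 647168, 0); err != nil {
		fmt.Println("caught:", err)
	}
}
```

This suggests the reader cache computed a negative read length (end offset before start offset) for the chunk before slicing the buffer.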

System Setup

3 x masters:

/usr/local/bin/weed -logtostderr master \
  -ip 192.168.8.50 -ip.bind 0.0.0.0 \
  -port 9333 -port.grpc 19333 \
  -mdir=/var/seaweedfs/master -defaultReplication=001 \
  -volumeSizeLimitMB=10000 \
  -peers=192.168.8.50:9333,192.168.8.51:9333,192.168.8.52:9333

master.toml

[master.maintenance]
scripts = """
lock
ec.encode -fullPercent=95 -quietFor=1h
ec.rebuild -force
ec.balance -force
volume.deleteEmpty -quietFor=24h -force
volume.balance -force
volume.fix.replication
s3.clean.uploads -timeAgo=24h
unlock
"""
sleep_minutes = 17

[master.sequencer]
type = "raft"     # Choose [raft|snowflake] type for storing the file id sequence
# when sequencer.type = snowflake, the snowflake id must be different from other masters
sequencer_snowflake_id = 0     # any number between 1~1023

# create this number of logical volumes if no more writable volumes
# count_x means how many copies of data.
# e.g.:
#   000 has only one copy, copy_1
#   010 and 001 has two copies, copy_2
#   011 has only 3 copies, copy_3
[master.volume_growth]
copy_1 = 2
copy_2 = 2
copy_3 = 2
copy_other = 1

[master.replication]
treat_replication_as_minimums = false

3 x volume servers (Ips .50, .51, .52):

/usr/local/bin/weed -logtostderr volume \
  -ip 192.168.8.50 -ip.bind 0.0.0.0 \
  -port 8080 -port.grpc 18080 \
  -max 0 -index leveldb \
  -dir /srv/seaweedfs \
  -mserver=192.168.8.50:9333,192.168.8.51:9333,192.168.8.52:9333

1 x filer:

/usr/local/bin/weed -logtostderr filer \
  -ip 192.168.8.50 -ip.bind 0.0.0.0 \
  -port 8888 -port.grpc 18888 \
  -defaultReplicaPlacement=001 -encryptVolumeData \
  -s3 -s3.port 8333 -s3.port.grpc 18333 \
  -master=192.168.8.50:9333,192.168.8.51:9333,192.168.8.52:9333

filer.toml

[filer.options]
recursive_delete = false

[leveldb3]
enabled = false
dir = "/var/seaweedfs/filer"

[mysql2]  # or memsql, tidb
enabled = true
createTable = """
  CREATE TABLE IF NOT EXISTS `%s` (
    dirhash BIGINT,
    name VARCHAR(1000) BINARY,
    directory TEXT BINARY,
    meta LONGBLOB,
    PRIMARY KEY (dirhash, name)
  ) DEFAULT CHARSET=utf8;
"""
hostname = "192.168.8.29"
port = 3306
username = "seaweedfs"
password = "password"
database = "seaweedfs"              # create or use an existing database
connection_max_idle = 2
connection_max_open = 100
connection_max_lifetime_seconds = 0
interpolateParams = false
# if insert/upsert failing, you can disable upsert or update query syntax to match your RDBMS syntax:
enableUpsert = true
upsertQuery = """INSERT INTO `%s` (dirhash,name,directory,meta) VALUES(?,?,?,?) ON DUPLICATE KEY UPDATE meta = VALUES(meta)"""

Mount

weed -logtostderr -v 4 mount \
  -filer=192.168.8.50:8888,192.168.8.51:8888,192.168.8.52:8888 \
  -filer.path=/testing -replication=001 -collection=testing \
  -cacheCapacityMB=6096 -dir /mnt/

The filer is connected to a MySQL 8 database; it's a clean install set up just for testing.

There are no errors generated by anything other than mount.

I tried swapping to leveldb3 and it behaved the same. I also checked that I have plenty of storage space.

Expected behavior

fio should run to completion.


@paularlott
Author

If it helps I mounted a bucket using rclone and the S3 interface and fio completed without error, so this appears to be related only to fuse mounts.

@chrislusf
Collaborator

Thanks for the steps to reproduce! Very helpful for debugging and verifying.

@paularlott
Author

Just tried the update and it's working perfectly now; thanks for the quick fix.

I may have found one more issue, but it's proving impossible to replicate reliably at the moment.

ningfdx added a commit to ningfdx/seaweedfs that referenced this issue Jul 22, 2022