!URGENT URGENT URGENT! Wrong volume size reported => upload error #441

Closed
brstgt opened this issue Jan 18, 2017 · 17 comments

@brstgt
Contributor

brstgt commented Jan 18, 2017

Since 0.73, volume sizes seem to be reported multiplied by 1024, so the reported size exceeds the max volume size and uploads are denied even though the volume is still marked as writable. Unfortunately we cannot switch back to 0.70 or 0.72, as those throw other critical errors.

Please fix this ASAP. Our production system with hundreds of millions of files is currently broken. As a workaround we are trying to raise max_volume_size by a factor of 1024, but I don't know whether there are any side effects.

Example - uploaded files in the collection are approximately 1MB, not 1GB:
https://cl.ly/3h0i0220032L

@ijunaid8989

Switch back to 0.70? Why would that throw errors? Which version were you running before?

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@brstgt
Contributor Author

brstgt commented Jan 18, 2017

Please take a look at LoadNeedleMap in needle_map_memory.go:

    // the byte counter is increased before the tombstone check:
    nm.FileByteCounter = nm.FileByteCounter + uint64(size)
    if offset > 0 && size != TombstoneFileSize {

Since size is MaxUint32 for tombstone entries, it MUST NOT increase the FileByteCounter. This is what I found, but I don't know whether there are other places where this has further effects.
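
For illustration, a minimal self-contained sketch of the guard I mean, assuming TombstoneFileSize is the math.MaxUint32 deletion marker; indexEntry and countLiveBytes are made-up names, not the actual SeaweedFS code:

    package main

    import (
        "fmt"
        "math"
    )

    const TombstoneFileSize = math.MaxUint32 // assumed deletion marker

    type indexEntry struct{ Offset, Size uint32 }

    // countLiveBytes counts only live needles; tombstone entries
    // (size == TombstoneFileSize) must never reach the byte counter.
    func countLiveBytes(entries []indexEntry) (total uint64) {
        for _, e := range entries {
            if e.Offset > 0 && e.Size != TombstoneFileSize {
                total += uint64(e.Size)
            }
        }
        return total
    }

    func main() {
        entries := []indexEntry{
            {Offset: 8, Size: 1 << 20},            // live 1MB needle
            {Offset: 16, Size: TombstoneFileSize}, // deleted needle
        }
        fmt.Println(countLiveBytes(entries)) // prints 1048576, not ~4GB
    }

The point is only that tombstone entries should never be added to the byte counter.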

@chrislusf
Collaborator

As an urgent fix, please change the value of TombstoneFileSize to 0 for now.
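
A hedged sketch of that stop-gap; the package name is assumed, as is the constant's current value:

    package storage // assumed location of the constant

    // Stop-gap suggested above. Assumption: this constant currently equals
    // math.MaxUint32 and is what LoadNeedleMap's size check compares against.
    const TombstoneFileSize = 0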

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@chrislusf
Collaborator

That's by design. The system allows a volume to grow a little past the limit, for example to record new deletions after the size limit is reached.

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@chrislusf
Collaborator

Supporting more than 32GB takes more than fixing the limit: the current offset can only address up to 32GB.
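
For context, a small sketch of where the 32GB ceiling comes from, assuming offsets are stored as uint32 values counted in 8-byte aligned units (the 8-byte alignment is an assumption consistent with the 32GB figure):

    package main

    import "fmt"

    func main() {
        const offsetValues = uint64(1) << 32 // the index stores offsets as uint32
        const alignment = 8                  // assumed 8-byte needle alignment
        fmt.Printf("%d GiB\n", (offsetValues*alignment)>>30) // prints 32 GiB
    }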

@chrislusf
Collaborator

What's your intended volume size, btw?

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@chrislusf
Collaborator

I think this happens only during volume server startup, when the wrong volume size is counted. If the size is wrong, writes to the volume fail, so the .dat and .idx files are still correct.

@brstgt
Contributor Author

brstgt commented Jan 18, 2017 via email

@chrislusf
Collaborator

Just check the actual .dat file size. But if the size is over 32GB, it will take a non-trivial amount of code to fix the .dat file.

@brstgt
Contributor Author

brstgt commented Jan 18, 2017

Hm, there are actually some .dat files that exceed that size:

br@dev1:~$ sudo dsh -Mcg seaweed find /weedfs/ -type f -size +32G -exec 'ls -lh {} ;'
seaweed4: -rw-r--r-- 1 root root 35G Jan 18 18:04 /weedfs/d73825be-406a-4050-a36d-ad98ad23292d/mwps_624.cpd
seaweed4: -rw-r--r-- 1 root root 392G Jan 18 18:04 /weedfs/f4bc475a-4da6-4a89-9064-5e7f9b3164fd/mwps_622.dat
seaweed3: -rw-r--r-- 1 root root 58G Jan 18 14:05 /weedfs/6e50ee2d-b9aa-47e7-9020-1b1deb651713/mwpxs_65.dat
seaweed4: -rw-r--r-- 1 root root 72G Jan 18 18:04 /weedfs/0f9c1bb9-b4d2-4482-8a2e-80c46ee59915/gwpcl_402.dat
seaweed4: -rw-r--r-- 1 root root 68G Jan 18 14:05 /weedfs/efaacb2b-10f4-40f4-b817-414766685d9e/mwpcs_297.dat
seaweed1: -rw-r--r-- 1 root root 40G Jan 18 13:50 /weedfs/dda17747-c987-4fae-ba26-8377d34e8348/gwpcm_499.dat
seaweed4: -rw-r--r-- 1 root root 75G Jan 18 14:23 /weedfs/efaacb2b-10f4-40f4-b817-414766685d9e/mwpcm_600.cpd
seaweed3: -rw-r--r-- 1 root root 44G Jan 18 18:04 /weedfs/dba9c1e6-11c7-4af2-ac73-7d5ec0ad158d/mpm_390.dat
seaweed1: -rw-r--r-- 1 root root 83G Jan 18 18:04 /weedfs/dda17747-c987-4fae-ba26-8377d34e8348/gwpm_695.dat
seaweed4: -rw-r--r-- 1 root root 103G Jan 18 18:04 /weedfs/86823a03-e3c6-46c6-92a6-b569bbc856f5/gwpm_697.dat
seaweed3: -rw-r--r-- 1 root root 72G Jan 18 18:04 /weedfs/dba9c1e6-11c7-4af2-ac73-7d5ec0ad158d/gwpcl_402.dat
seaweed2: -rw-r--r-- 1 root root 34G Jan 18 11:49 /weedfs/399b17c1-9d57-430c-8fe3-73f10a488e38/mwpxs_68.dat
seaweed3: -rw-r--r-- 1 root root 68G Jan 18 18:04 /weedfs/33d28675-f4f8-45ee-b03c-36e670cd0b11/mwpcs_297.dat
seaweed3: -rw-r--r-- 1 root root 34G Jan 18 11:49 /weedfs/65a45a2e-3bfd-4377-8ff7-28bc25304f54/mwpxs_68.dat

What would you recommend?

316014408 pushed a commit to k85169336/seaweedfs that referenced this issue Feb 23, 2017