[stats] weed mount displays wrong data in output. #3407

Closed

DENISKI opened this issue Aug 5, 2022 · 5 comments

DENISKI commented Aug 5, 2022

Describe the bug
The statistics that weed mount reports via the df command do not correspond to the real usage on the volume servers.
We have 2 volume servers, each with 13 × 10 GB volumes, and replication 010.
According to volume.list in weed shell, we use around 70 GB on each data node:
DataNode 10.120.214.8:8080 total size:70776755152 file_count:39229 deleted_file:4238 deleted_bytes:297263094
DataNode 10.120.214.7:8080 total size:70799440000 file_count:39684 deleted_file:4689 deleted_bytes:315723235
On the client:
df -h /mnt/app
Filesystem                             Size  Used Avail Use% Mounted on
10.120.214.8:8988+10.120.214.7:8988:/  260G -8.0Z -2.7G 100% /mnt/app
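For context on why df can print a negative Used column at all: df derives its columns purely from the statfs reply that the FUSE mount fills in, so a bogus free-block count from the mount shows up directly. A minimal sketch of the standard statvfs-to-df mapping (coreutils behavior, not SeaweedFS code; the helper name and sample numbers are just for illustration):

```go
package main

import "fmt"

// dfColumns is a hypothetical helper showing how df derives its columns
// (in bytes) from the statfs reply of a mounted filesystem.
func dfColumns(blocks, bfree, bavail, frsize uint64) (size, used, avail int64) {
	size = int64(blocks) * int64(frsize)
	used = (int64(blocks) - int64(bfree)) * int64(frsize) // negative if bfree > blocks
	avail = int64(bavail) * int64(frsize)
	return size, used, avail
}

func main() {
	// A healthy reply: 260 GiB total, 120 GiB free, 512-byte blocks.
	fmt.Println(dfColumns(545259520, 251658240, 251658240, 512))
}
```

So a Used of -8.0Z means the mount reported more free blocks than total blocks.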

System Setup

  1. master: /usr/local/bin/weed -v=1 -logdir=/var/log/seaweedfs/master master -peers=10.120.214.8:9333,10.120.214.7:9333,10.120.214.9:9333 -ip=10.120.214.7 -defaultReplication=010 -mdir=/opt/seaweedfs/master/meta -volumeSizeLimitMB=10240 -resumeState=true -ip.bind=0.0.0.0
  2. volume: /usr/local/bin/weed -v=1 -logdir=/var/log/seaweedfs/volume volume -mserver=10.120.214.8:9333,10.120.214.7:9333,10.120.214.9:9333 -ip=10.120.214.7 -port=8080 -rack=rack2 -dataCenter=dc1 -index=leveldb -dir=/mnt/seaweedfs -max=0 -ip.bind=0.0.0.0
  3. filer: /usr/local/bin/weed -v=1 -logdir=/var/log/seaweedfs/filer filer -master=10.120.214.8:9333,10.120.214.7:9333,10.120.214.9:9333 -ip=10.120.214.7 -rack=rack2 -dataCenter=dc1 -collection=default -maxMB=4 -saveToFilerLimit=0 -defaultReplicaPlacement=010 -encryptVolumeData=true -ip.bind=0.0.0.0
  4. mount: weed fuse /mnt/app -o rw,nodev,nosuid,allowOthers=true,dataCenter=dc1,collection=default,concurrentWriters=128,readRetryTime=30s,filer='10.120.214.8:8988,10.120.214.7:8988',cacheDir=/var/cache/seaweedfs/mnt-app,cacheCapacityMB=0,filer.path=/ -o child
  • OS version: Oracle Linux Server 8.6
  • weed version output: 30GB 3.14 3c79c77 linux amd64
  • filer.toml
[filer.options]
recursive_delete = false

[leveldb2]
enabled = true
dir = "/opt/seaweedfs/filer"

Expected behavior
https://seaweedfs.slack.com/archives/C9MGUC1UG/p1659665309805189?thread_ts=1659633210.439099&cid=C9MGUC1UG
26 × 10 GB − usedBytes = freeSpace
With replication 010 every file is stored on both nodes, so the raw usage is 70 GB × 2:
260 GB − 70 GB × 2 = 120 GB
So df should display around 120 GB of free space.
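The same expectation as a quick arithmetic sketch (assuming capacity is simply 26 volumes at the configured -volumeSizeLimitMB=10240, and using GiB-based units):

```go
package main

import "fmt"

func main() {
	const volumes = 26                        // 13 per node × 2 nodes
	capacity := uint64(volumes) * 10240 << 20 // -volumeSizeLimitMB=10240 → ~260 GiB raw

	usedPerNode := uint64(70) << 30 // ~70 GB per DataNode per volume.list
	rawUsed := 2 * usedPerNode      // replication 010 stores every file twice

	fmt.Printf("expected free: ~%d GiB\n", (capacity-rawUsed)>>30) // ~120 GiB
}
```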

chrislusf (Collaborator) commented Aug 5, 2022

> volume.list
Topology volumeSizeLimit:10240 MB hdd(volume:26/26 active:24 free:0 remote:0)
  DataCenter dc1 hdd(volume:26/26 active:24 free:0 remote:0)
    Rack rack1 hdd(volume:13/13 active:12 free:0 remote:0)
      DataNode 10.120.214.8:8080 hdd(volume:13/13 active:12 free:0 remote:0)
        Disk hdd(volume:13/13 active:12 free:0 remote:0)
          volume id:1  size:8869762672  collection:"default"  file_count:4603  delete_count:257  deleted_byte_count:28022395  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659664372 
          volume id:2  size:8963930952  collection:"default"  file_count:4687  delete_count:288  deleted_byte_count:13143620  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659664425 
          volume id:3  size:8915219704  collection:"default"  file_count:4654  delete_count:282  deleted_byte_count:21606973  replica_placement:10  version:3  compact_revision:13  modified_at_second:1659664652 
          volume id:4  size:8670459504  collection:"default"  file_count:4519  delete_count:270  deleted_byte_count:25620073  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659664130 
          volume id:5  size:8987026976  collection:"default"  file_count:4682  delete_count:279  deleted_byte_count:19112173  replica_placement:10  version:3  compact_revision:13  modified_at_second:1659664273 
          volume id:6  size:8584370776  collection:"default"  file_count:4204  delete_count:10  deleted_byte_count:18408414  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659657606 
          volume id:7  size:7665903464  collection:"default"  file_count:4159  delete_count:333  deleted_byte_count:61882862  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664596 
          volume id:8  size:7695166072  collection:"default"  file_count:4141  delete_count:276  deleted_byte_count:17676873  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664711 
          volume id:9  size:438300416  collection:"default"  file_count:553  delete_count:314  deleted_byte_count:19849353  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664321 
          volume id:10  size:476265776  collection:"default"  file_count:1002  delete_count:745  deleted_byte_count:25810453  replica_placement:10  version:3  modified_at_second:1659664003 
          volume id:11  size:471117968  collection:"default"  file_count:585  delete_count:326  deleted_byte_count:13748490  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664003 
          volume id:12  size:442692888  collection:"default"  file_count:537  delete_count:296  deleted_byte_count:18879028  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664540 
          volume id:13  size:444947736  collection:"default"  file_count:548  delete_count:287  deleted_byte_count:13378118  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664652 
        Disk hdd total size:70625164904 file_count:38874 deleted_file:3963 deleted_bytes:297138825 
      DataNode 10.120.214.8:8080 total size:70625164904 file_count:38874 deleted_file:3963 deleted_bytes:297138825 
    Rack rack1 total size:70625164904 file_count:38874 deleted_file:3963 deleted_bytes:297138825 
    Rack rack2 hdd(volume:13/13 active:12 free:0 remote:0)
      DataNode 10.120.214.7:8080 hdd(volume:13/13 active:12 free:0 remote:0)
        Disk hdd(volume:13/13 active:12 free:0 remote:0)
          volume id:1  size:8869762672  collection:"default"  file_count:4603  delete_count:257  deleted_byte_count:28022395  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659664372 
          volume id:2  size:8963930952  collection:"default"  file_count:4687  delete_count:288  deleted_byte_count:13143620  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659664425 
          volume id:3  size:8915219704  collection:"default"  file_count:4654  delete_count:282  deleted_byte_count:21606973  replica_placement:10  version:3  compact_revision:13  modified_at_second:1659664652 
          volume id:4  size:8670459504  collection:"default"  file_count:4519  delete_count:270  deleted_byte_count:25620073  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659664130 
          volume id:5  size:8987026864  collection:"default"  file_count:4682  delete_count:279  deleted_byte_count:19112173  replica_placement:10  version:3  compact_revision:13  modified_at_second:1659664273 
          volume id:6  size:8584370720  collection:"default"  file_count:4204  delete_count:10  deleted_byte_count:18408414  replica_placement:10  version:3  compact_revision:12  modified_at_second:1659657606 
          volume id:7  size:7665903464  collection:"default"  file_count:4159  delete_count:333  deleted_byte_count:61882862  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664596 
          volume id:8  size:7695166288  collection:"default"  file_count:4141  delete_count:276  deleted_byte_count:17676873  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664711 
          volume id:9  size:460985216  collection:"default"  file_count:1008  delete_count:765  deleted_byte_count:38309494  replica_placement:10  version:3  modified_at_second:1659664321 
          volume id:10  size:476265776  collection:"default"  file_count:1002  delete_count:745  deleted_byte_count:25810453  replica_placement:10  version:3  modified_at_second:1659664003 
          volume id:11  size:471117968  collection:"default"  file_count:585  delete_count:326  deleted_byte_count:13748490  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664003 
          volume id:12  size:442692888  collection:"default"  file_count:537  delete_count:296  deleted_byte_count:18879028  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664540 
          volume id:13  size:444947736  collection:"default"  file_count:548  delete_count:287  deleted_byte_count:13378118  replica_placement:10  version:3  compact_revision:1  modified_at_second:1659664652 
        Disk hdd total size:70647849752 file_count:39329 deleted_file:4414 deleted_bytes:315598966 
      DataNode 10.120.214.7:8080 total size:70647849752 file_count:39329 deleted_file:4414 deleted_bytes:315598966 
    Rack rack2 total size:70647849752 file_count:39329 deleted_file:4414 deleted_bytes:315598966 
  DataCenter dc1 total size:141273014656 file_count:78203 deleted_file:8377 deleted_bytes:612737791 
total size:141273014656 file_count:78203 deleted_file:8377 deleted_bytes:612737791

chrislusf (Collaborator) commented Aug 5, 2022

Hard to reproduce. Please run weed -v=4 mount ... and show the logs from running the df command.

DENISKI (Author) commented Aug 5, 2022

Nothing more in the output:

[root@nas1 /]# weed -logdir=/tmp/weed_mount_log -v=4 mount -allowOthers=true -dataCenter=dc1 -collection=default -concurrentWriters=128 -readRetryTime=30s -filer='10.120.214.8:8988,10.120.214.7:8988' -cacheDir=/var/cache/seaweedfs/mnt-app -cacheCapacityMB=0 -filer.path=/ -dir=/mnt/app
I0805 04:15:22 23982 config.go:59] Reading security.toml from /etc/seaweedfs/security.toml
mount point owner uid=0 gid=0 mode=drwxr-xr-x
current uid=0 gid=0
I0805 04:15:22 23982 leveldb_store.go:47] filer store dir: /var/cache/seaweedfs/mnt-app/9143d365/meta
I0805 04:15:22 23982 file_util.go:23] Folder /var/cache/seaweedfs/mnt-app/9143d365/meta Permission: -rwxr-xr-x
This is SeaweedFS version 30GB 3.14 3c79c770562ef4f7c0d4e57a88f616eb3671b9cd linux amd64
I0805 04:15:23 23982 weedfs_stats.go:35] reading filer stats: collection:"default"  ttl:"0s"
I0805 04:15:23 23982 weedfs_stats.go:41] read filer stats: total_size:279172874240  used_size:283950338736  file_count:35260
I0805 04:15:44 23982 weedfs_stats.go:35] reading filer stats: collection:"default"  ttl:"0s"
I0805 04:15:44 23982 weedfs_stats.go:41] read filer stats: total_size:279172874240  used_size:283950338736  file_count:35260
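Note that in these logs used_size (283950338736) is larger than total_size (279172874240). If the mount computes free space as an unsigned subtraction, the difference wraps around to nearly 2^64, which is exactly the kind of value behind the -8.0Z. A minimal sketch of the wraparound (illustrative, not the actual weedfs_stats.go code; the 512-byte block size is an assumption):

```go
package main

import "fmt"

func main() {
	const blockSize = 512 // assumed block size, for illustration

	totalSize := uint64(279172874240) // from the log line above
	usedSize := uint64(283950338736)  // used exceeds total

	totalBlocks := totalSize / blockSize
	usedBlocks := usedSize / blockSize

	free := totalBlocks - usedBlocks // unsigned subtraction wraps around
	fmt.Println(free)                // 18446744073700220631, i.e. ~2^64 blocks

	// df's Used column is (Blocks - Bfree) * blockSize; with Bfree wrapped
	// to ~2^64, that is about -2^64 * 512 B = -2^73 B = -8 ZiB, matching
	// the -8.0Z shown by df.
}
```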

chrislusf (Collaborator) commented Aug 5, 2022

What is the df output with these logs?

DENISKI (Author) commented Aug 5, 2022

Once again, both together:

logs:

I0805 04:57:04 33687 config.go:59] Reading security.toml from /etc/seaweedfs/security.toml
mount point owner uid=0 gid=0 mode=drwxr-xr-x
current uid=0 gid=0
I0805 04:57:04 33687 leveldb_store.go:47] filer store dir: /var/cache/seaweedfs/mnt-app/9143d365/meta
I0805 04:57:04 33687 file_util.go:23] Folder /var/cache/seaweedfs/mnt-app/9143d365/meta Permission: -rwxr-xr-x
This is SeaweedFS version 30GB 3.14 3c79c770562ef4f7c0d4e57a88f616eb3671b9cd linux amd64
I0805 04:57:05 33687 weedfs_stats.go:35] reading filer stats: collection:"default"  ttl:"0s"
I0805 04:57:05 33687 weedfs_stats.go:41] read filer stats: total_size:279172874240  used_size:284621673232  file_count:35345

df output:

Filesystem                             Size  Used Avail Use% Mounted on
10.120.214.8:8988+10.120.214.7:8988:/  260G -8.0Z -5.1G 100% /mnt/app
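Here used_size (284621673232) again exceeds total_size (279172874240), so the same wraparound applies. One plausible guard is to clamp the free-space subtraction before filling in the statfs reply; the sketch below is my assumption of a fix, not necessarily what the commit referenced next does:

```go
package main

import "fmt"

// freeBlocks is a hypothetical helper: clamp instead of letting the
// unsigned subtraction wrap when used exceeds total.
func freeBlocks(totalBlocks, usedBlocks uint64) uint64 {
	if usedBlocks >= totalBlocks {
		return 0 // report a full filesystem rather than wrapping to ~2^64
	}
	return totalBlocks - usedBlocks
}

func main() {
	// Block counts derived from the log above, assuming 512-byte blocks.
	fmt.Println(freeBlocks(545259520, 555901705)) // 0 instead of ~2^64
}
```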

martyanov pushed a commit to martyanov/seaweedfs that referenced this issue Aug 5, 2022