
[Bug] Storage information not accurate on BTRFS RAID (Type 5) #980

Open
zwimer opened this issue Dec 27, 2023 · 6 comments
@zwimer

zwimer commented Dec 27, 2023

Description of the bug

I have a BTRFS RAID 5 array.
In the split view: most of the HDDs hit bug #935, but the one that does not reports 47.6 TiB used out of 16 TiB.
In the non-split view: the pie chart shows 47.6 TiB / 67.3 TiB.

The first value pairs the used space of the entire RAID array with the capacity of a single HDD. The second pairs the used space of the entire RAID array with the summed capacity of every HDD in the array.

Neither shows used space against the actual capacity of the RAID array.

I'm not sure how to handle this exactly, since df -Thl itself reports 47.6 TiB / 67.3 TiB; btrfs fi usage /mnt/foo shows the correct total of 49 TiB. But even if this ends up as a won't-fix, the split-view value at least is clearly wrong.
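As a rough sanity check (back-of-the-envelope only, ignoring btrfs metadata overhead): four 18 TB drives give about 72 TB of raw space, and RAID 5 reserves one drive's worth of parity, so the usable data capacity should be roughly three quarters of that:

# Illustrative arithmetic only, using the per-drive size from the log below
raw_bytes=$((4 * 18000207937536))    # ~72 TB raw, roughly what df reports as the filesystem size
usable_bytes=$((raw_bytes * 3 / 4))  # RAID 5 keeps one drive's worth of parity
echo "raw:    ${raw_bytes} bytes"
echo "usable: ${usable_bytes} bytes"  # ~54 TB (~49 TiB), in line with btrfs fi usage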

How to reproduce

  1. Create a BTRFS RAID 5 array using 4 HDDs (a minimal setup is sketched below)
  2. Run dashdot
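
A minimal sketch of step 1 (device names and mount point are placeholders; this wipes the listed devices):

# Create a 4-device btrfs RAID 5 filesystem and mount it
mkfs.btrfs -f -d raid5 -m raid5 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkdir -p /mnt/my_mount
mount /dev/sda /mnt/my_mount    # mounting any member device mounts the whole array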

Relevant log output

dashdot  |   storage: [
dashdot  |     {
dashdot  |       size: 2000398934016,
dashdot  |       disks: [ { device: 'nvme0n1', brand: 'Samsung', type: 'NVMe' } ]
dashdot  |     },
dashdot  |     {
dashdot  |       size: 18000207937536,
dashdot  |       disks: [ { device: 'sda', brand: 'Western Digital', type: 'HD' } ]
dashdot  |     },
dashdot  |     {
dashdot  |       size: 18000207937536,
dashdot  |       disks: [ { device: 'sdb', brand: 'Western Digital', type: 'HD' } ]
dashdot  |     },
dashdot  |     {
dashdot  |       size: 18000207937536,
dashdot  |       disks: [ { device: 'sdc', brand: 'Western Digital', type: 'HD' } ]
dashdot  |     },
dashdot  |     {
dashdot  |       size: 18000207937536,
dashdot  |       disks: [ { device: 'sdd', brand: 'Western Digital', type: 'HD' } ]
dashdot  |     },
dashdot  |     {
dashdot  |       size: 8589934592,
dashdot  |       disks: [ { device: 'zram0', brand: 'zram0', type: 'SSD' } ]
dashdot  |     }
dashdot  |   ],

Info output of dashdot cli

yarn run v1.22.19
$ node dist/apps/cli/main.js info
node:internal/modules/cjs/loader:1080
  throw err;
  ^

Error: Cannot find module 'systeminformation'
Require stack:
- /app/dist/apps/cli/apps/cli/src/main.js
- /app/dist/apps/cli/main.js
    at Module._resolveFilename (node:internal/modules/cjs/loader:1077:15)
    at Module._resolveFilename (/app/dist/apps/cli/main.js:32:36)
    at Module._load (node:internal/modules/cjs/loader:922:27)
    at Module.require (node:internal/modules/cjs/loader:1143:19)
    at require (node:internal/modules/cjs/helpers:121:18)
    at Object.<anonymous> (/app/dist/apps/cli/apps/cli/src/main.js:26:18)
    at Module._compile (node:internal/modules/cjs/loader:1256:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
    at Module.load (node:internal/modules/cjs/loader:1119:32)
    at Module._load (node:internal/modules/cjs/loader:960:12) {
  code: 'MODULE_NOT_FOUND',
  requireStack: [
    '/app/dist/apps/cli/apps/cli/src/main.js',
    '/app/dist/apps/cli/main.js'
  ]
}

Node.js v18.17.1
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

What browsers are you seeing the problem on?

Chrome

Where is your instance running?

Linux Server

Additional context

In docker

@MauriceNino
Owner

Hi there! The minor difference in the values is probably due to the different units used (GiB vs. GB). The split view, however, is definitely wrong.

Can you please update your application and then provide the output of the following command?

docker exec CONTAINER yarn cli raw-data --storage

@zwimer
Author

zwimer commented Jan 6, 2024

yarn run v1.22.19

Output:
const disks =  [
  {
    device: '/dev/sda',
    type: 'HD',
    name: 'WDC WD181KFGX-68',
    vendor: 'Western Digital',
    size: 18000207937536,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0A83',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdb',
    type: 'HD',
    name: 'WDC WD181KFGX-68',
    vendor: 'Western Digital',
    size: 18000207937536,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0A83',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdc',
    type: 'HD',
    name: 'WDC WD181KFGX-68',
    vendor: 'Western Digital',
    size: 18000207937536,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0A83',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdd',
    type: 'HD',
    name: 'WDC WD181KFGX-68',
    vendor: 'Western Digital',
    size: 18000207937536,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0A83',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/nvme0n1',
    type: 'NVMe',
    name: 'Samsung SSD 980 PRO 2TB                 ',
    vendor: 'Samsung',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '',
    serialNum: 'S6B0NL0T615860L',
    interfaceType: 'PCIe',
    smartStatus: 'unknown',
    temperature: null
  }
]
const sizes =  [
  {
    fs: '/dev/nvme0n1p3',
    type: 'btrfs',
    size: 1998694907904,
    used: 236039712768,
    available: 1760865275904,
    use: 11.82,
    mount: '/',
    rw: true
  },
  {
    fs: 'efivarfs',
    type: 'efivarfs',
    size: 262144,
    used: 54272,
    available: 202752,
    use: 21.12,
    mount: '/mnt/host/sys/firmware/efi/efivars',
    rw: false
  },
  {
    fs: '/dev/nvme0n1p2',
    type: 'ext4',
    size: 1020702720,
    used: 388988928,
    available: 561250304,
    use: 40.94,
    mount: '/mnt/host/boot',
    rw: true
  },
  {
    fs: '/dev/nvme0n1p1',
    type: 'vfat',
    size: 627900416,
    used: 18206720,
    available: 609693696,
    use: 2.9,
    mount: '/mnt/host/boot/efi',
    rw: true
  },
  {
    fs: '/dev/sdc1',
    type: 'btrfs',
    size: 72000827555840,
    used: 51600711196672,
    available: 2409158074368,
    use: 95.54,
    mount: '/mnt/host/mnt/cryo',
    rw: true
  }
]
const blocks =  [
  {
    name: 'nvme0n1',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'SSD',
    uuid: '',
    label: '',
    model: 'Samsung SSD 980 PRO 2TB',
    serial: 'S6B0NL0T615860L     ',
    removable: false,
    protocol: 'nvme',
    group: '',
    device: '/dev/nvme0n1'
  },
  {
    name: 'sda',
    type: 'disk',
    fsType: 'btrfs',
    mount: '',
    size: 18000207937536,
    physical: 'HDD',
    uuid: 'a5850a6f-adba-4e34-8fe4-6f191813d5cd',
    label: '',
    model: 'WDC WD181KFGX-68',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: '',
    device: '/dev/sda'
  },
  {
    name: 'sdb',
    type: 'disk',
    fsType: 'btrfs',
    mount: '',
    size: 18000207937536,
    physical: 'HDD',
    uuid: 'a5850a6f-adba-4e34-8fe4-6f191813d5cd',
    label: '',
    model: 'WDC WD181KFGX-68',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: '',
    device: '/dev/sdb'
  },
  {
    name: 'sdc',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 18000207937536,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'WDC WD181KFGX-68',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: '',
    device: '/dev/sdc'
  },
  {
    name: 'sdd',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 18000207937536,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'WDC WD181KFGX-68',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: '',
    device: '/dev/sdd'
  },
  {
    name: 'zram0',
    type: 'disk',
    fsType: 'swap',
    mount: '[SWAP]',
    size: 8589934592,
    physical: 'SSD',
    uuid: '7bcab061-21a3-4241-b6e3-bfe482fa018d',
    label: 'zram0',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: '',
    device: '/dev/zram0'
  },
  {
    name: 'nvme0n1p1',
    type: 'part',
    fsType: 'vfat',
    mount: '/mnt/host/boot/efi',
    size: 629145600,
    physical: '',
    uuid: '3671-6189',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: '',
    device: '/dev/nvme0n1'
  },
  {
    name: 'nvme0n1p2',
    type: 'part',
    fsType: 'ext4',
    mount: '/mnt/host/boot',
    size: 1073741824,
    physical: '',
    uuid: '7b45994d-fe45-406d-b7f8-1ec605f6dcd7',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: '',
    device: '/dev/nvme0n1'
  },
  {
    name: 'nvme0n1p3',
    type: 'part',
    fsType: 'btrfs',
    mount: '/etc/hosts',
    size: 1998694907904,
    physical: '',
    uuid: 'b795618d-1f46-4658-b196-69c7d4348e40',
    label: 'fedora_localhost-live',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: '',
    device: '/dev/nvme0n1'
  },
  {
    name: 'sdc1',
    type: 'part',
    fsType: 'btrfs',
    mount: '/mnt/host/mnt/cryo',
    size: 18000205840384,
    physical: '',
    uuid: 'a5850a6f-adba-4e34-8fe4-6f191813d5cd',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: '',
    device: '/dev/sdc'
  },
  {
    name: 'sdd1',
    type: 'part',
    fsType: 'btrfs',
    mount: '',
    size: 18000205840384,
    physical: '',
    uuid: 'a5850a6f-adba-4e34-8fe4-6f191813d5cd',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: '',
    device: '/dev/sdd'
  }
]

@zwimer
Author

zwimer commented Jan 6, 2024

> I'm not sure how to handle this exactly, since df -Thl itself reports 47.6 TiB / 67.3 TiB; btrfs fi usage /mnt/foo shows the correct total of 49 TiB. But even if this ends up as a won't-fix, the split-view value at least is clearly wrong.

I don't know how to fix this exactly, as it also shows up incorrectly in df. I think that's because this is a RAID array, as mentioned above. The problem is that there are 4 HDDs in a RAID 5 array:

$ btrfs fi df /mnt/my_mount | grep RAID5
Data, RAID5: total=48.96TiB, used=46.79TiB

@MauriceNino
Owner

Okay, I see. Unfortunately, there is not much I can fix right now, because the mounted path is assigned directly to the /dev/sdc drive. The proper fix would be to detect the RAID array correctly, but that is more or less impossible from the given data, because there would normally be a dedicated RAID block in the output to match the data against.

Something like this:

{
  name: 'md0',
  type: 'raid1',
  fsType: 'vfat',
  mount: '/mnt/host/boot/efi',
  size: 268369920,
  physical: '',
  uuid: 'F539-5EE9',
  label: 'archlinux-latest-64-minimal:0',
  model: '',
  serial: '',
  removable: false,
  protocol: '',
  group: '',
},

This block is missing on your system, and I don't know why. I would suggest opening an issue on the systeminformation repository, maintained by @sebhildebrandt. Maybe he can add support for your RAID type.
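
For reference, all member devices of a multi-device btrfs filesystem share a single filesystem UUID (visible as a5850a6f-… on sda, sdb, sdc1 and sdd1 in the blocks output above), so the membership can at least be confirmed on the host with something like the following (the mount point is the placeholder from earlier):

# All members of one btrfs filesystem report the same FS UUID
lsblk -o NAME,FSTYPE,UUID,SIZE,MOUNTPOINT /dev/sda /dev/sdb /dev/sdc /dev/sdd

# btrfs itself lists every device backing the mounted filesystem
btrfs filesystem show /mnt/my_mount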

In the meantime, I would suggest setting the following variables to get a result closer to what you want:

DASHDOT_FS_DEVICE_FILTER: 'sda,sdb,sdd'
DASHDOT_OVERRIDE_STORAGE_SIZES: 'sdc=72000831750144'

It will look like there are only 2 drives in your system (which is sort of correct, although the RAID info is omitted), but they should show the correct sizes.
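
For example, with a plain docker run (the image name and any remaining flags are placeholders for whatever your existing setup already uses):

docker run -d \
  -e DASHDOT_FS_DEVICE_FILTER='sda,sdb,sdd' \
  -e DASHDOT_OVERRIDE_STORAGE_SIZES='sdc=72000831750144' \
  mauricenino/dashdot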

MauriceNino changed the title from "[Bug] DashDot does not handle raid arrays properly" to "[Bug] Storage information not accurate on BTRFS RAID (Type 5)" on Jan 7, 2024
MauriceNino added a commit that referenced this issue Jan 7, 2024
@zwimer
Author

zwimer commented Jan 8, 2024

@MauriceNino I would normally open an issue, but I don't know anything about systeminformation or what it is, so I don't feel comfortable filing a bug report myself. If you open one though, feel free to tag me and I can help by providing output from df -hl or the like.

@MauriceNino
Owner

@zwimer I have created an issue for you. If you could comment there with any information that might help, that would be nice.

What would help:

  • A command to get the exact size and usage of your RAID (in bytes/bits, not human-readable); candidate commands are sketched below
  • A command to get the name/label of your RAID
  • Anything you deem important information
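
If it helps, something along these lines should cover the first two points (using the /mnt/my_mount placeholder from above; the -b flag switches btrfs-progs output to raw bytes):

# Exact size/usage of the filesystem and of each RAID profile, in bytes
btrfs filesystem usage -b /mnt/my_mount
btrfs filesystem df -b /mnt/my_mount

# Label and member devices of the array
btrfs filesystem label /mnt/my_mount
btrfs filesystem show /mnt/my_mount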
