
Option to continue reading at all mount points once #129

Closed

phiresky opened this issue May 4, 2020 · 18 comments

@phiresky
Contributor

phiresky commented May 4, 2020

Currently there is an option in the settings to always continue reading at mount points, and one to continue reading at specific mount points.

It would be nice to also have an option to cross all filesystem boundaries in the current scan (but not by default), for example in the "open directory" dialog, or as a menu option below "continue reading at current mount point".

@shundhammer
Owner

Let me think about this.

@shundhammer
Owner

shundhammer commented May 9, 2020

Okay, so I thought about this.

I do acknowledge that this might be a useful feature for a number of users.

But there are quite a few caveats and problems that need to be overcome:

  • The simple approach won't work because there is just too much junk in the way (see below), so this needs to be restricted to filesystems that meet certain criteria; basically "real" filesystems that are "normally" mounted, excluding stuff like /dev, /proc, /sys and maybe more.

  • Simply always doing it might lead to endless loops; bind mounts come to mind.

  • Some combinations might become downright dangerous to the non-expert user (not even talking about noobs!): Bind-mounts, filesystems mounted multiple times.

  • Btrfs (like always and in every aspect) will need very special treatment: subvolumes vs. snapshots.

  • What about network filesystems such as NFS and Samba (and maybe more)?

@shundhammer
Owner

shundhammer commented May 9, 2020

"Real" vs. Pseudo / System Filesystems

Even traditional Unix systems always had special filesystems like /dev that are really just kernel data structures exported to user space; but Linux kernel developers keep inventing new ones all the time, flooding the output of traditional tools like df with completely useless junk:

[sh @ balrog] ~ 6 % /bin/df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  7,8G     0  7,8G   0% /dev
tmpfs          tmpfs     1,6G  1,5M  1,6G   1% /run
/dev/sdc2      ext4       30G  9,4G   19G  34% /
tmpfs          tmpfs     7,9G   72M  7,8G   1% /dev/shm
tmpfs          tmpfs     5,0M  4,0K  5,0M   1% /run/lock
tmpfs          tmpfs     7,9G     0  7,9G   0% /sys/fs/cgroup
/dev/sdc3      ext4       30G  2,0G   26G   8% /ssd-free-root
/dev/sdc4      ext4      168G   38G  130G  23% /ssd-work
/dev/sda1      fuseblk    98G   63G   36G  64% /win/boot
/dev/sdb3      ext4       30G  9,1G   19G  33% /alternate-root
/dev/sdb2      ext4       30G  7,7G   21G  28% /old-root
/dev/sda2      fuseblk   834G  153G  682G  19% /win/app
/dev/sdb5      ext4      856G  250G  607G  30% /work
tmpfs          tmpfs     1,6G  8,0K  1,6G   1% /run/user/1000

Yikes. What a mess.

[sh @ balrog] ~ 7 % /bin/df -x tmpfs -x devtmpfs -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc2        30G  9,4G   19G  34% /
/dev/sdc3        30G  2,0G   26G   8% /ssd-free-root
/dev/sdc4       168G   38G  130G  23% /ssd-work
/dev/sda1        98G   63G   36G  64% /win/boot
/dev/sdb3        30G  9,1G   19G  33% /alternate-root
/dev/sdb2        30G  7,7G   21G  28% /old-root
/dev/sda2       834G  153G  682G  19% /win/app
/dev/sdb5       856G  250G  607G  30% /work

This is what I want to see: The real filesystems on my real disks.

And the df command already mercifully excludes some of the junk that is reported in /proc/mounts and /etc/mtab:

[sh @ balrog] ~ 27 % column -t /proc/mounts                    
sysfs        /sys                             sysfs            rw,nosuid,nodev,noexec,relatime                                                0  0
proc         /proc                            proc             rw,nosuid,nodev,noexec,relatime                                                0  0
udev         /dev                             devtmpfs         rw,nosuid,relatime,size=8169124k,nr_inodes=2042281,mode=755                    0  0
devpts       /dev/pts                         devpts           rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000                          0  0
tmpfs        /run                             tmpfs            rw,nosuid,noexec,relatime,size=1638392k,mode=755                               0  0
/dev/sdc2    /                                ext4             rw,relatime,data=ordered                                                       0  0
securityfs   /sys/kernel/security             securityfs       rw,nosuid,nodev,noexec,relatime                                                0  0
tmpfs        /dev/shm                         tmpfs            rw,nosuid,nodev                                                                0  0
tmpfs        /run/lock                        tmpfs            rw,nosuid,nodev,noexec,relatime,size=5120k                                     0  0
tmpfs        /sys/fs/cgroup                   tmpfs            ro,nosuid,nodev,noexec,mode=755                                                0  0
cgroup       /sys/fs/cgroup/unified           cgroup2          rw,nosuid,nodev,noexec,relatime,nsdelegate                                     0  0
cgroup       /sys/fs/cgroup/systemd           cgroup           rw,nosuid,nodev,noexec,relatime,xattr,name=systemd                             0  0
pstore       /sys/fs/pstore                   pstore           rw,nosuid,nodev,noexec,relatime                                                0  0
cgroup       /sys/fs/cgroup/net_cls,net_prio  cgroup           rw,nosuid,nodev,noexec,relatime,net_cls,net_prio                               0  0
cgroup       /sys/fs/cgroup/pids              cgroup           rw,nosuid,nodev,noexec,relatime,pids                                           0  0
cgroup       /sys/fs/cgroup/perf_event        cgroup           rw,nosuid,nodev,noexec,relatime,perf_event                                     0  0
cgroup       /sys/fs/cgroup/rdma              cgroup           rw,nosuid,nodev,noexec,relatime,rdma                                           0  0
cgroup       /sys/fs/cgroup/cpuset            cgroup           rw,nosuid,nodev,noexec,relatime,cpuset                                         0  0
cgroup       /sys/fs/cgroup/memory            cgroup           rw,nosuid,nodev,noexec,relatime,memory                                         0  0
cgroup       /sys/fs/cgroup/freezer           cgroup           rw,nosuid,nodev,noexec,relatime,freezer                                        0  0
cgroup       /sys/fs/cgroup/cpu,cpuacct       cgroup           rw,nosuid,nodev,noexec,relatime,cpu,cpuacct                                    0  0
cgroup       /sys/fs/cgroup/devices           cgroup           rw,nosuid,nodev,noexec,relatime,devices                                        0  0
cgroup       /sys/fs/cgroup/hugetlb           cgroup           rw,nosuid,nodev,noexec,relatime,hugetlb                                        0  0
cgroup       /sys/fs/cgroup/blkio             cgroup           rw,nosuid,nodev,noexec,relatime,blkio                                          0  0
systemd-1    /proc/sys/fs/binfmt_misc         autofs           rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=336   0  0
debugfs      /sys/kernel/debug                debugfs          rw,relatime                                                                    0  0
mqueue       /dev/mqueue                      mqueue           rw,relatime                                                                    0  0
hugetlbfs    /dev/hugepages                   hugetlbfs        rw,relatime,pagesize=2M                                                        0  0
configfs     /sys/kernel/config               configfs         rw,relatime                                                                    0  0
fusectl      /sys/fs/fuse/connections         fusectl          rw,relatime                                                                    0  0
/dev/sdc3    /ssd-free-root                   ext4             rw,relatime,data=ordered                                                       0  0
/dev/sdc4    /ssd-work                        ext4             rw,relatime,data=ordered                                                       0  0
/dev/sda1    /win/boot                        fuseblk          rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096  0  0
/dev/sdb3    /alternate-root                  ext4             rw,relatime,data=ordered                                                       0  0
/dev/sdb2    /old-root                        ext4             rw,relatime,data=ordered                                                       0  0
/dev/sda2    /win/app                         fuseblk          rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096  0  0
/dev/sdb5    /work                            ext4             rw,relatime,data=ordered                                                       0  0
binfmt_misc  /proc/sys/fs/binfmt_misc         binfmt_misc      rw,relatime                                                                    0  0
tmpfs        /run/user/1000                   tmpfs            rw,nosuid,nodev,relatime,size=1638392k,mode=700,uid=1000,gid=1000              0  0
gvfsd-fuse   /run/user/1000/gvfs              fuse.gvfsd-fuse  rw,nosuid,nodev,relatime,user_id=1000,group_id=1000                            0  0

There might be some people who find that useful; I am not among them. And neither is the vast majority of Linux users.

And this output is without using en-vogue technologies like docker containers and shrink-wrap-all-the-world package formats such as snap or flatpak. They all tend to multiply this mess; they get very creative with bind-mounts and with mounting the same filesystem at multiple mount points.
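
Technically, that filtering boils down to reading /proc/mounts and skipping the well-known pseudo filesystem types. A minimal sketch using getmntent() (the type list is an assumption and certainly incomplete; this is not QDirStat's actual code):

```cpp
#include <mntent.h>
#include <cstdio>
#include <set>
#include <string>
#include <vector>

struct MountEntry
{
    std::string device;
    std::string mountPoint;
    std::string type;
};

// Return only the "real" filesystems from /proc/mounts, skipping well-known
// pseudo filesystem types. The type list is an assumption and incomplete
// (overlay, squashfs for snaps, etc. would need to be considered, too).
std::vector<MountEntry> realFilesystems()
{
    static const std::set<std::string> pseudoTypes = {
        "proc", "sysfs", "devtmpfs", "devpts", "tmpfs", "cgroup", "cgroup2",
        "securityfs", "pstore", "autofs", "debugfs", "mqueue", "hugetlbfs",
        "configfs", "fusectl", "binfmt_misc", "fuse.gvfsd-fuse"
    };

    std::vector<MountEntry> result;
    FILE * mtab = setmntent( "/proc/mounts", "r" );
    if ( ! mtab )
        return result;

    while ( struct mntent * mnt = getmntent( mtab ) )
    {
        if ( pseudoTypes.count( mnt->mnt_type ) == 0 )
            result.push_back( { mnt->mnt_fsname, mnt->mnt_dir, mnt->mnt_type } );
    }

    endmntent( mtab );
    return result;
}
```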

@shundhammer
Owner

shundhammer commented May 9, 2020

Bind Mounts

Linux supports the concept of mounting parts of an already mounted filesystem to another path with the -o bind or --bind options to the mount command.

This is very useful in many cases, but it has caveats: what used to be a directory tree starting at the root directory may become a general graph that contains cycles:

[sh @ balrog] /tmp/test % tree
.
└── foo
    └── bar
        └── some
            ├── other
            └── where


[sh @ balrog] /tmp/test 6 % sudo mount -o bind /tmp/test/foo/bar /tmp/test/foo/bar/some/other

[sh @ balrog] /tmp/test % tree
.
└── foo
    └── bar
        └── some
            ├── other
            │   └── some
            │       ├── other
            │       └── where
            └── where

[sh @ balrog] /tmp/test % mount | grep /tmp
/dev/sdc2 on /tmp/test/foo/bar/some/other type ext4 (rw,relatime,data=ordered)

[sh @ balrog] /tmp/test % ls -lR
.:
total 4
drwxrwxr-x 3 sh sh 4096 Mai  9 14:01 foo

./foo:
total 4
drwxrwxr-x 3 sh sh 4096 Mai  9 14:01 bar

./foo/bar:
total 4
drwxrwxr-x 4 sh sh 4096 Mai  9 14:02 some

./foo/bar/some:
total 8
drwxrwxr-x 3 sh sh 4096 Mai  9 14:01 other
drwxrwxr-x 2 sh sh 4096 Mai  9 14:01 where
/bin/ls: ./foo/bar/some/other: not listing already-listed directory

./foo/bar/some/where:
total 0

[sh @ balrog] /tmp/test % find . -type d
.
./foo
./foo/bar
./foo/bar/some
find: File system loop detected; ‘./foo/bar/some/other’ is part of the same file system loop as ‘./foo/bar’.
./foo/bar/some/where

So even traditional file utilities like ls and find have checks for just this case.
And I just found out that QDirStat is actually missing such a check.

Neither of those tools does an endless recursion (which was my first suspicion), but it's still an awkward case with awkward handling.

Should QDirStat really follow such bind-mounts? I have serious doubts.
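
For reference, the loop check that ls and find perform essentially boils down to remembering the (device, inode) pair of every directory that has already been visited and refusing to descend into a directory whose pair was seen before. A minimal sketch of that idea (an assumption about one possible implementation, not QDirStat code):

```cpp
#include <sys/stat.h>
#include <set>
#include <string>
#include <utility>

// Remembers every directory already entered; bind mounts and multi-mounts
// then show up as duplicates instead of being read a second time.
class DirLoopGuard
{
public:
    // Returns false if this directory was already visited (filesystem loop)
    // or cannot be stat'ed; true if it is safe to descend into it.
    bool enter( const std::string & path )
    {
        struct stat st;
        if ( lstat( path.c_str(), &st ) != 0 )
            return false;

        return _visited.insert( { st.st_dev, st.st_ino } ).second;
    }

private:
    std::set< std::pair<dev_t, ino_t> > _visited;
};
```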

@shundhammer
Owner

shundhammer commented May 9, 2020

Filesystem Mounted Multiple Times

This is very similar to the "bind mounts" scenario.

On traditional Unix systems this was strictly forbidden, but Linux allows mounting the same filesystem at multiple different mount points at the same time.

[root @ balrog] /mnt # df .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1        58G   52M   58G   1% /mnt

[root @ balrog] /mnt # tree
.
├── foo
│   └── bar
│       └── some
│           └── where
└── lost+found

5 directories, 0 files
[root @ balrog] /mnt # sudo mount /dev/sdd1 /mnt/foo/bar/some/where

[root @ balrog] /mnt # tree
.
├── foo
│   └── bar
│       └── some
│           └── where
│               ├── foo
│               │   └── bar
│               │       └── some
│               │           └── where
│               └── lost+found
└── lost+found

10 directories, 0 files
[root @ balrog] /mnt # ls -lR
.:
total 20
drwxr-xr-x 3 root root  4096 Mai  9 14:24 foo
drwx------ 2 root root 16384 Apr 30 19:36 lost+found

./foo:
total 4
drwxr-xr-x 3 root root 4096 Mai  9 14:24 bar

./foo/bar:
total 4
drwxr-xr-x 3 root root 4096 Mai  9 14:24 some

./foo/bar/some:
total 4
drwxr-xr-x 4 root root 4096 Mai  9 14:24 where
/bin/ls: ./foo/bar/some/where: not listing already-listed directory

./lost+found:
total 0
[root @ balrog] /mnt 8 # find . -type d
.
./lost+found
./foo
./foo/bar
./foo/bar/some
find: File system loop detected; ‘./foo/bar/some/where’ is part of the same file system loop as ‘.’.

It might be challenging to find out which mount point is the primary one, and which ones are just secondary. In this example it's simple because the filesystem is mounted again onto itself; in other cases, where completely separate mount points are used, is it really possible to tell which one is the primary one and which others are just secondary? Or are they all created equal?

When a filesystem /dev/sdx1 is mounted to both /home/foo/data and /home/bar/data, what is QDirStat supposed to do when an "always read mounted filesystems" option is checked? Read it only the first time (with the size sums cascading up just that one subtree)? Read it every time it is found and distort the sum for /home?

If there is a concept like a primary mount point, and that would be /home/bar in this example, and QDirStat starts reading at /home/foo and not read /home/bar at all, would it read or ignore /home/foo/data?

In the general case, it would not be easy to tell if a primary mount point is even part of the current directory tree to read. Just imagine one more filesystem between the initial filesystem and this multi-mount filesystem. Yes, it's a pathological case, but it's the pathological cases that create problems.

Whichever way is chosen, it will be very confusing to the normal user.
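
At least detecting that a filesystem is mounted more than once (or bind-mounted somewhere else, which shows up the same way in /proc/mounts) is straightforward: the same source device appears with several mount points. Which of them is the "primary" one is exactly the open question above; the sketch below (an assumption, not QDirStat code) simply treats the first one in mount order as primary.

```cpp
#include <mntent.h>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// Group the mount points in /proc/mounts by their source device.
// Any device with more than one mount point is mounted multiple times
// (or bind-mounted); the entries are in mount order.
std::map< std::string, std::vector<std::string> > mountPointsByDevice()
{
    std::map< std::string, std::vector<std::string> > result;
    FILE * mtab = setmntent( "/proc/mounts", "r" );
    if ( ! mtab )
        return result;

    while ( struct mntent * mnt = getmntent( mtab ) )
    {
        const std::string device( mnt->mnt_fsname );

        if ( device.rfind( "/dev/", 0 ) == 0 )      // Only real block devices
            result[ device ].push_back( mnt->mnt_dir );
    }

    endmntent( mtab );
    return result;      // result[ dev ].size() > 1  =>  mounted multiple times
}
```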

@shundhammer
Owner

shundhammer commented May 9, 2020

QDirStat Bug: Wrong Sums for Bind-Mounts and Multi-Mounts

There is no endless recursion (no matter whether or not the "cross filesystems" option is set), but the same files and directories are summed up several times, so the result is greatly distorted.

This is clearly a bug that should be fixed; I am just not sure if that is realistically possible: The current check whether a directory is a mount point compares the major and minor device numbers of the parent and the newly found child directory; if they differ, the child is very likely a mount point.

Maybe that check is just too simplistic, and QDirStat first needs to find out what mount points are known to the system (checking /proc/mounts and/or /etc/mtab) and then, while reading the directory tree, check each directory it finds against that list of mount points.
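
A minimal sketch of that combined check (an assumption, not the actual fix): keep the device-number comparison, but also look each directory up in the set of mount points known from /proc/mounts.

```cpp
#include <mntent.h>
#include <sys/stat.h>
#include <cstdio>
#include <set>
#include <string>

// All mount points currently known to the system.
std::set<std::string> knownMountPoints()
{
    std::set<std::string> result;

    if ( FILE * mtab = setmntent( "/proc/mounts", "r" ) )
    {
        while ( struct mntent * mnt = getmntent( mtab ) )
            result.insert( mnt->mnt_dir );

        endmntent( mtab );
    }

    return result;
}

// A directory is treated as a mount point if its device differs from its
// parent's device or if its path is listed in /proc/mounts.
bool isMountPoint( const std::string           & dirPath,
                   dev_t                         parentDev,
                   const std::set<std::string> & mountPoints )
{
    struct stat st;
    if ( lstat( dirPath.c_str(), &st ) != 0 )
        return false;

    return st.st_dev != parentDev || mountPoints.count( dirPath ) > 0;
}
```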

@shundhammer
Owner

Always Continue Reading at Bind-Mounts and Multi-Mounts?

The feature request this issue is about is to allow always continuing to read when a mount point is found. For bind mounts and filesystems mounted multiple times, however, this would be a bad idea: it would force the sums to be distorted.

@shundhammer
Owner

shundhammer commented May 9, 2020

Cleanup Actions and Bind-Mounts and Multi-Mounts

Recursive cleanup actions might also be a problem; they may or may not work properly if started from a subtree that includes any mounted filesystem.

It may become worse if users bind-mount system directories. In a project some 10 years ago I saw developers (not noobs! system-level developers!) wrecking their systems because the scratchbox development environment used for the project had mounted parts of /usr into the scratchbox sandbox, and a desperate rm -rf command with root privileges (to free sorely needed disk space) wreaked havoc on the host system outside that scratchbox. It was just a chroot jail, not anything sophisticated like today's docker containers, so the root privileges did actually affect the host system.

However, this is a general problem, independent of always continuing to read at mount points or not.

@shundhammer
Owner

shundhammer commented May 9, 2020

So, where does this leave us?

I am pretty sympathetic to the feature request in general, but it's a lot less simple than one might imagine. This needs to be restricted to the "reasonable" cases:

  • Only for normal filesystems with normal mount options
  • No system directories like /dev, /proc, /sys
  • No bind mounts
  • No multiple mounts (only the primary mount point)
  • To be clarified: What about network filesystems?

I also never liked that initial directory selection box very much. Personally, I always supply the path to be scanned on the command line (even more so with pkg:/ or unpkg:/ URLs). But that doesn't scale for pure desktop users.

It might actually be time to greatly improve that initial selection: Away from the predefined Qt directory selection dialog and towards a more dedicated dialog that gives the user the useful choices.

Those would include:

  • His home directory (which is the natural thing to start with)
  • The root directory
  • The other "real" filesystems
  • A "browse" button that would open the general directory selection dialog that is used now
  • Maybe a simple input field where I can type a path (with tab completion, of course) or use copy & paste from other windows on my desktop
  • A checkbox "[ ] cross filesystem boundaries" (default: off)
  • Maybe a checkbox "[ ] check mounted network filesystems" (default: off)

The checkboxes would affect only the current program run. (A rough sketch of such a dialog follows below.)

Not sure, but maybe also several tabs at the top to make the other views more accessible:

  • Directories
  • Packages
  • Unpackaged Files
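
Just to make that dialog idea a bit more concrete, here is a rough Qt sketch (purely illustrative: all widget names and the layout are made up and are not the eventual implementation):

```cpp
#include <QDialog>
#include <QVBoxLayout>
#include <QComboBox>
#include <QCheckBox>
#include <QPushButton>
#include <QDialogButtonBox>
#include <QFileDialog>
#include <QDir>

// Illustrative sketch only; not QDirStat's actual dialog.
class OpenDirDialog: public QDialog
{
public:
    OpenDirDialog( QWidget * parent = nullptr ): QDialog( parent )
    {
        QVBoxLayout * layout = new QVBoxLayout( this );

        _path = new QComboBox( this );
        _path->setEditable( true );                     // Type or paste a path
        _path->addItem( QDir::homePath() );             // The natural starting point
        _path->addItem( "/" );                          // The root directory
        layout->addWidget( _path );

        _crossFilesystems   = new QCheckBox( "Cross filesystem boundaries", this );
        _networkFilesystems = new QCheckBox( "Check mounted network filesystems", this );
        layout->addWidget( _crossFilesystems );         // Default: off
        layout->addWidget( _networkFilesystems );       // Default: off

        QPushButton * browse = new QPushButton( "Browse...", this );
        connect( browse, &QPushButton::clicked, this, [this]()
        {
            const QString dir = QFileDialog::getExistingDirectory( this, "Select Directory" );
            if ( ! dir.isEmpty() )
                _path->setEditText( dir );
        } );
        layout->addWidget( browse );

        QDialogButtonBox * buttons =
            new QDialogButtonBox( QDialogButtonBox::Ok | QDialogButtonBox::Cancel, this );
        connect( buttons, &QDialogButtonBox::accepted, this, &QDialog::accept );
        connect( buttons, &QDialogButtonBox::rejected, this, &QDialog::reject );
        layout->addWidget( buttons );
    }

private:
    QComboBox * _path;
    QCheckBox * _crossFilesystems;
    QCheckBox * _networkFilesystems;
};
```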

@shundhammer
Owner

shundhammer commented May 9, 2020

Btrfs

Btrfs is ~~one big mess~~ pretty complex.

Not only does it include the functionality of LVM and RAID; it also has subvolumes and snapshots. And, as the biggest complication, it shares disk blocks between them, so you never quite know what a size reported by Btrfs actually means.

Subvolumes are in wide use to get different mount options for different subtrees on the same Btrfs volume (in particular, read-only and copy-on-write). That by itself would not be a big problem, but each subvolume also having its own separate device major and minor number is; that has made it quite hard for QDirStat to figure out what is a genuinely mounted filesystem and what is just a subvolume.

Snapshots are a built-in backup (kind of) to get back to a previous state. This is useful for system updates or for manual operations by the admin: If any of them turns out to have bad effects, it is possible to do a rollback to a previous snapshot.

Snapshots are the typical application for shared disk blocks: Creating a snapshot means adding a reference to each of the disk blocks in the current system to the snapshot and increasing the usage count of that block. Btrfs has built-in CoW (copy on write), so a write access to the filesystem means the disk block is at that moment duplicated and detached from the previous version in the snapshot.

While this leads to very efficient storage of an entire filesystem in snapshots, it leads to a huge administrative mess when it comes to calculating the real disk usage: Outside the Btrfs kernel module it is impossible to find out what disk blocks are shared between the live filesystem and any snapshots.

The values that Btrfs returns to syscalls like statfs() and thus to common tools like df are completely bogus. Btrfs comes with its own set of tools to obtain that kind of information, and good luck trying to make any sense of their output. They are not even consistent among themselves; each of them requires heavy interpretation of what is what and how to understand the numbers.

While it is simple for QDirStat to traverse a directory tree even on Btrfs, the trouble starts with determining what is a subvolume and what is a real mount point: while a subvolume should be read, anything else probably should not be.

Adding even more mount point magic to the mix will probably break this fragile construct in some way or another. So for Btrfs, this will need to remain simple.
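
For what it's worth, the mount table at least makes subvolumes recognizable: a mounted Btrfs subvolume carries a subvol= (and subvolid=) option in /proc/mounts. A minimal sketch (an assumption, not QDirStat's actual logic); note that on some distributions even the root filesystem is mounted with subvol=/ or subvol=/@, so "has a subvol option" alone does not mean "not a real mount point":

```cpp
#include <mntent.h>
#include <cstdio>
#include <cstring>
#include <string>

// True if the given mount point is a btrfs mount with a subvol= option.
bool isBtrfsSubvolMount( const std::string & mountPoint )
{
    bool result = false;
    FILE * mtab = setmntent( "/proc/mounts", "r" );
    if ( ! mtab )
        return false;

    while ( struct mntent * mnt = getmntent( mtab ) )
    {
        if ( mountPoint == mnt->mnt_dir            &&
             strcmp( mnt->mnt_type, "btrfs" ) == 0 &&
             hasmntopt( mnt, "subvol" ) != nullptr )
        {
            result = true;
            break;
        }
    }

    endmntent( mtab );
    return result;
}
```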

@shundhammer
Owner

shundhammer commented May 9, 2020

Network Filesystems

Network filesystems such as NFS and Samba (and maybe more) may become a problem: A user scanning his root filesystem and then using the "cross filesystems" option might be blissfully unaware that he is killing the performance of the network and the NFS / Samba servers every time he does that.

Such shared filesystems tend to be very large (which is the whole point of having them on central servers rather than distributed on each desktop client), and scanning them completely with a tool like QDirStat will put a huge strain on shared resources such as network bandwidth and file server I/O.

If anybody actually wants to do that, he should explicitly request it. The default for scanning network filesystems should be "disabled".

Not sure; maybe this should even be a separate setting / checkbox / context menu selection.
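
Detecting them is at least easy, since the filesystem type in /proc/mounts gives them away. A minimal sketch (the type list is an assumption and would need review; sshfs, for example, shows up as "fuse.sshfs"):

```cpp
#include <set>
#include <string>

// True if the filesystem type (as reported in /proc/mounts) is a network
// filesystem. The list is an assumption and intentionally conservative.
bool isNetworkFilesystem( const std::string & fsType )
{
    static const std::set<std::string> networkTypes =
        { "nfs", "nfs4", "cifs", "smbfs", "fuse.sshfs" };

    return networkTypes.count( fsType ) > 0;
}
```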

@shundhammer
Owner

Rather than relying on checkboxes before scanning directories, a different approach just came to my mind:

Maybe when a mounted filesystem is found, open some kind of notification (a non-modal pop-up? A separate area in the main window?) collecting those filesystems so that

  • the user is even aware that there are mounted filesystems in that subtree
  • the user can see what they are (including device name and partition size)
  • the user can wait until the directory tree without those mounted filesystems is completely read and then decide if the mounted filesystems should be scanned

Maybe something like

Mounted filesystems in /work/some/where:                                     [x]

[ ] /dev/sdc1 (500 GB, 130 GB used) mounted at /work/some/where/foo/bar
[ ] /dev/sdc2 (256 GB, 80 GB used) mounted at /work/some/where/else/whatever
[ ] myserver:/shares/stuff (16 TB, 12.3 TB used) mounted at /work/some/where/joe/stuff

[Read selected] [Read all]

This selection would not contain any "weird" filesystems: No system filesystems like /dev, /proc, /sys, no bind mounts; and multiply mounted filesystems should only appear once, preferably with their primary mount point (if a primary mount point can be obtained).
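
A rough sketch of the bookkeeping this would need during tree reading (the names and the statvfs()-based size reporting are assumptions, not an actual design):

```cpp
#include <sys/statvfs.h>
#include <string>
#include <vector>

// A mounted filesystem that was found (but not read) while scanning a subtree.
struct FoundMount
{
    std::string device;        // e.g. /dev/sdc1 or myserver:/shares/stuff
    std::string mountPoint;    // where it was found in the scanned subtree
    unsigned long long totalBytes = 0;
    unsigned long long usedBytes  = 0;
};

// Collects mounted filesystems during the scan so they can be presented to
// the user afterwards instead of being read right away.
class MountCollector
{
public:
    void add( const std::string & device, const std::string & mountPoint )
    {
        FoundMount found;
        found.device     = device;
        found.mountPoint = mountPoint;

        struct statvfs fs;
        if ( statvfs( mountPoint.c_str(), &fs ) == 0 )
        {
            found.totalBytes = (unsigned long long) fs.f_blocks * fs.f_frsize;
            found.usedBytes  = found.totalBytes -
                               (unsigned long long) fs.f_bfree * fs.f_frsize;
        }

        _found.push_back( found );
    }

    const std::vector<FoundMount> & found() const { return _found; }

private:
    std::vector<FoundMount> _found;
};
```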

@phiresky
Contributor Author

Great discussion - I don't think I have much to add. Adding buttons to the UI is a good idea; I didn't even know pkg: existed, and I also usually invoke qdirstat from the shell.

By the way, rmlint may be of some inspiration here: It's also a tool that recursively scans directories as quickly as possible.

Before even starting, they do a scan through all mount points in the system (see https://github.com/sahib/rmlint/blob/3a7d52db5d3ddf82b00e45ea3ead69e8e413c725/lib/utilities.c#L599) in order to do the following things:

  • This information is also used to control the number of threads: for all filesystems that are part of the same disk and that are rotational, only one scan thread is started; otherwise multiple threads are started per underlying disk, etc. (see the sketch at the end of this comment).

  • They also use fiemaps to be able to order disk read operations by their actual location on disk, for performance.

just random debug output from it:

DEBUG: Filesystem /sys/firmware/efi/efivars: not reflink capable
DEBUG: Filesystem /run: not reflink capable
DEBUG: `devtmpfs` mount detected at /dev (#6); Ignoring all files in it.
DEBUG: Filesystem /dev: not reflink capable
DEBUG: Filesystem /sys: not reflink capable
DEBUG: `proc` mount detected at /proc (#5); Ignoring all files in it.
DEBUG: Filesystem /proc: not reflink capable
DEBUG: 08:33                                        /mnt/backup -> 08:32 /dev/sdc1    (underlying disk: sdc; rotational: yes)
DEBUG: 08:17                                              /data -> 08:16 /dev/sdb1    (underlying disk: sdb; rotational: yes)
DEBUG: 00:58                           /sys/fs/fuse/connections -> 00:00 fusectl      (underlying disk: unknown; rotational: yes)
DEBUG: 00:57                                /run/user/1000/gvfs -> 00:00 gvfsd-fuse   (underlying disk: unknown; rotational: yes)
DEBUG: 00:56                                     /run/user/1000 -> 00:56 tmpfs        (underlying disk: tmpfs; rotational:  no)
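
For reference, the rotational flag mentioned above can be read from sysfs; a minimal sketch (not rmlint's actual code):

```cpp
#include <fstream>
#include <string>

// Whether a block device (e.g. "sdc") is a spinning disk, according to
// /sys/block/<device>/queue/rotational ("1" = rotational, "0" = SSD/NVMe).
bool isRotational( const std::string & blockDevice )
{
    std::ifstream file( "/sys/block/" + blockDevice + "/queue/rotational" );
    char flag = '1';                     // Assume rotational if unknown
    file >> flag;

    return flag != '0';
}
```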

@phiresky
Contributor Author

By the way, in your above list of dangerous filesystems you're missing FUSE, which can just do whatever it wants, e.g. return infinite lists of files; at the very least, a borg fuse mount contains many copies of a filesystem state, just like btrfs snapshots.

And then, of course, someone might have put a regex in their FAT filesystem; good luck traversing that: https://github.com/8051Enthusiast/regex2fat

@shundhammer
Owner

Okay, here is the first working version; please check.

As mentioned before, this also needed a revamp of the "Open Directory" dialog:

[Screenshot: QDirStat-open-dir]

I was never very fond of the Qt standard directory selection dialog anyway.

Not only does this one have the new "Cross Filesystems" checkbox (it applies to one program run and stays the same during that whole run); it also lists the mounted filesystems much more prominently: that "Places" list on the left is completely new.

It lists only the normal filesystems (including network filesystems such as NFS or CIFS/Samba): no system mounts like /dev, /proc, /sys, no bind mounts, and no filesystems mounted multiple times. And the user's home directory, of course.

I am not yet completely happy with the sort order; it should be the mount order, which should be the same as in /etc/fstab, but this does not seem to be too reliable yet.

The idea of listing the mounted filesystems so prominently is that this is where you typically need to check the disk usage, not on any random directory in the middle of a filesystem.

You can still do that, of course; and it even starts with the current directory. That combo box with the path also has auto-completion for valid paths on the filesystem. And there is still the "Browse..." button that opens the normal Qt directory selection dialog for those (three or four people) who like it. 😄

@shundhammer
Owner

shundhammer commented May 26, 2020

Looks like the sort order is indeed by mount order, but systemd starts multiple mounts in parallel, so it depends on which filesystem is faster; the result isn't always the same from one system boot to the next.

Whatever.

@shundhammer
Owner

shundhammer commented May 28, 2020

New and better "Open Directory" dialog:

[Screenshot: QDirStat-open-dir]

The combo box no longer has autocompletion (this turned out to be really confusing), but all three widgets now stay in sync with one another: the combo box, the places on the left, and the directory tree. As you type, the corresponding node in the tree is selected. When you click in the tree, the places on the left always show the filesystem that you are on. When you click on a filesystem in the "Places" bar, you go to the mount point of that filesystem.

But you can still type (and copy & paste!); the input is still validated (i.e. the "OK" button only becomes active when there is a valid path in the combo box), and the combo box items show the parent directories of the current path.

Also note the "Up" button, which takes you one directory level up.

@shundhammer
Owner

shundhammer commented Mar 5, 2021

@shawwn wrote:

I'm trying to use qdirstat with gcsfuse to visualize the size of a rather massive cloud bucket. Unfortunately all I see is this:
...
I'd like to request whatever feature is required to enable this. Network filesystems perhaps? Not sure.

DO NOT HIJACK AN EXISTING ISSUE FOR SOMETHING COMPLETELY DIFFERENT.

Deleting.

Seriously, I am working my ass off to keep everything well-organized, well-documented and easy to understand, and people can't be bothered to do the most basic things (that would require one or two mouse clicks) and just dump their problem-of-the-hour into a completely unrelated issue? No way.

Repository owner deleted a comment from shawwn Mar 5, 2021
Repository owner deleted a comment from shawwn Mar 5, 2021