[bug:1702043] Newly created files are inaccessible via FUSE #906

Closed
gluster-ant opened this issue Mar 12, 2020 · 25 comments
Labels
Migrated Type:Bug wontfix Managed by stale[bot]

Comments

@gluster-ant
Collaborator

URL: https://bugzilla.redhat.com/1702043
Creator: bio.erikson at gmail
Time: 20190422T19:45:11

Description of problem:
Newly created files/dirs will be inaccessible to the local FUSE mount after file IO is completed.

I recently started to experience this problem after upgrading to Gluster 6.0; it did not occur before the upgrade.
I have two nodes running glusterfs, each with a FUSE mount pointed to localhost.

#/etc/fstab
localhost:/gv0 /data/ glusterfs lru-limit=0,defaults,_netdev,acl 0 0
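For reference, a manual mount with the same options would presumably look something like:

mount -t glusterfs -o acl,lru-limit=0 localhost:/gv0 /data/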

I have run into this problem with rsync, random file creation with dd, and mkdir/touch. I have noticed that files are accessible while being written to, and become inaccessible once the file IO is complete. It usually happens in 'chunks' of sequential files. After some period of time (>15 min) the problem resolves itself. The files on the local bricks list fine with ls, and the problematic files/dirs are accessible via FUSE mounts on other machines. Heal doesn't report any problems. Small-file workloads seem to make the problem worse. Overwriting existing files does not seem to produce problematic files.

Gluster Info
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: ...
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
...
Options Reconfigured:
cluster.self-heal-daemon: enable
server.ssl: on
client.ssl: on
auth.ssl-allow: *
transport.address-family: inet
nfs.disable: on
user.smb: disable
performance.write-behind: on
diagnostics.latency-measurement: off
diagnostics.count-fop-hits: off
cluster.lookup-optimize: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.nl-cache: on
cluster.readdir-optimize: on
storage.build-pgfid: off
diagnostics.brick-log-level: ERROR
diagnostics.brick-sys-log-level: ERROR
diagnostics.client-log-level: ERROR

Client Log
The FUSE log is flooded with:

[2019-04-22 19:12:39.231654] D [MSGID: 0] [io-stats.c:2227:io_stats_lookup_cbk] 0-stack-trace: stack-address: 0x7f535ca5c728, gv0 returned -1 error: No such file or directory [No such file or directory]

Version-Release number of selected component (if applicable):

apt list | grep gluster

bareos-filedaemon-glusterfs-plugin/stable 16.2.4-3+deb9u2 amd64
bareos-storage-glusterfs/stable 16.2.4-3+deb9u2 amd64
glusterfs-client/unknown 6.1-1 amd64 [upgradable from: 6.0-1]
glusterfs-common/unknown 6.1-1 amd64 [upgradable from: 6.0-1]
glusterfs-dbg/unknown 6.1-1 amd64 [upgradable from: 6.0-1]
glusterfs-server/unknown 6.1-1 amd64 [upgradable from: 6.0-1]
tgt-glusterfs/stable 1:1.0.69-1 amd64
uwsgi-plugin-glusterfs/stable,stable 2.0.14+20161117-3+deb9u2 amd64

How reproducible:
Always

Steps to Reproduce:

  1. Upgrade from 5.6 to either 6.0 or 6.1, with the described configuration.
  2. Run a small file intensive workload.

Actual results:

dd if=/dev/urandom bs=1024 count=10240 | split -a 4 -b 1k - file.
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 18.3999 s, 57.0 kB/s

ls: cannot access 'file.abbd': No such file or directory
ls: cannot access 'file.aabb': No such file or directory
ls: cannot access 'file.aadh': No such file or directory
ls: cannot access 'file.aafq': No such file or directory
...

total 845
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaa
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaab
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaac
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaad
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaae
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaf
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaag
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaah
-????????? ? ?        ?           ?            ? file.aaai
-????????? ? ?        ?           ?            ? file.aaaj
-????????? ? ?        ?           ?            ? file.aaak
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaal
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaam
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaan
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaao
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaap
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaq
-????????? ? ?        ?           ?            ? file.aaar
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaas
-rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaat
-rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaau
-????????? ? ?        ?           ?            ? file.aaav
-rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaaw
-rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaax
-rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aaay
-????????? ? ?        ?           ?            ? file.aaaz
-????????? ? ?        ?           ?            ? file.aaba
-????????? ? ?        ?           ?            ? file.aabb
-rw-r--r-- 1 someone someone 1024 Apr 22 12:07 file.aabc
...

# Wait 10 mins
total 1024
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaa
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaab
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaac
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaad
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaae
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaf
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaag
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaah
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaai
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaaj
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaak
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaal
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaam
-rw-r--r-- 1 someone someone 1024 Apr 22 12:06 file.aaan
...


Expected results:
All files to be accessible immediately. 

Additional info:
There was nothing of interest in the other logs when changed to INFO.
Seems similar to Bug 1647229
@gluster-ant
Collaborator Author

Time: 20200109T17:42:23
jahernan at redhat commented:
I tried to reproduce the issue with v6.0 but it doesn't happen on my setup. Could you reproduce it with the debug level set to trace?

To set trace log level run these commands:

gluster volume set <volname> brick-log-level TRACE

gluster volume set <volname> client-log-level TRACE

Once the error happens, I would need all brick logs and mount log.
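A possible workflow for collecting these logs (volume name and mount-log filename are placeholders; paths assume the default /var/log/glusterfs location):

# enable trace logging
gluster volume set <volname> diagnostics.brick-log-level TRACE
gluster volume set <volname> diagnostics.client-log-level TRACE

# ...reproduce the error on the FUSE mount...

# collect brick logs (on each server) and the mount log (on the client)
tar czf gluster-trace-logs.tgz /var/log/glusterfs/bricks/ /var/log/glusterfs/<mount-point>.log

# restore the previous log levels afterwards
gluster volume reset <volname> diagnostics.brick-log-level
gluster volume reset <volname> diagnostics.client-log-level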

@gluster-ant
Collaborator Author

Time: 20200224T04:42:53
moagrawa at redhat commented:
Hi Erikson,

Can you share an update on whether you are able to reproduce it?

Thanks,
Mohit Agrawal

@xhernandez
Contributor

Are you still able to reproduce this?

@stale

stale bot commented Oct 9, 2020

Thank you for your contributions.
Noticed that this issue is not having any activity in last ~6 months! We are marking this issue as stale because it has not had recent activity.
It will be closed in 2 weeks if no one responds with a comment here.

@stale stale bot added the wontfix Managed by stale[bot] label Oct 9, 2020
@stale

stale bot commented Oct 24, 2020

Closing this issue as there was no update since my last update on issue. If this is an issue which is still valid, feel free to open it.

@stale stale bot closed this as completed Oct 24, 2020
@kindofblue

We actually have a similar problem: writing 0.5 million 1k files into 1000 directories under glusterfs 6.0 produces the same symptoms. It is probably a good idea to reopen this issue.

@pranithk pranithk reopened this Nov 20, 2020
@stale stale bot removed the wontfix Managed by stale[bot] label Nov 20, 2020
@pranithk
Member

@kindofblue Do you have steps to recreate this issue? If yes, could you please share?

@kindofblue

This issue can be reproduced in two ways:

  1. The way described in Erikson's post, but one probably needs to write more files to see the symptoms. I first issue:

    dd if=/dev/urandom bs=1024 count=1024000 | split -a 4 -b 1k - file.

    And in another terminal:

    /mnt/gv1/tmp # ls -l > /dev/null
    ls: ./file.trxb: No such file or directory

    Note that after some time the problematic files can return to normal, so it is better to put the test in a loop so you can see it more easily.

  2. This approach is more observable:
    1. Grab the disktest tool and compile it:
      git clone https://github.com/kindofblue/disktest
      gcc -D_GNU_SOURCE disk_dio.c -o disk_dio
    2. Go to the mounted glusterfs folder and run:
      disk_dio 1k create
      disk_dio 1k write
    3. The above essentially creates 1000 folders and writes 10000 1k-sized files into each directory. (You can also use a shell script with dd to do the same thing; see the sketch after this list.)
    4. In my environment, after writing 400k files, doing an ls -l (not ls, since ls only gets the list of names from the parent directory) shows a lot of directories as:
      ls -l
      -????????? ? ? ? ? ? ./456
      -????????? ? ? ? ? ? ./467
      -????????? ? ? ? ? ? ./469
    5. Notice that the directories shown as inaccessible are not (necessarily) the ones that have been written to. That is, even directories that contain no files and never had any files added to them become inaccessible after a certain number of files have been written to some other directories.
    6. In both approaches, umount and remount make the problem go away.
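For reference, a minimal shell sketch of the dd-based equivalent mentioned in step 3 (the mount path /mnt/gv1/tmp and the directory/file counts are just illustrative assumptions):

#!/bin/sh
# create 1000 directories and write small 1k files into each one,
# checking after each directory whether any entries have become inaccessible
cd /mnt/gv1/tmp || exit 1
for d in $(seq 0 999); do
    mkdir -p "$d"
done
for d in $(seq 0 999); do
    for f in $(seq 0 99); do
        dd if=/dev/urandom of="$d/file.$f" bs=1k count=1 2>/dev/null
    done
    # broken entries show up as "-?????????" or "d?????????" in ls -l output
    ls -l | grep -q '^[d-]?????????' && echo "broken entries after writing to $d"
done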

I also include the gluster volume configuration below. I disabled most caching, write-behind, etc. to run in a "safe" setting and rule out other factors. The glusterfs version is glusterfs-6.0-30.1.el7rhgs and the volume is mounted with default options.

uname -a
Linux  4.14.53 #79 SMP Wed Sep 26 14:27:54 -00 2018 x86_64 x86_64 x86_64 GNU/Linux
gluster vol info gv1

Volume Name: gv1
Type: Distributed-Replicate
Status: Started
Snapshot Count: 0
Number of Bricks: 8 x 3 = 24
Transport-type: tcp
Bricks:
Brick1: 10.192.168.51:/mnt/sda2/gv1
Brick2: 10.192.168.52:/mnt/sda2/gv1
Brick3: 10.192.168.53:/mnt/sda2/gv1
Brick4: 10.192.168.54:/mnt/sda2/gv1
Brick5: 10.192.168.55:/mnt/sda2/gv1
Brick6: 10.192.168.56:/mnt/sda2/gv1
Brick7: 10.192.168.57:/mnt/sda2/gv1
Brick8: 10.192.168.58:/mnt/sda2/gv1
Brick9: 10.192.168.59:/mnt/sda2/gv1
Brick10: 10.192.168.60:/mnt/sda2/gv1
Brick11: 10.192.168.61:/mnt/sda2/gv1
Brick12: 10.192.168.62:/mnt/sda2/gv1
Brick13: 10.192.168.63:/mnt/sda2/gv1
Brick14: 10.192.168.64:/mnt/sda2/gv1
Brick15: 10.192.168.65:/mnt/sda2/gv1
Brick16: 10.192.168.66:/mnt/sda2/gv1
Brick17: 10.192.168.67:/mnt/sda2/gv1
Brick18: 10.192.168.69:/mnt/sda2/gv1
Brick19: 10.192.168.70:/mnt/sda2/gv1
Brick20: 10.192.168.71:/mnt/sda2/gv1
Brick21: 10.192.168.72:/mnt/sda2/gv1
Brick22: 10.192.168.73:/mnt/sda2/gv1
Brick23: 10.192.168.74:/mnt/sda2/gv1
Brick24: 10.192.168.75:/mnt/sda2/gv1
Options Reconfigured:
performance.open-behind: off
performance.readdir-ahead: off
performance.read-ahead: off
performance.stat-prefetch: off
performance.write-behind: off
performance.quick-read: off
features.ctime: on
performance.nl-cache: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.parallel-readdir: on
performance.md-cache-statfs: on
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: off
transport.address-family: inet
performance.io-cache: off
performance.write-behind-window-size: 524288
performance.aggregate-size: 512KB
features.shard: on
features.shard-lru-limit: 16384
network.inode-lru-limit: 16384

@kindofblue

@pranithk @xhernandez please check my above comment.

@resposit

resposit commented Jan 28, 2021

Hi,
I have a similar problem. In my setup I have 3 clients (box01, box02, box03) mounting the same data folder (via gluster fuse native client), located on a Distributed-Disperse gluster volume managed by 5 servers (cloud10-gl, cloud11-gl, cloud12-gl, cloud13-gl, cloud14-gl). Gluster version is 6.10 everywhere - both client and server side.
It happens sometimes that dirs/files written by one client are not visible from some other clients.

This is what I see now from box03:

[root@box03 c]# pwd
/glusterfs/cloudstor/data/appdata_ocrqnax9wapr/preview/7/c

[root@box03 c]# ls -l
ls: cannot access '6': No such file or directory
total 16
drwxr-xr-x 4 apache apache 4096 Jan 27 09:45 2
drwxr-xr-x 3 apache apache 4096 Jan 26 14:33 4
d????????? ? ?      ?         ?            ? 6
drwxr-xr-x 3 apache apache 4096 Jan 27 11:54 7
drwxr-xr-x 3 apache apache 4096 Jan 22 11:00 9

The same folder is perfectly fine from box01 and box02:

[root@box01 c]# pwd
/glusterfs/cloudstor/data/appdata_ocrqnax9wapr/preview/7/c

[root@box01 c]# ls -l
total 20
drwxr-xr-x 4 apache apache 4096 Jan 27 09:45 2
drwxr-xr-x 3 apache apache 4096 Jan 26 14:33 4
drwxr-xr-x 3 apache apache 4096 Jan 27 16:55 6
drwxr-xr-x 3 apache apache 4096 Jan 27 11:54 7
drwxr-xr-x 3 apache apache 4096 Jan 22 11:00 9

[root@box02 c]# pwd
/glusterfs/cloudstor/data/appdata_ocrqnax9wapr/preview/7/c

[root@box02 c]# ls -l
total 20
drwxr-xr-x 4 apache apache 4096 Jan 27 09:45 2
drwxr-xr-x 3 apache apache 4096 Jan 26 14:33 4
drwxr-xr-x 3 apache apache 4096 Jan 27 16:55 6
drwxr-xr-x 3 apache apache 4096 Jan 27 11:54 7
drwxr-xr-x 3 apache apache 4096 Jan 22 11:00 9

Here my gluster volume config:

root@cloud10:~# gluster volume info cloudstor

Volume Name: cloudstor
Type: Distributed-Disperse
Volume ID: af6ceaae-9d3f-4cf4-adc9-f9480c511c46
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (4 + 1) = 15
Transport-type: tcp
Bricks:
Brick1: cloud10-gl.na.infn.it:/glusterfs/cloudstor/disk6/brick
Brick2: cloud11-gl.na.infn.it:/glusterfs/cloudstor/disk6/brick
Brick3: cloud12-gl.na.infn.it:/glusterfs/cloudstor/disk6/brick
Brick4: cloud13-gl.na.infn.it:/glusterfs/cloudstor/disk6/brick
Brick5: cloud14-gl.na.infn.it:/glusterfs/cloudstor/disk6/brick
Brick6: cloud10-gl.na.infn.it:/glusterfs/cloudstor/disk4/brick
Brick7: cloud11-gl.na.infn.it:/glusterfs/cloudstor/disk4/brick
Brick8: cloud12-gl.na.infn.it:/glusterfs/cloudstor/disk4/brick
Brick9: cloud13-gl.na.infn.it:/glusterfs/cloudstor/disk4/brick
Brick10: cloud14-gl.na.infn.it:/glusterfs/cloudstor/disk4/brick
Brick11: cloud10-gl.na.infn.it:/glusterfs/cloudstor/disk7/brick
Brick12: cloud11-gl.na.infn.it:/glusterfs/cloudstor/disk7/brick
Brick13: cloud12-gl.na.infn.it:/glusterfs/cloudstor/disk7/brick
Brick14: cloud13-gl.na.infn.it:/glusterfs/cloudstor/disk7/brick
Brick15: cloud14-gl.na.infn.it:/glusterfs/cloudstor/disk7/brick
Options Reconfigured:
disperse.shd-max-threads: 4
transport.address-family: inet
performance.parallel-readdir: on
performance.nl-cache: on
server.outstanding-rpc-limit: 128
performance.io-thread-count: 64
server.event-threads: 4
client.event-threads: 4
performance.cache-size: 1024MB
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
nfs.disable: on

Here my config on client side:

[root@box02 c]# cat /etc/fstab | grep gluster
cloud10-gl:/cloudstor /glusterfs/cloudstor glusterfs defaults,_netdev,backup-volfile-servers=cloud11-gl:cloud12-gl:cloud13-gl:cloud14-gl 0 0
[root@box02 c]# mount | grep gluster
cloud10-gl:/cloudstor on /glusterfs/cloudstor type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev)

As said before, unmounting / re-mounting the gluster volume temporarily fixes the problem:

[root@box03 ~]# umount /glusterfs/cloudstor
[root@box03 ~]# mount /glusterfs/cloudstor
[root@box03 ~]# cd /glusterfs/cloudstor/data/appdata_ocrqnax9wapr/preview/7/c
[root@box03 c]# ls -l
total 20
drwxr-xr-x 4 apache apache 4096 Jan 27 09:45 2
drwxr-xr-x 3 apache apache 4096 Jan 26 14:33 4
drwxr-xr-x 3 apache apache 4096 Jan 27 16:55 6
drwxr-xr-x 3 apache apache 4096 Jan 27 11:54 7
drwxr-xr-x 3 apache apache 4096 Jan 22 11:00 9

Any ideas ?

@resposit

resposit commented Feb 1, 2021

It just happened again. This time, from box02 I can see some broken directories:

[root@box02 files]# pwd
/glusterfs/cloudstor/data/ferrara/files_trashbin/files
[root@box02 files]# ls -l Cartellini.d1612173884
ls: cannot access 'Cartellini.d1612173884': No such file or directory

From box01 and box03 the directory is perfectly fine:

[root@box01 files]# pwd
/glusterfs/cloudstor/data/ferrara/files_trashbin/files
[root@box01 files]# ls -l Cartellini.d1612173884
total 0

[root@box03 files]# pwd
/glusterfs/cloudstor/data/ferrara/files_trashbin/files
[root@box03 files]# ls -l Cartellini.d1612173884
total 0

On box02 I can see from log files:

[root@box02 ~]# grep "Cartellini.d1612173884" -r /var/log/*
/var/log/glusterfs/glusterfs-cloudstor.log:[2021-02-01 10:04:44.892841] I [MSGID: 109066] [dht-rename.c:1953:dht_rename] 0-cloudstor-dht: renaming /data/ferrara/files/My Files 2020/2_Fondi Esterni Napoli/4 PROGETTI/MSCA-ITN-INSIGHTS/Personale/DARWISH-rinunciatario/Cartellini (bf82eeb6-4ee9-4a4e-8f75-c917f83a4ce5) (hash=cloudstor-readdir-ahead-1/cache=cloudstor-readdir-ahead-1) => /data/ferrara/files_trashbin/files/Cartellini.d1612173884 ((null)) (hash=cloudstor-readdir-ahead-0/cache=<nul>)

And on cloud10-gl (one of the gluster servers) I see:

root@cloud10:~# grep -r bf82eeb6-4ee9-4a4e-8f75-c917f83a4ce5 /var/log/*
/var/log/glusterfs/cloudstor-rebalance.log.1:[2021-01-26 09:41:34.792981] I [MSGID: 109063] [dht-layout.c:650:dht_layout_normalize] 0-cloudstor-dht: Found anomalies in /data/ferrara/files/My Files 2020/2_Fondi Esterni Napoli/4 PROGETTI/H2020-INSIGHTS/Personale/DARWISH-rinunciatario/Cartellini (gfid = bf82eeb6-4ee9-4a4e-8f75-c917f83a4ce5). Holes=1 overlaps=0

I can't find anything else concerning uuid bf82eeb6-4ee9-4a4e-8f75-c917f83a4ce5 in the other servers' log files.
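One way to check whether that gfid still resolves on the bricks (brick paths are taken from the volume info above; the .glusterfs/<xx>/<yy>/<gfid> layout is how gluster bricks store the gfid backlink):

# run on each gluster server
ls -l /glusterfs/cloudstor/disk*/brick/.glusterfs/bf/82/bf82eeb6-4ee9-4a4e-8f75-c917f83a4ce5
ls -ld /glusterfs/cloudstor/disk*/brick/data/ferrara/files_trashbin/files/Cartellini.d1612173884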
Any help would be really appreciated.

@diete-p

diete-p commented Feb 10, 2021

Hi resposit,
We have exactly the same problem. Recently we upgraded all servers and clients from 3.12.15 to 7.9 and added two further nodes. Since then we see the same issue on some clients, while other clients still have access to the corresponding files and subdirectories.
For example, a recently created file is accessible from 4 FUSE clients:

[ 10:37:05 ] -  ~/central $./mycommand.sh -H gl-clients -c "ls -l /sdn/thumbs/5950/files/21/02/08/2725200/GQk9MLn48vzTD7m-thumb.jpeg"

Host : gluster-client-01
-rw-r--r-- 1 www-data www-data 71185 Feb  9 01:59 /sdn/thumbs/5950/files/21/02/08/2725200/GQk9MLn48vzTD7m-thumb.jpeg

while 4 other FUSE clients cannot access the corresponding directory:

[ 10:36:17 ] - ~/central $./mycommand.sh -H cache-ger -c "ls -l /sdn/thumbs/5950/files/21/02/"

Host : cache-01
total 28
drwxr-xr-x  7 root root 4096 Feb  2 02:08 01
drwxr-xr-x  6 root root 4096 Feb  3 02:19 02
drwxr-xr-x 16 root root 4096 Feb  4 02:11 03
drwxr-xr-x  7 root root 4096 Feb  5 02:08 04
drwxr-xr-x 11 root root 4096 Feb  6 02:12 05
drwxr-xr-x  4 root root 4096 Feb  7 01:38 06
drwxr-xr-x  7 root root 4096 Feb  8 01:52 07
d?????????  ? ?    ?       ?            ? 08
d?????????  ? ?    ?       ?            ? 09
...

As you mentioned, umounting / mounting glusterfs solves the problem for a while.
Since the rebalance process is now complete, we tried another approach today and activated parallel-readdir and readdir-ahead:

performance.parallel-readdir: on
performance.readdir-ahead: on

Directly after activation, the directories and files shown above were accessible even from the problematic clients, without umounting / mounting the glusterfs.
I don't know if this will permanently solve the problem.
I recently read about the same problem elsewhere but can't find the thread anymore. There the problem was also solved by volume settings, but reappeared after a while, as far as I can remember. So it may just be another 'workaround'.
I don't know how you have readdir-ahead set; I suspect it is 'off'. Perhaps you could set the parameter accordingly in your environment and observe the behaviour of the clients (see the sketch below).
Or you could wait until the problem occurs again and then change some other volume parameter.
Then we will know whether the performance parameters solve the problem or whether changing some volume setting just helps for a short period of time.
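For reference, checking and changing those options would presumably look like this (volume name is a placeholder):

gluster volume get <volname> performance.readdir-ahead
gluster volume get <volname> performance.parallel-readdir

gluster volume set <volname> performance.readdir-ahead on
gluster volume set <volname> performance.parallel-readdir on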
best regards

@resposit

Hi @diete-p
apparently I already have those readdir settings set to on and I'm still getting the same behaviour randomly, so I'm afraid it doesn't fix the problem.

BTW, this is the full list of my current settings:

root@cloud10:~# gluster volume get cloudstor all
Option                                  Value
------                                  -----
cluster.lookup-unhashed                 on
cluster.lookup-optimize                 on
cluster.min-free-disk                   10%
cluster.min-free-inodes                 5%
cluster.rebalance-stats                 off
cluster.subvols-per-directory           (null)
cluster.readdir-optimize                off
cluster.rsync-hash-regex                (null)
cluster.extra-hash-regex                (null)
cluster.dht-xattr-name                  trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid    off
cluster.rebal-throttle                  normal
cluster.lock-migration                  off
cluster.force-migration                 off
cluster.local-volume-name               (null)
cluster.weighted-rebalance              on
cluster.switch-pattern                  (null)
cluster.entry-change-log                on
cluster.read-subvolume                  (null)
cluster.read-subvolume-index            -1
cluster.read-hash-mode                  1
cluster.background-self-heal-count      8
cluster.metadata-self-heal              off
cluster.data-self-heal                  off
cluster.entry-self-heal                 off
cluster.self-heal-daemon                on
cluster.heal-timeout                    600
cluster.self-heal-window-size           1
cluster.data-change-log                 on
cluster.metadata-change-log             on
cluster.data-self-heal-algorithm        (null)
cluster.eager-lock                      on
disperse.eager-lock                     on
disperse.other-eager-lock               on
disperse.eager-lock-timeout             1
disperse.other-eager-lock-timeout       1
cluster.quorum-type                     none
cluster.quorum-count                    (null)
cluster.choose-local                    true
cluster.self-heal-readdir-size          1KB
cluster.post-op-delay-secs              1
cluster.ensure-durability               on
cluster.consistent-metadata             no
cluster.heal-wait-queue-length          128
cluster.favorite-child-policy           none
cluster.full-lock                       yes
diagnostics.latency-measurement         off
diagnostics.dump-fd-stats               off
diagnostics.count-fop-hits              off
diagnostics.brick-log-level             INFO
diagnostics.client-log-level            INFO
diagnostics.brick-sys-log-level         CRITICAL
diagnostics.client-sys-log-level        CRITICAL
diagnostics.brick-logger                (null)
diagnostics.client-logger               (null)
diagnostics.brick-log-format            (null)
diagnostics.client-log-format           (null)
diagnostics.brick-log-buf-size          5
diagnostics.client-log-buf-size         5
diagnostics.brick-log-flush-timeout     120
diagnostics.client-log-flush-timeout    120
diagnostics.stats-dump-interval         0
diagnostics.fop-sample-interval         0
diagnostics.stats-dump-format           json
diagnostics.fop-sample-buf-size         65535
diagnostics.stats-dnscache-ttl-sec      86400
performance.cache-max-file-size         0
performance.cache-min-file-size         0
performance.cache-refresh-timeout       1
performance.cache-priority
performance.cache-size                  1024MB
performance.io-thread-count             64
performance.high-prio-threads           16
performance.normal-prio-threads         16
performance.low-prio-threads            16
performance.least-prio-threads          1
performance.enable-least-priority       on
performance.iot-watchdog-secs           (null)
performance.iot-cleanup-disconnected-reqs off
performance.iot-pass-through            false
performance.io-cache-pass-through       false
performance.cache-size                  1024MB
performance.qr-cache-timeout            1
performance.cache-invalidation          on
performance.ctime-invalidation          false
performance.flush-behind                on
performance.nfs.flush-behind            on
performance.write-behind-window-size    1MB
performance.resync-failed-syncs-after-fsync off
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct             off
performance.nfs.strict-o-direct         off
performance.strict-write-ordering       off
performance.nfs.strict-write-ordering   off
performance.write-behind-trickling-writes on
performance.aggregate-size              128KB
performance.nfs.write-behind-trickling-writes on
performance.lazy-open                   yes
performance.read-after-open             yes
performance.open-behind-pass-through    false
performance.read-ahead-page-count       4
performance.read-ahead-pass-through     false
performance.readdir-ahead-pass-through  false
performance.md-cache-pass-through       false
performance.md-cache-timeout            600
performance.cache-swift-metadata        true
performance.cache-samba-metadata        false
performance.cache-capability-xattrs     true
performance.cache-ima-xattrs            true
performance.md-cache-statfs             off
performance.xattr-cache-list
performance.nl-cache-pass-through       false
features.encryption                     off
network.frame-timeout                   1800
network.ping-timeout                    42
network.tcp-window-size                 (null)
client.ssl                              off
network.remote-dio                      disable
client.event-threads                    4
client.tcp-user-timeout                 0
client.keepalive-time                   20
client.keepalive-interval               2
client.keepalive-count                  9
network.tcp-window-size                 (null)
network.inode-lru-limit                 200000
auth.allow                              *
auth.reject                             (null)
transport.keepalive                     1
server.allow-insecure                   on
server.root-squash                      off
server.all-squash                       off
server.anonuid                          65534
server.anongid                          65534
server.statedump-path                   /var/run/gluster
server.outstanding-rpc-limit            128
server.ssl                              off
auth.ssl-allow                          *
server.manage-gids                      off
server.dynamic-auth                     on
client.send-gids                        on
server.gid-timeout                      300
server.own-thread                       (null)
server.event-threads                    4
server.tcp-user-timeout                 42
server.keepalive-time                   20
server.keepalive-interval               2
server.keepalive-count                  9
transport.listen-backlog                1024
transport.address-family                inet
performance.write-behind                on
performance.read-ahead                  on
performance.readdir-ahead               on
performance.io-cache                    on
performance.open-behind                 on
performance.quick-read                  on
performance.nl-cache                    on
performance.stat-prefetch               on
performance.client-io-threads           on
performance.nfs.write-behind            on
performance.nfs.read-ahead              off
performance.nfs.io-cache                off
performance.nfs.quick-read              off
performance.nfs.stat-prefetch           off
performance.nfs.io-threads              off
performance.force-readdirp              true
performance.cache-invalidation          on
performance.global-cache-invalidation   true
features.uss                            off
features.snapshot-directory             .snaps
features.show-snapshot-directory        off
features.tag-namespaces                 off
network.compression                     off
network.compression.window-size         -15
network.compression.mem-level           8
network.compression.min-size            0
network.compression.compression-level   -1
network.compression.debug               false
features.default-soft-limit             80%
features.soft-timeout                   60
features.hard-timeout                   5
features.alert-time                     86400
features.quota-deem-statfs              off
geo-replication.indexing                off
geo-replication.indexing                off
geo-replication.ignore-pid-check        off
geo-replication.ignore-pid-check        off
features.quota                          off
features.inode-quota                    off
features.bitrot                         disable
debug.trace                             off
debug.log-history                       no
debug.log-file                          no
debug.exclude-ops                       (null)
debug.include-ops                       (null)
debug.error-gen                         off
debug.error-failure                     (null)
debug.error-number                      (null)
debug.random-failure                    off
debug.error-fops                        (null)
nfs.disable                             on
features.read-only                      off
features.worm                           off
features.worm-file-level                off
features.worm-files-deletable           on
features.default-retention-period       120
features.retention-mode                 relax
features.auto-commit-period             180
storage.linux-aio                       off
storage.batch-fsync-mode                reverse-fsync
storage.batch-fsync-delay-usec          0
storage.owner-uid                       -1
storage.owner-gid                       -1
storage.node-uuid-pathinfo              off
storage.health-check-interval           30
storage.build-pgfid                     off
storage.gfid2path                       on
storage.gfid2path-separator             :
storage.reserve                         1
storage.health-check-timeout            10
storage.fips-mode-rchecksum             off
storage.force-create-mode               0000
storage.force-directory-mode            0000
storage.create-mask                     0777
storage.create-directory-mask           0777
storage.max-hardlinks                   100
features.ctime                          on
config.gfproxyd                         off
cluster.server-quorum-type              off
cluster.server-quorum-ratio             0
changelog.changelog                     off
changelog.changelog-dir                 {{ brick.path }}/.glusterfs/changelogs
changelog.encoding                      ascii
changelog.rollover-time                 15
changelog.fsync-interval                5
changelog.changelog-barrier-timeout     120
changelog.capture-del-path              off
features.barrier                        disable
features.barrier-timeout                120
features.trash                          off
features.trash-dir                      .trashcan
features.trash-eliminate-path           (null)
features.trash-max-filesize             5MB
features.trash-internal-op              off
cluster.enable-shared-storage           disable
locks.trace                             off
locks.mandatory-locking                 off
cluster.disperse-self-heal-daemon       enable
cluster.quorum-reads                    no
client.bind-insecure                    (null)
features.timeout                        45
features.failover-hosts                 (null)
features.shard                          off
features.shard-block-size               64MB
features.shard-lru-limit                16384
features.shard-deletion-rate            100
features.scrub-throttle                 lazy
features.scrub-freq                     biweekly
features.scrub                          false
features.expiry-time                    120
features.cache-invalidation             on
features.cache-invalidation-timeout     600
features.leases                         off
features.lease-lock-recall-timeout      60
disperse.background-heals               8
disperse.heal-wait-qlength              128
cluster.heal-timeout                    600
dht.force-readdirp                      on
disperse.read-policy                    gfid-hash
cluster.shd-max-threads                 1
cluster.shd-wait-qlength                1024
cluster.locking-scheme                  full
cluster.granular-entry-heal             no
features.locks-revocation-secs          0
features.locks-revocation-clear-all     false
features.locks-revocation-max-blocked   0
features.locks-monkey-unlocking         false
features.locks-notify-contention        no
features.locks-notify-contention-delay  5
disperse.shd-max-threads                4
disperse.shd-wait-qlength               1024
disperse.cpu-extensions                 auto
disperse.self-heal-window-size          1
cluster.use-compound-fops               off
performance.parallel-readdir            on
performance.rda-request-size            131072
performance.rda-low-wmark               4096
performance.rda-high-wmark              128KB
performance.rda-cache-limit             10MB
performance.nl-cache-positive-entry     on
performance.nl-cache-limit              10MB
performance.nl-cache-timeout            60
cluster.brick-multiplex                 off
cluster.max-bricks-per-process          250
disperse.optimistic-change-log          on
disperse.stripe-cache                   4
cluster.halo-enabled                    False
cluster.halo-shd-max-latency            99999
cluster.halo-nfsd-max-latency           5
cluster.halo-max-latency                5
cluster.halo-max-replicas               99999
cluster.halo-min-replicas               2
features.selinux                        on
cluster.daemon-log-level                INFO
debug.delay-gen                         off
delay-gen.delay-percentage              10%
delay-gen.delay-duration                100000
delay-gen.enable
disperse.parallel-writes                on
features.sdfs                           off
features.cloudsync                      off
features.ctime                          on
ctime.noatime                           on
features.enforce-mandatory-lock         off

@diete-p

diete-p commented Feb 10, 2021

Hi resposit,
This is not good news. I was hoping that the settings would change something permanently.
I'll wait until the problem occurs again and then think about further steps.
On another thread regarding this problem, I heard from someone else with the same issue. As far as I can remember, he solved it by distributing the FUSE clients differently over his network topology. Unfortunately, I can't find that thread either.
But that's what I'm starting to think about: all of our 8 clients are equipped with 10 Gbit/s network adapters and are connected to switch 1, while the gluster cluster extends over switches 2 and 3, also with 10 Gbit/s network adapters. As I recently discovered, switch 1 is connected to switches 2 and 3 via a 20 Gbit/s uplink; everything has to go through there.
I have now read about this problem several times and so far I have not found a definitive answer, except perhaps the one regarding network topology. As far as I remember, everyone hit the problem under a certain load, which could be an indication of a network problem.

@resposit

Hi @diete-p
I'm not sure network topology could cause the issue in my case. My clients are virtual machines running on the same hypervisor; my servers are physical hosts, all connected to the same 10 Gbit/s switch on a dedicated NIC.

Today I got the same problem again. This time I enabled "trace" logging on my clients. This is what I'm seeing from box03:

[root@box03 5-iAPS]# pwd
/glusterfs/cloudstor/data/ferrara/files_versions/My Files 2020/2_Fondi Esterni Napoli/4 PROGETTI/ASI/5-iAPS
[root@box03 5-iAPS]# ls -l  | grep "\?"
ls: cannot access '2-Assegnazioni': No such file or directory
d????????? ? ?      ?           ?            ? 2-Assegnazioni

Seeing this in fuse client log file:

[2021-02-15 08:55:31.178839] T [MSGID: 0] [nl-cache.c:236:nlc_lookup] 0-cloudstor-nl-cache: Serving negative lookup from cache:2-Assegnazioni
[2021-02-15 08:55:31.178878] T [fuse-bridge.c:1008:fuse_entry_cbk] 0-glusterfs-fuse: 7173578: LOOKUP() /data/ferrara/files_versions/My Files 2020/2_Fondi Esterni Napoli/4 PROGETTI/ASI/5-iAPS/2-Assegnazioni => -1 (No such file or directory)

It looks like there is something wrong with nl-cache. I'm not sure what it does exactly; I'll try to disable it.

@pranithk
Member

nl-cache is the negative lookup cache. It was developed for use in Samba workloads, as far as I understand. If a file created through some other mount is accessed quickly through this mount, you may get this error, I think.
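A minimal sketch of the scenario being described (mount paths are placeholders; the window corresponds to the performance.nl-cache-timeout value of 60 shown in the config dump above):

# client A: look up a name that does not exist yet; nl-cache stores a negative entry
ls /mnt/glusterA/dir/newfile      # -> No such file or directory

# client B: create the file through a different mount
touch /mnt/glusterB/dir/newfile

# client A again, within the nl-cache timeout: the cached negative entry may still be served
ls /mnt/glusterA/dir/newfile      # -> may still report No such file or directory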

@resposit

resposit commented Feb 15, 2021

I enabled nl-cache after reading recommendations for best performance:
https://docs.gluster.org/en/latest/Administrator-Guide/Performance-Tuning/
It's not mentioned there as a Samba-specific setting.
BTW, I just turned it off:

root@cloud10:~# gluster volume set cloudstor performance.nl-cache off

I'll see if it gets better. Unfortunately those errors come out randomly, so I have to wait and see if they appear again.

@kindofblue

@pranithk you do not need multiple mounts to observe this issue; a single mount can reproduce it easily.

@diete-p

diete-p commented Feb 22, 2021

Hi resposit,

The day you published your last post we were faced one more time with this error. I then just turned off performance.readdir-ahead and the error went away. Regarding my first message, it should now be clear that this is not the solution; it just triggers 'something' so that the clients can access files and directories again without umount / mount.

Then I turned off performance.nl-cache as you mentioned in your last post. Since then the error has not appeared anymore.
I hope it's not too early to say, but in the week before we were confronted with the error almost every day.

best regards.

@resposit

resposit commented Feb 22, 2021

Hi @diete-p
Same for me. Since my last post I have deactivated nl-cache and have seen no errors since.
I'm not sure if this is a real solution, or whether this behaviour is a bug rather than a feature.
I wish some gluster developer could clarify it.
Regards.

@stale

stale bot commented Sep 21, 2021

Thank you for your contributions.
Noticed that this issue is not having any activity in last ~6 months! We are marking this issue as stale because it has not had recent activity.
It will be closed in 2 weeks if no one responds with a comment here.

@stale stale bot added the wontfix Managed by stale[bot] label Sep 21, 2021
@stale

stale bot commented Oct 6, 2021

Closing this issue as there was no update since my last update on issue. If this is an issue which is still valid, feel free to open it.

@stale stale bot closed this as completed Oct 6, 2021
@saurabhwahile

saurabhwahile commented Dec 6, 2021

We are facing the same issue: certain files show up as ????? ???? under the gluster FUSE mount. However, the files are there in the underlying filesystem (individual brick):

-????????? ? ? ? ? ? recovery-point-offset-checkpoint
-rw-r--r-- 1 root root 1901 Nov 22 16:33 replication-offset-checkpoint
-????????? ? ? ? ? ? replication-offset-checkpoint.tmp

Gluster version: 7.9 / Ubuntu 18.04
I don't have any gluster replication enabled; the underlying filesystem is ZFS.

@tiswo

tiswo commented Feb 8, 2022

We also encountered the same problem. After repair, the client's FUSE cache cannot be cleared.

@sysupdate

Same issue for us; both performance.nl-cache and performance.readdir-ahead are off.
