tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t fails with no space left on the device #3793

Closed · Fixed by #3794

@Shwetha-Acharya (Contributor) opened this issue Sep 7, 2022 · 2 comments

17:12:03 ok 19 [ 13/ 75] < 50> '_GFS --attribute-timeout=0 --entry-timeout=0 --volfile-server=172.31.14.233 --volfile-id=patchy /mnt/glusterfs/0'
17:12:03 volume set: success
17:12:03 dd: error writing ‘/mnt/glusterfs/0/file9’: No space left on device
17:12:03 not ok 20 [ 5527/ 3] < 62> '! ls /d/backends/patchy0/file10' -> ''
17:12:03 not ok 21 [ 12/ 3] < 63> '! ls /d/backends/patchy1/file10' -> ''
17:12:03 ok 22 [ 12/ 2] < 64> 'ls /d/backends/patchy2/file10'

Observed in CentOS 7 regression: https://build.gluster.org/job/gh_centos7-regression/2805/consoleFull
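
For reference, the two failing assertions correspond to the checks at lines 62 and 63 of the .t file. Reconstructed from the log above (illustrative only, not copied from the repository), they look roughly like this:

```sh
# Reconstructed from the regression log above (illustrative, not the literal
# test file): the test expects file10 to be absent on the first two (full)
# bricks and present only on the third one.
TEST ! ls /d/backends/patchy0/file10   # "not ok 20": file10 unexpectedly exists
TEST ! ls /d/backends/patchy1/file10   # "not ok 21": file10 unexpectedly exists
TEST ls /d/backends/patchy2/file10     # "ok 22": file10 exists on the third brick
```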

@mohit84 (Contributor) commented Sep 7, 2022:

@karthik-us Can you please check this?

karthik-us added a commit to karthik-us/glusterfs that referenced this issue Sep 7, 2022
Issue:
The test case tries to simulate a scenario where entry creation
succeeds on only a single brick by filling the other two bricks. The condition
that checks for file creation failure on the other two bricks was recently
changed to check the number of blocks allocated to the file.
In some cases, when a brick is nearly full, it may still be able to create a
few empty files even though it cannot accommodate any data.

Fix:
Change the condition to check whether file creation itself fails on the
two bricks before exiting the loop, so that the next file creation operation
will fail on both bricks, simulating the expected scenario.

Change-Id: Ifd2ee5b7cbe6bc713c3e19eae79c31a91238579f
Signed-off-by: karthik-us <ksubrahm@redhat.com>
Fixes: gluster#3793
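
A minimal sketch of the loop-exit condition the fix describes (illustrative only, not the literal patch in #3794; $M0, $B0 and $V0 are the usual glusterfs test-framework variables for the mount point, backend directory and volume name seen in the log paths):

```sh
# Illustrative sketch of the fixed exit condition (not the literal patch):
# keep writing files from the mount until a create itself fails on both of
# the bricks being filled, so the next create is guaranteed to fail there too.
i=1
while true; do
    dd if=/dev/zero of=$M0/file$i bs=1M count=100 2>/dev/null
    if [ ! -e $B0/${V0}0/file$i ] && [ ! -e $B0/${V0}1/file$i ]; then
        break   # creation failed on both bricks; the next create will fail there as well
    fi
    i=$((i+1))
done
```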
@karthik-us (Contributor) commented:

@Shwetha-Acharya The "no space left on device" error is expected in this test case, as we simulate entry creation failure on 2 bricks by filling them to their limit. It looks like the recent change in #3637 altered how we check for the entry creation failure, and I think that is why the test is failing. I have posted a fix which should hopefully resolve it.
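
To see why a block-count based check can misfire here (a hypothetical illustration, not the actual code from #3637): an empty file typically reports zero allocated blocks, so a nearly full brick that can still create empty files will satisfy such a check even though the create itself did not fail.

```sh
# Hypothetical comparison of the two kinds of checks (not the code from #3637):

# Block-count check: may pass even though the brick managed to create an empty
# file, because an empty file reports 0 allocated blocks.
blocks=$(stat -c %b /d/backends/patchy0/file10 2>/dev/null)
[ -z "$blocks" ] || [ "$blocks" -eq 0 ]

# Creation-failure check: passes only once the brick can no longer create the
# entry at all, which is the condition the test actually needs.
! ls /d/backends/patchy0/file10
```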

mohit84 pushed a commit that referenced this issue Sep 8, 2022
(merge of #3794; the commit message is identical to the one quoted above, ending with "Fixes: #3793")

karthik-us self-assigned this Sep 8, 2022