[WIP] Add test for btrfs maintenance jobs performance #6487
Conversation
Force-pushed from 3398b32 to 92f3f9c
Force-pushed from 622b76a to 91741c6
Would the dd's be more effective if something like oflag=nocache, oflag=dsync, or oflag=direct were used? Not sure which would be best at triggering the condition you want to test for.
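For reference, a minimal sketch of how those dd flush strategies differ; the file path, block size, and count here are illustrative, not from the PR:

```shell
#!/bin/sh
# Compare dd write-flush strategies (illustrative sizes and paths):
#   oflag=dsync   - fdatasync after every output block: steady, slow pressure
#   oflag=direct  - bypass the page cache (needs O_DIRECT support in the fs)
#   oflag=nocache - write through the cache, then ask the kernel to drop it
f=$(mktemp)
dd if=/dev/zero of="$f" bs=64k count=16 oflag=dsync 2>/dev/null
wc -c < "$f"   # 1048576 bytes written, one sync per 64k block
rm -f "$f"
```

oflag=dsync is the most portable of the three for generating sustained write pressure; oflag=direct fails on filesystems without O_DIRECT support (e.g. tmpfs).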
@@ -0,0 +1,109 @@
# SUSE's openQA tests
#
# Copyright © 2012-2019 SUSE LLC
Just 2019, please.
# without any warranty.

# Summary: Create writes in different btrfs snapshots and monitor btrfs maintenance job performance.
# Maintainer:
Please add yourself (or another person) as maintainer; don't leave it empty.
You can also add a line with "# Tags: ..." referencing the corresponding progress ticket and bug id.
assert_script_run "mkdir $dest";
$self->set_playground_disk;
my $disk = get_required_var('PLAYGROUNDDISK');
assert_script_run "mkfs.btrfs -f $disk && mount $disk $dest && cd $dest";
I would actually just work on the root filesystem, not a playground disk
#   1/1     1/2
#   / \    / | \
#  a   b  c  d(k)
assert_script_run "for c in {a..d}; do btrfs subvolume create \$c; done";
I suggest creating snapshots with snapper rather than with low-level btrfs commands, so that at best this would also trigger a "snapper-cleanup" call. We could even go as high-level as using zypper to install/remove packages, which should also trigger snapshot creation.
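A hedged sketch of that higher-level flow; the package name is an arbitrary example, and with DRY_RUN=1 (the default here) the script only prints the commands, since the real ones need root on a snapper-managed system:

```shell
#!/bin/sh
# Dry-run sketch of the snapper/zypper flow suggested above.
# DRY_RUN=1 (default) only echoes the commands; set DRY_RUN=0 on a real
# snapper-managed system. The package "vim" is an arbitrary placeholder.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run snapper create --description "before load"   # explicit snapshot
run zypper --non-interactive install vim         # zypp plugin makes pre/post snapshots
run zypper --non-interactive remove vim
run snapper cleanup number                       # the cleanup path the test should exercise
```

Install/remove through zypper creates pre/post snapshot pairs automatically, so the test would exercise the same code paths a real system does.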
assert_script_run "for c in a b; do btrfs qgroup assign \$c 1/1 .; done";
assert_script_run "for c in b c d; do btrfs qgroup assign \$c 1/2 .; done";
assert_script_run "for c in 1/1 1/2; do btrfs qgroup assign \$c 2/1 .; done";
I would not mess with qgroups directly here but just trigger the actions indirectly, as above.
@alexandergraul Hi! Do you plan to look at this ever again?
Hi @OleksandrOrlov, no, I don't think I will give this another shot anytime soon.
This is very much WIP, but the idea is to write dummy files, create a new snapshot (so that two snapshots point to the dummy files), and then delete the original dummy files. This should create work for btrfs balance, which is launched while /proc/diskstats is monitored. If the queue in /proc/diskstats gets too big, the whole system has to wait, which disrupts the user.
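The monitoring half can be sketched without root: /proc/diskstats is world-readable, and the ninth stat field after the device name (column 12) is "I/Os currently in progress", i.e. the queue depth the test would watch. The device-name filter below is an assumption; extend it for other disk types:

```shell
#!/bin/sh
# Print the in-flight I/O count per whole disk from /proc/diskstats.
# Column 12 = "I/Os currently in progress" (the queue this test watches).
# The device-name pattern is an assumption, covering common disk names.
awk '$3 ~ /^(sd[a-z]+|vd[a-z]+|nvme[0-9]+n[0-9]+)$/ {
         printf "%s inflight=%s\n", $3, $12
     }
     END { printf "lines=%d\n", NR }' /proc/diskstats
```

Polling this in a loop while the dummy-file writes and btrfs balance run would show whether the queue grows large enough to stall the rest of the system.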