What happened:
JuiceFS keeps transferring blocks for IO operations that have been cancelled. During a fio read test of a 4 GB transfer, I cancelled the fio process at ~500 MB because it was way too slow. The JuiceFS process didn't react to the cancelled IO test and instead kept copying the block to the S3 endpoint.
What you expected to happen:
I expected JuiceFS to stop IO and reflect the most recent state. Instead, JuiceFS continued the file transfer, ignoring the cancelled IO request.
How to reproduce it (as minimally and precisely as possible):
juicefs format --compress none --force --access-key XXXXXXX --secret-key XXXXXXX --block-size 1024 --storage s3 --bucket=https://xxxxxxxxxx.s3.us-east-1.amazonaws.com REDIS-SERVER benchmark
juicefs mount --max-uploads=150 --io-retries=20 REDIS-SERVER /mnt/aws
fio --name=sequential-read --directory=/mnt/aws --rw=read --refill_buffers --bs=4M --size=4G
Anything else we need to know?:
The test was done on a Lenovo X1 (7th generation): 16 GB memory, i7-8665U 4-core processor, Ethernet to a Linux router with a 500 Mbit/s symmetric fiber-optic connection to Bell Canada.
Environment:
JuiceFS version (use ./juicefs --version): juicefs version 0.9.3-34 (2021-01-26 15db788)
Cloud provider or hardware configuration running JuiceFS:
OS (e.g: cat /etc/os-release):
NAME="Linux Mint"
VERSION="20 (Ulyana)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 20" VERSION_ID="20" UBUNTU_CODENAME=focal
@steven-varga This is a known issue: when the file is closed, ongoing requests are not cancelled. This is worse when you don't have much bandwidth. We will fix it.
davies changed the title from "JuiceFS keeps copying file which has been removed" to "JuiceFS does not cancel ongoing prefetch requests after file is closed" on Jan 28, 2021
Kernel (e.g. uname -a): Linux io 5.4.0-58-generic #64-Ubuntu SMP Wed Dec 9 08:16:25 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Object storage (cloud provider and region): AWS S3 us-east-1
Redis info (version, cloud provider managed or self maintained): Redis server v=5.0.7 sha=00000000:0 malloc=jemalloc-5.2.1 bits=64 build=636cde3b5c7a3923
Network connectivity (JuiceFS to Redis, JuiceFS to object storage): 500Mbit/sec fiber
Others: