Bug? Logarithmic slowdown for Encrypt-In-Place on large volumes #1063

Open
a-raccoon opened this issue May 15, 2023 · 8 comments

@a-raccoon

It seems there may be a bug in the Encrypt-in-place volume creation process. It will start off quite zippy, but slow down to a crawl near the end. The final 10% of the process will take 10 times as long as the first 90% took to encrypt.

Expected behavior

Encrypt-in-place should encrypt the entire drive at a pretty steady rate. There should be no dramatic slowdown.

Observed behavior

A dramatic slowdown is observed after 80% completion, and especially after 90% completion, with agonizing slowness on large volumes.

Suspected problem

I suspect there is an inefficient addressing scheme in how VeraCrypt seeks to the next sector address it intends to read and encrypt, possibly stepping through all earlier addresses from 0 upward. It may be sending millions of seek commands to arrive at the correct sector, going from 0 to the current position instead of just advancing in a forward march, before resetting to sector 0 again.

Steps to reproduce

Create a new volume using encrypt-in-place on a large drive, say 5 terabytes.

Your Environment


VeraCrypt version: 1.25.9 (64-bit)

Operating system and version: Windows 7 Pro

System type: 64-bit

@a-raccoon a-raccoon added the bug label May 15, 2023
@idrassi
Member

idrassi commented May 23, 2023

Thank you for this report.
Indeed, there is a big slowdown for large volumes when we get closer to the end, and this is because of the use of SetFilePointerEx to seek to the next sector from the beginning of the volume in the function EncryptPartitionInPlaceResume:
SetFilePointerEx (dev, offset, NULL, FILE_BEGIN)

This is indeed not efficient, but it was done like this to avoid any potential bugs, because the loop for in-place encryption involves other I/O operations (zeroing bad sectors and wiping sectors) that change the file pointer position, and the SetFilePointerEx call is the most reliable way to be sure that we are at the correct offset. Of course, we can implement logic that manually tracks file pointer changes in order to rewind only when necessary.
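
For illustration, here is a minimal C++ sketch (Win32 API) of the absolute-seek pattern described above; the sector size, buffer handling and function name are illustrative assumptions, not VeraCrypt's actual EncryptPartitionInPlaceResume code:

#include <windows.h>

static const DWORD SECTOR_SIZE = 512;   // assumed sector size, for illustration

// Encrypt one sector, seeking from the start of the volume before both
// the read and the write-back, as with FILE_BEGIN above.
bool EncryptSectorAbsoluteSeek (HANDLE dev, LONGLONG sectorIndex, BYTE *buf)
{
    LARGE_INTEGER offset;
    offset.QuadPart = sectorIndex * SECTOR_SIZE;

    // Absolute seek: position is computed from the beginning of the volume.
    if (!SetFilePointerEx (dev, offset, NULL, FILE_BEGIN))
        return false;

    DWORD bytesRead = 0;
    if (!ReadFile (dev, buf, SECTOR_SIZE, &bytesRead, NULL))
        return false;

    // ... encrypt buf in place ...

    // Seek back to the same sector, again from the beginning of the volume.
    if (!SetFilePointerEx (dev, offset, NULL, FILE_BEGIN))
        return false;

    DWORD bytesWritten = 0;
    return WriteFile (dev, buf, SECTOR_SIZE, &bytesWritten, NULL) != FALSE;
}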

I'm not sure yet how much complexity such a change will introduce. I will evaluate it and update this issue with the findings.

@a-raccoon
Author

Perhaps, since multi-pass wiping of magnetic media is a pretty obsolete thing to do on modern ultra-dense (and solid-state) storage, we could use the rewind method only when wiping is enabled, and otherwise use a steady forward march when wipe is set to 0 passes.

Seems I just damaged a <1 year old WD 5TB 2.5" USB by allowing it to encrypt-in-place for 5 days. Now it tick-tocks when plugged in.

@idrassi
Member

idrassi commented May 23, 2023

I agree with your approach. This should simplify things.
Concerning your damaged drive, this is surprising... it is worrying that continuous sector read/write operations for 5 days could cause such harm. After all, disks connected to storage servers handle such workloads for many months and years without issues.

idrassi added a commit that referenced this issue Jun 4, 2023
…Place on large volumes (relates to #1063)

We replace absolute file pointer moves with moves relative to the current position. This was implemented as a workaround to address the performance issues related to in-place encryption. When using SetFilePointerEx() with FILE_BEGIN as the reference point, reaching the end of large drives during in-place encryption can cause significant slowdowns. By moving the file pointer relatively, these performance issues are mitigated.
@idrassi
Member

idrassi commented Jun 4, 2023

I have implemented a possible workaround for this issue: 2246653

Since we need to write back the encrypted sector after reading it, we still need to move the file pointer back to the beginning of the sector, so there is still a call to SetFilePointerEx for every sector, but this time it moves the file pointer relative to the current position and not from the beginning.
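
For reference, a minimal C++ sketch (Win32 API) of the relative-seek approach described above; it is illustrative only and not the exact code from the commit, with sector size, buffer handling and function name assumed:

#include <windows.h>

static const DWORD SECTOR_SIZE = 512;   // assumed sector size, for illustration

// Encrypt the sector at the current file position, stepping the pointer
// back relative to where it is instead of seeking from the beginning.
bool EncryptNextSectorRelativeSeek (HANDLE dev, BYTE *buf)
{
    DWORD bytesRead = 0;
    // ReadFile advances the file pointer by SECTOR_SIZE.
    if (!ReadFile (dev, buf, SECTOR_SIZE, &bytesRead, NULL))
        return false;

    // ... encrypt buf in place ...

    // Relative seek: move back one sector from the current position.
    LARGE_INTEGER back;
    back.QuadPart = -(LONGLONG) SECTOR_SIZE;
    if (!SetFilePointerEx (dev, back, NULL, FILE_CURRENT))
        return false;

    DWORD bytesWritten = 0;
    // WriteFile advances the pointer again, leaving it at the next sector.
    return WriteFile (dev, buf, SECTOR_SIZE, &bytesWritten, NULL) != FALSE;
}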

I did atomic benchmarks of SetFilePointerEx calls on a 3 TB USB disk, but I didn't see any difference between relative and absolute file pointer moves. So it is possible that the slowdown comes from a combination of read/write operations intertwined with absolute file pointer moves.

I have prepared a build of VeraCrypt containing this fix, and it is available in the Nightly Builds folder on SourceForge:
https://sourceforge.net/projects/veracrypt/files/VeraCrypt%20Nightly%20Builds/Windows/

@a-raccoon
Author

a-raccoon commented Jun 4, 2023

Are there any optimizations to be had with buffer sizes? Now that computers have 12 to 128 gigs of RAM, can we buffer 64 gigs of data for encryption, or must it be done sector by sector?

Can you also benchmark against new-volume encryption on the same 3 TB setup, to determine whether Encrypt-In-Place is even viable, or whether it should instead be performed with 2 disks for data slushing?
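
To illustrate the buffer-size question, here is a hypothetical C++ sketch (Win32 API) of reading, encrypting and writing back many sectors per round trip instead of one at a time; the chunk size, names and error handling are assumptions and do not reflect how VeraCrypt is currently implemented:

#include <windows.h>

static const DWORD SECTOR_SIZE   = 512;                      // assumed
static const DWORD CHUNK_SECTORS = 2048;                     // 1 MiB per round trip
static const DWORD CHUNK_SIZE    = SECTOR_SIZE * CHUNK_SECTORS;

// Read, encrypt and write back CHUNK_SIZE bytes at a time instead of a
// single sector; buf must be at least CHUNK_SIZE bytes.
bool EncryptNextChunk (HANDLE dev, BYTE *buf)
{
    DWORD bytesRead = 0;
    if (!ReadFile (dev, buf, CHUNK_SIZE, &bytesRead, NULL) || bytesRead == 0)
        return false;

    // ... encrypt bytesRead bytes of buf in place ...

    // Rewind by exactly the amount read, relative to the current position.
    LARGE_INTEGER back;
    back.QuadPart = -(LONGLONG) bytesRead;
    if (!SetFilePointerEx (dev, back, NULL, FILE_CURRENT))
        return false;

    DWORD bytesWritten = 0;
    return WriteFile (dev, buf, bytesRead, &bytesWritten, NULL) != FALSE;
}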

@fzxx
Contributor

fzxx commented Jun 5, 2023

This seems to fix the issue with fast formatting (dynamic volumes) #388 #690 #786 #883

@xnoreq

xnoreq commented Jun 21, 2023

@fzxx That's not possible. If the change above sped up formatting, then that would mean there's a bug in these APIs and the file is no longer zeroed properly.

I wrote fixes for the broken implementation of /fastcreatefile here several months ago, but @idrassi has not commented on them.

I've also made suggestions on how to improve the feedback of progress to the user when quick formatting without /fastcreatefile - no response either.
Basically, all you'd need to do is seek through the file in, let's say, 100 MB chunks from the beginning, write some zero bytes, and update the progress bar. This would also make the operation cancelable by the user.
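
For what it's worth, a minimal C++ sketch (Win32 API) of that chunked approach, assuming a plain file handle, the 100 MB step suggested above, and a hypothetical UpdateProgress callback; this is not existing VeraCrypt code:

#include <windows.h>

static const LONGLONG STEP = 100LL * 1024 * 1024;   // 100 MB strides, as suggested

// Walk the file in STEP-sized strides, touching each position with a zero
// byte so allocation happens incrementally and progress can be reported.
bool ZeroFileInChunks (HANDLE file, LONGLONG fileSize,
                       void (*UpdateProgress) (LONGLONG done, LONGLONG total),
                       volatile bool *cancelRequested)
{
    BYTE zero = 0;

    for (LONGLONG pos = 0; pos < fileSize; pos += STEP)
    {
        if (*cancelRequested)
            return false;                    // user can cancel between chunks

        LARGE_INTEGER offset;
        offset.QuadPart = pos;
        if (!SetFilePointerEx (file, offset, NULL, FILE_BEGIN))
            return false;

        DWORD written = 0;
        if (!WriteFile (file, &zero, 1, &written, NULL))
            return false;

        UpdateProgress (pos, fileSize);      // progress advances per chunk
    }

    UpdateProgress (fileSize, fileSize);
    return true;
}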

Currently this does not happen, so the first write at the end of the file (which happens after the file is created and formatting starts) will block... potentially for days. The progress bar hangs in the meantime.

But this probably belongs in #690 rather than here.

@idrassi
Member

idrassi commented Jun 22, 2023

Thank you @xnoreq for this feedback.
I was not aware of your previous feedback, as I was on a long break from the project. I didn't see any PR related to your changes, and it is hard for me to go through all existing issues and check all comments.

Concerning the change implemented for the logarithmic slowdown of in-place encryption, it applies to partitions only, but it seems to have an effect on file containers as well. Even in the case of a disk partition, there should not be a difference between absolute and relative offset calls, but in reality there is a big difference.

Concerning your proposed changes: the error case of AdjustTokenPrivileges is handled correctly in other places in the code, but not in the SetPrivilege function. As for the missing privilege for /fastCreateFile, that is a good catch.
I have pushed commits based on these.

Concerning your chunk-based idea, I agree that it will make things better on the UX side. I will look into how to implement it efficiently.
