Question: Does the hashing time scale with RAM size? #235
It scales more-or-less linearly: the hashing time is (very) roughly proportional to the memory cost.

Also, note that when you allocate a large chunk of memory, the OS usually only maps the first couple of pages to actual physical memory. When you start writing beyond those pages, the CPU generates page faults and triggers the kernel's page-fault handler. This handler assigns more physical pages to the region you were writing to and switches back to your application. This happens many times in a row when you first write to a big, freshly allocated chunk of memory. For Argon2, it causes an additional slowdown in the first pass, so the actual scaling is not exactly linear when you try to measure it.

I once tried to curve-fit the dependency of hashing time on memory cost, but it didn't fit even simple polynomial curves, so I guess it is really not easy to predict precisely (and it will depend heavily on the system and configuration used).
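The roughly-linear scaling described above is easy to observe empirically. Argon2 is not in the Python standard library, so this sketch uses stdlib `hashlib.scrypt` (another memory-hard KDF) as a stand-in; all parameter values are illustrative, not recommendations:

```python
import hashlib
import time

def time_kdf(n: int) -> float:
    """Time one scrypt derivation; memory use is roughly 128 * r * n bytes."""
    start = time.perf_counter()
    hashlib.scrypt(b"correct horse", salt=b"0123456789abcdef",
                   n=n, r=8, p=1, maxmem=256 * 1024 * 1024, dklen=32)
    return time.perf_counter() - start

# Quadruple the memory cost and compare wall-clock time.
t_small = time_kdf(2**14)   # ~16 MiB
t_large = time_kdf(2**16)   # ~64 MiB
print(f"16 MiB: {t_small:.3f}s, 64 MiB: {t_large:.3f}s, "
      f"ratio: {t_large / t_small:.1f}x")
# The ratio is roughly 4x in theory; first-pass page faults and
# cache effects skew it somewhat, as noted above.
```

On most systems the measured ratio lands near, but not exactly at, the theoretical factor, matching the comment's observation that the scaling is only approximately linear.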
Using |
It won't. But |
A question on this issue: what would be better? More time by using a higher time parameter, or just going for more RAM (and having it take longer as a side effect)? Libsodium apparently gives argon2i a minimum time factor of 3, and I am honestly not sure whether that's a sensible choice when one could instead lower the time down to 1 and go for much more RAM instead. |
There are potential pitfalls internal to using fewer iterations. Changes
have been made since the original paper was published, but it was found that
shortcuts arise that an attacker can exploit if the number of iterations is
low, even less than 10 (I'd have to find the specific paper to see if I
have that number correct). Better to have even 4 or 5 iterations and then
adjust the memory needs to the server and estimated client capacity. The
typical goal is less than 2 seconds to keep from annoying the average user,
and no more than 5 for power users or high-sensitivity systems.
Okay, this is kind of sad. It would have been awesome if the time/iterations parameter were only helpful (for example on a system where you have more CPU than RAM) but not strictly needed, so that on systems with an overabundance of RAM you could just use a lot of RAM, which makes the time grow naturally enough already, without having to cope with any artificial time increases. For example, 1 GB of RAM at an iteration count of 1 (via PHP) already takes 1.5 s on my dev machine.

While a server may be more powerful, it also has to cope with multiple sign-ins at once. To avoid an accidental DoS I usually tune for around 0.1-0.5 s for one single login, so that the chance of way too many people logging in within that time window goes down, and even if they do, there won't be too many issues. |
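The concurrency concern above can be made concrete with a back-of-the-envelope calculation: memory-hard hashes consume the full memory cost per in-flight login, so the worst-case RAM bill is m_cost times the number of simultaneous logins. A quick sketch (all figures are illustrative, not recommendations):

```python
def peak_ram_mib(m_cost_mib: int, concurrent_logins: int) -> int:
    """Worst-case RAM consumed by password hashing alone, in MiB."""
    return m_cost_mib * concurrent_logins

def max_m_cost_mib(ram_budget_mib: int, concurrent_logins: int) -> int:
    """Largest per-hash memory cost that fits a fixed RAM budget."""
    return ram_budget_mib // concurrent_logins

# e.g. 64 MiB per hash with 20 overlapping logins:
print(peak_ram_mib(64, 20), "MiB")          # 1280 MiB

# Inverse: what m_cost fits in a 4 GiB budget with 20 overlapping logins?
print(max_m_cost_mib(4096, 20), "MiB per hash")  # 204 MiB per hash
```

This is why tuning a single hash to a fraction of a second, as described above, also bounds the memory pressure under a burst of logins.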
If you are using PHP's `password_hash()` function, I'd suggest using the `sodium_pwhash()` function instead; it is likely to be faster. |
Also bear in mind that the parameters must match between the two systems that
will be using the hash. If the server and client are using different
parameters, what they individually calculate, even for the same salt,
password, secret, and AD, will not match.
No client involved; the server does everything, and since the hash is stored in a format where all factors are known, this is not a problem for verification. |
Here's my advice:
tl;dr: IMO, Argon2i is too dangerous for most applications. It has had a couple of bad compromises already, and I suspect there will be more. If you want confidence that your password hash will still be viable two years from now, use Argon2id.

The 'd' part means password-dependent hashing, like Scrypt uses, which runs in the second half of the algorithm. The attacks we're seeing against Argon2i keep on coming, but they don't work against Argon2id. For some reason academics have a strong preference for the ideal over the practical, and Argon2i is more "ideal" in that it has far less chance of leaking information about your password through side channels like cache timing. From a practical point of view, however, Argon2i is still an academic exercise: it is not ready for prime time. You take a big risk by using it. Also, if you do choose Argon2i, choose t_cost >= 3 (reducing an ASIC attacker's cost advantage over the defender by >= 3X).

The only downside to Argon2id is that, like Scrypt, password-dependent side channels may be visible in the second half of execution. If you keep the salt secret, cache timing can only leak metadata such as "the same person just logged in again".

So: just use Argon2id, use t_cost = 1, and let it run for as long as your application can stand. It really is that simple to choose these parameters. |
For an example of @waywardgeek's advice: fscrypt has to search for optimal Argon2id hashing costs when a user initially creates …
Search function: |
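The kind of search fscrypt performs can be approximated by a simple calibration loop: fix the time cost, then raise the memory cost until one hash hits a target latency. A hedged sketch using stdlib `hashlib.scrypt` as a stand-in for Argon2id (the target time and parameter names here are illustrative, not fscrypt's actual code):

```python
import hashlib
import time

def measure(n: int) -> float:
    """Time one scrypt derivation at memory cost ~128 * 8 * n bytes."""
    start = time.perf_counter()
    hashlib.scrypt(b"password", salt=b"calibration-salt",
                   n=n, r=8, p=1, maxmem=1 << 30, dklen=32)
    return time.perf_counter() - start

def calibrate(target_seconds: float, n: int = 2**13, n_max: int = 2**19) -> int:
    """Double the memory cost until one derivation takes >= target_seconds."""
    while n < n_max and measure(n) < target_seconds:
        n *= 2
    return n

chosen = calibrate(0.1)
print(f"chosen n={chosen} (~{128 * 8 * chosen // 2**20} MiB)")
```

The real fscrypt search runs against Argon2id and also adjusts parallelism, but the principle is the same: measure on the actual deployment hardware rather than hard-coding costs.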
@waywardgeek But Argon2i already has that many problems? IIRC the main issue with Argon2i was time-memory trade-offs (TMTO), where one could use less memory by instead recomputing the needed blocks on demand, which raises the amount of computation by a lot. But if Argon2i really should use a time parameter of 3, I guess they should change the default (which IIRC was 2).

I know the basic differences between i, d, and id: d is data-dependent, which cuts down TMTO but exposes side channels; i is data-independent, meaning no side channels but weaker against TMTO; and id is a mix of both, lowering the risk of both problems. @josephlr |
For example: does hashing with 64 GB of RAM take 4 times as long as with 16 GB of RAM?
Thanks :)