Exception on multithreaded wasm #164
Some discussion on the matter: tc39/proposal-csprng#6

Allocating a buffer for each chunk via JS looks like a bit too much overhead to make this workaround the default option. But IIUC allocating a temporary buffer on the Rust side will not work either, since the compiler can eliminate "needless" copies. My first reaction is to suggest not using a shared array like this and instead performing the copy of generated random values in higher-level code, but I don't have a strong opinion on this matter. If it is decided to leave the code as-is, at the very least we should document this behavior.
@chemicstry, would you mind sharing your fork? I'm not sure where the code you've shared above should go, and I'm hitting this as well. I don't care about the performance overhead for my personal project.
@chemicstry I would like some more info on how this is happening. Looking at the […]. Perhaps there just shouldn't be a way to turn a […].
@newpavlov, I don't think overhead should be too big of a deal here (as users should be using a CSPRNG if performance matters). I'm just wondering if the following code would work:

```rust
// arr could also be on the stack
let mut arr = vec![0; 65536]; // 65536 is the per-call limit of getRandomValues
for chunk in dest.chunks_mut(65536) {
    let s = &mut arr[..chunk.len()];
    n.get_random_values(s); // Error checking ...
    chunk.copy_from_slice(s);
}
```

I don't think it would be legal to optimize away the fact that the buffer being passed to `get_random_values` is a separate allocation.
When building for the multithreaded wasm32-unknown-unknown target, all Rust memory (heap and stack(?)) is backed by a `SharedArrayBuffer`. Using […]

I'm currently on the phone, but I can give a more concrete example tomorrow. I will also test if it works with a stack-allocated slice.
Ok, that makes sense. I'm worried that this decision will break a bunch of stuff (not just us), given the restrictions on various APIs and their use of `SharedArrayBuffer`.
Sounds good, let me know what you find.
Yeah, I also initially thought about a similar approach, but was not sure whether it would be optimized out. BTW, we should probably use a stack-allocated buffer with a significantly reduced chunk size (e.g. 256 bytes) instead of the vector, since the main use-case for `getrandom` is generating small seeds.
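A sketch of that stack-buffer variant, assuming a web-sys `Crypto` handle is already in scope (`fill_via_stack_buffer` is an illustrative name, not code from the thread; note the next comment reports this still fails, because the wasm stack also lives in the shared linear memory):

```rust
use wasm_bindgen::JsValue;

// Stack-buffer variant: the temporary lives on the (wasm) stack rather
// than in a heap-allocated vector.
fn fill_via_stack_buffer(crypto: &web_sys::Crypto, dest: &mut [u8]) -> Result<(), JsValue> {
    let mut buf = [0u8; 256];
    for chunk in dest.chunks_mut(256) {
        let s = &mut buf[..chunk.len()];
        // web-sys binding for Crypto.getRandomValues taking a &mut [u8],
        // i.e. a view of wasm linear memory.
        crypto.get_random_values_with_u8_array(s)?;
        chunk.copy_from_slice(s);
    }
    Ok(())
}
```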
As expected, this does not work either. I tried with […].

You can't specify (at least not trivially) how memory in wasm is allocated, because that's just how wasm works. In the single-threaded case all memory is backed by an `ArrayBuffer`. The […]

@gregkatz here is a PoC fix that I have. It is based on […].
After skimming the WASM threads proposal (which is language agnostic), it makes sense why the stack has to be in a `SharedArrayBuffer`. Given these restrictions, I think the best thing to do is to allocate a 32-byte (or 256-byte) non-shared buffer when initializing […].

Note that Node doesn't seem to have these problems.
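One way to read that suggestion, sketched with js-sys (the hand-written `getRandomValues` import, the `thread_local!` reuse, and the 256-byte size are all assumptions, not the design that shipped):

```rust
use js_sys::Uint8Array;
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern "C" {
    // Hand-written import mirroring Crypto.getRandomValues (illustrative).
    type Crypto;
    #[wasm_bindgen(method, js_name = getRandomValues)]
    fn get_random_values(this: &Crypto, buf: &Uint8Array);
}

thread_local! {
    // One small non-shared JS-side buffer per thread, created on first use
    // and reused for every call, so nothing is allocated per call.
    static SCRATCH: Uint8Array = Uint8Array::new_with_length(256);
}

fn fill(crypto: &Crypto, dest: &mut [u8]) {
    SCRATCH.with(|scratch| {
        for chunk in dest.chunks_mut(256) {
            // A subarray view keeps the JS call length equal to the chunk length.
            let view = scratch.subarray(0, chunk.len() as u32);
            crypto.get_random_values(&view);
            view.copy_to(chunk);
        }
    });
}
```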
The ECMA […]
Thanks very much @chemicstry. Your fix worked for me.
It looks like #165 fixed this problem, closing. Backport for v0.1 (if it happens) can be tracked separately.
Solves rust-random#164 for the v0.1 branch

Signed-off-by: Joe Richey <joerichey@google.com>
I'm building a project (bevy) on multithreaded wasm via web workers and this library crashes with the exception:

```
Uncaught (in promise) TypeError: Failed to execute 'getRandomValues' on 'Crypto': The provided ArrayBufferView value must not be shared.
```

Indeed, the spec does not allow generating random numbers into memory backed by a `SharedArrayBuffer`. I managed to fix this issue by generating numbers into an intermediate buffer allocated by JavaScript:
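A minimal sketch of this approach, assuming a hand-written wasm-bindgen import for `crypto.getRandomValues` (the names here are illustrative, not the author's actual patch):

```rust
use js_sys::{global, Reflect, Uint8Array};
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;

#[wasm_bindgen]
extern "C" {
    // Hand-written import mirroring Crypto.getRandomValues (illustrative).
    type Crypto;
    #[wasm_bindgen(method, js_name = getRandomValues)]
    fn get_random_values(this: &Crypto, buf: &Uint8Array);
}

fn fill_via_js_buffer(dest: &mut [u8]) -> Result<(), JsValue> {
    // globalThis.crypto exists in both window and worker scopes.
    let crypto: Crypto = Reflect::get(&global(), &JsValue::from_str("crypto"))?.unchecked_into();
    // Browsers cap getRandomValues at 65536 bytes per call.
    for chunk in dest.chunks_mut(65536) {
        // The temporary lives in a plain (non-shared) ArrayBuffer on the JS
        // heap, so the "must not be shared" check passes...
        let tmp = Uint8Array::new_with_length(chunk.len() as u32);
        crypto.get_random_values(&tmp);
        // ...and the bytes are then copied into (possibly shared) wasm memory.
        tmp.copy_to(chunk);
    }
    Ok(())
}
```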
This introduces some overhead, but I'm not sure if there is a standard way to check whether Rust's memory is backed by a `SharedArrayBuffer`. I can submit a PR for this, so let me know how you want me to proceed.
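On detecting shared memory: one heuristic (an assumption, not a standard API) is to check the constructor name of the buffer behind `wasm_bindgen::memory()`:

```rust
use js_sys::{Object, Reflect};
use wasm_bindgen::JsValue;

// Heuristic: inspect the constructor name of WebAssembly.Memory.buffer.
// In a threaded build it is "SharedArrayBuffer", in a single-threaded
// build "ArrayBuffer".
fn memory_is_shared() -> bool {
    let buffer = Reflect::get(&wasm_bindgen::memory(), &JsValue::from_str("buffer"))
        .expect("wasm memory always has a `buffer` property");
    let name = Object::get_prototype_of(&buffer).constructor().name();
    String::from(name) == "SharedArrayBuffer"
}
```

An `instanceof SharedArrayBuffer` check would be the obvious alternative, but it can fail across realms and in browsers that hide the `SharedArrayBuffer` global, which is why the name comparison is sketched here.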