Writing to a tensor is 9 times slower with torch.Tensor compared to torch.Storage #474
I think this comes from the Lua loop overhead. If you replace the loop for the tensor by

```lua
data_view = data:view(-1)
for i = 1, data_view:nElement() do
  data_view[i] = 0
end
```

you should get similar timings.
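A minimal sketch of the suggested change, assuming a torch7 environment (the tensor name `data` and its size are hypothetical here):

```lua
-- Flatten the tensor once, then index the 1-D view directly,
-- so each step is a scalar write rather than multi-dim indexing.
local torch = require 'torch'

local data = torch.FloatTensor(1000, 1000)
local data_view = data:view(-1)       -- 1-D view over the same storage
for i = 1, data_view:nElement() do
  data_view[i] = 0                    -- scalar write, close to Storage cost
end
```

Because `view` shares the underlying storage, the writes are visible through `data` as well; only the per-iteration indexing work changes.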
This is expected behaviour. There are three things affecting performance:
Now, if you want super-fast memory access for CPU tensors, you can use the
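The sentence above is truncated in this copy of the thread; one option it plausibly refers to is the raw C pointer that torch7 exposes through LuaJIT's FFI. A hedged sketch, assuming the `:data()` accessor and a contiguous tensor:

```lua
-- Sketch only: assumes LuaJIT + torch7, where tensor:data() returns
-- a cdata pointer into the tensor's storage (0-indexed).
local torch = require 'torch'

local t = torch.FloatTensor(1000000)
assert(t:isContiguous())              -- the pointer walk below needs this
local ptr = t:data()                  -- cdata<float*>
for i = 0, t:nElement() - 1 do
  ptr[i] = 0                          -- no Lua-side metamethod per element
end
-- Keep `t` referenced while `ptr` is in use: the garbage collector
-- does not know about the raw pointer.
```

The lifetime caveat in the last comment appears to be the same point raised in the remark that follows.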
@ivankreso just one remark, if you are using the C array from

@fmassa I just saw it in the docs, thanks.
Is this an expected performance difference?
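The comparison in question can be sketched as follows (a hypothetical reconstruction; the tensor size and variable names are assumed, not taken from the original report):

```lua
-- Time element-wise writes through a Tensor vs. its Storage.
local torch = require 'torch'

local n = 5000000
local t = torch.FloatTensor(n)
local s = t:storage()                 -- same memory, plain 1-D indexing

local timer = torch.Timer()
for i = 1, n do t[i] = 0 end          -- goes through Tensor indexing metamethods
print(('tensor  loop: %.3f s'):format(timer:time().real))

timer:reset()
for i = 1, n do s[i] = 0 end          -- cheaper Storage indexing
print(('storage loop: %.3f s'):format(timer:time().real))
```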
Output: