dist() works on 10k X 10k, but not on subset: 10K X 100 #131
@TiMaG Could you try the latest develop version? devtools::install_github('cdeterman/gpuR', ref = 'develop') There have been some changes there, and I have added some additional comments that will spit out during execution.
@cdeterman: Thank you for the reply. It works for dim = 1000 with the develop version, but not for dim >= 1500. Thanks a lot!
@TiMaG Sorry for my delayed response, and thanks for testing again. Did you see any of the comments that the current develop version prints? Regarding consulting, I am possibly open, but we should take such conversations off-list to my email at cdetermanjr@gmail.com
I wasn't sure whether it would be easier to combine this with the "consulting" and screen sharing, but to keep things moving I just went ahead. Concerning your question: no, I don't see any messages like that.
@TiMaG Ah, that was because I only put the comments in the …

```r
library(gpuR)

ndim <- 5000
AA <- gpuMatrix(rnorm(ndim * 100), nrow = ndim, ncol = 100, type = "double")
BB <- dist(AA)
```
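As a side note on what the reproduction above computes: each entry of dist() is the Euclidean distance between two rows of the matrix. A minimal base-R check (no GPU required) that any accelerated result can be compared against, using a hypothetical helper `row_dist` introduced here purely for illustration:

```r
# Euclidean distance between rows i and j of X, computed directly,
# versus stats::dist() on the same data.
row_dist <- function(X, i, j) sqrt(sum((X[i, ] - X[j, ])^2))

set.seed(1)
X <- matrix(rnorm(50 * 10), nrow = 50, ncol = 10)

# stats::dist() returns a lower-triangle "dist" object;
# as.matrix() expands it to the full 50 x 50 symmetric matrix.
D <- as.matrix(dist(X))

all.equal(D[2, 7], row_dist(X, 2, 7))  # TRUE
```

The same comparison against a gpuMatrix result (converted back with as.matrix()) is one way to confirm correctness at sizes where the GPU path succeeds.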
@cdeterman No problem at all.
@TiMaG I'm sorry to have let this issue just hang. Has this been a problem with any of the other gpuMatrix functions, or just with dist()?
@dselivanov have you encountered this error/bug before with any of the NVIDIA cards you have used?
Don't remember. I just got a 1070 Ti, so potentially I can test. The issue is that I don't have a properly set up environment and lack any spare time... I will try my best to help with testing but can't commit to any time estimate.
Mon, Oct 1, 2018, 20:37 Charles Determan <notifications@github.com>:
@dselivanov Fair enough; I just thought I would ask, as you are the most active NVIDIA user that I am aware of.
…to be working normally locally on an Intel GPU.
I did discover a general issue whenever the dimensions exceeded 128 (the internal padded size), but I believe I have fixed the problem now. Hopefully this latest commit addresses the original issue reported by @TiMaG.
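For context, the padding mentioned above can be sketched in plain R. This is an illustration only: 128 is taken as the assumed internal tile size, and `pad_dim`/`pad_matrix` are hypothetical helpers, not gpuR functions.

```r
# Round a dimension up to the next multiple of the tile size.
pad_dim <- function(n, tile = 128) as.integer(ceiling(n / tile) * tile)

# Embed A in a zero-filled matrix whose dimensions are multiples of tile.
pad_matrix <- function(A, tile = 128) {
  out <- matrix(0, pad_dim(nrow(A), tile), pad_dim(ncol(A), tile))
  out[seq_len(nrow(A)), seq_len(ncol(A))] <- A
  out
}

A <- matrix(rnorm(150 * 100), 150, 100)
P <- pad_matrix(A)
dim(P)  # 256 128
```

A bug in how the padded region is handled would only surface once a dimension crosses the tile boundary, which is consistent with failures appearing above a size threshold.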
Hi @cdeterman, sorry for the late response. ndim <- 15000 works now. I also tried 50000; it failed then, but I think that is more due to a hardware limit (I used a p2.xlarge instance on AWS).
@TiMaG glad to hear it is working now. Yes, I suspect it would be a hardware limit. I will close this issue now unless another issue arises. Sorry it took so long to resolve. |
Hi,
thanks in advance for any info; any feedback would be very much appreciated.
The overall goal is to work with matrices of up to 50k x 300; 10k x 100 would already be nice.
```
Number of platforms: 1
checked all devices
completed initialization
gpuR 2.0.0

Attaching package: ‘gpuR’

The following objects are masked from ‘package:base’:

$deviceVendor
[1] "NVIDIA Corporation"

$numberOfCores
[1] 13

$maxWorkGroupSize
[1] 1024

$maxWorkItemDim
[1] 3

$maxWorkItemSizes
[1] 1024 1024   64

$deviceMemory
[1] 11995578368

$clockFreq
[1] 823

$localMem
[1] 49152

$maxAllocatableMem
[1] 2998894592

$available
[1] "yes"

$deviceExtensions
 [1] "cl_khr_global_int32_base_atomics"
 [2] "cl_khr_global_int32_extended_atomics"
 [3] "cl_khr_local_int32_base_atomics"
 [4] "cl_khr_local_int32_extended_atomics"
 [5] "cl_khr_fp64"
 [6] "cl_khr_byte_addressable_store"
 [7] "cl_khr_icd"
 [8] "cl_khr_gl_sharing"
 [9] "cl_nv_compiler_options"
[10] "cl_nv_device_attribute_query"
[11] "cl_nv_pragma_unroll"
[12] "cl_nv_copy_opts"

$double_support
[1] TRUE
```
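As a rough sanity check against the limits reported above: an n x n double-precision distance matrix needs n^2 * 8 bytes, which can be compared with the device's maxAllocatableMem and deviceMemory. This is a back-of-the-envelope sketch, not gpuR's actual allocation behavior:

```r
# Bytes needed to hold an n x n matrix of doubles (8 bytes each).
bytes_needed <- function(n) n^2 * 8

bytes_needed(10000) / 2^30  # ~0.75 GiB: under maxAllocatableMem (~2.79 GiB)
bytes_needed(50000) / 2^30  # ~18.6 GiB: over total deviceMemory (~11.2 GiB)
```

This is consistent with 50k x 50k failing for hardware reasons while 10k-scale results fit on the device.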