GPU required specifications #50
I saw the list: http://mcx.space/gpubench/

Estimating the total memory needed by an MCX simulation is straightforward. Your script requests a volume of 500x500x1500 voxels and a total of 10 time gates, and each voxel uses a 4-byte single-precision number to store fluence. Converting this to bytes, you need 500x500x1500x10x4/1024^3 ≈ 14 GB of global memory. In addition, MCX needs twice that size internally to improve accuracy, so your simulation requires 14x2 = 28 GB of GPU memory. If you look at all the GPUs on the market, you can almost never find one with that much memory, except the super-expensive V100. So your simulation is simply not realistic. Even if you could find a GPU with such a large memory, producing a fluence map in which every voxel has decent SNR would require a huge number of photons, and the runtime would also be unrealistic. Think about it this way: if you run a simulation on a 100x100x100 grid and then refine it to 200x200x200, you will need 64 times the runtime to produce a simulation with comparable SNR: an 8-fold increase from the larger number of voxels, and another 8-fold from launching 8x more photons to reach the same per-voxel SNR. My suggestion is always: start with a small domain, run a small number of photons, and see if your solution is acceptable; then gradually increase the photon number or the domain size. Do not go the other way around.
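The arithmetic above can be sketched as a small Python helper. Note this is a hypothetical illustration, not part of MCX itself; the function name and the doubling factor follow the explanation in the comment, not any MCX API:

```python
def mcx_fluence_memory_gb(nx, ny, nz, time_gates, bytes_per_voxel=4):
    """Estimate GPU memory (in GB) for an MCX fluence simulation.

    One 4-byte single-precision fluence value is stored per voxel per
    time gate; per the comment above, MCX internally keeps a second
    copy of that buffer to improve accuracy, doubling the requirement.
    """
    single = nx * ny * nz * time_gates * bytes_per_voxel / 1024**3
    return single, 2 * single

single, total = mcx_fluence_memory_gb(500, 500, 1500, 10)
print(f"fluence buffer: {single:.1f} GB, total needed: {total:.1f} GB")
# fluence buffer: 14.0 GB, total needed: 27.9 GB

# Runtime scaling when refining a 100^3 grid to 200^3 at comparable SNR:
voxel_factor = (200 / 100) ** 3           # 8x more voxels to fill
photon_factor = voxel_factor              # 8x more photons for same SNR
print(voxel_factor * photon_factor)       # 64.0
```

Doubling the grid resolution in all three dimensions therefore costs roughly 64x the runtime, which is why growing the domain gradually is the practical approach.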
Hey, I tried to run this code:
And I got this warning and error.
I know my GPU is not that strong (a GeForce 930MX), and I am willing to upgrade it, but what are the required GPU specifications to run simulations like this properly?