AMGX currently provides an estimate of GPU memory usage. This estimate covers the memory used by all processes on the device, obtained via the cudaMemGetInfo function.
It would be more useful to report only the memory used by the AMGX process itself.
One way to do this: record the memory-usage estimate at launch and subtract it from each "allocated = total - free" computation in the updateMaxMemoryUsage function.
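A minimal sketch of the baseline-subtraction idea described above. The struct and method names here are hypothetical (they are not AMGX's actual API); in a real build, total and free would come from cudaMemGetInfo, but they are passed as parameters so the bookkeeping can be shown in isolation:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical tracker: snapshot the device-wide usage at AMGX launch,
// then attribute only the growth beyond that baseline to AMGX.
struct MemoryTracker {
    size_t baseline_allocated = 0;  // usage by other processes at startup
    size_t max_usage = 0;           // peak AMGX-attributable usage

    // Call once at library initialisation with a cudaMemGetInfo sample.
    void init(size_t total, size_t free_bytes) {
        baseline_allocated = total - free_bytes;
    }

    // Analogue of updateMaxMemoryUsage: subtract the startup baseline
    // from each "allocated = total - free" sample.
    void update(size_t total, size_t free_bytes) {
        size_t allocated = total - free_bytes;
        size_t amgx_usage = allocated > baseline_allocated
                                ? allocated - baseline_allocated
                                : 0;
        max_usage = std::max(max_usage, amgx_usage);
    }
};
```

For example, if 2 GiB were already in use at launch and a later sample shows 5 GiB allocated, the tracker attributes 3 GiB to AMGX.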
Another application may change its memory usage during runtime, so saving the memory usage at AMGX startup would not be a reliable solution. However, since AMGX uses a caching allocator, the total memory usage can be obtained by a reduction over the blocks of memory allocated from that allocator.
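The allocator-side approach can be sketched as follows. This is not AMGX's actual caching allocator; it is a hypothetical stand-in showing how tracking live blocks makes per-process usage a simple reduction, independent of what other processes do on the device:

```cpp
#include <cstddef>
#include <map>
#include <numeric>

// Hypothetical caching allocator that records every live block it hands
// out, so the memory attributable to this process is just the sum of the
// recorded block sizes -- no device-wide cudaMemGetInfo query needed.
class CachingAllocator {
    std::map<void*, size_t> live_blocks_;  // ptr -> size of each allocation
public:
    void record_alloc(void* p, size_t bytes) { live_blocks_[p] = bytes; }
    void record_free(void* p) { live_blocks_.erase(p); }

    // Reduction over all blocks currently held by this allocator only.
    size_t bytes_in_use() const {
        return std::accumulate(
            live_blocks_.begin(), live_blocks_.end(), size_t{0},
            [](size_t acc, const auto& kv) { return acc + kv.second; });
    }
};
```

Because only the allocator's own bookkeeping is consulted, the result is unaffected by other applications allocating or freeing GPU memory concurrently.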