I am using the latest NVIDIA PyTorch Docker image, which supports CUDA 12.
I compiled the CUDA 11.8 build of the bit library myself, since the code requires bitxxx_cuda118.so.
Tested on the 7B model: works fine.
The 13B model hits CUDA out of memory, falling short by roughly 1-2 GB:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 68.00 MiB (GPU 0; 23.65 GiB total capacity; 22.68 GiB already allocated; 41.31 MiB free; 23.14 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
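The error message itself suggests tuning max_split_size_mb. A minimal sketch of setting that allocator option from Python, assuming the environment variable is exported before the first CUDA allocation; the 128 MiB value is an illustrative guess, not a tested setting:

```python
import os

# Must be set before PyTorch initializes its CUDA caching allocator.
# 128 MiB is an example value only; it reduces fragmentation at some
# cost in allocation throughput.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

x = torch.empty(1024, 1024, device="cuda")  # first CUDA allocation picks up the config
print(torch.cuda.memory_summary())          # compare reserved vs. allocated memory
```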
There is no host-side OOM; the machine has 64 GB of system RAM installed.
I doubt whether an RTX 4090 can actually run the 13B model.
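For scale: 13B parameters in fp16 need about 13e9 x 2 bytes, roughly 26 GB for the weights alone, which already exceeds the 4090's 24 GB, so a 1-2 GB shortfall is expected. Assuming the "bit lib" above is bitsandbytes, here is a hedged sketch of loading the model with 8-bit quantization (roughly 13 GB of weights) via transformers; the checkpoint name is a placeholder, not the one used in this thread:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "decapoda-research/llama-13b-hf"  # placeholder; substitute your 13B checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)

# load_in_8bit quantizes the weights with bitsandbytes, roughly halving
# memory versus fp16 (~13 GB instead of ~26 GB for 13B parameters).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto",
)
```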
Please share more detailed information about your device.