When I try to run the detection on one day of continuous data, I get the following error:
Traceback (most recent call last):
File "deployment2.py", line 87, in
pick_results = pick(waveform, 100, 20, model, transform, 0.1, batch_size=1000)
...
RuntimeError: CUDA out of memory. Tried to allocate 67.46 GiB (GPU 0; 31.72 GiB total capacity; 12.07 GiB already allocated; 18.40 GiB free; 12.07 GiB reserved in total by PyTorch)
Is there any way to reduce the required memory?
Thank you!
Chenyu
@chenyuli1992 Can you share the code you used for detection? It seems that your data volume is larger than 32 GB (the GPU RAM capacity). I would guess that you fed the entire continuous data into the CPIC detector at once. If so, you can easily reduce the memory cost by breaking it into small pieces, say one-hour segments. If you have trouble writing the code to split the continuous signal, send me an example data piece and I will write a sample code for you.
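A minimal sketch of the chunking idea, assuming the waveform is a NumPy array with time along the last axis; `pick_in_chunks` and the `picker` callable are hypothetical helpers, not part of the project's API (the real `pick` signature from the traceback would be wrapped in a lambda):

```python
import numpy as np

def pick_in_chunks(waveform, picker, samples_per_chunk, overlap=0):
    """Run `picker` on fixed-length chunks of a long continuous waveform.

    A small `overlap` between neighboring chunks avoids missing picks
    at chunk boundaries. Returns a list of (start_sample, picker_result).
    """
    results = []
    step = samples_per_chunk - overlap
    n = waveform.shape[-1]
    for start in range(0, n, step):
        chunk = waveform[..., start:start + samples_per_chunk]
        results.append((start, picker(chunk)))
        if start + samples_per_chunk >= n:  # last chunk reached the end
            break
    return results

# Example: one day at 100 Hz split into one-hour segments
# (picker here is a stand-in; in practice it would call the CPIC pick()).
day = np.zeros((3, 100 * 86400), dtype=np.float32)  # 3 channels, 24 h
hour = 100 * 3600
segments = pick_in_chunks(day, lambda c: c.shape[-1], hour)
print(len(segments))  # 24 one-hour segments
```

Each call then only moves one hour of data to the GPU, so peak memory scales with the chunk size rather than the full recording length.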
@chenyuli1992 This reminds me that we should have a sample code built-in for people who need to do deployment on continuous data. I will look into @Lchuang 's code and see where to put it.