CUDAOutOfMemoryError on A100 #5
Hi @vdejager - can you try reducing the batch size? There is no model we suggest in particular - the models have an accuracy/speed tradeoff (the SUP model is the most accurate and the Fast model is the fastest).
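The batch-size advice above follows a common pattern for working around CUDA out-of-memory errors: retry with a halved batch until the run fits. A minimal self-contained sketch of that backoff loop (the `run_batch` function and its memory budget are made up for illustration and stand in for the real basecalling call):

```python
def run_batch(batch_size, memory_limit=512):
    """Stand-in for a basecalling call: raises like a CUDA OOM
    when the batch exceeds a hypothetical memory budget."""
    if batch_size * 4 > memory_limit:  # 4 memory "units" per read, made up
        raise RuntimeError("CUDA out of memory")
    return f"basecalled {batch_size} reads"

def basecall_with_backoff(batch_size):
    """Halve the batch size on OOM until the run fits."""
    while batch_size >= 1:
        try:
            return batch_size, run_batch(batch_size)
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise  # unrelated error: do not swallow it
            batch_size //= 2  # retry with half the batch
    raise RuntimeError("even batch size 1 does not fit")

size, result = basecall_with_backoff(1024)
print(size, result)  # settles on the largest batch that fits
```

The same halving strategy applies whether the batch size is set on the command line or chosen automatically; smaller batches trade throughput for a lower peak memory footprint.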
I'm running it now with the dna_r9.4.1_e8.1_sup@v3.3 model. This seems to work on a small fast5 file. I'm going to test it on a bigger dataset to see how it goes.
Thanks @vdejager - I am closing this issue. When we have updates on the memory footprint, we will note them in the release notes.
I managed to compile on RedHat/CentOS8, but I'm getting errors with the 'sup' models:
dna_r9.4.1_e8.1_sup@v3.3 and dna_r9.4.1_e8_sup@v3.3
Data is from a: FLO-MIN106 SQK-DCS109 dna_r9.4.1_450bps_hac
An amplicon run, so no prior info other than that it should contain 16S sequences.
The following models work fine: dna_r9.4.1_e8_hac@v3.3, dna_r9.4.1_e8.1_hac@v3.3, dna_r9.4.1_e8_fast@v3.4 and dna_r9.4.1_e8.1_fast@v3.4.
However, which model would you suggest using, and what would be the best way to compare them against the Bonito and Guppy output?
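One common way to compare basecallers (e.g. the SUP/HAC/Fast models here against Bonito and Guppy) is to align each tool's reads to a reference and compare per-read identities. A minimal sketch of the identity calculation from a single SAM record, using the NM tag (edit distance) over the aligned length from the CIGAR string; the toy record below is illustrative, not from this thread:

```python
import re

def identity_from_sam_line(line):
    """BLAST-style identity for one SAM alignment line:
    1 - edit_distance / aligned_columns."""
    fields = line.rstrip("\n").split("\t")
    cigar = fields[5]
    # NM:i:<n> is the edit distance reported by aligners such as minimap2
    nm = next(int(f.split(":")[2]) for f in fields[11:] if f.startswith("NM:i:"))
    # aligned columns = matches/mismatches + insertions + deletions
    aln_len = sum(int(n) for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)
                  if op in "MID=X")
    return 1.0 - nm / aln_len

# toy alignment: 100 aligned bases, edit distance 5 -> 95% identity
sam = "read1\t0\tref\t1\t60\t100M\t*\t0\t0\t" + "A" * 100 + "\t*\tNM:i:5"
print(round(identity_from_sam_line(sam), 2))  # -> 0.95
```

Running each basecaller's FASTQ through the same aligner (minimap2 with the map-ont preset is typical for nanopore reads) and comparing the identity distributions gives a like-for-like accuracy comparison across models.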
error below:
System:
ThinkSystem SD650-N v2
Intel Xeon Platinum 8360Y (2x), 36 cores/socket, 2.4 GHz (Speed Select SKU), 250 W
NVIDIA A100 (4x), 40 GiB HBM2 memory with 5 active memory stacks per GPU
16x 32 GiB, 3200 MHz, DDR4 (512 GiB total); 160 GiB HBM2 across the 4 GPUs
2x HDR100 ConnectX-6 single port, 2x 25GbE SFP28 LOM, 1x 1GbE RJ45 LOM