I followed the README and ran the conversion command. Console output:

```
Loaded detection model vikp/surya_det2 on device cuda with dtype torch.float16
Loaded detection model vikp/surya_layout2 on device cuda with dtype torch.float16
Loaded reading order model vikp/surya_order on device cuda with dtype torch.float16
Loaded recognition model vikp/surya_rec on device cuda with dtype torch.float16
Loaded texify model to cuda with torch.float16 dtype
Converting 80 pdfs in chunk 1/1 with 8 processes, and storing in ./markdowns_output
Processing PDFs: 0%| | 0/80 [00:00<?, ?pdf/s]
```

When I run `nvidia-smi`, only GPU 0 is utilized (99%). The other seven GPUs show just 3 MiB of memory usage, with no utilization and no processes attached to them.
I also looked at #136 and tried `marker_chunk_convert`, but it does not work either.
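For reference, this is the kind of invocation I would expect to spread work across GPUs, based on my reading of the README: `marker_chunk_convert` is supposed to honor the `NUM_DEVICES` and `NUM_WORKERS` environment variables. The folder paths below are placeholders for my setup; the exact variable names may differ between marker versions, so treat this as a sketch of what I tried rather than a confirmed-working command.

```shell
# Attempted multi-GPU run: split the PDF folder across 8 GPUs,
# with multiple worker processes per GPU (paths are placeholders).
NUM_DEVICES=8 NUM_WORKERS=8 marker_chunk_convert ./pdfs_input ./markdowns_output
```

Even with this, `nvidia-smi` still shows only GPU 0 doing any work.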