
Google Colab support #13

Closed
filippo82 opened this issue Jan 5, 2024 · 8 comments

@filippo82

Hi there,

Do you have any ideas/clues about the issue with Google Colab? If so, I could look into it.

@bclavie
Owner

bclavie commented Jan 5, 2024

Hey Filippo! Also tagging @okhat as it's something we've been discussing and that we want to fix. The issue seems to be down to us using multiprocessing/forking, which hangs forever in certain environments.

The fix, or workaround, would be to simply not fork when there is a single GPU, which would cover most of the situations where the hanging is noticeable (assuming it's very uncommon to train on multiple GPUs in Colab or other non-Jupyter notebook environments). I think upstreaming it to the main ColBERT repo would be ideal: https://github.com/stanford-futuredata/ColBERT. You can check out any of the main files (Indexer.py, Trainer.py, etc.) to get an idea of how the process pool is handled at the moment.
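
For reference, a minimal sketch of what that could look like (the `launch` and `training_loop` helpers here are hypothetical stand-ins, not the actual ColBERT code):

```python
import torch
import torch.multiprocessing as mp

def training_loop(rank, config):
    # Hypothetical per-process training entry point.
    ...

def launch(config):
    n_gpus = torch.cuda.device_count()
    if n_gpus <= 1:
        # Single GPU (or CPU): run directly in the current process.
        # No fork/spawn, which avoids the hang seen in Colab and other
        # notebook environments.
        training_loop(0, config)
        return
    # Multiple GPUs: spawn one worker per device, as the process pool does now.
    mp.spawn(training_loop, args=(config,), nprocs=n_gpus, join=True)
```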

@filippo82
Author

Would you have a sample notebook I could use to test the issue? Not that I'm an expert in this, but I could at least have a look.

@bclavie
Owner

bclavie commented Jan 6, 2024

The example training notebook is a good one! If you run it on Colab, it will hang once you get to the train() call (when the processes fork). The goal would be that unless n_gpus is explicitly >1, we shouldn't be using multiprocessing, which will eliminate the problem.

bclavie added the enhancement (New feature or request) label Jan 6, 2024
@filippo82
Author

filippo82 commented Jan 7, 2024

A quick update after an initial run of the example training notebook:

  • I uploaded the 02-basic_training.ipynb to Google Colab
  • I selected a T4 GPU
  • I installed RAGatouille with pip install ragatouille
  • I restarted the session

The notebook does NOT hang while executing the cell with train(), but it completes after about 20 seconds with this output:

#> Starting...
#> Joined...

Looking inside the .ragatouille folder, I see these files:

%ls -altrh .ragatouille/colbert/none/2024-01/07/22.57.44/checkpoints/colbert

total 419M
drwxr-xr-x 3 root root 4.0K Jan  7 22:58 ../
drwxr-xr-x 2 root root 4.0K Jan  7 22:59 ./
-rw-r--r-- 1 root root  664 Jan  7 23:20 config.json
-rw-r--r-- 1 root root 419M Jan  7 23:20 model.safetensors
-rw-r--r-- 1 root root 1.2K Jan  7 23:20 tokenizer_config.json
-rw-r--r-- 1 root root  695 Jan  7 23:20 special_tokens_map.json
-rw-r--r-- 1 root root 227K Jan  7 23:20 vocab.txt
-rw-r--r-- 1 root root 695K Jan  7 23:20 tokenizer.json
-rw-r--r-- 1 root root 1.6K Jan  7 23:20 artifact.metadata

This seems to be correct, right?

@filippo82
Author

@bclavie let me know if I am missing anything ... it seems to be working fine for me with a T4 GPU on Google Colab.

@bclavie
Owner

bclavie commented Jan 9, 2024

Hey @filippo82, this is strange, I'm still getting hangs 🤔. I'll try to have a deeper look at some point to figure out exactly what's causing them!

#> Starting...
#> Joined...

This output is also quite strange, as I'd expect to see the normal main-process prints (i.e. the training config and the loss at each training step).

Out of curiosity, does Indexing run fine on your end too?

@filippo82
Author

Hi @bclavie, if you see the #> Joined... message, it means that cell has finished executing. You can find the additional logs by clicking on Runtime -> View runtime logs:

(Screenshot: Colab runtime logs, 2024-01-09 at 15:44)

@bclavie
Owner

bclavie commented Jan 9, 2024

Oh, I didn't realise the output you shared was truncated; I thought that was the whole output!

Colab still hangs when I try it, and it still hangs on Windows (though that's a less important problem), so there's something fishy going on 🤔. I'll try to dig in and figure it out.

bclavie closed this as completed Jan 27, 2024