
README Indexing fails on two GPUs #17

Closed
bclavie opened this issue Jan 6, 2024 · 13 comments
Labels
bug Something isn't working

Comments


bclavie commented Jan 6, 2024

I'm not sure the problem is related to Colab; I also get an error using Jupyter locally on my Ubuntu server.
The basic README example doesn't work and the cell never finishes executing.

Here's the code and stacktrace if that helps:

from ragatouille import RAGPretrainedModel

RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
my_documents = [
    "This is a great excerpt from my wealth of documents",
    "Once upon a time, there was a great document"
]

index_path = RAG.index(index_name="my_index", collection=my_documents)

This outputs the following:

[Jan 06, 10:41:35] #> Creating directory .ragatouille/colbert/indexes/my_index 


#> Starting...
#> Starting...
nranks = 2 	 num_gpus = 2 	 device=1
[Jan 06, 10:41:38] [1] 		 #> Encoding 0 passages..
nranks = 2 	 num_gpus = 2 	 device=0
[Jan 06, 10:41:38] [0] 		 #> Encoding 2 passages..
 File "/home/np/miniconda3/envs/np-ml/lib/python3.10/site-packages/colbert/indexing/collection_indexer.py", line 101, in setup
    avg_doclen_est = self._sample_embeddings(sampled_pids)
  File "/home/np/miniconda3/envs/np-ml/lib/python3.10/site-packages/colbert/indexing/collection_indexer.py", line 140, in _sample_embeddings
    self.num_sample_embs = torch.tensor([local_sample_embs.size(0)]).cuda()
AttributeError: 'NoneType' object has no attribute 'size'

Originally posted by @timothepearce in #14 (comment)


bclavie commented Jan 6, 2024

Hey @timothepearce, I've created the issue here!

I think this is what's going on:

The README examples are too short. I'll update them shortly to make sure the doc collections are big enough.

I spy in your trace that you're using 2 GPUs (num_gpus = 2). The embedding sample ends up as a NoneType object most likely because upstream ColBERT tries to split the document collection into batches for both GPUs, and with only two documents one rank ends up with an empty batch.

Does it work if you use more examples?
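To make the suspected failure mode concrete, here is a minimal sketch (this is an illustration, not ColBERT's actual sharding code): a hypothetical chunked split of a tiny collection across two ranks, where the second rank receives an empty shard and any code expecting its embeddings gets nothing back.

```python
# Illustration of the suspected failure mode (not ColBERT's actual code):
# sharding a tiny collection across ranks can leave a rank with an empty
# shard, so its sampled embeddings end up as None downstream.
import math

def shard(collection, nranks, min_chunk=2):
    # Hypothetical chunking rule: each rank takes a contiguous slice of at
    # least `min_chunk` documents, so late ranks can receive nothing.
    chunksize = max(min_chunk, math.ceil(len(collection) / nranks))
    return [collection[i * chunksize:(i + 1) * chunksize] for i in range(nranks)]

docs = [
    "This is a great excerpt from my wealth of documents",
    "Once upon a time, there was a great document",
]

shards = shard(docs, nranks=2)
# Rank 0 gets both documents; rank 1 gets an empty shard, matching the
# "[1] #> Encoding 0 passages.." line in the trace above.
```

With a larger collection, every rank gets a non-empty shard, which is why adding more documents should sidestep the crash.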

bclavie added the bug label on Jan 6, 2024

bclavie commented Jan 6, 2024

I've just merged #18 and pushed a fixed version to PyPI (adding the Wikipedia page fetcher); the README example should be a lot more functional now!

timothepearce commented Jan 6, 2024

That was quick! I was inspecting the source code while you were fixing it. Nice job!

I'm struggling with another issue (not related to your package), but I'll keep you informed.


bclavie commented Jan 6, 2024

Thanks, glad I could fix it for you!

bclavie closed this as completed on Jan 6, 2024

bclavie commented Jan 6, 2024

I'm struggling with another issue (not related to your package), but I'll keep you informed.

Oh sorry, I glossed over that -- let me know if it's something I can assist with!

bclavie reopened this on Jan 6, 2024
@timothepearce

@bclavie, the 0.0.2b version isn't available on PyPI, but the code works; I tested it by cloning the repo instead.


bclavie commented Jan 6, 2024

My bad, it seems poetry silently crashed during publish... It's live on PyPI now!

@timothepearce

@bclavie not a bug, but while running some benchmarks I indexed 1000 documents and noticed that the library currently only uses one GPU at a time, yet loads the embedding model on both devices.

[Jan 06, 15:23:17] #> Creating directory .ragatouille/colbert/indexes/presentation_1000 

#> Starting...
#> Starting...
nranks = 2 	 num_gpus = 2 	 device=1
[Jan 06, 15:23:21] [1] 		 #> Encoding 17079 passages..
nranks = 2 	 num_gpus = 2 	 device=0
[Jan 06, 15:23:21] [0] 		 #> Encoding 31537 passages..
[Jan 06, 15:23:52] [0] 		 avg_doclen_est = 99.43394470214844 	 len(local_sample) = 31,537
[Jan 06, 15:23:52] [1] 		 avg_doclen_est = 99.43394470214844 	 len(local_sample) = 17,079
[Jan 06, 15:23:52] [0] 		 Creating 32,768 partitions.
[Jan 06, 15:23:52] [0] 		 *Estimated* 7,650,049 embeddings.
[Jan 06, 15:23:52] [0] 		 #> Saving the indexing plan to .ragatouille/colbert/indexes/presentation_1000/plan.json ..
Clustering 4783720 points in 128D to 32768 clusters, redo 1 times, 20 iterations
  Preprocessing in 0.14 s
  Iteration 0 (696.46 s, search 696.33 s): objective=1.51976e+06 imbalance=1.742 nsplit=0

Here is the output of nvidia-smi:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0 Off |                  Off |
|  0%   38C    P8              16W / 450W |   1036MiB / 24564MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce RTX 4090        Off | 00000000:03:00.0 Off |                  Off |
| 30%   31C    P2              67W / 450W |   2616MiB / 24564MiB |    100%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A   1654032      C   ...np/miniconda3/envs/np-ml/bin/python     1026MiB |
|    1   N/A  N/A   1654070      C   ...np/miniconda3/envs/np-ml/bin/python     2600MiB |
+---------------------------------------------------------------------------------------+

Do you know how I can optimise the embedding/indexing phase?
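One stopgap while the multi-GPU path is investigated: if only one GPU ends up doing the work anyway, the process can be pinned to a single device before any CUDA libraries initialise, so the model isn't loaded on both cards. This is a generic CUDA workaround, not a RAGatouille API:

```python
import os

# Hide all but one GPU from this process. This must happen before
# torch/colbert are imported, since CUDA device enumeration is cached
# at initialisation time. (Generic CUDA workaround, not a RAGatouille API.)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# ...then import and index as usual:
# from ragatouille import RAGPretrainedModel
# RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
```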


bclavie commented Jan 6, 2024

Oh, this is interesting, thanks for flagging it! The indexing part is fully deferred to upstream ColBERT (from Stanford's colbert-ai lab), but I'll add it to my to-do list to dig in and make sure the multi-GPU settings are properly passed through.

Overall, indexing is sadly quite slow (it's by far the slowest part of ColBERT).

@timothepearce

@bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, and the CPU was the bottleneck, not the GPU (even with only one running).

Do you have any QPS benchmarks, or figures for memory footprint relative to the number of vectors indexed?

For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker. Given the number of vectors, I wouldn't be surprised if queries were slower than more traditional methods.

Thanks for all your hard work, ColBERT has always been challenging to use!


bclavie commented Jan 6, 2024

@bclavie Yes, I noticed this is one of the main disadvantages compared to dense embedding models. The 1000 documents took almost 4 hours to process, and the CPU was the bottleneck, not the GPU (even with only one running).
Do you have any QPS benchmarks, or figures for memory footprint relative to the number of vectors indexed?

cc @okhat

For my use case, which consists of indexing several million documents, ColBERT is probably a better choice as a reranker.

That's fair! I'm planning to build RAGPretrainedModel.rerank(query: str, documents: list[str]) soon to support index-free re-ranking: just pass a query plus a list of strings (as suggested in #6). If you're interested, I'll ping you when it ships.
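For intuition, the scoring such an index-free reranker would perform is ColBERT's late-interaction MaxSim: every query token is matched against its best document token, and those maxima are summed. A minimal sketch with toy hand-built vectors (hypothetical stand-ins for real encoder output):

```python
# ColBERT-style late-interaction (MaxSim) scoring, the core of what an
# index-free rerank(query, documents) computes. Toy vectors below are
# hypothetical; real token embeddings come from the ColBERT encoder.
import numpy as np

def maxsim(q_emb: np.ndarray, d_emb: np.ndarray) -> float:
    # q_emb: (query_tokens, dim), d_emb: (doc_tokens, dim), rows L2-normalized
    sim = q_emb @ d_emb.T                # token-by-token cosine similarities
    return float(sim.max(axis=1).sum())  # best doc token per query token, summed

query = np.array([[1.0, 0.0], [0.0, 1.0]])    # two query "tokens"
doc_a = np.array([[1.0, 0.0]])                # matches only the first token
doc_b = np.array([[1.0, 0.0], [0.0, 1.0]])    # matches both tokens

scores = [maxsim(query, d) for d in (doc_a, doc_b)]
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])  # doc_b first
```

Because each document is scored independently against the query, this works on a plain list of strings with no index at all, which is exactly what makes the rerank use case cheap.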

Given the number of vectors, I wouldn't be surprised if queries were slower than more traditional methods.

I'm pretty sure that once indexing is done (which is indeed the challenging part), ColBERT would still query super fast, but it'd be worth double-checking!

Thanks for all your hard work, ColBERT has always been challenging to use!

Thank you, I'm glad this has been useful to you!


timothepearce commented Jan 6, 2024

If you're interested, I'll ping you when it ships.

Please yes!

I'm pretty sure that once indexing is done (which is indeed the challenging part), ColBERT would still query super fast, but it'd be worth double-checking!

I'm still working on RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.


bclavie commented Jan 7, 2024

I'm still working on RAGatouille. I'll run some benchmarks on a more extensive dataset and post them here if you're interested.

Would love that, yes! All early feedback is more than welcome, thank you!

I'll close the issue for now (to keep track of bugs), but feel free to post them here (I'll ping you on the reranker issue once that's live).

bclavie closed this as completed on Jan 7, 2024