
Fails to create an index on Ubuntu / Linux environment #72

Closed · GMartin-dev opened this issue Jan 27, 2024 · 8 comments

@GMartin-dev (Contributor)

Env: Ubuntu 22.04 (Jammy Jellyfish). Just a normal index call with a subset of documents. After this error, I can see some JSON files created.

Python env dependencies (requirements):
https://pastebin.com/9yHL0d8b

overwrite_index = False
rag.index(
    index_name=index_id,
    max_document_length=500,
    overwrite_index=overwrite_index,
    collection=all_doc_text,
    document_ids=ids,
    document_metadatas=metadatas,
)

File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/RAGPretrainedModel.py", line 187, in index
return self.model.index(
^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/models/colbert.py", line 349, in index
self.indexer.index(
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 78, in index
self.__launch(collection)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 83, in __launch
manager = mp.Manager()
^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/lib/python3.11/multiprocessing/context.py", line 57, in Manager
m.start()
File "/home/german/.pyenv/versions/3.11.3/lib/python3.11/multiprocessing/managers.py", line 567, in start
self._address = reader.recv()
^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/lib/python3.11/multiprocessing/connection.py", line 249, in recv
buf = self._recv_bytes()
^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/lib/python3.11/multiprocessing/connection.py", line 413, in _recv_bytes
buf = self._recv(4)
^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/lib/python3.11/multiprocessing/connection.py", line 382, in _recv
raise EOFError


Then, trying to execute it as a retriever:

File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/langchain_core/retrievers.py", line 281, in aget_relevant_documents
raise e
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/langchain_core/retrievers.py", line 274, in aget_relevant_documents
result = await self._aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/langchain_core/retrievers.py", line 166, in _aget_relevant_documents
return await run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 490, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/integrations/_langchain.py", line 20, in _get_relevant_documents
docs = self.model.search(query, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/RAGPretrainedModel.py", line 296, in search
return self.model.search(
^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/models/colbert.py", line 446, in search
self._load_searcher(index_name=index_name, force_fast=force_fast)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/models/colbert.py", line 409, in _load_searcher
self.searcher = Searcher(
^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/searcher.py", line 33, in init
self.index_config = ColBERTConfig.load_from_index(self.index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/config/base_config.py", line 97, in load_from_index
loaded_config, _ = cls.from_path(metadata_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/config/base_config.py", line 44, in from_path
with open(name) as f:
^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '.ragatouille/colbert/indexes/test_index_id/plan.json'

@bclavie (Collaborator) commented Jan 27, 2024

Hey! I believe this is an adjacent issue to #60

Multiprocessing still seems to be causing some problems upstream. The good news is that this PR by @Anmol6, stanford-futuredata/ColBERT#294, should remove it entirely and solve at least some of those problems. It'll hopefully be merged soon, but if you want to try it out in the meantime, a workaround would be to install ColBERT directly from his branch.
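
For example, something along these lines should pull in the PR's code (just an illustration; pip can install from a PR ref, or you can point it at the branch itself once you know its name):

    pip install -U "git+https://github.com/stanford-futuredata/ColBERT.git@refs/pull/294/head"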

@bclavie (Collaborator) commented Jan 28, 2024

Hey @GMartin-dev, version 0.0.6b0 now ships with colbert-ai 0.2.18, which should eliminate the mp.Manager() calls on indexing. Could you check it out and let us know if this solves your issue?

@GMartin-dev (Contributor, Author) commented Jan 29, 2024

@bclavie thanks for the tip, I just tried it. It seems that the original error is gone and a new issue has emerged.
Now I see the plan.json file:
But it's actually missing the index files, right?

Successfully installed colbert-ai-0.2.18 ragatouille-0.0.6b0

Same command, same dependencies, just updated to the version you pointed to:

File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/RAGPretrainedModel.py", line 183, in index
    return self.model.index(
           ^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/ragatouille/models/colbert.py", line 349, in index
    self.indexer.index(
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 78, in index
    self.__launch(collection)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexer.py", line 87, in __launch
    launcher.launch_without_fork(self.config, collection, shared_lists, shared_queues, self.verbose)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/launcher.py", line 93, in launch_without_fork
    return_val = run_process_without_mp(self.callee, new_config, *args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/infra/launcher.py", line 109, in run_process_without_mp
    return_val = callee(config, *args)
                 ^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 33, in encode
    encoder.run(shared_lists)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 68, in run
    self.train(shared_lists) # Trains centroids from selected passages
    ^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 237, in train
    bucket_cutoffs, bucket_weights, avg_residual = self._compute_avg_residual(centroids, heldout)
                                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/collection_indexer.py", line 315, in _compute_avg_residual
    compressor = ResidualCodec(config=self.config, centroids=centroids, avg_residual=None)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/codecs/residual.py", line 24, in __init__
    ResidualCodec.try_load_torch_extensions(self.use_gpu)
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/colbert/indexing/codecs/residual.py", line 103, in try_load_torch_extensions
    decompress_residuals_cpp = load(
                               ^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1284, in load
    return _jit_compile(
           ^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1535, in _jit_compile
    return _import_module_from_library(name, build_directory, is_python_module)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/german/.pyenv/versions/3.11.3/envs/agile_clean/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1929, in _import_module_from_library
    module = importlib.util.module_from_spec(spec)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 573, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 1233, in create_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed

It seems related to this:
stanford-futuredata/ColBERT#195

But not exactly the same...

@bclavie (Collaborator) commented Jan 29, 2024

Hey, quite interesting, thank you for running it again... It definitely seems like some people on Linux + CUDA hit a very specific problem when loading the custom extension code, while it's fine for others in very similar (but likely not identical) environments. Is there an actual error raised (EOF again?) at the end of the traceback you posted, or does it just print this and stop?

Could you also run the script after exporting CUDA_VISIBLE_DEVICES="" to help narrow down whether it's 100% a CUDA-related issue?

Could you post your dependency dump and CUDA version, please? cc @Anmol6 so we can try to track down exactly what the upstream compatibility issue is 🤔
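
(For reference, a minimal sketch of that CUDA-free run from inside Python rather than the shell; the environment variable has to be set before torch is imported, and the checkpoint name and documents below are just placeholders, not taken from this report:)

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs so everything runs on CPU

    from ragatouille import RAGPretrainedModel

    rag = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
    rag.index(
        index_name="test_index_id",
        collection=["some document text", "another document"],
        max_document_length=500,
    )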

@GMartin-dev (Contributor, Author)

Sorry for the delay on this... and thanks for your new tips!
CUDA:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

After exporting CUDA_VISIBLE_DEVICES="", it seems that indexing finished correctly!! It also searches, etc.
Now I guess it is using the CPU with CUDA disabled, right? ... It's super slow, really slow.
Working with 500+ text pieces of no more than ~300 tokens each, it took about 10 minutes to index, and a search takes 1 minute+.
Is there any limitation on CPU architectures when running on CPU?
I'm using an old Dell server with an Intel Xeon E-2224G; normally it's fast enough for relatively small transformer-based models, and I use the GPU for the biggest LLMs. It's a modest setup for testing ideas.

@TheMcSebi

Might be worth a shot updating CUDA to 12.x. What GPU were you trying to run this on?

@GMartin-dev (Contributor, Author)

In fact, I was trying to run it using the CPU only; I have an A2000 in the same system, but it's being used for other models. This was finally fixed by:
#120 (comment)

@bclavie (Collaborator) commented Feb 15, 2024

(Copy/pasting this message in a few related issues)

Hey guys!

Thanks a lot for bearing with me as I juggle everything and try to diagnose this. It's complicated to fix with relatively little time to dedicate to it, as the dependencies causing issues don't seem to be the same for everyone, with no clear platform pattern as of yet. Overall, the issues center around the usual suspects of faiss and CUDA.

Because of this I can't fix the issue with PLAID-optimised indices just yet, but I'm also noticing that most of the bug reports here are about relatively small collections (100s to low 1000s). To lower the barrier to entry as much as possible, #137 introduces a second index format, which doesn't actually build an index but performs an exact search over all documents (as a stepping stone towards #110, which would use an HNSW index as an in-between compromise between PLAID optimisation and exact search).
This approach doesn't scale, but it offers the best possible search accuracy and still completes in a few hundred milliseconds at most for small collections. Ideally, it'll also open up the way to shipping lower-dependency versions (#136).

The PR above (#137) is still a work in progress, as it needs CRUD support, tests, documentation, better precision routing (fp32/bfloat16), etc… (and potentially searching only a subset of document ids).
However, it's working in a rough state for me locally. If you'd like to give it a try (with the caveat that it might very well break!), please feel free to install the library directly from the feat/full_vectors_indexing branch and add the following argument to your index() call:

index(
    …
    index_type="FULL_VECTORS",
)
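
With the arguments from the original report above, that would look roughly like this (a sketch only; behaviour may still change while #137 is in progress):

    rag.index(
        index_name=index_id,
        collection=all_doc_text,
        document_ids=ids,
        document_metadatas=metadatas,
        max_document_length=500,
        overwrite_index=False,
        index_type="FULL_VECTORS",
    )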

Any feedback is appreciated, as always, and thanks again!

@bclavie closed this as completed Mar 18, 2024