[MacOS/w11] segmentation fault; testing "Miyazaki" example locally #85

Closed
alxpez opened this issue Jan 29, 2024 · 8 comments
Labels
bug Something isn't working

Comments

alxpez (Contributor) commented Jan 29, 2024

Testing the Miyazaki example locally:

# RAGtest.py

from ragatouille import RAGPretrainedModel
from ragatouille.utils import get_wikipedia_page


def run():
    RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

    documents = [get_wikipedia_page("Hayao_Miyazaki")]
    document_ids = ["miyazaki"]
    document_metadatas = [{"entity": "person", "source": "wikipedia"}]

    RAG.index(
        index_name="miyazaki",
        collection=documents,
        document_ids=document_ids,
        document_metadatas=document_metadatas,
        max_document_length=180, 
        split_documents=True
    )

    results = RAG.search(query="What is Miyazaki's first work?", k=3)
    print(results)


if __name__ == '__main__':
    run()

Output:

[Jan 29, 17:15:38] Loading segmented_maxsim_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...
/Users/username/anaconda3/envs/alts/lib/python3.11/site-packages/torch/cuda/amp/grad_scaler.py:125: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available.  Disabling.
  warnings.warn(


[Jan 29, 17:15:40] #> Creating directory .ragatouille/colbert/indexes/miyazaki 


[Jan 29, 17:15:43] [0]           #> Encoding 81 passages..
  0%|                                                     | 0/2 [00:00<?, ?it/s]/Users/username/anaconda3/envs/alts/lib/python3.11/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn(
 50%|██████████████████████▌                      | 1/2 [00:21<00:21, 21.27s/it]/Users/username/anaconda3/envs/alts/lib/python3.11/site-packages/torch/amp/autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn(
100%|█████████████████████████████████████████████| 2/2 [00:26<00:00, 13.38s/it]
[Jan 29, 17:16:10] [0]           avg_doclen_est = 129.82716369628906     len(local_sample) = 81
[Jan 29, 17:16:10] [0]           Creating 1,024 partitions.
[Jan 29, 17:16:10] [0]           *Estimated* 10,516 embeddings.
[Jan 29, 17:16:10] [0]           #> Saving the indexing plan to .ragatouille/colbert/indexes/miyazaki/plan.json ..
WARNING clustering 9991 points to 1024 centroids: please provide at least 39936 training points
Clustering 9991 points in 128D to 1024 clusters, redo 1 times, 20 iterations
  Preprocessing in 0.00 s
[2]    31903 segmentation fault  python RAGtest.py
/Users/username/anaconda3/envs/alts/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Running on a MacBook Pro (Intel).
Using the latest version (0.0.5a2).

bclavie (Owner) commented Jan 29, 2024

Hey, thanks for raising the issue, I haven't seen anything like this yet 🤔.

It'd be very helpful if you could:

  • Try again with the latest version? There's been quite a lot of activity and we're currently on 0.0.6b1, so this would rule out the upstream multiprocessing, which is now bypassed.
  • Monitor your memory usage when you run this, to make sure it's not an OOM? (See the sketch just below this list.)
  • Post your dependencies, if neither of the above fixes it or reveals the issue?
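For the memory check, here is a minimal sketch of one way to log RSS while the script runs (this assumes psutil is installed; it is not part of the original report):

# memwatch.py -- hypothetical helper, assumes `pip install psutil`
import os
import threading
import time

import psutil


def start_memory_logger(interval_s: float = 1.0) -> None:
    """Print this process's RSS every `interval_s` seconds from a daemon thread."""
    proc = psutil.Process(os.getpid())

    def _poll() -> None:
        peak = 0
        while True:
            rss = proc.memory_info().rss
            peak = max(peak, rss)
            print(f"[memwatch] RSS: {rss / 1e9:.2f} GB (peak {peak / 1e9:.2f} GB)")
            time.sleep(interval_s)

    threading.Thread(target=_poll, daemon=True).start()


# Usage in RAGtest.py: call start_memory_logger() at the top of run().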

Thank you!

@bclavie bclavie added the bug Something isn't working label Jan 30, 2024
@bclavie bclavie changed the title segmentation fault; testing "Miyazaki" example locally [MacOS] segmentation fault; testing "Miyazaki" example locally Jan 30, 2024
bclavie (Owner) commented Jan 30, 2024

I'm trying to investigate. This appears to be a dependency issue, compounded by a problem when loading the upstream ColBERT .cpp extensions.

While we figure out exactly what caused this, I've reverted some recent dependency updates and pushed a new version to PyPI. Let me know if it fixes things for you!

@akshaydevml

Hi @bclavie, I tried with the latest version on PyPI and still get the same error.

alxpez (Contributor, Author) commented Feb 1, 2024

@bclavie Yup, same error.

  • Tested on version 0.0.6b4.
  • Memory doesn't seem to be the problem (it runs and crashes with 6 GB left to spare).
  • The example is running in a fresh conda env with Python 3.11.

alxpez (Contributor, Author) commented Feb 9, 2024

@bclavie @akshaydevml An update on this example: I've tested it on a Windows 11 machine and I'm getting the same error output.

@YossefAboukrat

Hello @bclavie, I have the same error on an Intel Mac with 32 GB of RAM and Python 3.11.

@alxpez alxpez changed the title [MacOS] segmentation fault; testing "Miyazaki" example locally [MacOS/w11] segmentation fault; testing "Miyazaki" example locally Feb 13, 2024
@bclavie bclavie linked a pull request Feb 14, 2024 that will close this issue
bclavie (Owner) commented Feb 15, 2024

(Copy/pasting this message in a few related issues)

Hey guys!

Thanks a lot for bearing with me as I juggle everything and try to diagnose this. It's complicated to fix with relatively little time to dedicate to it, as the dependencies causing issues don't seem to be the same for everyone, with no clear platform pattern so far. Overall, the issues centre around the usual suspects: faiss and CUDA.

Because of this I can't fix the issue with PLAID-optimised indices just yet, but I'm also noticing that most of the bug reports here involve relatively small collections (hundreds to low thousands of documents). To lower the barrier to entry as much as possible, #137 introduces a second index format, which doesn't actually build an index but performs an exact search over all documents (as a stepping stone towards #110, which would use an HNSW index as an in-between compromise between PLAID optimisation and exact search).
This approach doesn't scale, but it offers the best possible search accuracy and still completes in at most a few hundred milliseconds for small collections. Ideally, it'll also open the way to shipping lower-dependency versions (#136).

The PR above (#137) is still a work in progress, as it needs CRUD support, tests, documentation, better precision routing (fp32/bfloat16), etc. (and potentially searching only a subset of document ids).
However, it's working in a rough state for me locally. If you'd like to give it a try (with the caveat that it might very well break!), please feel free to install the library directly from the feat/full_vectors_indexing branch and add the following argument to your index() call:

index(
    …,
    index_type="FULL_VECTORS",
)
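For concreteness, here is roughly how that would slot into the Miyazaki script at the top of this thread (a sketch only: everything except index_type is unchanged from the original example, and the branch has to be installed first as described above):

# Sketch: the original RAGtest.py index() call with the experimental index type.
RAG.index(
    index_name="miyazaki",
    collection=documents,
    document_ids=document_ids,
    document_metadatas=document_metadatas,
    max_document_length=180,
    split_documents=True,
    index_type="FULL_VECTORS",  # exact search over all stored vectors, no PLAID index
)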

Any feedback is appreciated, as always, and thanks again!

bclavie (Owner) commented Mar 18, 2024

Hey @alxpez @YossefAboukrat, this was most likely an issue related to faiss, and it should FINALLY be fixed by the new experimental default indexing in 0.0.8, which skips faiss entirely (it does k-means in pure PyTorch) as long as you're indexing fewer than ~100k documents!
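For anyone curious what "k-means in pure PyTorch" means in practice, here is an illustrative sketch only (not RAGatouille's actual implementation) of clustering embeddings into centroids without touching faiss:

# Illustration only: naive k-means over embeddings in pure PyTorch (no faiss).
import torch


def kmeans_torch(embeddings: torch.Tensor, k: int, iters: int = 20) -> torch.Tensor:
    """embeddings: (n, dim) float tensor; returns (k, dim) centroids."""
    n = embeddings.shape[0]
    centroids = embeddings[torch.randperm(n)[:k]].clone()  # random initialisation
    for _ in range(iters):
        assignments = torch.cdist(embeddings, centroids).argmin(dim=1)  # nearest centroid
        for c in range(k):
            members = embeddings[assignments == c]
            if len(members) > 0:
                centroids[c] = members.mean(dim=0)  # recompute centroid as cluster mean
    return centroids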

@bclavie bclavie closed this as completed Mar 18, 2024