The code in question is:

```python
import os
from ragatouille import RAGPretrainedModel

def create_or_get_colbert_model(username):
    index_name = username + "_report_history"
    # PATH is a base-directory variable defined elsewhere in the app
    index_path = PATH + "/.ragatouille/colbert/indexes/" + index_name + "/"
    print("Index path = ", index_path)
    if os.path.exists(index_path):
        # Reuse the existing on-disk index for this user
        RAG = RAGPretrainedModel.from_index(index_path)
    else:
        # No index yet: load the pretrained ColBERTv2 checkpoint instead
        RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
    return RAG
```
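For what it's worth, the same index path can be built with `os.path.join` instead of string concatenation, which avoids quoting/separator mistakes. In this sketch, `PATH` and the username are placeholder assumptions, not values from my app:

```python
import os

# Placeholder assumptions for illustration only:
PATH = "."                          # base directory the app would use
index_name = "alice" + "_report_history"  # example username + suffix

# Join path components instead of concatenating strings with "/" by hand
index_path = os.path.join(PATH, ".ragatouille", "colbert", "indexes", index_name)
print("Index path = ", index_path)
```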
After the "Index path = " line is printed, the process stops at:
Loading segmented_maxsim_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)...
This happens even after creating a fresh environment with fresh installs of torch 2.2.0 (CPU), sentence_transformers, transformers, and ragatouille.
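To get more detail out of the hang, the environment variable named in the log line itself can be set before ragatouille/colbert is imported, so the torch-extension loader prints its full build output. A minimal sketch (the import is shown commented out; it must come after the assignment):

```python
import os

# The log message itself points at this switch; set it *before* importing
# ragatouille/colbert so the extension loader sees it.
os.environ["COLBERT_LOAD_TORCH_EXTENSION_VERBOSE"] = "True"

# from ragatouille import RAGPretrainedModel  # import only after setting the variable
```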
I'm running on a CPU-only host.
My pip freeze output:
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.3.0
async-timeout==4.0.3
attrs==23.2.0
beautifulsoup4==4.12.3
bitarray==2.9.2
blinker==1.8.2
catalogue==2.0.10
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
colbert-ai==0.2.19
dataclasses-json==0.6.6
datasets==2.19.1
Deprecated==1.2.14
dill==0.3.8
dirtyjson==1.0.8
distro==1.9.0
exceptiongroup==1.2.1
faiss-cpu==1.8.0
fast-pytorch-kmeans==0.2.0.1
filelock==3.14.0
Flask==3.0.3
frozenlist==1.4.1
fsspec==2024.3.1
git-python==1.0.3
gitdb==4.0.11
GitPython==3.1.43
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.23.0
idna==3.7
itsdangerous==2.2.0
Jinja2==3.1.4
joblib==1.4.2
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.19
langchain-community==0.0.38
langchain-core==0.1.52
langchain-text-splitters==0.0.1
langsmith==0.1.56
llama-index==0.10.36
llama-index-agent-openai==0.2.4
llama-index-cli==0.1.12
llama-index-core==0.10.36
llama-index-embeddings-openai==0.1.9
llama-index-indices-managed-llama-cloud==0.1.6
llama-index-legacy==0.9.48
llama-index-llms-openai==0.1.18
llama-index-multi-modal-llms-openai==0.1.5
llama-index-program-openai==0.1.6
llama-index-question-gen-openai==0.1.3
llama-index-readers-file==0.1.22
llama-index-readers-llama-parse==0.1.4
llama-parse==0.4.2
llamaindex-py-client==0.1.19
MarkupSafe==2.1.5
marshmallow==3.21.2
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.16
mypy-extensions==1.0.0
nest-asyncio==1.6.0
networkx==3.3
ninja==1.11.1.1
nltk==3.8.1
numpy==1.26.4
onnx==1.16.0
openai==1.28.0
orjson==3.10.3
packaging==23.2
pandas==2.2.2
pillow==10.3.0
protobuf==5.26.1
pyarrow==16.0.0
pyarrow-hotfix==0.6
pydantic==2.7.1
pydantic_core==2.18.2
pynvml==11.5.0
pypdf==4.2.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
PyYAML==6.0.1
RAGatouille==0.0.8.post2
regex==2024.5.10
requests==2.31.0
safetensors==0.4.3
scikit-learn==1.4.2
scipy==1.13.0
sentence-transformers==2.7.0
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
soupsieve==2.5
SQLAlchemy==2.0.30
srsly==2.4.8
striprtf==0.0.26
sympy==1.12
tenacity==8.3.0
threadpoolctl==3.5.0
tiktoken==0.6.0
tokenizers==0.19.1
torch==2.2.0+cpu
tqdm==4.66.4
transformers==4.40.2
typing-inspect==0.9.0
typing_extensions==4.11.0
tzdata==2024.1
ujson==5.9.0
urllib3==2.2.1
voyager==2.0.6
Werkzeug==3.0.3
wrapt==1.16.0
xxhash==3.4.1
yarl==1.9.4