
Segmentation fault (core dumped) #1817

Closed · anamariaUIC opened this issue Apr 1, 2024 · 4 comments

Hello,

I installed privateGPT via:

rm -rf .local .cache /scratch/network/$USER/privateGPT /scratch/network/$USER/tmp
mkdir /scratch/network/$USER/tmp
export TMPDIR=/scratch/network/$USER/tmp
module load Python/3.11.3-GCCcore-12.3.0
pip install --user poetry
module load git
module load CUDA
export PATH=$PATH:/home/$USER/.local/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/$USER/.local/lib
cd /scratch/network/$USER
git clone https://github.com/imartinez/privateGPT.git
cd privateGPT/
export PYTHONPATH=$PYTHONPATH:$PWD
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
poetry run python scripts/setup
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
PGPT_PROFILES=local make run
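
A sanity check worth running before chasing the segfault: confirm that llama-cpp-python was actually built with CUDA offload, since a silent CPU-only fallback changes how memory pressure plays out. A minimal check, assuming a llama-cpp-python release recent enough to expose llama_supports_gpu_offload:

poetry run python -c "import llama_cpp; print(llama_cpp.llama_supports_gpu_offload())"

If this prints False, the CMAKE_ARGS rebuild above did not pick up CUDA; note that newer llama.cpp builds have renamed the LLAMA_CUBLAS flag, so the right -D option depends on the pinned version.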

When I query a three-page .pdf file I get a "Segmentation fault (core dumped)" error.
Please see the attached details.

How can this be resolved?

[Screenshots: Screen Shot 2024-04-01 at 4 01 33 PM, Screen Shot 2024-04-01 at 4 27 15 PM]
beaconai commented Apr 2, 2024

I think your GPU utilization increased.

beaconai commented Apr 2, 2024

I'm facing the problem "Initial token count exceeds token limit". Can anyone help me out? I'm uploading a CSV file and getting this error.

anamariaUIC (Author) commented Apr 2, 2024 via email

boufaka commented Apr 3, 2024

I solved the same issue (segmentation fault) by changing a file in version 0.5.0:

privateGPT/private_gpt/components/llm/llm_component.py

Change the setting "offload_kqv": True to "offload_kqv": False.

I hope this helps.
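
For context, this flag ends up in the model_kwargs that the llamacpp mode hands through to llama.cpp. A rough sketch of the edited block follows; the exact kwargs in llm_component.py differ between releases, so treat it as approximate rather than a quote of the file:

# private_gpt/components/llm/llm_component.py, llamacpp branch (approximate)
settings_kwargs = {
    "tfs_z": settings.llamacpp.tfs_z,
    "top_k": settings.llamacpp.top_k,
    "top_p": settings.llamacpp.top_p,
    "repeat_penalty": settings.llamacpp.repeat_penalty,
    "n_gpu_layers": -1,    # offload all model layers to the GPU
    "offload_kqv": False,  # was True; keep the KV cache in system RAM
}

offload_kqv tells llama.cpp whether to keep the KV cache (and the KQV attention ops) on the GPU; with False the cache stays in system RAM, which costs some generation speed but reduces VRAM use, a plausible reason it stops the crash on a memory-constrained GPU.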

jaluma closed this as completed Jul 10, 2024