
Cannot access Mistral and the GUI does not recognize the loaded document #1935

Open
KansaiTraining opened this issue May 20, 2024 · 2 comments


@KansaiTraining

KansaiTraining commented May 20, 2024

I am trying to run PrivateGPT for the first time. I have installed Ollama, and a service is running at the moment.
I have cloned the repo, installed the Poetry dependencies (poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"), and then pulled the models:

ollama pull mistral
ollama pull nomic-embed-text

I do not understand what "make sure the Ollama desktop app is closed" means.
I do not call ollama serve since it is already running (that is how it works in the latest Ollama).
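For reference, you can confirm that the server is already listening before worrying about ollama serve. This is a quick sketch; it assumes a stock install on Ollama's default port, 11434:

```shell
# Probe Ollama's default HTTP port; a running server answers on it.
# Assumption: a stock install listening on localhost:11434.
if curl -s http://localhost:11434 >/dev/null 2>&1; then
  echo "Ollama server is up"
else
  echo "Ollama server is not reachable on port 11434"
fi
```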

The two problems I have are:

  1. The first appears when I run PGPT_PROFILES=ollama make run

A lot of errors come out, but it boils down to this one:

OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2.
401 Client Error. (Request ID: Root=1-664ac2d9-648ebce1202a3b494633c746;52185420-fabc-421f-9fb9-627f32bfac6a)

Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json.
Access to model mistralai/Mistral-7B-Instruct-v0.2 is restricted. You must be authenticated to access it.

I already investigated: I went to the Hugging Face site and learned that you need authorization to use this model.
**NOTE: If I need authorization from a third party, doesn't that mean this is not private anymore?**

Anyway, I solved that with

poetry run huggingface-cli login <TOKEN>

and it says that I am now logged in and authorized.

However, running PrivateGPT still gives me the same error, even though I have been authorized to use the model.
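One thing worth checking in this situation (a sketch, assuming PrivateGPT fetches the gated tokenizer through huggingface_hub): that library also reads the access token from the HF_TOKEN environment variable, so exporting it in the same shell that launches the app avoids relying on the cached CLI login being picked up inside the Poetry environment:

```shell
# Assumption: PrivateGPT downloads the Mistral tokenizer via huggingface_hub,
# which reads the access token from the HF_TOKEN environment variable.
export HF_TOKEN="hf_your_token_here"   # replace with your real token
# Launch in the same shell so the variable is inherited:
#   PGPT_PROFILES=ollama make run
```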

  2. Even though these errors appear, the GUI runs.
    So I upload a CSV file and then ask questions about the document, but the system tells me it doesn't know anything about the document.
    I don't know whether the two errors are related, but right now I cannot use PrivateGPT.

Any help is greatly appreciated.

@alxspiker
Contributor

Hi @KansaiTraining,

To resolve the issues with accessing the Mistral model and the GUI not recognizing loaded documents on a Windows PC, please follow these steps:

Authorization Issue with Mistral Model

  1. Verify API Key Permissions:

    • Ensure your API key has the necessary permissions on the Hugging Face website.
  2. Clear Cached Tokens:

    rmdir /s /q %userprofile%\.cache\huggingface
    huggingface-cli login
  3. Set Environment Variable:

    setx HUGGINGFACE_API_KEY "your_huggingface_api_key"
    • After setting the environment variable, restart your terminal or system for the changes to take effect.
  4. Update Configuration:
    Ensure the API key is set in your configuration file (e.g., settings.yaml):

    huggingface:
      api_key: "your_huggingface_api_key"
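For an Ubuntu setup, a rough equivalent of steps 2–3 looks like the following. This is illustrative only; the variable name mirrors the config above, while huggingface_hub itself reads HF_TOKEN:

```shell
# Ubuntu equivalents of the Windows commands above (illustrative sketch):
rm -rf ~/.cache/huggingface            # clear cached tokens
# huggingface-cli login                # then re-authenticate (interactive)
export HUGGINGFACE_API_KEY="your_huggingface_api_key"
# Add the export line to ~/.bashrc to persist it across sessions.
```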

GUI Not Recognizing Uploaded Documents

  1. Verify Backend Processing:

    • Ensure the backend correctly processes and indexes uploaded documents.
  2. Set Correct File Permissions:

    • On Windows, ensure the application has the necessary permissions to read and write files. Modify file permissions through the file properties dialog if necessary.
  3. Trigger Reindexing:

    • Ensure that the indexing process is triggered correctly after file uploads.
  4. Inspect Logs:

    • Check logs for any errors or warnings during the document upload and querying process:
    type C:\path\to\application\logs\logfile.log
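On Ubuntu, PrivateGPT started via make run logs to the terminal rather than a fixed file; one way to get an inspectable log (paths and profile are illustrative) is to capture the output and scan it:

```shell
# Keep a copy of the terminal output for inspection (illustrative):
#   PGPT_PROFILES=ollama make run 2>&1 | tee privategpt.log
# Then scan the captured log for problems:
grep -iE "error|warn" privategpt.log
```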

Following these steps should help resolve the issues with Mistral access and GUI file uploads. If the problem persists, please provide additional details about your setup and any specific error messages for further assistance.

Best regards,
alxspiker


KansaiTraining commented May 22, 2024

I will try what you suggested (btw, I use Ubuntu, not Windows). I have spent three days trying to debug this.

But before that an important question:
If this is intended to be a local LLM, why does it need to access external sites?

What is a tokenizer?
I deactivated the tokenizer and now the first error does not appear, but is the tokenizer necessary?
(My second problem is still happening.)

--
UPDATE: I applied your advice on the first item (modified for Ubuntu) and it worked. The tokenizer (again, what is a tokenizer?) now works and accesses the resource. But again: why does it need to access external sites?
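For context on the tokenizer question: a tokenizer converts raw text into the integer IDs a model consumes, and Mistral's tokenizer files are hosted on Hugging Face, which is why a download (and hence authentication for a gated repo) happens. A toy sketch of the idea, purely illustrative and not Mistral's actual BPE tokenizer:

```python
# Toy word-level tokenizer: maps words to integer IDs, as a stand-in for
# the real BPE tokenizer that PrivateGPT downloads from Hugging Face.
def build_vocab(corpus: str) -> dict[str, int]:
    """Assign an ID to each unique whitespace-separated word."""
    return {word: i for i, word in enumerate(dict.fromkeys(corpus.split()))}

def tokenize(text: str, vocab: dict[str, int]) -> list[int]:
    """Convert text into a list of token IDs (unknown words -> -1)."""
    return [vocab.get(word, -1) for word in text.split()]

vocab = build_vocab("the cat sat on the mat")
print(tokenize("the mat sat", vocab))  # -> [0, 4, 2]
```

A real subword tokenizer splits rare words into smaller pieces instead of returning an unknown-word ID, which is why it ships with the model rather than being hard-coded.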
