extract_headers sometimes fails #520

Open
barsuna opened this issue May 21, 2024 · 2 comments
Comments

barsuna commented May 21, 2024

When testing gpt-researcher with a local llama3, I found that extract_headers sometimes throws an exception here:

        if line.startswith("<h") and len(line) > 1:  # Check if the line starts with an HTML header tag
            level = int(line[2])  # Extract header level

Apparently, what comes after <h is sometimes not a digit.

Temporarily, I've changed it to:

            try:
                level = int(line[2])  # Extract header level
            except (ValueError, IndexError):
                level = 2  # Fall back to a default header level

but perhaps the maintainers of this code can fix it properly.
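For what it's worth, one way to harden the check would be to match the whole <h1>–<h6> tag up front instead of indexing into the string. This is just a sketch of that idea, not gpt-researcher's actual code; the function name and the default level of 2 are my own choices:

```python
import re

# Matches a well-formed opening HTML header tag such as "<h2>" or "<h3 class=...>"
_HEADER_TAG = re.compile(r"<h([1-6])[\s>]")

def parse_header_level(line: str, default: int = 2) -> int:
    """Return the header level if `line` starts with <h1>..<h6>,
    otherwise fall back to `default` instead of raising."""
    match = _HEADER_TAG.match(line)
    if match:
        return int(match.group(1))
    return default
```

This also avoids the false positive in the original startswith("<h") check, which accepts lines like "<html>" and then crashes on int("t").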

arsaboo (Contributor) commented May 21, 2024

How are you using llama3 with GPT researcher?

barsuna (Author) commented May 22, 2024

There was a post about this here: #395

  • use lm-studio for llama3
  • for embeddings, install ollama with some small model (lm-studio has embeddings too, but a different API format)

I'm using a small API gateway that translates gpt-researcher API calls to llama.cpp's own APIs and does some other general maintenance of the API outputs. The lm-studio + ollama setup can be done without code changes (discounting my other issues); the API gateway requires other changes to gpt-researcher, so I would not recommend it.
