I want to use llama-index with a locally deployed ChatGLM model. How should I do this, and what changes do I need to make?
We use LangChain as the underlying LLM abstraction under the hood, so I'd follow the LangChain docs for adding a custom LLM.

Docs for integrating a custom LLM into LlamaIndex: https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html
> we use langchain as the underlying LLM abstraction under the hood, i'd follow langchain docs for how to add a custom llm. docs for how to integrate llm into llamaindex: https://gpt-index.readthedocs.io/en/latest/how_to/custom_llms.html

404