Yes, a locally stored TensorFlow model can be used with LlamaIndex. Based on similar issues in the LlamaIndex repository, there are a couple of ways to do this.
If you can load the model and tokenizer yourself, you can pass them directly to the Hugging Face LLM via its model and tokenizer kwargs. Alternatively, you can implement a custom LLM class; the LlamaIndex documentation describes how to do this. This solution was suggested in the issue "Querying Index using local models with no access to download from the hub(s)".
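As a rough illustration of the first option, the sketch below loads a checkpoint from a local directory (the path `./my_local_model` is a placeholder) and hands the objects to `HuggingFaceLLM`. It assumes the checkpoint can be loaded as a PyTorch-backed transformers model, and import paths differ between LlamaIndex versions:

```python
# Sketch: pass a locally loaded model/tokenizer to HuggingFaceLLM.
# "./my_local_model" is a placeholder path; older LlamaIndex versions
# import HuggingFaceLLM from llama_index.llms instead.
from transformers import AutoModelForCausalLM, AutoTokenizer
from llama_index.llms.huggingface import HuggingFaceLLM

tokenizer = AutoTokenizer.from_pretrained("./my_local_model")
model = AutoModelForCausalLM.from_pretrained("./my_local_model")

llm = HuggingFaceLLM(model=model, tokenizer=tokenizer)
```

For a model that only exists as a TensorFlow graph, the custom LLM class is likely the better fit. Here is a minimal sketch following the pattern from the LlamaIndex custom-LLM documentation; the `generate_text` helper is a hypothetical stand-in for your own TensorFlow inference code, and the import paths again depend on your LlamaIndex version:

```python
from llama_index.core.llms import (
    CustomLLM,
    CompletionResponse,
    CompletionResponseGen,
    LLMMetadata,
)
from llama_index.core.llms.callbacks import llm_completion_callback


def generate_text(prompt: str) -> str:
    # Placeholder: replace with inference against your locally stored
    # TensorFlow model (e.g. something loaded via tf.saved_model.load).
    return "generated text"


class LocalTFLLM(CustomLLM):
    """Exposes a locally stored TensorFlow model through LlamaIndex's LLM interface."""

    context_window: int = 2048
    num_output: int = 256

    @property
    def metadata(self) -> LLMMetadata:
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name="local-tf-model",
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs) -> CompletionResponse:
        return CompletionResponse(text=generate_text(prompt))

    @llm_completion_callback()
    def stream_complete(self, prompt: str, **kwargs) -> CompletionResponseGen:
        text = generate_text(prompt)
        yield CompletionResponse(text=text, delta=text)
```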
You can also use LangChain as the underlying LLM abstraction and follow the LangChain docs on adding a custom LLM. This solution was suggested in the issue "Can I use llama-index based on a locally deployed model?". However, the documentation link given there returns a 404 error, so you will need to find the current LangChain custom-LLM documentation yourself or ask the maintainers for the correct link.
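A rough sketch of the LangChain route, assuming the classic `langchain.llms.base.LLM` interface and LlamaIndex's `LangChainLLM` wrapper (both import paths have moved around between versions, and the TensorFlow inference call is again a placeholder):

```python
from typing import Any, List, Optional

from langchain.llms.base import LLM
# Older LlamaIndex versions: from llama_index.llms import LangChainLLM
from llama_index.llms.langchain import LangChainLLM


class LocalTFLangChainLLM(LLM):
    """LangChain wrapper around a locally stored TensorFlow model."""

    @property
    def _llm_type(self) -> str:
        return "local-tf"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # Placeholder: replace with your TensorFlow model's inference call.
        return "generated text"


# Hand the LangChain LLM to LlamaIndex.
llm = LangChainLLM(llm=LocalTFLangChainLLM())
```

The wrapped `llm` can then be passed wherever LlamaIndex expects an LLM, for example via the `Settings.llm` global or a service context, depending on your version.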
Please let me know if you need further assistance.
Question
Can a locally stored TensorFlow model be used with llamaindex?