DOC: TextGen (Text Generation Web UI) - the code no longer works. #14318
🤖 Hello @solarslurpi! It's great to see you again. I hope you're doing well. Thank you for bringing this issue to our attention. From the information available in the LangChain repository, there is this unit test:

def test_simple_question(fake_llm_checker_chain: LLMCheckerChain) -> None:
    """Test simple question that should not need python."""
    question = "Which mammal lays the biggest eggs?"
    output = fake_llm_checker_chain.run(question)
    assert output == "I still don't know."

(Source: libs/langchain/tests/unit_tests/chains/test_llm_checker.py)

However, without more context about how this function interacts with the Oobabooga Text Generation Web UI API, it's hard to determine why it's returning a 403 error. Could you provide more information about the changes made to the Oobabooga Text Generation Web UI API?

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. If you want to continue the conversation, start your reply with @dosu-bot. |
I'm getting a 404. I believe that the oobabooga/text-generation-webui API was recently updated to match the OpenAI API, as documented here: https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API

You can manually revert to the old API by downloading this snapshot: https://github.com/oobabooga/text-generation-webui/archive/refs/tags/snapshot-2023-11-12.zip. Then copy /extensions/api into your current version of text-generation-webui and run with the "--extensions api" flag instead of the "--api" flag. Then everything works.

I think that this file needs to be updated: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/textgen.py, for example here: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/textgen.py#L214, but I don't have enough experience to know exactly what all needs to be done. |
Hey! The oobabooga/text-generation-webui API was indeed updated. The legacy API no longer works, responding with 404. I've managed to edit the file pointed to by @flounder5 so that it matches the new API (e.g. no longer using websockets). Until the fix is merged, you can save the edited file as textgen.py and import it. Simple use case:

from textgen import TextGen

model_url = 'http://localhost:5000'
llm = TextGen(model_url=model_url)
text = 'What would be a good company name for a company that makes colorful socks?'
print(llm.invoke(text))

Remember to set up the model as stated in the TextGen docs (LangChain): api flag, model loaded, etc.

If someone is still interested in using the legacy API, just pass the legacy_api flag:

llm = TextGen(model_url='http://localhost:5000', legacy_api=True)

Hope all of this helps someone |
The problem still persists; I hope they will fix it soon. |
This was very useful. I was able to get it working using this method. I think the lack of websockets would be better than the module not working at all (which is currently the issue). There are a few things the llama.cpp loader offers through the old API that the OpenAI-style API doesn't allow, so there's definite value in this module basically replacing the current version. I'm hearing comments suggesting that no one can fix it. Maybe I'm misunderstanding, but why not open a pull request and attach it to this ticket? |
You saved my day! |
Any updates? The problem is still there. |
Yes, this problem still exists. The above solution worked for me over http, but using a cert on https I get the following error:

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate |
Actually I've found a better workaround: since TextGen is now using the OpenAI public API, we can use the LangChain OpenAI wrapper to make calls to the TextGen module. Here is an example:

from langchain_openai import OpenAI  # ChatOpenAI also works

llm = OpenAI(
    # any random value; the key is mandatory but not checked
    openai_api_key='<RANDOM>',
    # note: add /v1 at the end of the endpoint or it will not work
    openai_api_base='http://localhost:5000/v1',
)
text = 'What would be a good company name for a company that makes colorful socks?'
print(llm.invoke(text))

NOTE: the endpoint must terminate with /v1.
NOTE: you can actually choose any of the OpenAI model interface implementations. |
@ManyStrongWords for SSL now you should be able to use the |
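For the CERTIFICATE_VERIFY_FAILED error above, when calling the https endpoint directly (outside the LangChain wrapper), a minimal stdlib sketch of the usual options: trusting your own CA bundle, or disabling verification for local testing only. The file path and endpoint URL are placeholders, not taken from this thread:

```python
import ssl

# Option 1: trust a self-signed certificate by loading its CA bundle
# (path is a placeholder):
# ctx = ssl.create_default_context(cafile="/path/to/ca-bundle.pem")

# Option 2 (local testing ONLY): disable verification entirely.
ctx = ssl.create_default_context()
ctx.check_hostname = False          # must be disabled before CERT_NONE
ctx.verify_mode = ssl.CERT_NONE     # unsafe outside local testing

# urllib accepts the context directly, e.g.:
# urllib.request.urlopen("https://localhost:5000/v1/models", context=ctx)
```

Option 1 is the safer route whenever you control the certificate that text-generation-webui is serving.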
Issue with current documentation:
I believe the Oobabooga Text Generation Web UI API was rewritten, causing the code on the TextGen page of the Langchain docs to stop working.
e.g. the way the code talks over websockets (ws:) causes a 403. I can execute API calls that work well, e.g.:

curl http://127.0.0.1:5000/v1/chat/completions \

while llm_chain.run(question) returns a 403 (failed handshake).
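The working curl call above can be mirrored from Python with only the standard library; a minimal sketch, assuming the local endpoint from the issue and the standard OpenAI chat-completions response shape (`build_payload` and `chat_completion` are hypothetical helper names, not part of LangChain or the webui):

```python
import json
import urllib.request


def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {"messages": [{"role": "user", "content": prompt}]}


def chat_completion(prompt: str, base_url: str = "http://127.0.0.1:5000") -> str:
    """POST to the new OpenAI-compatible endpoint of text-generation-webui."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Plain HTTP POSTs like this succeed where the old websocket handshake now fails, which is consistent with the 403 reported above.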
Idea or request for content:
It would be awesome if this could be fixed. If not, please pull the page.