
LangChain chain.run() returns results slowly #51

Closed
dasd412 opened this issue Jul 31, 2023 · 7 comments

@dasd412
Contributor

dasd412 commented Jul 31, 2023

Sometimes results from the OpenAI API via langchain come back extremely late...
In that case we need to look into setting a timeout and then retrying.
Also need to check whether run() is a synchronous method...
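
For reference, a rough sketch of what a timeout plus bounded retries could look like, assuming the chain is built on langchain's ChatOpenAI; the parameter values below are placeholders, not tuned settings:

from langchain.chat_models import ChatOpenAI

# Sketch: cap each HTTP request and let langchain's built-in retry handle transient failures.
llm = ChatOpenAI(
    temperature=0,
    request_timeout=30,  # per-request timeout in seconds (placeholder value)
    max_retries=2,       # retry a couple of times before surfacing the error
)

# chain.run() is a blocking (synchronous) call; chain.arun() is the async counterpart.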

@dasd412 dasd412 added the bug label Jul 31, 2023
@dasd412 dasd412 self-assigned this Jul 31, 2023
@dasd412
Contributor Author

dasd412 commented Aug 8, 2023

Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).

@dasd412
Contributor Author

dasd412 commented Aug 8, 2023

The error above means an actual timeout occurred. It needs to be addressed...

@dasd412 dasd412 pinned this issue Aug 8, 2023
@dasd412
Contributor Author

dasd412 commented Aug 8, 2023

I think the timeout should be 5 seconds at most, however slow things get.

@westreed
Contributor

westreed commented Aug 8, 2023

If generation takes a long time, it could easily go past 5 seconds, though...

@westreed
Contributor

westreed commented Aug 8, 2023


Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).

Server-side message during the 10-minute wait.

@westreed westreed self-assigned this Aug 8, 2023
@westreed
Contributor

langchain-ai/langchain#3005

llm = ChatOpenAI(temperature=0, model_name=model, request_timeout=120)

Check whether setting request_timeout in llm_factory.py is actually applied.
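
A minimal sketch of that change; the factory function name here (create_llm) is an assumption, and the real llm_factory.py may be shaped differently:

from langchain.chat_models import ChatOpenAI

# Hypothetical factory helper; the actual llm_factory.py may differ.
def create_llm(model: str, request_timeout: int = 120) -> ChatOpenAI:
    # Expose request_timeout so callers can tune it per use case.
    return ChatOpenAI(temperature=0, model_name=model, request_timeout=request_timeout)

# Quick check that the value is actually applied:
# llm = create_llm("gpt-3.5-turbo")
# print(llm.request_timeout)  # expect 120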

@dasd412
Contributor Author

dasd412 commented Oct 22, 2023

Resolved, since langchain was removed and replaced with direct openai calls.
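
For reference, a minimal sketch of setting the timeout and retries when calling openai directly; this assumes the v1 openai Python client, which may differ from the SDK version actually used here:

from openai import OpenAI

# With the v1 client, the timeout and retry limits are configured on the client itself.
client = OpenAI(timeout=30.0, max_retries=2)

# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "ping"}],
# )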

@dasd412 dasd412 closed this as completed Oct 22, 2023
@dasd412 dasd412 unpinned this issue Nov 3, 2023