[Bug]: base_url seems not to work with "http://localhost:1234/v1" #2514
Comments
Hi @holisHsu, another way to get your model names for LM Studio is to use the id value the server reports for each model. One last thing to check is the **Sample** section.

If you use LM Studio, try your curl command: `curl http://localhost:1234/v1/chat/completions`
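A quick way to check both points above from Python rather than curl is to query the server with the openai client; a minimal sketch, assuming LM Studio is running on its default port and ignores the API key (the model id passed to the chat call is a placeholder, replace it with one printed by the loop):

```python
from openai import OpenAI

# Point the client at the local LM Studio server; the key is a dummy value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# The id field of each entry is the exact model name to use in your config.
for model in client.models.list():
    print(model.id)

# Hypothetical id below; use one of the ids printed above.
response = client.chat.completions.create(
    model="dolphin-2.9-llama3-8b-q8_0",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```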
Thank you so much; the problem seems to be that I was using the wrong short name. I will close this issue because it's not a bug.
Thanks for your advice, I have tried this and confirmed it works, as mentioned in "Steps to reproduce" 🙏 It turns out I provided the wrong short model name, and I am now considering whether it would be possible to provide more detailed error information to indicate the problem.
Thanks in advance for any potential help, and apologies if what I am reporting is not a bug.
Description
When I run the example code below with base_url pointing to my local LM Studio server, ChatCompletion fails and the request path appears to be wrong.
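The original snippet is not preserved in this thread; the following is a minimal sketch of the kind of setup described, assuming pyautogen's standard config_list format (the model and api_key values are placeholders, not verified against the original report):

```python
import autogen

config_list = [
    {
        "model": "Llama3-7B",                    # short name as reported; likely the culprit
        "base_url": "http://localhost:1234/v1",  # local LM Studio server
        "api_key": "not-needed",                 # placeholder; LM Studio does not validate keys
    }
]

# A minimal two-agent exchange to trigger a ChatCompletion request.
assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})
user = autogen.UserProxyAgent(name="user", human_input_mode="NEVER", code_execution_config=False)
user.initiate_chat(assistant, message="Say hello.")
```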
Error
Screenshot for ChatCompletion:
[screenshot omitted]
It should call /v1/chat/completions instead?
Steps to reproduce
Model Used
Llama3-7B (cognitivecomputations/dolphin-2.9-llama3-8b-gguf/dolphin-2.9-llama3-8b-q8_0.gguf)
Expected Behavior
ChatCompletion should return a result.
Screenshots and logs
The traceback log is provided as follows; if any more information should be revealed, please let me know. Thank you so much.
Additional Information
No response