[Feature request] Utilizing local LLM model such as LLaMA.cpp or Baichuan #144
Comments
Thank you for your suggestion. I apologize for my lack of experience in the area of large language models. Could you clarify whether the project you mentioned exposes a web API that could be used?
There are many LLMs with the potential to be integrated with ETCP for EN-CHN translation, such as ChatGLM, Baichuan, InternLM, XVERSE, and Bloom.
LLaMA.cpp provides an HTTP API server that users can interact with. You can use the custom engine feature to invoke its endpoints. Here is a recipe you can refer to:

```json
{
  "name": "LLaMA.cpp",
  "languages": {
    "source": {
      "German": "de"
    },
    "target": {
      "English": "en"
    }
  },
  "request": {
    "url": "http://127.0.0.1:8080/completion",
    "method": "POST",
    "headers": {
      "Content-Type": "application/json"
    },
    "data": {
      "prompt": "Translate the content from <slang> to <tlang>: <text>",
      "n_predict": 128
    }
  },
  "response": "response['content']"
}
```

Please refer to its documentation for more information.
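To illustrate what the custom engine does with this recipe, here is a minimal Python sketch: it fills the `<slang>`/`<tlang>`/`<text>` placeholders into the prompt template and builds the JSON payload for llama.cpp's `/completion` endpoint. The function names (`build_payload`, `TEMPLATE`) are my own for illustration, not part of the plugin; the actual network call is commented out because it assumes a llama.cpp server running locally on port 8080.

```python
import json

# Prompt template from the recipe above; placeholders are filled per request.
TEMPLATE = "Translate the content from <slang> to <tlang>: <text>"

def build_payload(template, source_lang, target_lang, text, n_predict=128):
    """Substitute the <slang>/<tlang>/<text> placeholders and return the
    JSON body expected by llama.cpp's /completion endpoint."""
    prompt = (template
              .replace("<slang>", source_lang)
              .replace("<tlang>", target_lang)
              .replace("<text>", text))
    return {"prompt": prompt, "n_predict": n_predict}

payload = build_payload(TEMPLATE, "German", "English", "Guten Tag")

# To actually invoke the endpoint, a running llama.cpp server is required:
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:8080/completion",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     # Mirrors the recipe's "response": "response['content']" extraction.
#     print(json.load(resp)["content"])
```

The `"response"` field in the recipe plays the role of the last commented line: it tells the plugin which part of the server's JSON reply holds the translated text.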
Hi! Thanks for the great work; this is super helpful.
Considering the availability of LLMs on local machines, especially with the ability to quantize them (such as LLaMA.cpp, https://github.com/ggerganov/llama.cpp), LLMs can be relatively small and run on a local computer. It would be good if this plugin could also support that, so we can run the translation locally without using these online translation services. Thanks again for your work!