Can we support locally deployed private models directly through HTTP, instead of models deployed through frameworks (LocalAI, Ollama, etc.)? #2198
Comments
🤖 Hello @KateHed! It's great to cross paths with you again in the dify repository. Hope you're having a good day!

Based on the current implementation of dify (v0.5.0), it does support the integration of local models through HTTP requests. This is indicated by the presence of 'localai' in the list of model providers in the _position.yaml file, which suggests that dify can interact with a local AI model through HTTP requests. You can find more details in the api/core/model_runtime/model_providers/_position.yaml file.

Dify integrates with local models through HTTP requests by providing a set of APIs that allow users to interact with the models. These APIs include functionalities such as listing the model providers, getting the credentials for a specific model provider, validating those credentials, saving or deleting them, and more. You can find more details in the api/controllers/console/workspace/model_providers.py file.

The process of integrating a local model through HTTP requests in dify involves defining the model provider and configuration method, generating the form schemas based on that provider and configuration method, validating the form values, saving or removing the credentials, and rendering the form. You can find more details in the web/app/components/header/account-setting/model-provider-page/model-modal/index.tsx file. Please note that the actual HTTP requests are likely made in the …

I hope this information helps! If you have any other questions or need further clarification, feel free to ask.

Best regards
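For concreteness, the kind of HTTP call such a local provider ends up making looks roughly like this. This is a sketch, not Dify's actual code; it assumes a LocalAI instance at http://localhost:8080 exposing its OpenAI-compatible chat route, with a model of your choice already loaded:

```python
# Rough sketch of the HTTP request an OpenAI-style local provider makes.
# Assumptions: a LocalAI server at http://localhost:8080 and a loaded model
# named "my-local-model" (both placeholders for your own setup).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "my-local-model",  # placeholder: whatever LocalAI serves
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.7,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```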
We currently offer an OpenAI-compatible API provider for direct integration with your personally developed model inference API.
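That provider expects an endpoint shaped like OpenAI's chat-completions API. Below is a minimal, non-streaming sketch of such an endpoint wrapped around your own inference code; it is not an official Dify component, `run_my_model` is a placeholder for your actual model call, and only the basic response fields are shown:

```python
# Minimal OpenAI-style chat endpoint around a locally deployed model, so it
# can be added in Dify via the OpenAI-API-compatible provider.
# `run_my_model` is a placeholder; field names follow the OpenAI
# chat-completions shape (non-streaming only).
from typing import Dict, List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ChatRequest(BaseModel):
    model: str
    messages: List[Dict]
    temperature: float = 1.0


def run_my_model(prompt: str) -> str:
    # Placeholder: call your locally deployed model here.
    return "(model output for: " + prompt + ")"


@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest):
    prompt = req.messages[-1]["content"]
    return {
        "id": "chatcmpl-local-1",
        "object": "chat.completion",
        "model": req.model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": run_my_model(prompt)},
                "finish_reason": "stop",
            }
        ],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```

Run it with something like `uvicorn wrapper:app --port 8000` and point the OpenAI-API-compatible provider at `http://<host>:8000/v1`; whether Dify also needs streaming or a models-listing route is worth checking against the provider's documentation.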
Our models are not compatible with OpenAI's interface, so they cannot be called directly through the OpenAI-API-compatible provider.
Technically it's feasible to implement, but we can't implement each one as it comes along; there are many, and it's quite time-consuming. Developers can instead refer to the model_runtime module and integrate their model on their own. We need to limit the edge cases we take on so we can focus on developing our main features.
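If writing a full model_runtime provider is more than you need, another route (not the one the maintainer describes, and not an official Dify mechanism) is a small translation proxy: keep your private model's own HTTP interface and put an OpenAI-shaped façade in front of it, then register the façade through the OpenAI-API-compatible provider. In the sketch below, the upstream URL and the `{"query": ...}` / `{"answer": ...}` fields are invented placeholders for your own service:

```python
# Hypothetical translation proxy: OpenAI-style requests in, your private
# model's native HTTP API out. PRIVATE_MODEL_URL and the query/answer field
# names are placeholders, not a real API.
from typing import Dict, List

import requests
from fastapi import FastAPI
from pydantic import BaseModel

PRIVATE_MODEL_URL = "http://localhost:9000/infer"  # placeholder URL

app = FastAPI()


class ChatRequest(BaseModel):
    model: str
    messages: List[Dict]


@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest):
    # Translate the OpenAI-style request into the private API's shape.
    upstream = requests.post(
        PRIVATE_MODEL_URL,
        json={"query": req.messages[-1]["content"]},
        timeout=120,
    )
    upstream.raise_for_status()
    return {
        "id": "chatcmpl-proxy-1",
        "object": "chat.completion",
        "model": req.model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": upstream.json()["answer"]},
                "finish_reason": "stop",
            }
        ],
    }
```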
Self Checks
1. Is this request related to a challenge you're experiencing?
We need support for using local models directly through HTTP requests, rather than through a fixed model provider.
2. Describe the feature you'd like to see
We want to use local models directly through HTTP requests.
3. How will this feature improve your workflow or experience?
This feature will help us use our locally deployed models with Dify.
4. Additional context or comments
No response
5. Can you help us with this feature?