
Per-App / Per-Workflow Model Provider Credentials (API Key Selection) #32167

@DmytroVons

Description


Self Checks

  • I have read the Contributing Guide and Language Policy.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report, otherwise it will be closed.
  • Please do not modify this template :) and fill in all the required fields.

1. Is this request related to a challenge you're experiencing? Tell me about your story.

Currently, Model Provider configurations (like OpenAI API keys) are managed at the Workspace level. Once an API key is added, it becomes a global resource for all applications and workflows within that workspace.

This creates significant limitations for developers building multi-tenant SaaS products or agencies serving multiple clients:

1. Billing Isolation: It is impossible to separate API usage and costs within the OpenAI dashboard for different clients if they share the same workspace.
2. Quota Management: One "heavy" workflow can exhaust the provider's rate limits (RPM/TPM) for every other app in the same workspace.
3. Security: Agencies often need to use a client's own API key for their specific workflow without giving that key access to other internal projects.

I would like to see an option to select specific credentials at a more granular level (App or Node level):

1. Credential Selection in LLM Nodes: Similar to how "Tools" allow selecting a credential_id, the LLM node in Workflow/Chatflow should allow choosing from a list of previously configured credentials for that provider.
2. App-level Override: The ability to define an API key in the App Settings that overrides the default Workspace key for all nodes within that specific application.
3. Dynamic Credential Input: (Optional) The ability to pass an API key as a variable to an LLM node, allowing for truly dynamic, user-provided key usage.
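To make the proposed resolution order concrete, here is a minimal illustrative sketch in Python. None of these names (`CredentialStore`, `resolve`, etc.) are real Dify APIs; the point is only the lookup precedence: node-level key, then app-level override, then the workspace default.

```python
# Hypothetical sketch of the proposed credential resolution order.
# Not Dify code: CredentialStore and resolve() are illustrative names.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class CredentialStore:
    workspace_key: str
    app_keys: dict = field(default_factory=dict)   # app_id -> API key
    node_keys: dict = field(default_factory=dict)  # (app_id, node_id) -> API key

    def resolve(self, app_id: str, node_id: Optional[str] = None) -> str:
        """Most specific credential wins; fall back to the workspace default."""
        if node_id is not None and (app_id, node_id) in self.node_keys:
            return self.node_keys[(app_id, node_id)]
        if app_id in self.app_keys:
            return self.app_keys[app_id]
        return self.workspace_key


store = CredentialStore(workspace_key="sk-workspace-default")
store.app_keys["client-a-app"] = "sk-client-a"
store.node_keys[("client-a-app", "llm-node-1")] = "sk-client-a-node"

print(store.resolve("other-app"))                   # sk-workspace-default
print(store.resolve("client-a-app"))                # sk-client-a
print(store.resolve("client-a-app", "llm-node-1"))  # sk-client-a-node
```

With this precedence, apps that never configure an override keep working unchanged against the workspace key, so the feature is backwards compatible.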

Current workarounds, and why they fall short:

Separate Workspaces: Today, the only way to achieve this is to create a new workspace for every client. This is hard to manage at scale and prevents sharing knowledge bases or internal tools across clients.

HTTP Request Nodes: Manually calling the OpenAI API via HTTP nodes. This works but loses all the benefits of Dify’s native LLM features (streaming, easy prompt formatting, vision support, etc.).
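For reference, the HTTP workaround amounts to hand-building a request against the OpenAI Chat Completions endpoint with a per-client key, as in this sketch (the key is a placeholder; the request is constructed but not sent):

```python
# Sketch of what the HTTP Request node workaround does under the hood:
# a raw POST to the OpenAI Chat Completions API with a per-client key.
# CLIENT_API_KEY is a placeholder; sending the request is omitted.
import json
import urllib.request

CLIENT_API_KEY = "sk-client-specific-key"  # one key per client

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Authorization": f"Bearer {CLIENT_API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it.
print(req.get_header("Authorization"))  # Bearer sk-client-specific-key
```

Everything Dify's LLM node normally handles (streaming chunks, prompt templating, vision inputs, usage accounting) has to be reimplemented by hand on top of this.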

External AI Gateways: Using LiteLLM or One API as a proxy. This adds infrastructure complexity and overhead.
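For context, the gateway workaround typically looks like the following LiteLLM proxy config fragment, which maps one model alias per client to that client's own key (aliases and keys here are placeholders):

```yaml
# Sketch of a LiteLLM proxy config: one model alias per client,
# each backed by that client's own OpenAI key. Names are placeholders.
model_list:
  - model_name: client-a-gpt4o
    litellm_params:
      model: openai/gpt-4o
      api_key: sk-client-a-key
  - model_name: client-b-gpt4o
    litellm_params:
      model: openai/gpt-4o
      api_key: sk-client-b-key
```

This achieves per-client billing separation, but only by running and operating an extra service in front of every provider call.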

2. Additional context or comments

This request aligns with previous community discussions, such as Discussion #25955, where users expressed the need for per-agent/per-client billing separation. Adding this feature would make Dify a much more powerful tool for professional service providers and SaaS builders.

3. Can you help us with this feature?

  • I am interested in contributing to this feature.

Metadata


    Labels

    stale — Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed.
    💪 enhancement — New feature or request.
