Add Flowise AI LLM provider integration #3864

Open
wants to merge 13 commits into base: master

Conversation

Rrojaski
Contributor

Pull Request Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 🔨 chore
  • 📝 docs

Relevant Issues

resolves #1785

What is in this change?

Adds Flowise AI as a new LLM provider (environment variable, server settings, and provider class)

Integrates Flowise options into both the onboarding and chat settings LLM selection UIs
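
For reference, the provider is configured through two environment variables (the names are taken from the diff snippets quoted later in this thread; the snippet below is only an illustration):

// Illustrative only: the two environment variables the new provider reads,
// as they appear in the diff snippets quoted later in this thread.
const basePath = process.env.FLOWISE_LLM_BASE_PATH; // e.g. http://localhost:3000 (assumed default Flowise port)
const chatflowId = process.env.FLOWISE_LLM_CHATFLOW_ID; // ID of the Flowise chatflow to send predictions to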

Additional Information

Developer Validations

  • I ran yarn lint from the root of the repo & committed changes
  • Relevant documentation has been updated
  • I have tested my code functionality
  • Docker build succeeds locally

Rrojaski added 9 commits May 15, 2025 20:04
- Introduced FlowiseAiOptions component for user settings input.
- Added Flowise AI logo and integrated it into LLM preferences.
- Updated constants and environment variables for Flowise configuration.
- Implemented FlowiseLLM class for backend processing and API interaction.
- Enhanced workspace settings to include Flowise as a selectable LLM provider.
@timothycarambat changed the title from "feat: add Flowise AI LLM provider integration, 1785 flowise api" to "feat: add Flowise AI LLM provider integration" on May 27, 2025
@timothycarambat changed the title from "feat: add Flowise AI LLM provider integration" to "Add Flowise AI LLM provider integration" on May 27, 2025
Member

@timothycarambat left a comment

I don't have Flowise running locally to better understand how this functionality all works, but this has some pretty significant code diffs in the LLM provider, which would indicate that Flowise is quite different from other model providers and would lack the ability to use historical chats.

If there is an easy way to create a test Flowise flow I can run against, I can see more about how this works. Looking at their docs, there are a lot of fields and formats we could use but don't, as well as fields that are used here that the API says do not exist or are invalid to set.

Comment on lines 165 to 177
const response = await axios.post(
  `${this.basePath}/api/v1/prediction/${process.env.FLOWISE_LLM_CHATFLOW_ID}`,
  {
    question: lastMessage.content,
    streaming: true,
  },
  {
    headers: {
      "Content-Type": "application/json",
    },
    responseType: "stream",
  }
);
Member

According to https://docs.flowiseai.com/api-reference/prediction, streaming is not a property you can pass in. Also, since the endpoint can take a history, why not pass in the whole history? In this format it would never recall a conversation message from a previous turn.

Likewise, if you are managing the history in Flowise, /reset from AnythingLLM would not clear the history on the Flowise side, so you would still have issues with history retaining information you no longer wish to use.
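
One rough sketch of carrying the whole conversation across, assuming the prediction endpoint's documented history field accepts { role, content } pairs, and assuming messages and lastMessage are the provider's existing variables:

// Hypothetical sketch only: map AnythingLLM's message array onto Flowise's
// "history" field instead of sending just the last message. Verify the exact
// role names ("userMessage" / "apiMessage") against the Flowise docs.
const history = messages
  .filter((m) => m.role !== "system")
  .map((m) => ({
    role: m.role === "assistant" ? "apiMessage" : "userMessage",
    content: m.content,
  }));

const response = await fetch(
  `${this.basePath}/api/v1/prediction/${process.env.FLOWISE_LLM_CHATFLOW_ID}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question: lastMessage.content, history }),
  }
);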

Comment on lines +140 to +152
const promptTokens = response.usage?.input_tokens || 0;
const completionTokens = response.usage?.output_tokens || 0;

return {
  textResponse: response.content[0].text,
  metrics: {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens,
    outputTps: completionTokens / result.duration,
    duration: result.duration,
  },
};
Member

The output example at https://docs.flowiseai.com/api-reference/prediction shows that token measurements are not returned from this endpoint, so these values would always be zero?
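
If the endpoint really reports no usage data, one option is to omit the token fields rather than default them to zero. A sketch only, keeping the existing result.duration timing:

// Sketch: only report token metrics when the response actually includes usage.
const usage = response.usage || null;
const metrics = {
  duration: result.duration,
  ...(usage
    ? {
        prompt_tokens: usage.input_tokens,
        completion_tokens: usage.output_tokens,
        total_tokens: usage.input_tokens + usage.output_tokens,
        outputTps: usage.output_tokens / result.duration,
      }
    : {}),
};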

const {
  writeResponseChunk,
  clientAbortedHandler,
} = require("../../helpers/chat/responses");
const axios = require("axios");
Member

We don't use Axios anywhere in our codebase directly and instead rely on fetch. I'm surprised this import even worked, as it must be a sub-dep of something else we use in this execution path.

Contributor Author

@Rrojaski Jun 1, 2025

Haha, yeah, my bad, I didn't realize we don't use axios. I've updated the PR to use fetch instead. Thanks for pointing it out!
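
For what it's worth, a fetch-based version of the streaming call might look roughly like this on Node 18+ (a sketch only; the chunk format Flowise emits should be verified against its docs):

// Rough sketch of the streamed prediction request using fetch instead of axios.
const response = await fetch(
  `${this.basePath}/api/v1/prediction/${process.env.FLOWISE_LLM_CHATFLOW_ID}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question: lastMessage.content }),
  }
);
if (!response.ok)
  throw new Error(`Flowise prediction failed with status ${response.status}`);

// response.body is a web ReadableStream; on Node 18+ it can be iterated directly.
const decoder = new TextDecoder();
for await (const chunk of response.body) {
  const text = decoder.decode(chunk, { stream: true });
  // pass `text` to the existing chunk handling (e.g. writeResponseChunk).
}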


this.basePath = process.env.FLOWISE_LLM_BASE_PATH;
this.model = "flowise";
if (!this.model) throw new Error("FlowiseLLM must have a valid model set.");
Member

This check can never fail since you set the value manually and it's not user-configurable.

Contributor Author

Removed

Comment on lines 85 to 92
  content.push({
    type: "image_url",
    image_url: {
      url: attachment.contentString,
      detail: "high",
    },
  });
}
Member

According to the prediction endpoint docs, images should be sent as:

"uploads": [
    {
      "type": "file",
      "name": "image.png",
      "data": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAABjElEQVRIS+2Vv0oDQRDG",
      "mime": "image/png"
    }
  ]

Contributor Author

You're right! I've updated the format to match the documentation. That said, I noticed Flowise is pretty different from the rest of the AnythingLLM app, and these attachments aren't actually getting passed to the Flowise API yet. Give me a bit to fix that.
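
As a starting point, the attachments could be mapped onto that uploads shape roughly like this (the field names on attachment are hypothetical; adjust to the actual attachment object AnythingLLM passes in):

// Hypothetical sketch: convert AnythingLLM attachments into Flowise "uploads".
// Assumes attachment.contentString is already a base64 data URL.
const uploads = (attachments || []).map((attachment) => ({
  type: "file",
  name: attachment.name || "image.png",
  data: attachment.contentString,
  mime: attachment.mime || "image/png",
}));
// `uploads` would then be included alongside `question` in the request body.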

Rrojaski added 3 commits June 1, 2025 16:54
…structure for attachments. Improved error handling for fetch responses and adjusted message formatting for API requests.
- Consolidate prompt construction logic
- Update streamGetChatCompletion to use structured input
- Fix attachment formatting for Flowise API

Successfully merging this pull request may close these issues.

[FEAT]: Flowise APi