Add Flowise AI LLM provider integration #3864
Conversation
- Introduced FlowiseAiOptions component for user settings input.
- Added Flowise AI logo and integrated it into LLM preferences.
- Updated constants and environment variables for Flowise configuration.
- Implemented FlowiseLLM class for backend processing and API interaction.
- Enhanced workspace settings to include Flowise as a selectable LLM provider.
I don't have Flowise running locally to better understand how this functionality all works, but it has some pretty significant code diffs in the LLM provider that would indicate that Flowise is quite different from other model providers and would lack the ability to use historical chats.

If there is an easy flow to create a testing Flowise path I can test against, I can see more about how this works. Looking at their docs, there are a lot of fields and formatting we could use but don't, or fields that are used that the API says do not exist or are invalid to set.
```js
const response = await axios.post(
  `${this.basePath}/api/v1/prediction/${process.env.FLOWISE_LLM_CHATFLOW_ID}`,
  {
    question: lastMessage.content,
    streaming: true,
  },
  {
    headers: {
      "Content-Type": "application/json",
    },
    responseType: "stream",
  }
);
```
According to https://docs.flowiseai.com/api-reference/prediction, `streaming` is not a property you can pass in? Also, since it can take a history, why not pass in the whole history? In this format it would never recall a conversation message from a previous message.

Likewise, if you are managing the history in Flowise, `/reset` from AnythingLLM would not clear the history from Flowise, so you would still have issues with history and it retaining information you no longer wish to use.
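
For illustration, a minimal sketch of forwarding the whole conversation via the Prediction API's `history` field rather than only the last message. The `apiMessage`/`userMessage` role names come from the Flowise docs; the `messages` variable and role mapping here are hypothetical stand-ins for whatever the provider class holds:

```js
// Hypothetical sketch: map the chat history onto Flowise's `history` array.
// Assumes `messages` is a non-empty [{ role, content }] array where the
// last entry is the user's current question.
const history = messages.slice(0, -1).map((msg) => ({
  role: msg.role === "assistant" ? "apiMessage" : "userMessage",
  content: msg.content,
}));

const body = {
  question: messages.at(-1).content,
  history, // lets Flowise recall earlier turns in the conversation
};
```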
```js
const promptTokens = response.usage?.input_tokens || 0;
const completionTokens = response.usage?.output_tokens || 0;

return {
  textResponse: response.content[0].text,
  metrics: {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens,
    outputTps: completionTokens / result.duration,
    duration: result.duration,
  },
};
```
The output example of https://docs.flowiseai.com/api-reference/prediction shows that token measurements are not returned from this endpoint, so they would always be zero?
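
For what it's worth, the documented Prediction response puts the generated answer in a `text` field with no usage counts, so a hedged sketch of the return path under that assumption might look like this (the `result.duration` timer is taken from the diff above):

```js
// Sketch assuming the documented response shape from
// https://docs.flowiseai.com/api-reference/prediction,
// e.g. { text: "...", question: "...", chatId: "..." } with no usage
// fields, so token metrics can only ever be reported as zero.
return {
  textResponse: response.text,
  metrics: {
    prompt_tokens: 0,
    completion_tokens: 0,
    total_tokens: 0,
    outputTps: 0,
    duration: result.duration,
  },
};
```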
```js
  writeResponseChunk,
  clientAbortedHandler,
} = require("../../helpers/chat/responses");
const axios = require("axios");
```
We don't use `axios` anywhere in our codebase directly and instead rely on fetch. I'm surprised this import even worked, as it must be a sub-dep of something else we use in this execution path.
Haha, yeah, I shouldn't have used axios. My bad, I didn't realize. I've updated the PR to use fetch instead. Thanks for pointing it out!
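
A minimal sketch of the same request rewritten with the built-in `fetch`, per the suggestion above. The URL and env var names come from the diff; the error handling shown is an assumption:

```js
// Sketch of the axios call using Node's global fetch (Node 18+).
const response = await fetch(
  `${this.basePath}/api/v1/prediction/${process.env.FLOWISE_LLM_CHATFLOW_ID}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question: lastMessage.content }),
  }
);
if (!response.ok)
  throw new Error(`Flowise request failed with status ${response.status}`);
```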
```js
this.basePath = process.env.FLOWISE_LLM_BASE_PATH;
this.model = "flowise";
if (!this.model) throw new Error("FlowiseLLM must have a valid model set.");
```
This can never be false since you manually set it and it's not user-configurable.
Removed
```js
  content.push({
    type: "image_url",
    image_url: {
      url: attachment.contentString,
      detail: "high",
    },
  });
}
```
According to the prediction endpoint, images should be sent as:
"uploads": [
{
"type": "file",
"name": "image.png",
"data": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAABjElEQVRIS+2Vv0oDQRDG",
"mime": "image/png"
}
]
You're right! I've updated the format to match the documentation. That said, I noticed Flowise is pretty different from the rest of the AnythingLLM app, and these attachments aren't actually getting passed to the Flowise API yet. Give me a bit to fix that.
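
For illustration, a sketch of mapping attachments onto the `uploads` shape quoted above. The attachment field names used here (`name`, `mime`, and `contentString` as a base64 data URL) are assumptions based on the diff:

```js
// Hypothetical mapping from AnythingLLM attachments to Flowise uploads.
const uploads = (attachments || []).map((attachment) => ({
  type: "file",
  name: attachment.name,
  data: attachment.contentString, // assumed to be a base64 data URL
  mime: attachment.mime,
}));

const body = { question: lastMessage.content, uploads };
```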
…structure for attachments. Improved error handling for fetch responses and adjusted message formatting for API requests.
- Consolidate prompt construction logic
- Update streamGetChatCompletion to use structured input
- Fix attachment formatting for Flowise API
Pull Request Type
Relevant Issues
resolves #1785
What is in this change?
Adds Flowise AI as a new LLM provider (environment variable, server settings, and provider class); a sketch of the env entries follows this list
Integrates Flowise options into both the onboarding and chat settings frontend LLM selection UIs
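
For context, a hedged sketch of the `.env` entries involved, using the variable names from the diff; the `LLM_PROVIDER` key and all values shown are assumptions/placeholders:

```sh
# Placeholder .env entries for the Flowise provider (names from the diff).
LLM_PROVIDER='flowise'
FLOWISE_LLM_BASE_PATH='http://localhost:3000'
FLOWISE_LLM_CHATFLOW_ID='your-chatflow-id'
```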
Additional Information
Developer Validations
I ran `yarn lint` from the root of the repo & committed changes