- This is a good feature to add! Currently you can only chain the output from LLMChain; we'll bring that ability to other chains as well.
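For anyone who wants this behaviour today by dropping down to code, here is a minimal LangChainJS sketch of the pattern: `SimpleSequentialChain` pipes one `LLMChain`'s output into the next chain's input. The model choice and both prompts are illustrative assumptions, not anything Flowise ships:

```ts
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain, SimpleSequentialChain } from "langchain/chains";

async function main() {
  // Model and prompts are illustrative assumptions.
  const llm = new OpenAI({ temperature: 0 });

  const outlineChain = new LLMChain({
    llm,
    prompt: PromptTemplate.fromTemplate(
      "Write a one-paragraph outline about {topic}."
    ),
  });

  const draftChain = new LLMChain({
    llm,
    prompt: PromptTemplate.fromTemplate(
      "Expand this outline into a short draft:\n\n{outline}"
    ),
  });

  // SimpleSequentialChain feeds the single output of each chain
  // into the single input of the next, regardless of variable names.
  const overall = new SimpleSequentialChain({
    chains: [outlineChain, draftChain],
  });

  console.log(await overall.run("vector databases"));
}

main();
```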
- I am curious: suppose I embed a book, and instead of answering my questions, I want the LLM to ask me a question based on the embedded book. It should then evaluate my response by comparing it to what the book says and correct me. Is this possible? I think it would require a custom agent and may not be possible through Flowise currently. Please correct me if I am wrong. Thank you.
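Until something like this exists as a first-class Flowise node, one way the loop could be built directly in LangChainJS is with two plain LLMChains over the same retrieved passage. Everything below (the prompts, the model, and the in-memory store standing in for the embedded book) is an illustrative assumption:

```ts
import { OpenAI } from "langchain/llms/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";

async function main() {
  const llm = new OpenAI({ temperature: 0 });

  // Stand-in for the embedded book; in practice, load the book's
  // chunks into whatever vector store you already use.
  const book = await MemoryVectorStore.fromTexts(
    ["...chunks of the book text would go here..."],
    [{}],
    new OpenAIEmbeddings()
  );

  // 1. Pull a passage and have the model quiz the reader on it.
  const [passage] = await book.similaritySearch("chapter topic", 1);
  const ask = new LLMChain({
    llm,
    prompt: PromptTemplate.fromTemplate(
      "Based only on this passage, ask the reader one question:\n\n{passage}"
    ),
  });
  const { text: question } = await ask.call({ passage: passage.pageContent });

  // 2. Collect the reader's answer (hardcoded here for illustration).
  const readerAnswer = "My attempt at an answer...";

  // 3. Grade the answer against the same passage and correct it.
  const grade = new LLMChain({
    llm,
    prompt: PromptTemplate.fromTemplate(
      "Passage:\n{passage}\n\nQuestion: {question}\n" +
        "Reader's answer: {answer}\n\n" +
        "Judge the answer against the passage and correct any mistakes."
    ),
  });
  const { text: feedback } = await grade.call({
    passage: passage.pageContent,
    question,
    answer: readerAnswer,
  });

  console.log(question, "\n---\n", feedback);
}

main();
```

Note that no custom agent is strictly required for a single question/answer turn; agent-style logic only becomes necessary to manage the back-and-forth state across multiple turns.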
- I need some help. I trained my chatbot on a PDF document. Now I want to derive insights from the conversation between the user and ChatGPT (for instance, the total number of words used in the conversation). Once the ConversationalRetrievalQAChain runs, I need its output to be summarized, possibly stored in a vector store, and then sent to another instance of GPT that evaluates the output against a prompt I provide. However, I cannot figure out how to send the output from ConversationalRetrievalQAChain to a memory/vector store, as Flowise does not let me connect the two, and without that I cannot pass it to another LLM. Can someone please guide me on whether this is possible? Thanks.
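The Flowise canvas does not expose that connection, but the same plumbing is a few lines of LangChainJS if you drop down to code in a separate script that talks to the same stores. Here is one possible sketch; the question, prompts, model, and both stores are all assumptions, not Flowise's API:

```ts
import { OpenAI } from "langchain/llms/openai";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ConversationalRetrievalQAChain, LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";
import { Document } from "langchain/document";

async function main() {
  const llm = new OpenAI({ temperature: 0 });
  const embeddings = new OpenAIEmbeddings();

  // Stand-in for the store holding the PDF's chunks.
  const pdfStore = await MemoryVectorStore.fromTexts(
    ["...chunks of the PDF would go here..."],
    [{}],
    embeddings
  );

  // 1. Run the retrieval QA chain as usual.
  const qa = ConversationalRetrievalQAChain.fromLLM(llm, pdfStore.asRetriever());
  const question = "What does the document say about X?";
  const { text: answer } = await qa.call({ question, chat_history: "" });

  // 2. Summarize the exchange with a plain LLMChain.
  const transcript = `User: ${question}\nAssistant: ${answer}`;
  const summarize = new LLMChain({
    llm,
    prompt: PromptTemplate.fromTemplate(
      "Summarize this conversation in a few sentences:\n\n{transcript}"
    ),
  });
  const { text: summary } = await summarize.call({ transcript });

  // 3. Store the summary in a separate vector store for later retrieval.
  const summaryStore = new MemoryVectorStore(embeddings);
  await summaryStore.addDocuments([new Document({ pageContent: summary })]);

  // 4. Feed the transcript to a second LLM with your own evaluation prompt.
  const evaluator = new LLMChain({
    llm,
    prompt: PromptTemplate.fromTemplate(
      // Illustrative rubric; substitute the prompt you provide.
      "Analyze this conversation and report insights about it:\n\n{transcript}"
    ),
  });
  const { text: evaluation } = await evaluator.call({ transcript });

  console.log(summary, "\n---\n", evaluation);
}

main();
```

One caveat: for a deterministic metric like total word count, something like `transcript.split(/\s+/).length` in plain code will be far more reliable than asking a model to count.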