
feat: chat with compositeBrain ( with/out streaming) #1883

Merged
merged 22 commits into main on Dec 15, 2023

Conversation

gozineb
Contributor

@gozineb gozineb commented Dec 13, 2023

DONE

  • generate_stream, generate and save answer in BE

TODO

  • Create an intermediary make_streaming_recursive_tool_calls async function
  • Save intermediary answers in a new message logs column, then fetch and display them in the frontend
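The intermediary make_streaming_recursive_tool_calls helper is only named in the TODO above, so its signature and behavior are assumptions. One plausible shape is an async generator that streams tokens and recurses whenever the model requests another tool/brain call, a minimal sketch:

```python
import asyncio
from typing import AsyncIterator, Awaitable, Callable

# Hypothetical sketch: the PR's TODO only names this function; the
# signature, the reply dict shape, and the recursion cap are assumptions,
# not Quivr's actual API.
async def make_streaming_recursive_tool_calls(
    question: str,
    call_llm: Callable[[str], Awaitable[dict]],
    depth: int = 0,
    max_depth: int = 3,
) -> AsyncIterator[str]:
    """Stream tokens; if the model asks for a tool call, recurse on it."""
    if depth >= max_depth:
        return  # stop runaway recursion
    reply = await call_llm(question)
    for token in reply["tokens"]:
        yield token  # stream each partial token to the caller
    if reply.get("tool_call"):  # the model wants another brain/tool
        async for token in make_streaming_recursive_tool_calls(
            reply["tool_call"], call_llm, depth + 1, max_depth
        ):
            yield token


async def demo() -> list[str]:
    # Fake LLM: the first call requests a tool, the second answers.
    async def fake_llm(prompt: str) -> dict:
        if prompt == "q":
            return {"tokens": ["calling ", "tool..."], "tool_call": "lookup"}
        return {"tokens": ["final ", "answer"]}

    return [t async for t in make_streaming_recursive_tool_calls("q", fake_llm)]

print(asyncio.run(demo()))
```

The intermediary tokens yielded before the recursion are exactly the "intermediary answers" the TODO wants persisted in the new message logs column.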

@dosubot dosubot bot added the size:XL This PR changes 500-999 lines, ignoring generated files. label Dec 13, 2023
@dosubot dosubot bot added the area: backend Related to backend functionality or under the /backend directory label Dec 13, 2023

vercel bot commented Dec 13, 2023

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| docs | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 15, 2023 8:56am |
| quivr-strapi | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 15, 2023 8:56am |
| quivrapp | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 15, 2023 8:56am |

@mamadoudicko mamadoudicko marked this pull request as draft December 13, 2023 17:32
Comment on lines 128 to 129
```python
if not connected_brains:
    response = HeadlessQA(
        chat_id=chat_id,
        model=self.model,
        max_tokens=self.max_tokens,
        temperature=self.temperature,
        streaming=self.streaming,
        prompt_id=self.prompt_id,
    ).generate_answer(chat_id, question)
    brain = brain_service.get_brain_by_id(self.brain_id)
    if save_answer:
        new_chat = chat_service.update_chat_history(
            CreateChatHistory(
                **{
                    "chat_id": chat_id,
                    "user_message": question.question,
                    "assistant": response.assistant,
                    "brain_id": question.brain_id,
                    "prompt_id": self.prompt_to_use_id,
                }
            )
        )
```

Maybe directly call `generate_answer` of `HeadlessQA` with `save_answer = CURRENT_SAVE_ANSWER`?


Something like:

```python
def generate_answer(
    self, chat_id: UUID, question: ChatQuestion, save_answer: bool
) -> str:
    connected_brains = brain_service.get_connected_brains(self.brain_id)
    if not connected_brains:
        return HeadlessQA(
            chat_id=chat_id,
            model=self.model,
            max_tokens=self.max_tokens,
            temperature=self.temperature,
            streaming=self.streaming,
            prompt_id=self.prompt_id,
        ).generate_answer(chat_id, question, save_answer=save_answer)
```
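The suggestion above boils down to threading save_answer through to the delegate instead of saving the chat history at the call site. A self-contained sketch of that delegation pattern, using stub classes and hypothetical names rather than Quivr's real services:

```python
# Sketch of the delegation pattern suggested in the review: the composite
# brain passes save_answer down, and the delegate records the history.
# All classes and names here are illustrative stubs, not Quivr's API.

saved_history: list[dict] = []  # stands in for chat_service's storage


class HeadlessQA:
    def generate_answer(self, chat_id: str, question: str, save_answer: bool) -> str:
        answer = f"answer to {question!r}"
        if save_answer:
            # the delegate, not the caller, persists the chat history
            saved_history.append({"chat_id": chat_id, "assistant": answer})
        return answer


class CompositeBrainQA:
    def generate_answer(self, chat_id: str, question: str, save_answer: bool) -> str:
        connected_brains: list = []  # pretend no brains are connected
        if not connected_brains:
            # one call delegates both answering and saving
            return HeadlessQA().generate_answer(chat_id, question, save_answer)
        raise NotImplementedError("composite path not sketched here")


print(CompositeBrainQA().generate_answer("chat-1", "hi", save_answer=True))
print(len(saved_history))
```

This keeps a single save path inside HeadlessQA, so the composite caller cannot accidentally double-save or forget to save.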

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Dec 15, 2023
@mamadoudicko mamadoudicko merged commit 742e9bd into main Dec 15, 2023
8 checks passed
StanGirard added a commit that referenced this pull request Dec 15, 2023
🤖 I have created a release *beep* *boop*
---


## 0.0.141 (2023-12-15)

## What's Changed
* feat[i18n]: Added i18n documentation to the contribution guidelines by @NilsJacobsen in #1899
* feat: Update Explore button label by @StanGirard in #1901
* feat: chat with compositeBrain ( with/out streaming) by @gozineb in #1883
* feat: update brains library by @mamadoudicko in #1903


**Full Changelog**: v0.0.140...v0.0.141

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).
Labels
area: backend Related to backend functionality or under the /backend directory lgtm This PR has been approved by a maintainer size:XXL This PR changes 1000+ lines, ignoring generated files.

2 participants