feat: showcase text generation and thinking steps from suggested prompts #37
Conversation
listeners/assistant/message.py
Outdated
```python
# The second example shows detailed thinking steps similar to tool calls
else:
    streamer.append(
        chunks=[
            MarkdownTextChunk(
                text="Hello.\nI have received the task. ",
            ),
            MarkdownTextChunk(
                text="This task appears manageable.\nThat is good.",
            ),
            TaskUpdateChunk(
                id="001",
                title="Understanding the task...",
                status="in_progress",
                details="- Identifying the goal\n- Identifying constraints",
            ),
            TaskUpdateChunk(
                id="002",
                title="Performing acrobatics...",
                status="pending",
            ),
        ],
    )
    time.sleep(4)

    streamer.append(
        chunks=[
            TaskUpdateChunk(
                id="001",
                title="Understanding the task...",
                status="complete",
                details="\n- Pretending this was obvious",
                output="We'll continue to ramble now",
            ),
            TaskUpdateChunk(
                id="002",
                title="Performing acrobatics...",
                status="in_progress",
            ),
        ],
    )
    time.sleep(4)
```
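(Note how the second `streamer.append` reuses `id="001"` with `status="complete"`, which appears to update the earlier task entry in place, while `id="002"` advances from pending to in-progress.)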
Nice! I've been experimenting with something similar on my own, and I like this idea. Showcasing what the agent is doing, like tool calls or thinking steps, is great.
listeners/events/app_mentioned.py
Outdated
```diff
 )
-returned_message = call_llm([{"role": "user", "content": text}])
+returned_message = call_llm(text)
```
⭐ nice
📸 Demo with current changes!
Timeline with tool calls and LLM call: dice.mov
Plan with mocked data: plan.mov
mwbrooks left a comment
✅ Super fun prompts that showcase the timeline and plan display styles alongside the new task and plan blocks. I think this is a good start to inspiring developers to integrate their own tool calls.
🧪 Local testing worked like a charm. I didn't find any issues.
📝 I left a few suggestions. Nothing is blocking, but I'd ask you to consider flipping the inverted logic (maybe I'm just a little slow in the ol' noodle and need it outlined more simply). Feel free to punt any or all of these to the future.
🚀 Please drop that draft label so we can smash the merge!
listeners/assistant/message.py
Outdated
```python
# This first example shows a generated text response for the provided prompt
if message["text"] != "Wonder a few deep thoughts.":
```
suggestion: It took me a few re-reads before I noticed the inverted logic (`!=`).
At first, I kept wondering why "Wonder a few deep thoughts." would be passed directly to the LLM, while the `else` statement handled the example.
Non-blocker, but could we flip this logic to `== "Wonder a few deep thoughts.":` and have the `else` handle the general LLM response?
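For illustration, the flipped branches might read like this rough sketch (not a complete handler; `stream_thinking_steps_example` is a hypothetical helper name, not from this PR):

```python
if message["text"] == "Wonder a few deep thoughts.":
    # Canned demo with thinking steps (hypothetical helper)
    stream_thinking_steps_example(streamer)
else:
    # General LLM response for any other prompt
    returned_message = call_llm(message["text"])
```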
```python
streamer.append(
    chunks=[
        MarkdownTextChunk(
            text="Hello.\nI have received the task. ",
        ),
        MarkdownTextChunk(
            text="This task appears manageable.\nThat is good.",
        ),
        TaskUpdateChunk(
            id="001",
            title="Understanding the task...",
            status="in_progress",
            details="- Identifying the goal\n- Identifying constraints",
        ),
        TaskUpdateChunk(
            id="002",
            title="Performing acrobatics...",
            status="pending",
        ),
    ],
)
time.sleep(4)
```
praise: Love this example. It's dead simple and easy to grok. It's ripe to be altered, experimented with, and hacked on by folks playing with the sample. 🙌🏻
ai/tools/__init__.py
Outdated
suggestion: I think it's reasonable to rename `ai/` to `agent/` if we want, but it can also happen in a later PR.
I really like this example 🤩
@srtaalej It makes for fun games! 🎲 ✨
srtaalej left a comment
Nice changes, and it's working for me ⭐ ⭐ ⭐
```diff
-messages.extend(messages_in_thread)
-response = openai_client.responses.create(
-    model="gpt-4o-mini", input=messages, stream=True
+streamer: ChatStream,
```
What do we think about moving the OpenAI client initialization to the module level instead of creating it inside `call_llm()`? 🤔 That way we're not re-initializing the client on every call.
It's probably fine either way for this dice-rolling use case, but it might be better to model best practices for users ❓
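For illustration, a minimal sketch of the module-level version (assuming `OPENAI_API_KEY` is already in the environment at import time; the helper body is assumed, not copied from this PR):

```python
import os

from openai import OpenAI

# Created once at import time; assumes OPENAI_API_KEY is already set
# (e.g., dotenv has run) before this module is imported.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def call_llm(text: str) -> str:
    response = openai_client.responses.create(
        model="gpt-4o-mini",
        input=[{"role": "user", "content": text}],
    )
    return response.output_text
```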
@srtaalej That might be nice to follow up with! At the moment I hit the following error though, perhaps due to import order and the "dotenv" package being loaded in the main module:
`openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable`
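That error is consistent with the client module being imported before `load_dotenv()` runs. A sketch of entrypoint ordering that avoids it (the `ai.llm` module path is assumed for illustration):

```python
# app.py — load the .env file before importing any module that
# constructs the OpenAI client at import time.
from dotenv import load_dotenv

load_dotenv()

from ai import llm  # noqa: E402 — imported after env vars are loaded
```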
🗣️ ramble: We might also consider keeping it within a function to avoid global variables, but I don't know the best practices for modules, I admit!
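One middle-ground sketch is lazy initialization: no module-level global is created at import time, yet the client is still built only once:

```python
from functools import lru_cache

from openai import OpenAI


@lru_cache(maxsize=1)
def get_openai_client() -> OpenAI:
    # Constructed once, on first use, after dotenv has had a chance to run.
    return OpenAI()
```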
Ah, I was getting the same error when I was working on the app. I'll try to figure out how we fixed it in case we run into it again, but good work! ⭐ ⭐ ⭐
Co-authored-by: Michael Brooks <michael@michaelbrooks.ca>
zimeg left a comment
@jonigl @srtaalej @mwbrooks Thanks all for taking a look at these changes and the kind encouragement 💌
I made a few changes around code structure that I hope make things easier to understand, but we might want to follow up with instructions around development with the CLI too.
Let's merge this after a bit more testing 🧪 ✨
@mwbrooks Thanks for that last-minute find too! I'll merge this now for upcoming testing and continued iterations 🚢 💨 🫡
Type of change
Summary
This PR showcases "chunks" in a chat stream. The example is meant to be quick to understand but perhaps not so meaningful. Related: slackapi/python-slack-sdk#1809
Requirements