Conversation

@zimeg
Member

@zimeg zimeg commented Dec 18, 2025

Type of change

  • New feature

Summary

This PR showcases "chunks" in a chat stream. The example is meant to be quick to understand, even if it isn't especially meaningful. Related: slackapi/python-slack-sdk#1809

Requirements

  • I have ensured the changes I am contributing align with existing patterns and have tested and linted my code
  • I've read and agree to the Code of Conduct

@zimeg zimeg requested review from mwbrooks and srtaalej December 18, 2025 00:09
@zimeg zimeg self-assigned this Dec 18, 2025
@zimeg zimeg added the enhancement New feature or request label Dec 18, 2025
@zimeg zimeg marked this pull request as draft December 18, 2025 00:09
Comment on lines 72 to 113
# The second example shows detailed thinking steps similar to tool calls
else:
    streamer.append(
        chunks=[
            MarkdownTextChunk(
                text="Hello.\nI have received the task. ",
            ),
            MarkdownTextChunk(
                text="This task appears manageable.\nThat is good.",
            ),
            TaskUpdateChunk(
                id="001",
                title="Understanding the task...",
                status="in_progress",
                details="- Identifying the goal\n- Identifying constraints",
            ),
            TaskUpdateChunk(
                id="002",
                title="Performing acrobatics...",
                status="pending",
            ),
        ],
    )
    time.sleep(4)

    streamer.append(
        chunks=[
            TaskUpdateChunk(
                id="001",
                title="Understanding the task...",
                status="complete",
                details="\n- Pretending this was obvious",
                output="We'll continue to ramble now",
            ),
            TaskUpdateChunk(
                id="002",
                title="Performing acrobatics...",
                status="in_progress",
            ),
        ],
    )
    time.sleep(4)

Nice! I've been experimenting with something similar on my own, and I like this idea. Showing what the agent is doing (tool calls or thinking steps) is great.

  )
- returned_message = call_llm([{"role": "user", "content": text}])
+ returned_message = call_llm(text)
Contributor

⭐ nice
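
The diff above suggests call_llm now builds the message payload itself. A minimal sketch of what the new helper might look like under that assumption (the body here is illustrative, not the PR's exact code):

def call_llm(text: str):
    # Hypothetical body: wrap the raw prompt so callers pass plain text,
    # then stream a response from the OpenAI Responses API
    messages = [{"role": "user", "content": text}]
    return openai_client.responses.create(
        model="gpt-4o-mini", input=messages, stream=True
    )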

@zimeg
Copy link
Member Author

zimeg commented Jan 19, 2026

📸 Demo with current changes!

Timeline with tool calls and LLM call

dice.mov

Plan with mocked data

plan.mov

Member

@mwbrooks mwbrooks left a comment

✅ Super fun prompts that showcase the timeline and plan display styles alongside the new task and plan blocks. I think this is a good start to inspiring developers to integrate their own tool calls.

🧪 Local testing worked like a charm. I didn't find any issues.

📝 I left a few suggestions. Nothing is blocking, but I'd ask you to consider switching the inverted logic (maybe I'm just a little slow in the ol' noodle and need it outlined simpler). Feel free to punt any-or-all to the future.

🚀 Please drop that draft label so we can smash the merge! :shipit:

Comment on lines 43 to 44
# This first example shows a generated text response for the provided prompt
if message["text"] != "Wonder a few deep thoughts.":
Member

suggestion: It took me a few re-reads before I noticed the inverted logic (!=).

At first, I kept wondering why "Wonder a few deep thoughts." would be passed directly to the LLM, while the else statement handled the example.

Non-blocker, but could we flip this logic to be == "Wonder a few deep thoughts.": and have the else handle the general LLM response?
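
For illustration, the flipped branch order might read like this (a sketch; the elided body stands in for the existing mocked example):

# Hypothetical restructure following the suggestion above
if message["text"] == "Wonder a few deep thoughts.":
    ...  # stream the mocked thinking-step chunks
else:
    returned_message = call_llm(message["text"])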

Comment on lines +85 to +106
streamer.append(
    chunks=[
        MarkdownTextChunk(
            text="Hello.\nI have received the task. ",
        ),
        MarkdownTextChunk(
            text="This task appears manageable.\nThat is good.",
        ),
        TaskUpdateChunk(
            id="001",
            title="Understanding the task...",
            status="in_progress",
            details="- Identifying the goal\n- Identifying constraints",
        ),
        TaskUpdateChunk(
            id="002",
            title="Performing acrobatics...",
            status="pending",
        ),
    ],
)
time.sleep(4)
Member

praise: Love this example. It's dead simple and easy to grok. It's ripe to be altered, experimented with, and hacked on by folks playing with the sample. 🙌🏻
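
For instance, a quick hack on the sample could append a third step using the same chunk class (a sketch; the id and title here are made up):

streamer.append(
    chunks=[
        TaskUpdateChunk(
            id="003",
            title="Taking a bow...",
            status="pending",
        ),
    ],
)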

Member

suggestion: I think it's reasonable to rename ai/ to agent/ if we want, but it can also happen in a later PR.

Member Author

@mwbrooks Awesome callout! Let's make it in this PR as part of 6ab8795 with the addition of tools 🤖

@srtaalej srtaalej self-requested a review January 20, 2026 18:59
Contributor

I really like this example 🤩

Member Author

@srtaalej It makes for fun games! 🎲 ✨

Contributor

@srtaalej srtaalej left a comment

Nice changes, and it's working for me ⭐ ⭐ ⭐

@zimeg zimeg marked this pull request as ready for review January 20, 2026 19:52
messages.extend(messages_in_thread)
response = openai_client.responses.create(
    model="gpt-4o-mini", input=messages, stream=True
)

streamer: ChatStream,
Contributor

What do we think about moving the OpenAI client initialization to the module level instead of creating it inside call_llm()? 🤔 That way we're not re-initializing the client on every call.
It's probably fine either way for this dice-rolling use case, but it might be better to model best practices for users ❓
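
A minimal sketch of that module-level pattern, assuming OPENAI_API_KEY is already set in the environment when the module is imported:

from openai import OpenAI

# Constructed once at import time and shared by every call_llm() invocation
openai_client = OpenAI()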

Member Author

@srtaalej That might be nice to follow up with! At the moment, though, I hit the following error, perhaps due to import order and the "dotenv" package of the main module:

openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
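
That error typically means the client was constructed before the .env file was loaded. A hypothetical ordering fix (a sketch, not the repo's actual layout):

from dotenv import load_dotenv

# Load .env first so OPENAI_API_KEY exists when OpenAI() reads the environment
load_dotenv()

from openai import OpenAI  # imported after load_dotenv() on purpose

openai_client = OpenAI()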

Member Author

🗣️ ramble: We might also consider keeping it within a function to avoid global variables, though I admit I don't know module best practices!
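
One hypothetical middle ground: keep the client behind a function but cache it, avoiding both a module-level global and per-call re-initialization (get_openai_client is a made-up helper name):

import functools

from openai import OpenAI


@functools.lru_cache(maxsize=1)
def get_openai_client() -> OpenAI:
    # Built lazily on the first call, after dotenv has run, then reused
    return OpenAI()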

Contributor

Ah, I was getting the same error when I was working on the app. I'll try to figure out how we fixed it in case we run into it again, but good work! ⭐ ⭐ ⭐

Member Author

@zimeg zimeg left a comment

@jonigl @srtaalej @mwbrooks Thanks all for taking a look at these changes and the kind encouragement 💌

I made a few changes around code structure that I hope make things easier to understand, but we might want to follow up with instructions around development with the CLI too.

Let's merge this after a bit more testing 🧪 ✨

@zimeg
Member Author

zimeg commented Jan 21, 2026

@mwbrooks Thanks for that last-minute find too! I'll merge this now for upcoming testing and continued iterations 🚢 💨 🫡

@zimeg zimeg merged commit b387ecf into feat-ai-apps-thinking-steps Jan 21, 2026
2 checks passed
@zimeg zimeg deleted the zimeg-feat-chunks branch January 21, 2026 00:56
