From 3c384326955dc23f3ecef45e669a830283aa9410 Mon Sep 17 00:00:00 2001
From: Zeke Sikelianos
Date: Mon, 6 Oct 2025 11:35:19 -0700
Subject: [PATCH] docs: remove replicate.stream from README

---
 README.md | 16 ----------------
 1 file changed, 16 deletions(-)

diff --git a/README.md b/README.md
index d4812d4..59d01b5 100644
--- a/README.md
+++ b/README.md
@@ -110,22 +110,6 @@ output = replicate.run("...", input={...}, wait=False)
 
 When `wait=False`, the method returns immediately after creating the prediction, and you'll need to poll for the result manually.
 
-## Run a model and stream its output
-
-For models that support streaming (particularly language models), you can use `replicate.stream()`:
-
-```python
-import replicate
-
-for event in replicate.stream(
-    "meta/meta-llama-3-70b-instruct",
-    input={
-        "prompt": "Please write a haiku about llamas.",
-    },
-):
-    print(str(event), end="")
-```
-
 ## Async usage
 
 Simply import `AsyncReplicate` instead of `Replicate` and use `await` with each API call:
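Note: the retained README text above says that with `wait=False` you need to poll for the result manually, but the hunk no longer shows what that polling looks like. Below is a minimal sketch of one such loop, not part of the patch; it assumes `replicate.run(..., wait=False)` returns a prediction-like object with `id`, `status`, and `output` attributes and that `replicate.predictions.get()` can re-fetch it, neither of which is spelled out in this diff.

```python
import time

import replicate  # module-level client, as in the streaming example this patch removes

# Assumption: with wait=False, run() hands back a prediction-like object
# rather than the final output, so we re-fetch it until it finishes.
prediction = replicate.run(
    "meta/meta-llama-3-70b-instruct",  # placeholder model reused from the removed example
    input={"prompt": "Please write a haiku about llamas."},
    wait=False,
)

# Poll until the prediction reaches a terminal status.
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(1)
    prediction = replicate.predictions.get(prediction.id)

print(prediction.output)
```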