[Partner Nodes] new OpenAI Image node with DynamicCombo and Autogrow #13838
📝 Walkthrough
This pull request introduces
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
comfy_api_nodes/nodes_openai.py (1)
878-917: ⚡ Quick win: Move the new image/mask preprocessing off the request coroutine.
This path can now synchronously flatten, downscale, and PNG-encode up to 16 reference images plus a mask inside `async def execute`, so one large edit request can block other API-node work on the same event loop. Please push that preparation into `asyncio.to_thread(...)` or an equivalent background helper. Based on learnings: in `comfy_api_nodes` Python async node implementations, if the PR adds new synchronous CPU/IO work inside `async def execute`, prefer offloading with `asyncio.to_thread` (or an equivalent background executor) to avoid blocking the event loop.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@comfy_api_nodes/nodes_openai.py` around lines 878-917: the image/mask flattening, downscaling, and PNG encoding in the execute coroutine (code paths using image_tensors, flat, downscale_image_tensor, mask, and building files) must be moved out of the event loop into a synchronous helper run via asyncio.to_thread; implement a synchronous function (e.g. prepare_image_files or preprocess_images_and_mask) that accepts image_tensors and mask, performs the flattening and downscale_image_tensor calls, converts tensors to uint8 numpy arrays, creates PNG BytesIO objects, and returns the files list; then in async def execute call files = await asyncio.to_thread(prepare_image_files, image_tensors, mask); ensure the helper preserves the same behavior (single "image" vs "image[]" naming, mask validation and exception raising) and returns seeked BytesIO objects ready for upload.
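A minimal sketch of the suggested refactor, using NumPy arrays to stand in for the node's torch tensors. `prepare_image_files` and the raw-bytes "encoding" are illustrative assumptions (the real helper would also call `downscale_image_tensor` and PNG-encode the frames); only the structure — synchronous helper plus `asyncio.to_thread` in the coroutine — reflects the review's suggestion:

```python
import asyncio
import io

import numpy as np


def prepare_image_files(image_arrays, mask=None):
    """Synchronous CPU-bound prep: convert frames to uint8 and buffer them.

    Runs in a worker thread so the event loop stays responsive.
    Raw .tobytes() stands in for real PNG encoding in this sketch.
    """
    files = []
    # Preserve the single "image" vs "image[]" field naming.
    field = "image" if len(image_arrays) == 1 else "image[]"
    for arr in image_arrays:
        u8 = (np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8)
        buf = io.BytesIO(u8.tobytes())  # stand-in for PNG encoding
        buf.seek(0)  # return seeked buffers, ready for upload
        files.append((field, buf))
    if mask is not None:
        if mask.shape[:2] != image_arrays[0].shape[:2]:
            raise ValueError("mask must match the first image's size")
        m8 = (np.clip(mask, 0.0, 1.0) * 255).astype(np.uint8)
        mbuf = io.BytesIO(m8.tobytes())
        mbuf.seek(0)
        files.append(("mask", mbuf))
    return files


async def execute(image_arrays, mask=None):
    # Offload the CPU-bound preparation; the coroutine only awaits.
    return await asyncio.to_thread(prepare_image_files, image_arrays, mask)
```

With this split, a large edit request no longer stalls other API nodes sharing the loop: the coroutine yields while a thread does the pixel work.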
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@comfy_api_nodes/nodes_openai.py`:
- Around line 768-777: The new "seed" control is declared but never forwarded to
the OpenAI image API, so OpenAIGPTImageNodeV2 ignores it; update the code that
builds the generation and edit request bodies (the places that assemble the
image generation payload and the image edit payload in OpenAIGPTImageNodeV2) to
include the seed value (e.g., add seed: seedValue or the appropriate field name
used by the target API) when present, or if seeding is unsupported by the
backend remove the "seed" IO.Int.Input control; ensure both the generation and
edit request constructors reference the same seed input so the value is not
dropped end-to-end.
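The fix this comment asks for could look like the following sketch; the builder names and the `seed` field are illustrative assumptions (the actual OpenAI Images API may not accept a seed, which is why the comment also offers removing the input instead):

```python
def build_generation_payload(prompt, seed=None):
    # Hypothetical payload builder; field names are illustrative.
    payload = {"prompt": prompt}
    if seed is not None:
        payload["seed"] = seed  # forward the node's seed input
    return payload


def build_edit_payload(prompt, seed=None):
    # Edit requests must reference the same seed input as generation,
    # so the value is not dropped on one of the two paths.
    payload = {"prompt": prompt}
    if seed is not None:
        payload["seed"] = seed
    return payload
```

The point is symmetry: whichever field name the backend expects, both request constructors read the same node input so the declared control is never a no-op.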
- Around line 654-662: The input allows up to 16 images but the code only
validates masks via n_images and doesn't reject larger batched tensors before
calling the edits endpoint; add a preflight validation where the "images" input
is processed (the same place that computes n_images and the codepath that
uploads frames to the edits endpoint) to count the total individual
images/frames in any tensor/batched input and throw/return a clear client-side
error if count > 16, ensuring this check runs before any upload/edits API call;
apply the same check to the alternate upload branch referenced (the code
handling batched/frame uploads used elsewhere around the edits call).
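A preflight check of the kind described could be sketched as below; the helper names, the `(B, H, W, C)` batch convention, and the limit constant are assumptions for illustration, not the node's actual code:

```python
import numpy as np

MAX_EDIT_IMAGES = 16  # illustrative limit taken from the review comment


def count_frames(inputs):
    """Count individual frames across possibly-batched inputs.

    Each input is assumed to be an array shaped (B, H, W, C) or
    (H, W, C); a leading batch dimension contributes B frames.
    """
    total = 0
    for arr in inputs:
        total += arr.shape[0] if arr.ndim == 4 else 1
    return total


def validate_image_count(inputs):
    """Raise a clear client-side error before any upload happens."""
    n = count_frames(inputs)
    if n > MAX_EDIT_IMAGES:
        raise ValueError(
            f"Got {n} images, but the edits endpoint accepts at most "
            f"{MAX_EDIT_IMAGES}; reduce the batch before uploading."
        )
    return n
```

Running this in every codepath that feeds the edits endpoint turns a confusing server-side rejection into an immediate, actionable error.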
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 243d920f-4b44-4de3-8697-fad30a2f48bf
📒 Files selected for processing (1)
comfy_api_nodes/nodes_openai.py
API Node PR Checklist
- Scope
- Pricing & Billing (if a pricing update is needed)
- QA
- Comms