Fix prompt isolation tests for QwenImage edit and edit plus #12403

Status: Open
sayakpaul wants to merge 12 commits into main from prompt-isolation-tests-qwen
Commits (12):
f82c152  fix prompt isolation test. (sayakpaul)
a9d50c8  up (sayakpaul)
1185f82  up (sayakpaul)
f84b0ab  up (sayakpaul)
b26f7fc  revert lora utils tests. (sayakpaul)
16544bb  revert pipeline changes. (sayakpaul)
290b283  up (sayakpaul)
d29bfb7  Merge branch 'main' into prompt-isolation-tests-qwen (sayakpaul)
ba12026  Merge branch 'main' into prompt-isolation-tests-qwen (sayakpaul)
c541f9a  Merge branch 'main' into prompt-isolation-tests-qwen (sayakpaul)
93a470b  Apply suggestions from code review (sayakpaul)
d287428  Update tests/pipelines/qwenimage/test_qwenimage_edit_plus.py (sayakpaul)
sayakpaul File filter
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
```diff
@@ -134,16 +134,18 @@ def get_dummy_inputs(self, device, seed=0):
         else:
             generator = torch.Generator(device=device).manual_seed(seed)

-        image = Image.new("RGB", (32, 32))
+        # Even if we specify smaller dimensions for the images, it won't work because of how
+        # the internal implementation enforces a minimal resolution of 384*384.
+        image = Image.new("RGB", (384, 384))

         inputs = {
             "prompt": "dance monkey",
             "image": [image, image],
             "negative_prompt": "bad quality",
             "generator": generator,
             "num_inference_steps": 2,
             "true_cfg_scale": 1.0,
-            "height": 32,
-            "width": 32,
+            "height": 384,
+            "width": 384,
             "max_sequence_length": 16,
             "output_type": "pt",
         }
```

Collaborator comment on the `image = Image.new(...)` line: Similar comment to #12403 (comment), but may not be as big of an issue since the dummy …
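The diff comment above says the pipeline enforces a minimal resolution of 384×384, which is why the 32×32 dummy image was replaced. A minimal sketch of what such a rule amounts to, in plain Python (`enforce_min_resolution` is an illustrative name, not an actual diffusers function, and the exact rounding behaviour is an assumption):

```python
# Hypothetical sketch of a minimum-resolution rule like the one the diff
# comment describes; not the actual diffusers implementation.

def enforce_min_resolution(size, min_side=384):
    """Scale (width, height) up uniformly so both sides are >= min_side."""
    w, h = size
    scale = max(min_side / w, min_side / h, 1.0)
    return (round(w * scale), round(h * scale))

# A 32x32 dummy image would be scaled up anyway, which is why the test
# inputs were changed to 384x384 directly.
print(enforce_min_resolution((32, 32)))    # (384, 384)
print(enforce_min_resolution((512, 400)))  # (512, 400) -- already large enough
```

Under this reading, requesting smaller dimensions in `get_dummy_inputs` would silently be overridden, so the test inputs are aligned with the enforced minimum instead.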
```diff
@@ -236,9 +238,14 @@ def test_vae_tiling(self, expected_diff_max: float = 0.2):
             "VAE tiling should not affect the inference results",
         )

-    @pytest.mark.xfail(condition=True, reason="Preconfigured embeddings need to be revisited.", strict=True)
-    def test_encode_prompt_works_in_isolation(self, extra_required_param_value_dict=None, atol=1e-4, rtol=1e-4):
-        super().test_encode_prompt_works_in_isolation(extra_required_param_value_dict, atol, rtol)
+    def test_encode_prompt_works_in_isolation(
+        self, extra_required_param_value_dict=None, keep_params=None, atol=1e-4, rtol=1e-4
+    ):
+        # We include `image` because it's needed in both `encode_prompt` and some other subsequent calculations.
+        # `max_sequence_length` to maintain parity between its value during all invocations of `encode_prompt`
+        # in the following test.
+        keep_params = ["image", "max_sequence_length"]
+        super().test_encode_prompt_works_in_isolation(extra_required_param_value_dict, keep_params, atol, rtol)

     @pytest.mark.xfail(condition=True, reason="Batch of multiple images needs to be revisited", strict=True)
     def test_num_images_per_prompt():
```
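To make the intent of the new `keep_params` argument concrete, here is a minimal plain-Python sketch of what an "encode_prompt works in isolation" check compares. `toy_encode_prompt` is a made-up deterministic stand-in for the pipeline's real text encoder; the actual test lives in diffusers' pipeline tester mixin:

```python
# Toy stand-in for a pipeline's encode_prompt: deterministic, and depends
# only on `prompt` and `max_sequence_length`.
def toy_encode_prompt(prompt, max_sequence_length):
    codes = [ord(c) for c in prompt][:max_sequence_length]
    return codes + [0] * (max_sequence_length - len(codes))

pipeline_inputs = {
    "prompt": "dance monkey",
    "max_sequence_length": 16,
    "height": 384,  # irrelevant to encoding; dropped when isolating
    "width": 384,
}
# keep_params: the non-prompt inputs the encoder also depends on, which must
# stay identical between the isolated and full-pipeline invocations.
keep_params = ["max_sequence_length"]

isolated_kwargs = {
    k: v for k, v in pipeline_inputs.items() if k == "prompt" or k in keep_params
}
full = toy_encode_prompt(pipeline_inputs["prompt"], pipeline_inputs["max_sequence_length"])
assert toy_encode_prompt(**isolated_kwargs) == full
print("isolated encode matches full-pipeline encode")
```

If an input the encoder depends on (here `max_sequence_length`, and in the real pipeline also `image`) were dropped when isolating, the two embeddings could diverge, which is exactly the failure this PR addresses.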
Review comment: As we are making the dummy `image` input much bigger, are the `QwenImageEditPipelineFastTests` too heavy for the CI? Running them takes ~4 minutes on an A100 on DGX.
Review comment: Observe how `txt` changes when `encode_prompt()` is called in isolation as opposed to via `pipe(...)`. First, `encode_prompt` in isolation:

txt=["<|im_start|>system\nDescribe the key features of the input image (color, shape, size, texture, objects, background), then explain how the user's text instruction should alter or modify the image. Generate a new image that meets the user's requirements while maintaining consistency with the original input where appropriate.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>dance monkey<|im_end|>\n<|im_start|>assistant\n"], image.size=(32, 32)

Second, with `pipe(...)`:

txt=["<|im_start|>system\nDescribe the key features of the input image (color, shape, size, texture, objects, background), then explain how the user's text instruction should alter or modify the image. Generate a new image that meets the user's requirements while maintaining consistency with the original input where appropriate.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>dance monkey<|im_end|>\n<|im_start|>assistant\n"], image.size=(1024, 1024)

The differing `image.size` changes the `prompt_embeds`, and this causes the assertion to fail. Fixing the image size to (1024, 1024) doesn't change the `prompt_image`. But I am also concerned about the runtime increase from raising the image resolution like this. Maybe we just keep the PR open for now unless the community requests this for this model?