
Prompts Queue #1773 (Draft)

docppp wants to merge 6 commits into main
Conversation

@docppp (Contributor) commented Jan 6, 2024

Basic prompts queue that remembers all selected settings. If the queue size is greater than 0, the Generate button will run multiple times with those options, simulating the "set settings, prompt, generate" cycle done by hand.
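
A minimal sketch of the idea, with hypothetical names (not the PR's actual code): each queue entry snapshots the current prompt and settings, and the Generate handler drains the entries one at a time.

```python
# Hypothetical sketch of the queue mechanism described above; names like
# prompt_queue and run_one_generation are illustrative, not the PR's code.
prompt_queue = []  # (prompt, settings) snapshots

def add_to_queue(prompt, settings):
    # Copy the settings so later UI changes don't leak into queued entries.
    prompt_queue.append((prompt, dict(settings)))

def generate_all(run_one_generation):
    # run_one_generation stands in for the existing single-run pipeline.
    while prompt_queue:
        prompt, settings = prompt_queue.pop(0)
        run_one_generation(prompt, settings)
```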

@docppp mentioned this pull request Jan 6, 2024
@mashb1t linked an issue Jan 6, 2024 that may be closed by this pull request
@mashb1t (Collaborator) commented Jan 6, 2024

Thank you very much (again) for your participation and collaboration in Fooocus, much appreciated 👍

ongoing discussion: see #1664 (comment) and below
relates to #1751

@mashb1t (Collaborator) left a comment

See #1664 (comment) for how to make the code compatible with simultaneous use by multiple users. I can provide code optimisations tomorrow.

@LordMilutin commented Jan 6, 2024

I have tried setting it up and running it. It works fine on the frontend and I am able to add prompts to the queue; however, the generation does not work:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/stable-diffusion/modules/advanced_parameters.py", line 21, in set_all_advanced_parameters
    disable_preview, adm_scaler_positive, adm_scaler_negative, adm_scaler_end, adaptive_cfg, sampler_name, \
ValueError: too many values to unpack (expected 32)
Traceback (most recent call last):
  File "/stable-diffusion/modules/async_worker.py", line 806, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/stable-diffusion/modules/async_worker.py", line 150, in handler
    cn_tasks[cn_type].append([cn_img, cn_stop, cn_weight])
KeyError: 0.6
Total time: 150.35 seconds
Traceback (most recent call last):
  File "/stable-diffusion/modules/async_worker.py", line 806, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/stable-diffusion/modules/async_worker.py", line 150, in handler
    cn_tasks[cn_type].append([cn_img, cn_stop, cn_weight])
KeyError: 0.6
Total time: 0.03 seconds

@mashb1t (Collaborator) commented Jan 7, 2024

> I have tried setting it up and running it. It works fine on the frontend and I am able to add prompts to the queue; however, the generation does not work

@LordMilutin works for me. Please check that when resolving merge conflicts the appropriate parameter count is returned in modules/advanced_parameters.py (adjust counter accordingly).
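For illustration, here is a minimal, self-contained reproduction of both failure modes from the traceback above (simplified stand-in names, not the real Fooocus signatures); both stem from positional argument lists drifting out of sync between the UI and the worker:

```python
def set_params(args):
    # Fixed-arity unpack, like set_all_advanced_parameters: raises
    # "ValueError: too many values to unpack" if the UI sends extra values.
    a, b, c = args
    return a, b, c

def handler(cn_type, cn_weight):
    cn_tasks = {'ImagePrompt': []}
    # If the positional order shifts by one, cn_type receives a weight
    # such as 0.6 instead of the type string -> KeyError: 0.6
    cn_tasks[cn_type].append(cn_weight)

set_params([1, 2, 3])            # ok
handler('ImagePrompt', 0.6)      # ok
try:
    set_params([1, 2, 3, 4])     # ValueError: too many values to unpack (expected 3)
except ValueError as e:
    print(e)
try:
    handler(0.6, 'ImagePrompt')  # KeyError: 0.6 (arguments shifted by one)
except KeyError as e:
    print('KeyError:', e)
```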

@docppp Here are even more reasons why this queue needs to be well thought out:

  • Advanced parameters are currently shared between generations, so when queueing a render and then changing an advanced parameter in the Developer Debug Mode tab, the change overrides the previously queued entries. A solution would be to store all advanced params in your queue, then call advanced_parameters.set_all_advanced_parameters with them on queue execution in your while loop (see the sketch after this list). This does not work for multi-user scenarios though, as advanced_parameters uses globals, so there is no separation whatsoever.
  • Your current implementation breaks the image output when a queue contains only items with image amount = 1, as there is no final gallery output due to do_not_show_finished_images=len(tasks) == 1 in async_worker.py.
  • Gallery speed is heavily impacted by large queue / image amount sizes. Solved by #1013 (add advanced parameter for disable_intermediate_results (progress_gallery)).
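
A sketch of the per-entry fix from the first point, assuming the set_all_advanced_parameters entry point seen in the traceback above; the queue structure and surrounding names are hypothetical:

```python
from modules import advanced_parameters  # Fooocus module from the traceback above

queue = []  # entries: {"prompt": str, "adv_params": tuple}

def enqueue(prompt, adv_params):
    # Snapshot the advanced parameters together with the prompt.
    queue.append({"prompt": prompt, "adv_params": tuple(adv_params)})

def drain(run_one_generation):
    while queue:
        entry = queue.pop(0)
        # Restore this entry's snapshot before generating, so a later change
        # in the Developer Debug Mode tab cannot override queued entries.
        # Note: the module still uses globals, so this is not multi-user safe.
        advanced_parameters.set_all_advanced_parameters(*entry["adv_params"])
        run_one_generation(entry["prompt"])
```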

I'd propose to mark this PR as draft, as it is not ready to be merged and might need to undergo an in-depth feasibility analysis.

@mashb1t marked this pull request as draft January 7, 2024 17:48
@LordMilutin

I have tested it, and it works perfectly for my issue. Great job!

@docppp (Contributor, Author) commented Jan 8, 2024

Good points. I will try to analyze the code more and see if I can come up with a feasible solution, but I will focus solely on the single-user scenario, as this is my field of interest (also, Gradio not being good at parallel usage may even be a blocker for a production-ready multi-user queue).

@mashb1t (Collaborator) commented Jan 8, 2024

Thanks!
Just to clarify: Gradio generally offers everything one would need for parallel usage and multiple users, but Fooocus is built not for maximum speed and parallel processing, but for accessibility and user experience. When active development continues in mid/late January (see https://github.com/lllyasviel/Fooocus/blob/main/update_log.md, first line) there might be new information and optimisations for integrating this feature properly.

@LordMilutin

@docppp one thing to consider: last night I tried queueing 50 entries with 32 images each, and after the 2nd or 3rd entry it stops generating. After refreshing the page and queueing again, it works for a few entries and then stops.
Is it possible to send all queued entries to the backend at once and process them there, even if the frontend freezes?

@docppp (Contributor, Author) commented Jan 13, 2024

@LordMilutin please check it out; with the new fix, I am able to generate over 250 images and it keeps going.

@blablablazhik

@docppp hello! Sorry for bothering you again, but can you add a txt prompt reader to the queue? Then you could add a single txt file with one prompt per line, and it would generate each prompt in turn with all the selected settings.

@LordMilutin

> @LordMilutin please check it out; with the new fix, I am able to generate over 250 images and it keeps going.

Thanks, I was finally able to generate the whole queue last night with the update you made. Thank you so much for this!

Two things to consider:

  1. I believe there should be a clear-queue button, as I accidentally added the same prompt to the queue a few times and had to either restart the app or let it generate the same prompt repeatedly (see the sketch after this list).
  2. This version shows only the images generated for the latest prompt in the UI. This isn't a huge dealbreaker, as I've found the previous prompts' images in the tmp/gradio folder, so I can sort them by timestamp and roughly see which prompts they belong to. I prefer this over the previous version, which showed all generated images but never completed the queue.
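
A hypothetical Gradio sketch of the clear-queue button from point 1 (the queue variable and labels are illustrative, not the PR's code):

```python
import gradio as gr

prompt_queue = []  # assumed global queue of pending generations

def clear_queue():
    # Empty the backend queue and report the new state back to the UI.
    prompt_queue.clear()
    return "Queue cleared (0 items pending)"

with gr.Blocks() as demo:
    status = gr.Textbox(label="Queue status", interactive=False)
    clear_btn = gr.Button("Clear queue")
    clear_btn.click(fn=clear_queue, inputs=[], outputs=[status])

demo.launch()
```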

FINISHED_IMG.append(product[-1])  # keep the last image of each finished task
FINISHED_IMG = FINISHED_IMG[-32:]  # cap the gallery history at 32 entries
@mashb1t (Collaborator) commented:

Just a note: the max image amount per generation is configurable in the config and only defaults to 32, so this might be changed to use the config value rather than the hardcoded number.
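
Assuming Fooocus exposes the configured maximum (something like default_max_image_number in modules/config.py; the exact attribute name is an assumption), the hunk above could become:

```python
from modules import config  # Fooocus config module

# Read the configured cap; fall back to 32 if the attribute isn't present.
max_images = getattr(config, 'default_max_image_number', 32)

FINISHED_IMG.append(product[-1])          # keep the last image of each finished task
FINISHED_IMG = FINISHED_IMG[-max_images:]  # cap history at the configured amount
```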

@docppp (Contributor, Author) commented:

I would rather leave it as is, as more items in the gallery may break things. It is easy to check what was generated in the history log file, where the prompts are also shown.
@LordMilutin yup, a clear button is a good idea; about point 2, see above.

@blablablazhik prompts from a file are not in my plans, but you can try hardcoding it yourself (I believe I showed somewhere how to do it; see the sketch below).
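
A minimal hypothetical sketch of that hardcoding (enqueue stands in for the PR's add-to-queue function; it is an assumed name):

```python
def enqueue_prompts_from_file(path, settings, enqueue):
    # Read one prompt per line, skipping blank lines, and queue each one
    # with the currently selected settings.
    with open(path, encoding='utf-8') as f:
        for line in f:
            prompt = line.strip()
            if prompt:
                enqueue(prompt, settings)
```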

> I would rather leave it as is, as more items in the gallery may break things. […]

Upon further usage, I agree that a clear-queue button is a must, as well as a stop-queue button: right now, if I stop a generation, it moves on to the next queued entry, and so on, so I have to stop each entry individually, which is a lot when I have 50 queued.

@VictorZakharov

Even single-user support would be great to start with. It is useful, for example, for experimenting with styles via batch generation without waiting: queue 100-1000 generations, then come back in an hour or so.

@mashb1t added the label "Size L" (large change, may depend on other changes, thoroughly test) Feb 9, 2024
Labels: Size L (large change, may depend on other changes, thoroughly test)
Projects: none yet
Development: successfully merging this pull request may close the issue "Queue prompts"
5 participants