
Add headless generation of images, and batch processing of prompts in a json file, directly from CLI. #471

Closed · wants to merge 19 commits

Conversation

@TimothyAlexisVass commented Sep 21, 2023

It's a start. Can be developed further.
Tested in Colab with:
!python entry_with_update.py --headless --prompt "Beautiful woman"
and
!python launch.py --headless --prompt Wow

Will solve #246, #385 and #484.

After adding the --headless option, there could also be a --batch path/to/bulk_commands.json.
That way, the whole environment wouldn't need to be reloaded for every generation.
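The two flags proposed here could be declared with `argparse` along these lines — a minimal sketch, assuming a standalone parser (the flag names are from this PR; the wiring is illustrative, not Fooocus's actual argument handling):

```python
import argparse

# Illustrative parser for the two flags proposed in this PR.
parser = argparse.ArgumentParser()
parser.add_argument("--headless", action="store_true",
                    help="generate images without starting the web UI")
parser.add_argument("--prompt", type=str, default=None,
                    help="prompt for a single headless generation")
parser.add_argument("--batch", type=str, default=None, metavar="PATH",
                    help="path to a JSON file with a list of generation tasks")

# Parse a sample command line like the one described above.
args = parser.parse_args(["--headless", "--batch", "bulk_commands.json"])
```

With `--headless` absent, `args.headless` is simply `False`, so the normal UI path can run unchanged.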

@TimothyAlexisVass TimothyAlexisVass marked this pull request as ready for review September 21, 2023 19:41
@TimothyAlexisVass TimothyAlexisVass changed the title Add headless generation of images directly from CLI. Add headless generation of images, and batch processing of prompts in a json file, directly from CLI. Sep 22, 2023
@TimothyAlexisVass (Author)

Okay, --headless has now been tested both with and without --batch /path/to/batch_prompts_file.json, and it works fine for both entry_with_update.py and launch.py.

@baifagg commented Sep 24, 2023

Hello, I also need to generate images in batches. My job is to convert large amounts of user-provided text into images with Fooocus and store them locally. But my Fooocus is deployed in the cloud, so I need a way to get the base64 encoding of the images Fooocus generates.
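The base64 step itself is just the standard library; here is a minimal sketch, assuming you know the path where a generated image was written (the path shown is hypothetical, not Fooocus's fixed output location):

```python
import base64
from pathlib import Path

def image_to_base64(path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

# Hypothetical output location; adjust to wherever your deployment saves images.
# encoded = image_to_base64("outputs/2023-09-24/image.png")
```

The resulting string can then be returned from a cloud endpoint or stored directly.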

@baifagg commented Sep 24, 2023

If your code files directly replace the corresponding code in Fooocus, can it be used successfully as in your example?

@baifagg commented Sep 24, 2023

[
  {
    "prompt": "Note that seed -1 means 'random'. Only 'prompt' is required; all others are optional and have default values. This example shows the defaults.",
    "negative_prompt": "",
    "styles": ["Fooocus V2", "Default (Slightly Cinematic)"],
    "performance": "Speed",
    "aspect_ratio": "1024x1024",
    "image_number": 2,
    "seed": -1,
    "sharpness": 2.0,
    "base_model": "sd_xl_base_1.0_0.9vae.safetensors",
    "refiner_model": "sd_xl_refiner_1.0_0.9vae.safetensors",
    "l1": "sd_xl_offset_example-lora_1.0.safetensors",
    "w1": 0.5,
    "l2": "None",
    "w2": 0.5,
    "l3": "None",
    "w3": 0.5,
    "l4": "None",
    "w4": 0.5,
    "l5": "None",
    "w5": 0.5,
    "current_tab": "",
    "use_input_image": false,
    "uov_method": "disabled",
    "uov_input_image": null,
    "outpaint": [],
    "inpaint_input_image": null
  },
  {
    "prompt": "This one will generate four images with aspect_ratio 1280x768",
    "image_number": 4,
    "aspect_ratio": "1280x768"
  },
  {
    "prompt": "Here is another one with some other parameters set",
    "negative_prompt": "You can include as many as you need",
    "seed": "12345",
    "performance": "Quality",
    "sharpness": 10.0
  }
]

How should the example call you provided be used? What I need is to change the styles on every call.

@TimothyAlexisVass (Author) commented Sep 25, 2023

First of all, note that this is a Pull Request which is still awaiting approval.

Here is an example of what you could do, with the same prompt for every batch entry:

my_batches.json

[
  {
    "prompt": "Your prompt",
    "styles": ["Fooocus V2", "The first style you want"]
  },
  {
    "prompt": "Your prompt",
    "styles": ["Fooocus V2", "The second style you want"]
  },
  {
    "prompt": "Your prompt",
    "styles": ["Fooocus V2", "The third style"]
  },
  ...add as many as you want, with the settings that you need...
]

Then you would run it using either entry_with_update.py or launch.py with the --headless and --batch parameters:
python entry_with_update.py --headless --batch /path/to/where/you/put/my_batches.json

But, you could also combine --headless with --prompt PROMPT and --batch /path/to/batch_file.json like this:

my_batches.json

[
  {
    "styles": ["Fooocus V2", "The first style you want"]
  },
  {
    "styles": ["Fooocus V2", "The second style you want"]
  },
  {
    "styles": ["Fooocus V2", "The third style"]
  }
]

And then set the same prompt for the whole batch like this:
python entry_with_update.py --headless --prompt "The prompt you want" --batch /path/to/my_batches.json
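The combination of the two modes amounts to a plain dictionary merge: CLI values act as defaults that each batch entry may override. A minimal sketch of that idea (the function name and the defaults dict are illustrative, not code from this PR):

```python
import json

def load_tasks(batch_path: str, cli_defaults: dict) -> list:
    """Return one settings dict per batch entry, with CLI values as defaults."""
    with open(batch_path) as f:
        entries = json.load(f)
    # Keys present in an entry win; missing keys fall back to the CLI values.
    return [{**cli_defaults, **entry} for entry in entries]
```

With `cli_defaults = {"prompt": "The prompt you want"}`, every entry that omits "prompt" inherits it, while an entry that sets its own "prompt" keeps it.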

@xtremebeing
I get an error after merging your PR. Any idea how to fix this?

python entry_with_update.py --headless --prompt "Beautiful woman"
...
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
loading new
'NoneType' object has no attribute 'local_url'
Traceback (most recent call last):
  File "/home/x/Fooocus/modules/async_worker.py", line 439, in worker
    handler(task)
  File "/home/x/Fooocus/fooocus_env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/x/Fooocus/fooocus_env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/x/Fooocus/modules/async_worker.py", line 50, in handler
    prompt, negative_prompt, style_selections, performance_selection,
ValueError: not enough values to unpack (expected 39, got 26)
Total time: 62.78 seconds
Image generation failed.

@TimothyAlexisVass (Author)

Yeah, since more parameters have been added since this PR was opened, the headless caller is now missing some of them, as indicated by "expected 39, got 26".
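The failure mode is ordinary tuple unpacking: the handler unpacks a fixed number of names from the task's argument list, and the headless caller was written against an older, shorter list. A small illustration (these four names stand in for the ~39 real parameters, which are not reproduced here):

```python
# What an out-of-date caller might send: three values.
old_args = ("Beautiful woman", "", ["Fooocus V2"])

try:
    # A handler updated to expect four values fails the same way
    # async_worker.py does, just with smaller numbers.
    prompt, negative_prompt, styles, performance = old_args
    error = None
except ValueError as exc:
    error = str(exc)
```

So the fix is to regenerate the headless argument list against the current handler signature whenever upstream adds parameters.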

@TimothyAlexisVass TimothyAlexisVass closed this by deleting the head repository Oct 18, 2023
@YellowTigerr
Why do I still get an error when I modify launch.py and add headless.py as described? launch.py: error: unrecognized arguments: --headless --batch
