This repository has been archived by the owner on Oct 12, 2023. It is now read-only.

About supported resolution. #24

Closed
ltdrdata opened this issue Aug 13, 2023 · 20 comments

Comments

@ltdrdata

ltdrdata commented Aug 13, 2023

I'm contemplating how to support AIT in the Detailer node, and applying it in any other way seems quite challenging.

So, when the model passed to the Detailer is an AIT model, I intend to approach it by restricting the upscale factor to the resolution supported by AIT.

To achieve this, I need to determine the supported resolutions from the model passed to the Detailer. Is it possible for the model to provide such information?

@hlky
Collaborator

hlky commented Aug 13, 2023

Yes, the module can provide the maximum output shape. Input names are available, and the maximum input shape is exported from the module, but the code for reading the maximum input shape is not present in the interface; I can add this if required.

Get outputs with:
_output_name_to_index/_construct_output_name_to_index_map
Get maximum shape with name or index:
get_output_maximum_shape

module = ...  # an already-loaded AIT module
outputs = module._output_name_to_index  # or module._construct_output_name_to_index_map()
for name, idx in outputs.items():
    shape = module.get_output_maximum_shape(idx)
    print(name, shape)

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

> Yes, the module can provide the maximum output shape. Input names are available, and the maximum input shape is exported from the module, but the code for reading the maximum input shape is not present in the interface; I can add this if required.
>
> Get outputs with:
> _output_name_to_index/_construct_output_name_to_index_map
> Get maximum shape with name or index:
> get_output_maximum_shape
>
> module = ...  # an already-loaded AIT module
> outputs = module._output_name_to_index  # or module._construct_output_name_to_index_map()
> for name, idx in outputs.items():
>     shape = module.get_output_maximum_shape(idx)
>     print(name, shape)

Is it correct that the size is limited to multiples of 64, ranging from a minimum of 64 to a specific maximum size?

@hlky
Collaborator

hlky commented Aug 13, 2023

Current modules support 8 px increments.
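
For illustration, a minimal sketch of snapping a requested pixel size to the supported 8 px increment (the helper name is mine, not part of the plugin):

def snap_to_increment(size_px: int, increment: int = 8) -> int:
    # Round down so the result never exceeds the requested size.
    return (size_px // increment) * increment

print(snap_to_increment(1023))  # -> 1016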

@hlky
Collaborator

hlky commented Aug 13, 2023

The size reported by the module for the unet is in latent space, so multiply or divide by 8 if you're working with pixel sizes.
For vae decode, the output size reported is in pixels.
For vae encode, the output size reported is in latent units.
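
As a minimal sketch of the conversion (the factor of 8 between latent and pixel space, per the above; function names are hypothetical):

def latent_to_pixels(latent_size: int) -> int:
    return latent_size * 8

def pixels_to_latents(pixel_size: int) -> int:
    # Integer division truncates, mirroring how odd pixel sizes collapse in latent space.
    return pixel_size // 8

print(latent_to_pixels(96))     # -> 768 px
print(pixels_to_latents(1024))  # -> 128 latent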

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

> The size reported by the module for the unet is in latent space, so multiply or divide by 8 if you're working with pixel sizes. For vae decode, the output size reported is in pixels. For vae encode, the output size reported is in latent units.

Based on a quick test, it appears the pixel size of the image doesn't need to be a concern: since it gets truncated to a multiple of 8 when converted to latent, I don't need to worry about it.

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

> Yes, the module can provide the maximum output shape. Input names are available, and the maximum input shape is exported from the module, but the code for reading the maximum input shape is not present in the interface; I can add this if required.
>
> Get outputs with:
> _output_name_to_index/_construct_output_name_to_index_map
> Get maximum shape with name or index:
> get_output_maximum_shape
>
> module = ...  # an already-loaded AIT module
> outputs = module._output_name_to_index  # or module._construct_output_name_to_index_map()
> for name, idx in outputs.items():
>     shape = module.get_output_maximum_shape(idx)
>     print(name, shape)

Should the maximum resolution the module can handle be obtained from the output name? In any case, if AITemplateLoader passed the max resolution somewhere like model_options, that would be very helpful.
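
Purely as a hypothetical sketch of what I mean (the ait_max_resolution key does not exist; a plain dict stands in for the MODEL's options):

model_options = {}  # stands in for the MODEL's options dict

# Hypothetically set by AITemplateLoader when flagging the model:
model_options["ait_max_resolution"] = 4096  # hypothetical key

# Then the Detailer could read it back and cap its upscale target:
upscale_target = 8192
max_resolution = model_options.get("ait_max_resolution")
if max_resolution is not None:
    upscale_target = min(upscale_target, max_resolution)
print(upscale_target)  # -> 4096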

@hlky
Collaborator

hlky commented Aug 13, 2023

Yes, get_output_maximum_shape also works with names.
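
For example (the output name here is an assumption; use a name returned by _output_name_to_index):

module = ...
shape = module.get_output_maximum_shape("Y")  # "Y" is an assumed output name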

The AITemplateLoader does not actually load the module; it only signals to the sampler that it should use AIT. Loading of unet modules happens in sample, and the module is selected based on sizes etc. I can add this if required.

Could you share more details of your use case? Maybe you can just detect the module to use from within your node

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

In the Detailer, a specific part of the image is upscaled and then encoded using the VAE for KSampler.

If the upscale size exceeds a certain limit, an error occurs. I'm trying to impose an additional restriction here in the form of a maximum resolution.

When a model with resolution constraints is passed to the Detailer, having the maximum resolution information somewhere within the model would allow me to make use of it.
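
Roughly what I intend to do, as a sketch (the names are mine; max_width/max_height would come from the module's reported maximum, converted to pixels):

def clamp_upscale(crop_w: int, crop_h: int, factor: float,
                  max_width: int, max_height: int) -> float:
    # Shrink the upscale factor until the upscaled crop fits the supported resolution.
    limit = min(max_width / crop_w, max_height / crop_h)
    return min(factor, limit)

print(clamp_upscale(256, 256, 4.0, max_width=768, max_height=768))  # -> 3.0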

@hlky
Collaborator

hlky commented Aug 13, 2023

This plugin overrides comfy.sample.sample; AITemplateLoader only adds a flag that AITemplate is used, and the overridden version of sample detects this flag and then selects an appropriate module.
For vae, the module selection happens within the node. There is no module passed from AITemplateLoader, and no module is loaded at that point of execution.
The vae encode you mentioned is, I assume, here; this would need code from the AIT vae encode node, which selects the module based on the input shape.
Sample in the Detailer node is here. As far as I know, if this plugin is installed then any other node will have comfy.sample.sample overridden, so with AITemplateLoader connected to the MODEL, any third-party node's usage of KSampler would use AIT and the module selection would be automatic.
So while I understand you would use the maximum shape as a restriction on resolution, I'm not sure why any restriction needs to apply.

Could you share links to the relevant code sections, details on how you're integrating, etc.? And regarding "an error occurs": what is the error that occurs?

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

When I tried a simple T2I generation at 1024x1024 on an SD1.5 model, the generation failed.

This issue will break the Detailer's behavior.
I need to restrict the upscale size to avoid this situation.

@hlky
Collaborator

hlky commented Aug 13, 2023

Could you please share any relevant code sections where you are attempting to integrate, and any errors you are receiving?

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

> Could you please share any relevant code sections where you are attempting to integrate, and any errors you are receiving?

I thought this error was normal behavior.

Since 1024x1024 caused an error on SD1.5, and there were no issues even at 2048x2048 on SDXL, I was searching for a basis on which to establish the max resolution setting.

At 1024x1024:

!!! Exception during processing !!!
Traceback (most recent call last):
  File "/home/rho/git/ComfyUI/worklist_execution.py", line 42, in exception_helper
    task()
  File "/home/rho/git/ComfyUI/worklist_execution.py", line 254, in task
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/execution.py", line 97, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/execution.py", line 90, in map_node_over_list
    results.append(getattr(obj, func)(**params))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/nodes.py", line 1206, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/AITemplate.py", line 175, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/AITemplate.py", line 308, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/samplers.py", line 720, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/samplers.py", line 289, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/comfy/samplers.py", line 263, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/inference.py", line 43, in apply_model
    return unet_inference(
           ^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/inference.py", line 98, in unet_inference
    exe_module.run_with_tensors(inputs, ys, graph_mode=False)
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 566, in run_with_tensors
    outputs_ait = self.run(
                  ^^^^^^^^^
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 469, in run
    return self._run_impl(
           ^^^^^^^^^^^^^^^
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 408, in _run_impl
    self.DLL.AITemplateModelContainerRun(
  File "/home/rho/git/ComfyUI/custom_nodes/AIT/AITemplate/ait/module/model.py", line 212, in _wrapped_func
    raise RuntimeError(f"Error in function: {method.__name__}")
RuntimeError: Error in function: AITemplateModelContainerRun


@hlky
Collaborator

hlky commented Aug 13, 2023

There will be an additional error message above the traceback.

The maximum you should set there would be 4096, as this is the largest module supported currently. Other than that, you should not need to set restrictions based on the loaded module, because module selection is automatic, and AITemplateLoader does not pass the module itself, only a flag that AITemplate should be used.

The vae encode here can use code from the AIT vae encode node, and then module selection is automatic.
The sample here should be using the overridden comfy.sample.sample, and will use AIT if AITemplateLoader is connected to the node's MODEL input; the module is selected based on the input shape.
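
So at most, a fixed cap is enough, something like (a sketch, not plugin code):

MAX_AIT_RESOLUTION = 4096  # largest module currently supported

def cap_resolution(size_px: int) -> int:
    return min(size_px, MAX_AIT_RESOLUTION)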

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

> There will be an additional error message above the traceback.
>
> The maximum you should set there would be 4096, as this is the largest module supported currently. Other than that, you should not need to set restrictions based on the loaded module, because module selection is automatic, and AITemplateLoader does not pass the module itself, only a flag that AITemplate should be used.
>
> The vae encode here can use code from the AIT vae encode node, and then module selection is automatic. The sample here should be using the overridden comfy.sample.sample, and will use AIT if AITemplateLoader is connected to the node's MODEL input; the module is selected based on the input shape.

So, if setting a resolution above 768 results in a KSampler failure, that wouldn't be considered normal, right?

Found 4 modules for linux v1 sm80 1 776 unet
Using 1430bb4e84b5b53befc0bf8e12d25cdd65720f16505f20287f739625f5c89a51
Error: [SetValue] Dimension got value out of bounds; expected value to be in [1, 96], but got 97.
  0%|          | 0/20 [00:00<?, ?it/s]
!!! Exception during processing !!!
Traceback (most recent call last):

@hlky
Collaborator

hlky commented Aug 13, 2023

Thank you for providing the error message; it is important to provide all error messages to assist in diagnosing the issue. linux/sm80/bs1/768/unet_v1_768.so.xz had the same sha256 as the 1024 module, which resulted in 768 being selected instead of 1024.

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

> Thank you for providing the error message; it is important to provide all error messages to assist in diagnosing the issue. linux/sm80/bs1/768/unet_v1_768.so.xz had the same sha256 as the 1024 module, which resulted in 768 being selected instead of 1024.

Oh... there was an issue with module selection due to a hash collision. I had misunderstood that error as a limitation of the AIT approach. Thx.

@hlky
Collaborator

hlky commented Aug 13, 2023

Ensure you delete the current file, otherwise the correct module will not download.

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

> Ensure you delete the current file, otherwise the correct module will not download.

Oh... should I delete that file?

@ltdrdata
Author

ltdrdata commented Aug 13, 2023

After deleting unet_v1_768.so.xz, it works well :)
