
More models #10

Merged
Aatricks merged 5 commits into main from More-models on Nov 1, 2025

Conversation

@Aatricks (Owner) commented Nov 1, 2025

This pull request introduces significant improvements and new features for model support, especially around SDXL models and quantization compatibility. The main changes include new SDXL model classes with advanced conditioning, expanded quantization format support for GGUF/ggml models, and robustness improvements in attention and UNet block configuration.

New model features and conditioning (SDXL):

  • Added new classes SDXL, SDXLRefiner, and supporting modules for SDXL models, including advanced ADM conditioning, aesthetic score handling, and noise augmentation for CLIP embeddings. This enables richer conditioning and support for SDXL and related models.
  • Integrated SDXL models into the main model list in src/SD15/SD15.py, allowing them to be used alongside existing models.
  • Added configuration file for the BigG CLIP text model (clip_config_bigg.json), supporting larger CLIP architectures for SDXL.

Quantization and GGUF/ggml compatibility:

  • Added support for additional GGUF/ggml quantization types (Q8_1, Q8_K, Q6_K, Q5_K, Q4_0, Q4_1), including fallback mechanisms and exact unpacking logic for Q4_0 and Q4_1. This improves compatibility with a wider range of quantized models and exports.
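For reference, a minimal NumPy sketch of the exact Q4_0/Q4_1 unpacking mentioned above, assuming the standard ggml block layout (32 weights per block); the function names are illustrative rather than the ones used in the Quantizer:

```python
import numpy as np

QK4_0 = 32  # weights per Q4_0/Q4_1 block in the standard ggml layout

def dequantize_q4_0(data: bytes, n_elements: int) -> np.ndarray:
    """Unpack ggml Q4_0 blocks: 2-byte fp16 scale + 16 bytes of packed 4-bit quants.

    Each 4-bit value q maps to (q - 8) * scale.
    """
    blocks = np.frombuffer(data, dtype=np.uint8).reshape(-1, 2 + QK4_0 // 2)
    scales = blocks[:, :2].copy().view(np.float16).astype(np.float32)   # (n_blocks, 1)
    quants = blocks[:, 2:]                                              # (n_blocks, 16)
    lo = (quants & 0x0F).astype(np.int8) - 8   # low nibbles -> elements 0..15
    hi = (quants >> 4).astype(np.int8) - 8     # high nibbles -> elements 16..31
    values = np.concatenate([lo, hi], axis=1) * scales
    return values.reshape(-1)[:n_elements]

def dequantize_q4_1(data: bytes, n_elements: int) -> np.ndarray:
    """Unpack ggml Q4_1 blocks: fp16 scale, fp16 min, then 16 bytes of 4-bit quants.

    Each 4-bit value q maps to q * scale + min.
    """
    blocks = np.frombuffer(data, dtype=np.uint8).reshape(-1, 4 + QK4_0 // 2)
    scales = blocks[:, 0:2].copy().view(np.float16).astype(np.float32)
    mins = blocks[:, 2:4].copy().view(np.float16).astype(np.float32)
    quants = blocks[:, 4:]
    lo = (quants & 0x0F).astype(np.float32)
    hi = (quants >> 4).astype(np.float32)
    values = np.concatenate([lo, hi], axis=1) * scales + mins
    return values.reshape(-1)[:n_elements]
```

The K-quant formats (Q5_K, Q6_K, Q8_K) use larger super-blocks with per-sub-block scales, which is why fallback paths are kept for them instead of exact unpacking.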

Robustness and correctness in attention and UNet configuration:

  • Improved attention head reshaping in AttentionMethods.py to handle cases where the hidden dimension is not divisible by the number of heads, with error handling and clearer shape logic.
  • Fixed UNet block configuration logic to correctly handle cases where num_head_channels is set, ensuring correct calculation of num_heads and dim_head throughout the network.
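A rough illustration of both fixes (names are illustrative, not the exact code in src/NeuralNetwork/unet.py or src/Attention/AttentionMethods.py):

```python
import torch

def heads_from_config(channels: int, num_heads: int, num_head_channels: int) -> tuple[int, int]:
    # When num_head_channels is set (!= -1), derive num_heads from it;
    # otherwise keep num_heads and derive dim_head from the channel count.
    if num_head_channels != -1:
        num_heads = channels // num_head_channels
        dim_head = num_head_channels
    else:
        dim_head = channels // num_heads
    return num_heads, dim_head

def split_heads(q: torch.Tensor, heads: int) -> torch.Tensor:
    # (batch, tokens, hidden) -> (batch, heads, tokens, dim_head), with an
    # explicit check instead of silently reshaping to a wrong dim_head.
    b, n, total_dim = q.shape
    if total_dim % heads != 0:
        raise ValueError(f"hidden dim {total_dim} is not divisible by {heads} heads")
    return q.view(b, n, heads, total_dim // heads).transpose(1, 2)
```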

…Flux handling, and UI/history integration

- feat(models): add src/user/model_loader.py for model discovery, type detection (GGUF/safetensors/pt), and pipeline loading; wire model_path through the CLI, pipeline, and generation flow; ensure FLUX (.gguf) paths are handled safely (type detection is sketched after this list).
- feat(quantize): expand Quantizer with pragmatic gguf fallbacks and handlers (register Q8_1, Q8_K, Q6_K, Q5_K, Q4_0, Q4_1), add unpacking helpers and compatibility shims for additional ggml quant formats.
- fix(util): pass the torch.load weights_only argument on newer PyTorch versions and respect safe_load semantics to avoid unsafe full loads when requested (see the version-probing sketch after this list).
- fix(app): harden preview file cleanup in AppInstance (retry on PermissionError, safely skip already-removed entries, reduce noisy Windows file-in-use errors); a retry sketch follows this list.
- feat(ui): add model selection dropdown to pages, persist model_path in settings, surface model_type/model_path in generation history and pages, and pass model_path into pipeline/generation; update settings defaults and webui_settings.json accordingly.
- feat(tests): add Windows PowerShell API helper script tests/server_test.ps1 for easier manual API testing.
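A minimal sketch of the extension-based type detection from the feat(models) bullet; the mapping and function name are illustrative, and the real model_loader.py may also inspect file contents:

```python
from pathlib import Path

# Illustrative extension map; the actual loader may also inspect file headers.
_EXTENSION_TO_TYPE = {
    ".gguf": "gguf",              # quantized FLUX-style checkpoints
    ".safetensors": "safetensors",
    ".pt": "pytorch",
    ".ckpt": "pytorch",
}

def detect_model_type(model_path: str) -> str:
    suffix = Path(model_path).suffix.lower()
    try:
        return _EXTENSION_TO_TYPE[suffix]
    except KeyError:
        raise ValueError(f"Unsupported model file type: {suffix!r} ({model_path})")
```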
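For the fix(util) bullet, a common way to handle weights_only is to probe torch.load's signature before passing it; a sketch, assuming that is roughly what the utility does:

```python
import inspect
import torch

def load_torch_file(path: str, safe_load: bool = True):
    kwargs = {"map_location": "cpu"}
    # Older PyTorch versions do not accept weights_only; only pass it when supported
    # and when the caller asked for a safe load.
    if safe_load and "weights_only" in inspect.signature(torch.load).parameters:
        kwargs["weights_only"] = True
    return torch.load(path, **kwargs)
```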
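And a sketch of the preview-cleanup hardening from the fix(app) bullet; the retry count and delay are placeholder values:

```python
import os
import time

def remove_preview_file(path: str, retries: int = 3, delay: float = 0.2) -> None:
    for attempt in range(retries):
        try:
            os.remove(path)
            return
        except FileNotFoundError:
            return  # already gone, nothing to clean up
        except PermissionError:
            # Windows keeps files locked while they are still being read;
            # back off briefly and retry instead of raising immediately.
            if attempt == retries - 1:
                print(f"Could not remove preview file (still in use): {path}")
                return
            time.sleep(delay)
```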

Small robustness and wiring changes: update hidiffusion.utils.guess_model_type to prefer explicit checks and fall back to latent-channel heuristics; adjust the pipeline to produce a compatible checkpoint tuple for both FLUX and non-FLUX flows.
Guard multiscale preset handling with an isinstance check to avoid applying presets to non-string values, and add a debug print showing the preset and its type for easier troubleshooting.
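The guard described above amounts to something like the following; the preset table and function name are illustrative:

```python
# Hypothetical lookup table of named presets; the real table lives in the pipeline code.
MULTISCALE_PRESETS = {
    "quality": {"multiscale_steps": 3},
    "fast": {"multiscale_steps": 1},
}

def apply_multiscale_preset(settings: dict, preset) -> dict:
    # Debug print showing the preset and its type, as described above.
    print(f"[multiscale] preset={preset!r} type={type(preset).__name__}")
    # Only string presets are looked up; non-string values (None, bools, dicts
    # from older settings files) leave the settings untouched.
    if not isinstance(preset, str):
        return settings
    return {**settings, **MULTISCALE_PRESETS.get(preset.lower(), {})}
```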
… support and related utilities

- Add SDXL model configs and CLIP integration
  - New SDXL model and refiner configs: src/SD15/SDXL.py
  - New SDXL CLIP implementations and tokenizers: src/SD15/SDXLClip.py
  - Add SDXL helper for user code: src/user/sdxl_impl.py
  - Include big-G CLIP config JSON (clip_config_bigg.json) under include/src paths

- Extend core model base for SDXL
  - Add Timestep and CLIPEmbeddingNoiseAugmentation classes and SDXL-specific BaseModel subclasses in src/Model/ModelBase.py
  - Provide sdxl_pooled helper and SDXL/Refiner encode_adm implementations

- UNet/attention improvements
  - UNet: support num_head_channels option and compute num_heads/dim_head accordingly (src/NeuralNetwork/unet.py)
  - Attention: improve attention_pytorch reshape logic; validate that total_dim is divisible by heads, raising a clear error and logging otherwise (src/Attention/AttentionMethods.py)

- Latent formats
  - Add SDXL and SDXL_Playground_2_5 latent formats with mean/std handling and RGB factors (src/Utilities/Latent.py)

- Utilities and converters
  - Add transformers_convert and clip_text_transformers_convert helpers to convert state_dict prefixes for CLIP/text transformers (src/Utilities/util.py)

- Sampling and scheduler tweaks
  - Minor comment/note in AYS schedules about 20-step schedule optimization (src/sample/ays_scheduler.py)
  - ksampler util: auto-detect model type for AYS scheduler (SDXL vs SD15) and add explicit ays_sd15 mapping (src/sample/ksampler_util.py)

- SD15 registry update
  - Register SDXL variants alongside existing SD15 and Flux entries (src/SD15/SD15.py)

These changes add end-to-end SDXL support (model configs, CLIP G/L handling, tokenizer, latent handling, utils, sampling hooks and a small user helper) and harden attention/UNet head sizing.
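For context, a Timestep module of this kind is usually the standard sinusoidal timestep embedding reused for SDXL's ADM conditioning; a sketch, not the exact ModelBase.py code:

```python
import math
import torch

def timestep_embedding(t: torch.Tensor, dim: int, max_period: int = 10000) -> torch.Tensor:
    """Standard sinusoidal embedding: (batch,) -> (batch, dim)."""
    half = dim // 2
    freqs = torch.exp(
        -math.log(max_period) * torch.arange(half, dtype=torch.float32, device=t.device) / half
    )
    args = t.float()[:, None] * freqs[None, :]
    emb = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
    if dim % 2:  # pad odd dims with a zero column
        emb = torch.cat([emb, torch.zeros_like(emb[:, :1])], dim=-1)
    return emb

class Timestep(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return timestep_embedding(t, self.dim)
```

In SDXL's encode_adm, embeddings like these (original size, crop coordinates, target size, and the aesthetic score for the refiner) are concatenated with the pooled CLIP output to form the ADM conditioning vector.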
… labels

Expand the Presets dropdown with SDXL and Flux resolution options (including Flux 2.0MP max),
add section headers for clarity, and map each new preset to the appropriate width/height.
Also update existing SD1.5 presets to include explicit "(SD1.5)" labels.
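Illustratively, the presets reduce to a label → (width, height) mapping with section groupings; the concrete resolutions below are plausible examples, not necessarily the exact values added:

```python
RESOLUTION_PRESETS = {
    # --- SD1.5 ---
    "512×512 (SD1.5)": (512, 512),
    "512×768 (SD1.5)": (512, 768),
    # --- SDXL ---
    "1024×1024 (SDXL)": (1024, 1024),
    "1152×896 (SDXL)": (1152, 896),
    # --- Flux ---
    "1024×1024 (Flux)": (1024, 1024),
    "1408×1408 (Flux 2.0MP max)": (1408, 1408),  # ~1.98 megapixels
}

def preset_dimensions(label: str) -> tuple[int, int]:
    return RESOLUTION_PRESETS[label]
```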
…; detect/propagate Flux and include Flux output fallback

- pipeline: replace hardcoded Flux sampling values (steps=20, sampler="euler_cfgpp", scheduler="beta")
  with the user-configured steps/sampler/scheduler and keep cfg=1 comment for Flux.
  Also update the metadata map to report the actual sampler/steps/scheduler used.

- ui/generation: detect if selected model is FLUX and pass flux_enabled into pipeline calls.
  Preserve model detection behavior and add Flux as a fallback output directory when model type is unknown.
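The wiring described here could look roughly like the following; the heuristic, names, and directory layout are assumptions for illustration:

```python
from pathlib import Path
from typing import Optional

def is_flux_model(model_path: str) -> bool:
    # Hypothetical heuristic: .gguf checkpoints (or "flux" in the filename) are treated as FLUX.
    p = Path(model_path)
    return p.suffix.lower() == ".gguf" or "flux" in p.name.lower()

def output_directory(model_type: Optional[str]) -> str:
    # Known types keep their own folders; unknown types fall back to the Flux directory.
    return {"SD1.5": "output/SD15", "SDXL": "output/SDXL", "Flux": "output/Flux"}.get(
        model_type or "", "output/Flux"
    )

# flux_enabled = is_flux_model(model_path)
# images = pipeline.generate(..., flux_enabled=flux_enabled)  # illustrative call
```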
@Aatricks self-assigned this on Nov 1, 2025
@Aatricks merged commit 1b6ba62 into main on Nov 1, 2025
1 check failed
@Aatricks deleted the More-models branch on November 1, 2025 at 10:54