
feat: web UI (React + TypeScript) #5

Draft
pedropaf wants to merge 6 commits into main from feat/ui


Conversation


@pedropaf pedropaf commented Mar 8, 2026

Summary

  • Full web UI for modl: React 19 + TypeScript + Tailwind + Radix UI + TanStack Query
  • Generate, Gallery, Training, Studio, Datasets pages
  • Axum server with SSE streaming, REST API endpoints
  • build.rs for frontend build integration

Draft — UI is work in progress.

🤖 Generated with Claude Code

@pedropaf pedropaf self-assigned this Mar 8, 2026
pedropaf and others added 5 commits March 9, 2026 12:26
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Load diffusers pipelines component-by-component from the local modl
store instead of falling back to HuggingFace downloads. Supports fp8
quantized models via layerwise casting. Tested working for Flux-dev
(fp8), Z-Image Turbo, and SDXL.

- Add assemble_pipeline() with per-component loading from store
- Bundle diffusers/transformers config files for offline pipeline init
- Rich gen_components in arch_config (model_class, config_dir, model_id)
- resolve_gen_assembly() maps model IDs to local store paths
- detect_model_format() via safetensors header inspection
- fp8 inference: load bf16 then enable_layerwise_casting(fp8 storage)
- Text encoders: init_empty_weights + load_state_dict for zero-copy load
- Fix preflight to accept optional_variant dependencies (e.g. t5-xxl-fp8)
- Add pipeline-loading-strategy spec doc

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
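The `detect_model_format()` / `_detect_weight_dtype()` bullets above rely on the safetensors file layout: an 8-byte little-endian length prefix followed by a JSON header that records each tensor's dtype, so the weight precision can be read without loading any weights. A minimal stdlib-only sketch (the function name and "most common dtype wins" heuristic are illustrative, not the PR's exact code):

```python
# Inspect a .safetensors header to report the dominant weight dtype
# (e.g. "F8_E4M3", "BF16", "F16") without loading any tensor data.
import json
import struct
from collections import Counter

def detect_weight_dtype(path: str) -> str:
    """Return the most common tensor dtype recorded in the file header."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian u64 length of the JSON header.
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    dtypes = Counter(
        entry["dtype"]
        for name, entry in header.items()
        if name != "__metadata__"  # optional metadata block carries no dtype
    )
    return dtypes.most_common(1)[0][0]
```

The same header read can back a format check (fp8 vs. bf16) before deciding whether layerwise casting is needed.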
- Fix Chroma: pipeline_class → ChromaPipeline, text encoder → T5-XXL
  (was incorrectly z-image-text-encoder/Qwen3), add ChromaTransformer2DModel
- Add Qwen-Image gen_components: QwenImagePipeline, QwenImageTransformer2DModel,
  Qwen2_5_VLForConditionalGeneration, AutoencoderKLQwenImage
- Bundle config files for chroma-transformer, chroma-scheduler,
  qwen-image-transformer, qwen-image-vae, qwen-image-text-encoder,
  qwen-image-tokenizer, qwen-image-scheduler
- Add _detect_weight_dtype() — reads safetensors header to show fp8/bf16/fp16
- Log each component with filename + dtype during pipeline loading
- Embed model_files (file + dtype per component) in PNG metadata
- Include base_model_path in Rust DB artifact metadata

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
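The gen_components entries above pair a model class with a config directory and a model ID; an earlier commit's `resolve_gen_assembly()` then maps those IDs to local store paths. A sketch of that shape, with field names, the store layout, and the weights filename all assumed for illustration:

```python
# Hypothetical arch_config excerpt and a resolver mapping each component's
# model_id to a weights path in a local store. The schema and directory
# layout are assumptions, not the PR's actual code.
from pathlib import Path

GEN_COMPONENTS = {
    "transformer": {"model_class": "QwenImageTransformer2DModel",
                    "config_dir": "qwen-image-transformer",
                    "model_id": "qwen-image-transformer"},
    "vae": {"model_class": "AutoencoderKLQwenImage",
            "config_dir": "qwen-image-vae",
            "model_id": "qwen-image-vae"},
}

def resolve_assembly(store_root: str, components: dict) -> dict:
    """Resolve each component to its class, bundled config dir, and weights file."""
    root = Path(store_root)
    return {
        name: {
            "class": spec["model_class"],
            "config_dir": spec["config_dir"],
            "weights": root / "models" / spec["model_id"] / "model.safetensors",
        }
        for name, spec in components.items()
    }
```

Keeping the class name and config directory in the table is what lets the pipeline be assembled component-by-component offline, instead of falling back to HuggingFace downloads.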
Flux 2 is BFL's new 32B model with a completely different architecture:
- Single text encoder (Mistral Small 3.1 24B) instead of CLIP-L + T5-XXL
- New VAE (AutoencoderKLFlux2) with patch-based encoding
- Flux2Transformer2DModel: 8 double + 48 single stream blocks
- Uses AutoProcessor (PixtralProcessor) instead of regular tokenizer

Changes:
- Add flux2 arch_config with full gen_components
- Add Flux2 to MODEL_REGISTRY (black-forest-labs/FLUX.2-dev)
- Add BaseModelFamily::Flux2 on Rust side (presets, generate defaults)
- Add flux2-dev to gated models list (HuggingFace auth required)
- Bundle configs: flux2-dev-transformer, flux2-vae, flux2-dev-scheduler,
  flux2-text-encoder (Mistral3), flux2-processor (PixtralProcessor)
- Support HF directory loading path in assemble_pipeline (for quantized
  text encoders like NF4 Mistral3)
- Default params: 28 steps, guidance 4.0

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
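The per-family default parameters mentioned in these commits (28 steps / guidance 4.0 for Flux 2, and 4 steps / guidance 1.0 for the distilled Klein variant noted in the next commit) can be sketched as a small lookup; the function shape and the fallback values are assumptions, only the flux2 and klein numbers come from the commit messages:

```python
# Illustrative per-family generation defaults; flux2 and flux2_klein values
# are from the commits above, the fallback is an assumed placeholder.
def generate_defaults(family: str) -> dict:
    defaults = {
        "flux2": {"steps": 28, "guidance": 4.0},
        "flux2_klein": {"steps": 4, "guidance": 1.0},  # distilled: few steps
    }
    return defaults.get(family, {"steps": 30, "guidance": 5.0})  # assumed fallback
```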
…der)

- Add flux2_klein architecture with Flux2KleinPipeline (not Flux2Pipeline)
- ComfyUI fp8 dequantization: apply weight_scale before stripping scale tensors
- Skip enable_layerwise_casting for dequantized weights (already bf16)
- Add pipeline_kwargs pass-through for is_distilled=True
- Add Klein transformer + scheduler config files
- Add Klein detection in CLI default_steps (4) and default_guidance (1.0)
- Fix flux2 text encoder model_id to flux2-mistral-text-encoder

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
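The ordering fix above ("apply weight_scale before stripping scale tensors") matters because once the scale tensors are dropped, the fp8 weights can no longer be dequantized. A dependency-free sketch of that ordering, with plain lists standing in for tensors and the `.scale_weight` key suffix assumed as the ComfyUI-style convention:

```python
# Dequantize a ComfyUI-style fp8 state dict: multiply each weight by its
# per-tensor scale first, then drop the scale entries. Key naming and the
# list-as-tensor representation are illustrative assumptions.
def dequantize_fp8_state_dict(state_dict: dict) -> dict:
    scales = {k: v for k, v in state_dict.items() if k.endswith(".scale_weight")}
    out = {}
    for key, value in state_dict.items():
        if key.endswith(".scale_weight"):
            continue  # strip scale tensors, but only after applying them below
        if key.endswith(".weight"):
            scale_key = key[: -len(".weight")] + ".scale_weight"
            if scale_key in scales:
                value = [x * scales[scale_key] for x in value]  # apply weight_scale
        out[key] = value
    return out
```

Because the result is already full-precision, the commit also skips `enable_layerwise_casting` for dequantized weights, matching the "already bf16" note above.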
