add dark mode with toggle and system preference #201

Closed · bzlibby wants to merge 98 commits into main from dark-mode
Changes from all commits · 98 commits
a903bf3
add dark mode with toggle and system preference
bzlibby Feb 23, 2023
f191c89
Merge branch 'main' of https://github.com/ssube/onnx-web into dark-mode
bzlibby Mar 2, 2023
f5c8b71
resolved merge conflicts
bzlibby Mar 9, 2023
3573a67
Merge branch 'main' into dark-mode
bzlibby Mar 22, 2023
0042c59
leave readiness unset for new images
ssube Mar 23, 2023
4dd68ea
fix(api): restart worker threads if they crash
ssube Mar 23, 2023
86c1b29
lint(api): extract worker thread main functions (#279)
ssube Mar 23, 2023
6b4c046
pass pool to threads
ssube Mar 23, 2023
2c47904
lint(api): use constant for model filename
ssube Mar 24, 2023
88f4713
fix(api): use lock when restarting workers
ssube Mar 25, 2023
95a61f3
fix(api): restart worker threads when their respective queues are full
ssube Mar 25, 2023
8c3c0de
add debug config
ssube Mar 26, 2023
580d621
fix(api): make format list in schema match code
ssube Mar 26, 2023
a1c3b28
fix(docs): explain extras file format in user guide
ssube Mar 26, 2023
aeb71ad
lint lock name
ssube Mar 26, 2023
e552a55
feat(api): check device worker pool and recycle on a regular interval…
ssube Mar 26, 2023
3aa7b8a
chore(docs): describe log levels in dev docs
ssube Mar 26, 2023
55e44e8
fix(api): increment job counter for worker when it starts a new job (…
ssube Mar 26, 2023
2b179be
fix(api): always reset job counter when creating new device worker
ssube Mar 26, 2023
f3ab25f
lint(api): add start method to worker pool
ssube Mar 26, 2023
14ade83
fix(api): enqueue next job when previous one finishes and after recyc…
ssube Mar 26, 2023
83884bc
enqueue jobs on idle workers during progress check
ssube Mar 26, 2023
8eab92a
define device on pending job
ssube Mar 26, 2023
ea36082
add job count to healthy worker logs
ssube Mar 26, 2023
ccf8d51
feat(api): split up status endpoint by job status
ssube Mar 26, 2023
0af406c
only enqueue jobs from progress worker
ssube Mar 26, 2023
27500ec
fix(api): do not move jobs from pending to running until progress is …
ssube Mar 26, 2023
2d2283e
fix(api): attempt to read progress updates from recycled workers
ssube Mar 26, 2023
36bfcca
fix(api): include worker totals in status endpoint
ssube Mar 26, 2023
bb5d063
sonar lint
ssube Mar 26, 2023
dca8a97
feat(api): pin pytorch versions and update nightly ORT
ssube Mar 26, 2023
e1219cc
fix(api): close queues after stopping workers
ssube Mar 26, 2023
4ddd69b
fix(api): watch for progress events from leaking workers
ssube Mar 26, 2023
0ea0442
apply lint
ssube Mar 26, 2023
d7e5480
fix(tests): make release tests fail if image was not successful (#287)
ssube Mar 27, 2023
88ab20c
chore(deps): update dependency esbuild to v0.17.14
renovate[bot] Mar 26, 2023
ba95f7e
chore(deps): update dependency @types/react to v18.0.29
renovate[bot] Mar 27, 2023
03b00e8
chore(deps): update dependency esbuild-plugin-copy to v2.1.1
renovate[bot] Mar 27, 2023
b186a65
chore(deps): update dependency sinon to v15.0.3
renovate[bot] Mar 27, 2023
db95107
chore(docs): update features, link to user guide
ssube Mar 27, 2023
afa8f5e
feat(api): add optimization for internal fp16 conversion
ssube Mar 27, 2023
33f5992
chore(docs): describe long prompt weighting and permanent blending
ssube Mar 27, 2023
73e9cf8
fix(api): disable internal fp16 for VAE encoder (#290)
ssube Mar 27, 2023
c2f8fb1
fix(api): combine names for ONNX fp16 optimization
ssube Mar 27, 2023
2bbc5d8
lint(api): explicitly bind the device pool to shutdown callback
ssube Mar 27, 2023
c0ece24
fix(docs): add more runtimes to memory usage table
ssube Mar 27, 2023
93fcfd1
fix(api): update LPW pipeline (#298)
ssube Mar 28, 2023
d19bbfc
fix(gui): add prompt tokens to correct tab (#296)
ssube Mar 28, 2023
69ee836
lint(docs): update setup intro to not repeat ToC
ssube Mar 28, 2023
c39f450
note fp16 on AMD in readme
ssube Mar 28, 2023
b4ac7c6
chore(release): 0.9.0
ssube Mar 28, 2023
186b3b2
chore(docs): update readme screenshot
ssube Mar 28, 2023
16dd579
crop readme preview better
ssube Mar 29, 2023
0050fae
chore(gui): update build tools, typedefs
ssube Mar 29, 2023
f774f2e
chore(gui): update MUI packages, lockfile
ssube Mar 29, 2023
1cd436d
feat(api): add setup scripts for Windows
ssube Mar 31, 2023
205ff3e
feat(exe): add specs and launch scripts for Windows EXE bundle (#305)
ssube Mar 31, 2023
9c9857b
indent bundle readme better
ssube Mar 31, 2023
968058b
fix(docs): move container setup to server guide, link to setup method…
ssube Mar 31, 2023
6e509c0
fix phrasing and order
ssube Mar 31, 2023
ee3774b
fix(docs): add VC redist and security note to Windows setup
ssube Mar 31, 2023
9b05b6b
fix(exe): include realesrgan and coloredlogs in bundles
ssube Mar 31, 2023
27c6e71
fix(api): add coloredlogs to base deps
ssube Mar 31, 2023
f4c7f02
fix(docs): note security option on bundle launch scripts
ssube Mar 31, 2023
cdaf1b8
feat(api): add support for highres images
ssube Apr 1, 2023
66e938f
split steps before and after highres
ssube Apr 1, 2023
4ab1b6c
resize tiles before running refinement steps
ssube Apr 1, 2023
ca80e92
fix progress for highres, increase img2img steps
ssube Apr 1, 2023
f462d80
run correction before highres
ssube Apr 1, 2023
ba09748
feat(gui): add highres parameters
ssube Apr 1, 2023
4e481e4
add highres to server params
ssube Apr 1, 2023
4f41145
add highres to txt2img tab and request
ssube Apr 1, 2023
3c2bab3
fix highres steps max
ssube Apr 1, 2023
4a68984
add highres strings
ssube Apr 1, 2023
6aac0fe
fix(api): restart workers on MIOPEN memory errors
ssube Apr 1, 2023
a1f54f0
include highres params in retry
ssube Apr 1, 2023
e0e0999
fix(api): restart workers on HIP memory errors
ssube Apr 1, 2023
0f79f42
apply lint
ssube Apr 1, 2023
f451d8d
feat: add method parameter for highres mode
ssube Apr 1, 2023
6d23491
only run correction before highres when selected in options
ssube Apr 1, 2023
89c3b2a
correctly upscale highres tiles
ssube Apr 1, 2023
e4f55af
fix scale in upscale copy ctor
ssube Apr 1, 2023
ed694aa
pass highres scale param to upscaling method
ssube Apr 1, 2023
bcf396d
lint(gui): remove linebreak between batch and CFG parameters
ssube Apr 1, 2023
6bad599
lint(api): move some chatty logs to trace level
ssube Apr 1, 2023
56c359c
remove undefined names
ssube Apr 1, 2023
bbbef8d
apply lint
ssube Apr 1, 2023
8e5971a
fix lone else
ssube Apr 1, 2023
83dbd47
update params version for highres
ssube Apr 1, 2023
e8ac20b
fix(docs): add output image size table to user guide
ssube Apr 1, 2023
85b3324
fix(api): convert hidden states to fp32 before doing normalization on…
ssube Apr 1, 2023
56ff902
fix(api): use min/max from config for more params
ssube Apr 1, 2023
0fdf4d3
implement proper spiral grid coverage
ssube Apr 4, 2023
c8382dc
feat(api): implement spiral tile order for non-square images
ssube Apr 5, 2023
1cfc538
fix(api): ensure spiral grid coords are always whole pixels
ssube Apr 6, 2023
36ad1ac
add dark mode with toggle and system preference
bzlibby Feb 23, 2023
7c0bd3b
use system theme when the theme is not set
bzlibby Apr 6, 2023
ca900f7
Merge branch 'dark-mode' of https://github.com/bzlibby/onnx-web into …
bzlibby Apr 6, 2023
21 changes: 21 additions & 0 deletions .vscode/launch.json
@@ -0,0 +1,21 @@
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Remote Attach",
      "type": "python",
      "request": "attach",
      "connect": {
        "host": "127.0.0.1",
        "port": 5678
      },
      "pathMappings": [
        {
          "localRoot": "${workspaceFolder}",
          "remoteRoot": "/opt/onnx-web"
        }
      ],
      "justMyCode": true
    }
  ]
}
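For local debugging, this attach configuration pairs with a debugpy server listening inside the API process. A minimal sketch of the server side, assuming the debugpy package (which this diff does not add) and the same host and port as the config above:

import debugpy

# listen on the host/port that "Python: Remote Attach" connects to
debugpy.listen(("127.0.0.1", 5678))
print("waiting for debugger to attach on 127.0.0.1:5678")
debugpy.wait_for_client()  # block until the editor attaches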
108 changes: 108 additions & 0 deletions CHANGELOG.md

Large diffs are not rendered by default.

542 changes: 60 additions & 482 deletions README.md

Large diffs are not rendered by default.

29 changes: 29 additions & 0 deletions api/entry.py
@@ -0,0 +1,29 @@
import os


# stub out TorchScript compilation so frozen bundles do not try to
# inspect and compile source at runtime
def script_method(fn, _rcb=None):
    return fn


def script(obj, optimize=True, _frames_up=0, _rcb=None):
    return obj


import torch.jit

torch.jit.script_method = script_method
torch.jit.script = script

import multiprocessing

if __name__ == '__main__':
    # required for multiprocessing in frozen Windows executables
    multiprocessing.freeze_support()
    try:
        from onnx_web.main import main

        app, pool = main()
        print("starting workers")
        pool.start()
        print("starting flask")
        app.run("0.0.0.0", 5000, debug=False)
        input("press the any key")
        pool.join()
    except Exception as e:
        print(e)
    finally:
        os.system("pause")
2 changes: 1 addition & 1 deletion api/onnx_web/__init__.py
@@ -16,6 +16,7 @@
    run_upscale_pipeline,
)
from .diffusers.stub_scheduler import StubScheduler
from .diffusers.upscale import run_upscale_correction
from .image import (
    expand_image,
    mask_filter_gaussian_multiply,
@@ -48,7 +49,6 @@
    apply_patch_facexlib,
    apply_patches,
)
from .upscale import run_upscale_correction
from .utils import (
    base_join,
    get_and_clamp_float,
7 changes: 7 additions & 0 deletions api/onnx_web/chain/base.py
@@ -16,6 +16,10 @@


class StageCallback(Protocol):
    """
    Definition for a stage job function.
    """

    def __call__(
        self,
        job: WorkerContext,
@@ -25,6 +29,9 @@ def __call__(
        source: Image.Image,
        **kwargs: Any
    ) -> Image.Image:
        """
        Run this stage against a source image.
        """
        pass
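For readers unfamiliar with typing.Protocol: any callable whose signature matches __call__ satisfies the type, with no inheritance required. A toy illustration with a deliberately simplified signature (the real StageCallback takes more arguments, some of them elided in the diff above):

from typing import Protocol

from PIL import Image, ImageOps


class SimpleCallback(Protocol):
    # simplified stand-in for StageCallback, not the real signature
    def __call__(self, source: Image.Image) -> Image.Image:
        ...


def invert_stage(source: Image.Image) -> Image.Image:
    # a plain function satisfies the Protocol structurally
    return ImageOps.invert(source)


def run_stage(callback: SimpleCallback, source: Image.Image) -> Image.Image:
    return callback(source)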
110 changes: 91 additions & 19 deletions api/onnx_web/chain/utils.py
@@ -1,4 +1,5 @@
from logging import getLogger
from math import ceil
from typing import List, Protocol, Tuple

from PIL import Image
@@ -9,7 +10,14 @@


class TileCallback(Protocol):
    """
    Definition for a tile job function.
    """

    def __call__(self, image: Image.Image, dims: Tuple[int, int, int]) -> Image.Image:
        """
        Run this stage against a single tile.
        """
        pass


@@ -58,31 +66,13 @@ def process_tile_spiral(
    image = Image.new("RGB", (width * scale, height * scale))
    image.paste(source, (0, 0, width, height))

    center_x = (width // 2) - (tile // 2)
    center_y = (height // 2) - (tile // 2)

    # TODO: should add/remove tiles when overlap != 0.5
    tiles = [
        (0, tile * -overlap),
        (tile * overlap, tile * -overlap),
        (tile * overlap, 0),
        (tile * overlap, tile * overlap),
        (0, tile * overlap),
        (tile * -overlap, tile * overlap),
        (tile * -overlap, 0),
        (tile * -overlap, tile * -overlap),
    ]

    # tile tuples is source, multiply by scale for dest
    counter = 0
    tiles = generate_tile_spiral(width, height, tile, overlap=overlap)
    for left, top in tiles:
        left = center_x + int(left)
        top = center_y + int(top)

        counter += 1
        logger.debug("processing tile %s of %s, %sx%s", counter, len(tiles), left, top)

        # TODO: only valid for scale == 1, resize source for others
        tile_image = image.crop((left, top, left + tile, top + tile))

        for filter in filters:
@@ -113,3 +103,85 @@ def process_tile_order(
    else:
        logger.warn("unknown tile order: %s", order)
        raise ValueError()


def generate_tile_spiral(
    width: int,
    height: int,
    tile: int,
    overlap: float = 0.0,
) -> List[Tuple[int, int]]:
    spacing = 1.0 - overlap

    # round dims up to nearest tiles
    tile_width = ceil(width / tile)
    tile_height = ceil(height / tile)

    # start walking from the north-west corner, heading east
    dir_height = 0
    dir_width = 1

    walk_height = tile_height
    walk_width = tile_width

    accum_height = 0
    accum_width = 0

    tile_top = 0
    tile_left = 0

    tile_coords = []
    while walk_width > 0 and walk_height > 0:
        # exhaust the current direction, then turn
        while accum_width < walk_width and accum_height < walk_height:
            # add a tile
            logger.trace(
                "adding tile at %s:%s, %s:%s, %s:%s, %s",
                tile_left,
                tile_top,
                accum_width,
                accum_height,
                walk_width,
                walk_height,
                spacing,
            )
            tile_coords.append((int(tile_left), int(tile_top)))

            # move to the next
            tile_top += dir_height * spacing * tile
            tile_left += dir_width * spacing * tile

            accum_height += abs(dir_height * spacing)
            accum_width += abs(dir_width * spacing)

        # reset for the next direction
        accum_height = 0
        accum_width = 0

        # back off one pixel in the old direction before turning
        tile_top -= dir_height
        tile_left -= dir_width

        # turn right
        if dir_width == 1 and dir_height == 0:
            dir_width = 0
            dir_height = 1
        elif dir_width == 0 and dir_height == 1:
            dir_width = -1
            dir_height = 0
        elif dir_width == -1 and dir_height == 0:
            dir_width = 0
            dir_height = -1
        elif dir_width == 0 and dir_height == -1:
            dir_width = 1
            dir_height = 0

        # step to the next tile as part of the turn
        tile_top += dir_height
        tile_left += dir_width

        # shrink the last direction
        walk_height -= abs(dir_height)
        walk_width -= abs(dir_width)

    return tile_coords
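A quick dry run of the spiral generator, as a sketch: it assumes the module above is importable as onnx_web.chain.utils, and the blur filter is a hypothetical TileCallback implementation, not part of this PR.

from typing import Tuple

from PIL import Image, ImageFilter

from onnx_web.chain.utils import generate_tile_spiral


def blur_tile(image: Image.Image, dims: Tuple[int, int, int]) -> Image.Image:
    # hypothetical TileCallback: blur the tile and return it
    return image.filter(ImageFilter.GaussianBlur(radius=2))


source = Image.new("RGB", (768, 512), "gray")

# walk a 768x512 image with 512px tiles and 50% overlap
for left, top in generate_tile_spiral(768, 512, 512, overlap=0.5):
    tile = source.crop((left, top, left + 512, top + 512))
    tile = blur_tile(tile, (left, top, 512))
    print("processed tile at", left, top, tile.size)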
2 changes: 2 additions & 0 deletions api/onnx_web/constants.py
@@ -0,0 +1,2 @@
ONNX_MODEL = "model.onnx"
ONNX_WEIGHTS = "weights.pb"
20 changes: 8 additions & 12 deletions api/onnx_web/convert/__main__.py
@@ -12,6 +12,7 @@
from transformers import CLIPTokenizer
from yaml import safe_load

from ..constants import ONNX_MODEL, ONNX_WEIGHTS
from .correction_gfpgan import convert_correction_gfpgan
from .diffusion.diffusers import convert_diffusion_diffusers
from .diffusion.lora import blend_loras
@@ -297,7 +298,7 @@ def convert_models(ctx: ConversionContext, args, models: Models):
                path.join(
                    dest,
                    "text_encoder",
                    "model.onnx",
                    ONNX_MODEL,
                )
            )

@@ -341,13 +342,13 @@ def convert_models(ctx: ConversionContext, args, models: Models):
                path.join(
                    dest,
                    "text_encoder",
                    "model.onnx",
                    ONNX_MODEL,
                )
            )

            if "unet" not in blend_models:
                blend_models["text_encoder"] = load_model(
                    path.join(dest, "unet", "model.onnx")
                    path.join(dest, "unet", ONNX_MODEL)
                )

            # load models if not loaded yet
@@ -377,7 +378,7 @@ def convert_models(ctx: ConversionContext, args, models: Models):

            for name in ["text_encoder", "unet"]:
                if name in blend_models:
                    dest_path = path.join(dest, name, "model.onnx")
                    dest_path = path.join(dest, name, ONNX_MODEL)
                    logger.debug(
                        "saving blended %s model to %s", name, dest_path
                    )
@@ -386,7 +387,7 @@ def convert_models(ctx: ConversionContext, args, models: Models):
                        dest_path,
                        save_as_external_data=True,
                        all_tensors_to_one_file=True,
                        location="weights.pb",
                        location=ONNX_WEIGHTS,
                    )

        except Exception:
@@ -459,7 +460,7 @@ def main() -> int:
        "--half",
        action="store_true",
        default=False,
        help="Export models for half precision, faster on some Nvidia cards.",
        help="Export models for half precision, smaller and faster on most GPUs.",
    )
    parser.add_argument(
        "--opset",
@@ -477,16 +478,11 @@ def main() -> int:
    logger.info("CLI arguments: %s", args)

    ctx = ConversionContext.from_environ()
    ctx.half = args.half
    ctx.half = args.half or "onnx-fp16" in ctx.optimizations
    ctx.opset = args.opset
    ctx.token = args.token
    logger.info("converting models in %s using %s", ctx.model_path, ctx.training_device)

    if ctx.half and ctx.training_device != "cuda":
        raise ValueError(
            "half precision model export is only supported on GPUs with CUDA"
        )

    if not path.exists(ctx.model_path):
        logger.info("model path does not exist, creating: %s", ctx.model_path)
        makedirs(ctx.model_path)
17 changes: 9 additions & 8 deletions api/onnx_web/convert/diffusion/diffusers.py
@@ -27,6 +27,7 @@
from onnxruntime.transformers.float16 import convert_float_to_float16
from torch.onnx import export

from ...constants import ONNX_MODEL, ONNX_WEIGHTS
from ...diffusers.load import optimize_pipeline
from ...diffusers.pipeline_onnx_stable_diffusion_upscale import (
    OnnxStableDiffusionUpscalePipeline,
@@ -79,7 +80,7 @@ def onnx_export(
        f"{output_file}",
        save_as_external_data=external_data,
        all_tensors_to_one_file=True,
        location="weights.pb",
        location=ONNX_WEIGHTS,
    )


@@ -144,7 +145,7 @@ def convert_diffusion_diffusers(
            None,  # output attentions
            torch.tensor(True).to(device=ctx.training_device, dtype=torch.bool),
        ),
        output_path=output_path / "text_encoder" / "model.onnx",
        output_path=output_path / "text_encoder" / ONNX_MODEL,
        ordered_input_names=["input_ids"],
        output_names=["last_hidden_state", "pooler_output", "hidden_states"],
        dynamic_axes={
@@ -169,7 +170,7 @@

    unet_in_channels = pipeline.unet.config.in_channels
    unet_sample_size = pipeline.unet.config.sample_size
    unet_path = output_path / "unet" / "model.onnx"
    unet_path = output_path / "unet" / ONNX_MODEL
    onnx_export(
        pipeline.unet,
        model_args=(
@@ -207,7 +208,7 @@
        unet_model_path,
        save_as_external_data=True,
        all_tensors_to_one_file=True,
        location="weights.pb",
        location=ONNX_WEIGHTS,
        convert_attribute=False,
    )
    del pipeline.unet
@@ -233,7 +234,7 @@
            ).to(device=ctx.training_device, dtype=dtype),
            False,
        ),
        output_path=output_path / "vae" / "model.onnx",
        output_path=output_path / "vae" / ONNX_MODEL,
        ordered_input_names=["latent_sample", "return_dict"],
        output_names=["sample"],
        dynamic_axes={
@@ -259,14 +260,14 @@
            ),
            False,
        ),
        output_path=output_path / "vae_encoder" / "model.onnx",
        output_path=output_path / "vae_encoder" / ONNX_MODEL,
        ordered_input_names=["sample", "return_dict"],
        output_names=["latent_sample"],
        dynamic_axes={
            "sample": {0: "batch", 1: "channels", 2: "height", 3: "width"},
        },
        opset=ctx.opset,
        half=ctx.half,
        half=False,  # https://github.com/ssube/onnx-web/issues/290
    )

    # VAE DECODER
@@ -282,7 +283,7 @@
            ).to(device=ctx.training_device, dtype=dtype),
            False,
        ),
        output_path=output_path / "vae_decoder" / "model.onnx",
        output_path=output_path / "vae_decoder" / ONNX_MODEL,
        ordered_input_names=["latent_sample", "return_dict"],
        output_names=["sample"],
        dynamic_axes={
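The recurring save pattern in this file writes large models with a single external weights file. A minimal standalone sketch of that pattern, assuming the onnx package and illustrative paths:

import onnx

# load a model and re-save it with all tensors in one side-car file,
# matching the save_as_external_data/location=ONNX_WEIGHTS calls above
model = onnx.load("unet/model.onnx")
onnx.save_model(
    model,
    "unet/model.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="weights.pb",
)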