Backend-defined Dependencies #1834

Merged · 68 commits · Jun 11, 2023

Changes from 16 commits

Commits
387b8bf  start of defining dependencies on backend (joeyballentine, Apr 17, 2023)
c55516a  ncnn dep (joeyballentine, Apr 17, 2023)
0252885  Merge remote-tracking branch 'chaiNNer-org/main' into backend-deps (joeyballentine, Apr 19, 2023)
151f102  nvidia helper (joeyballentine, Apr 19, 2023)
8f3d883  define rest of deps in backend (joeyballentine, Apr 19, 2023)
b376fd4  some fixes (joeyballentine, Apr 19, 2023)
724c126  import testing & dep installing (joeyballentine, Apr 20, 2023)
6a72aaa  fixed install stuff (joeyballentine, Apr 20, 2023)
75aa35f  this dependency stuff is a mess (joeyballentine, Apr 22, 2023)
de1f539  ... (joeyballentine, Apr 22, 2023)
d6df278  Merge remote-tracking branch 'chaiNNer-org/main' into backend-deps (joeyballentine, May 3, 2023)
2a276cf  Merge remote-tracking branch 'chaiNNer-org/main' into backend-deps (joeyballentine, May 25, 2023)
2af8399  WIP core dependency workaround (joeyballentine, May 25, 2023)
5ee2243  Fixes (joeyballentine, May 25, 2023)
e202351  fix scoping issue (joeyballentine, May 25, 2023)
75a274a  Fixes (joeyballentine, May 25, 2023)
810ed10  Merge remote-tracking branch 'chaiNNer-org/main' into backend-deps-wip (joeyballentine, May 26, 2023)
4ccf360  cleanup (joeyballentine, May 26, 2023)
805ef5c  fix workflow (joeyballentine, May 26, 2023)
e44c585  some fixes (joeyballentine, May 26, 2023)
a535784  lint (joeyballentine, May 26, 2023)
a6b356d  make it not run the build workflow (joeyballentine, May 26, 2023)
78307e1  update requirements.txt (joeyballentine, May 26, 2023)
122808f  renaming (joeyballentine, May 26, 2023)
3af6271  reorganize stuff (joeyballentine, May 27, 2023)
e9e86b7  Correct order of operations (joeyballentine, May 27, 2023)
3c20fa1  SSE for backend ready event (joeyballentine, May 27, 2023)
e3b4384  wip more (joeyballentine, May 27, 2023)
b9e353d  getting closer to the finish line (joeyballentine, May 29, 2023)
c7f246a  move a few required deps over (joeyballentine, May 29, 2023)
08e0d2b  remove a few prints (joeyballentine, May 29, 2023)
6322a69  Merge branch 'main' into backend-deps-wip (joeyballentine, May 29, 2023)
3dead66  float progress (joeyballentine, May 30, 2023)
67d31e2  Install multiple dependencies at once + other pr suggestions (joeyballentine, Jun 6, 2023)
a03b6d4  Other minor PR suggestions (joeyballentine, Jun 6, 2023)
9c46d85  fix dumb mistake (joeyballentine, Jun 7, 2023)
2e5da26  Merge remote-tracking branch 'chaiNNer-org/main' into backend-deps-wip (joeyballentine, Jun 7, 2023)
b088760  fixes & debugging (joeyballentine, Jun 7, 2023)
bc8e615  lint (joeyballentine, Jun 7, 2023)
72ed5ab  attempting to use a separate sse (joeyballentine, Jun 8, 2023)
bee1117  setup-sse from main process (joeyballentine, Jun 9, 2023)
08f07a5  use progress slice (joeyballentine, Jun 10, 2023)
6af430c  use task (joeyballentine, Jun 10, 2023)
bba3691  sleepy time (RunDevelopment, Jun 10, 2023)
1c65810  reuse event loop, because we probably should (joeyballentine, Jun 10, 2023)
ca71c53  put_and_wait (RunDevelopment, Jun 10, 2023)
fc6be08  replace console.log with comment (joeyballentine, Jun 10, 2023)
0d0c3c5  backend linting (joeyballentine, Jun 10, 2023)
ccef953  lint + remove logs (joeyballentine, Jun 10, 2023)
7d71cb4  remove comment (joeyballentine, Jun 10, 2023)
13289ea  Merge branch 'main' into backend-deps-wip (joeyballentine, Jun 10, 2023)
2ef2413  Merge remote-tracking branch 'chaiNNer-org/main' into backend-deps-wip (joeyballentine, Jun 11, 2023)
c1570eb  retry getting python info (joeyballentine, Jun 11, 2023)
5512a2e  Don't --upgrade, allow None versions (joeyballentine, Jun 11, 2023)
4c0911a  clean up requirements.txt (joeyballentine, Jun 11, 2023)
84d8bda  Revert "clean up requirements.txt" (joeyballentine, Jun 11, 2023)
46bc112  More declarative deps (RunDevelopment, Jun 11, 2023)
9d40b89  remove unnecessary global (joeyballentine, Jun 11, 2023)
739c42d  remove comments (joeyballentine, Jun 11, 2023)
a2de212  other cleanup (joeyballentine, Jun 11, 2023)
406d989  rename some things (joeyballentine, Jun 11, 2023)
8fd7c3c  add error listener (joeyballentine, Jun 11, 2023)
2d633ba  add back loading status (joeyballentine, Jun 11, 2023)
4106d8a  Pr suggestions (joeyballentine, Jun 11, 2023)
406d046  Added `nodes_available` function (RunDevelopment, Jun 11, 2023)
750542c  Removed my stupid log.debug (RunDevelopment, Jun 11, 2023)
429700f  Update src/renderer/splash.tsx (RunDevelopment, Jun 11, 2023)
7984730  Update src/renderer/main.tsx (RunDevelopment, Jun 11, 2023)

49 changes: 46 additions & 3 deletions backend/src/api.py
@@ -2,6 +2,8 @@
 
 import importlib
 import os
+import platform
+import sys
 from dataclasses import dataclass, field
 from typing import Callable, Dict, Iterable, List, Tuple, TypedDict, TypeVar, Union
 
@@ -19,6 +21,10 @@
     typeValidateSchema,
 )
 
+KB = 1024**1
+MB = 1024**2
+GB = 1024**3
+
 
 def _process_inputs(base_inputs: Iterable[Union[BaseInput, NestedGroup]]):
     inputs: List[BaseInput] = []
@@ -174,11 +180,34 @@ def toDict(self):
         }
 
 
+@dataclass
+class Dependency:
+    display_name: str
+    package_name: str
+    version: str
+    size_estimate: int | float
+    auto_update: bool = False
+    extra_index_url: str | None = None
+
+    import_name: str | None = None
+
+    def toDict(self):
+        return {
+            "displayName": self.display_name,
+            "packageName": self.package_name,
+            "version": self.version,
+            "sizeEstimate": int(self.size_estimate),
+            "autoUpdate": self.auto_update,
+            "findLink": self.extra_index_url,
+        }
+
+
 @dataclass
 class Package:
     where: str
     name: str
-    dependencies: List[str] = field(default_factory=list)
+    description: str
+    dependencies: List[Dependency] = field(default_factory=list)
     categories: List[Category] = field(default_factory=list)
 
     def add_category(
@@ -200,6 +229,12 @@ def add_category(
         self.categories.append(result)
         return result
 
+    def add_dependency(
+        self,
+        dependency: Dependency,
+    ):
+        self.dependencies.append(dependency)
+
 
 def _iter_py_files(directory: str):
     for root, _, files in os.walk(directory):
@@ -271,5 +306,13 @@ def _refresh_nodes(self):
 registry = PackageRegistry()
 
 
-def add_package(where: str, name: str, dependencies: List[str]) -> Package:
-    return registry.add(Package(where, name, dependencies))
+def add_package(
+    where: str, name: str, description: str, dependencies: List[Dependency]
+) -> Package:
+    return registry.add(Package(where, name, description, dependencies))
+
+
+is_mac = sys.platform == "darwin"
+is_arm_mac = is_mac and platform.machine() == "arm64"
+is_windows = sys.platform == "win32"
+is_linux = sys.platform == "linux"
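
For context, a minimal sketch of how a package module would call this new API. The "Example" package, its NumPy pin, and the size estimate are illustrative assumptions, not part of this PR; only the signatures come from the diff above:

from api import MB, Dependency, add_package

# Hypothetical package definition mirroring the new add_package signature
package = add_package(
    __file__,
    name="Example",
    description="An example package with a single pip dependency.",
    dependencies=[
        Dependency(
            display_name="NumPy",
            package_name="numpy",
            version="1.23.2",
            size_estimate=15 * MB,
        ),
    ],
)

# toDict() produces the payload sent to the frontend; note that
# extra_index_url is exposed under the key "findLink":
# {'displayName': 'NumPy', 'packageName': 'numpy', 'version': '1.23.2',
#  'sizeEstimate': 15728640, 'autoUpdate': False, 'findLink': None}
print(package.dependencies[0].toDict())
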
59 changes: 59 additions & 0 deletions backend/src/gpu.py
@@ -0,0 +1,59 @@
import pynvml as nv
from sanic.log import logger

nvidia_is_available = False

try:
    nv.nvmlInit()
    nvidia_is_available = True
    nv.nvmlShutdown()
except nv.NVMLError as e:
    logger.info("No Nvidia GPU found, or invalid driver installed.")
except Exception as e:
    logger.info(f"Unknown error occurred when trying to initialize Nvidia GPU: {e}")


class NvidiaHelper:
    def __init__(self):
        self.nvidia_is_available = nvidia_is_available

        if not nvidia_is_available:
            raise RuntimeError("Nvidia GPU not found, or invalid driver installed.")

        nv.nvmlInit()

        self.__num_gpus = nv.nvmlDeviceGetCount()

        self.__gpus = []
        for i in range(self.__num_gpus):
            handle = nv.nvmlDeviceGetHandleByIndex(i)
            self.__gpus.append(
                {
                    "name": nv.nvmlDeviceGetName(handle),
                    "uuid": nv.nvmlDeviceGetUUID(handle),
                    "index": i,
                    "handle": handle,
                }
            )

    def __del__(self):
        if nvidia_is_available:
            nv.nvmlShutdown()

    def list_gpus(self):
        if not nvidia_is_available:
            return None

        return self.__gpus

    def get_current_vram_usage(self, gpu_index=0):
        if not nvidia_is_available:
            return None

        info = nv.nvmlDeviceGetMemoryInfo(self.__gpus[gpu_index]["handle"])

        return info.total, info.used, info.free


__all__ = [
    "nvidia_is_available",
    "NvidiaHelper",
]
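
A short usage sketch of this helper, assuming a machine with an Nvidia GPU and a working NVML driver; list_gpus and get_current_vram_usage are the methods defined above:

from gpu import NvidiaHelper, nvidia_is_available

if nvidia_is_available:
    helper = NvidiaHelper()
    for gpu in helper.list_gpus():
        print(gpu["index"], gpu["name"])
    # get_current_vram_usage returns (total, used, free) in bytes
    total, used, free = helper.get_current_vram_usage(0)
    print(f"GPU 0: {used / 1024**2:.0f} of {total / 1024**2:.0f} MiB VRAM in use")
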
29 changes: 29 additions & 0 deletions backend/src/installed_deps.py
@@ -0,0 +1,29 @@
import subprocess
import sys

python_path = sys.executable

# pylint: disable=global-at-module-level
global installed_packages
installed_packages = {}


def install_dependency(package_name, version):
    if package_name not in installed_packages:
        subprocess.check_call(
            [
                python_path,
                "-m",
                "pip",
                "install",
                "--upgrade",
                f"{package_name}=={version}",
            ]
        )
        installed_packages[package_name] = version


def set_installed_packages(packages):
    # pylint: disable=global-statement
    global installed_packages
    installed_packages = packages
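
And a minimal sketch of the installer in use; the onnx pin matches the one declared in the ONNX package later in this PR:

from installed_deps import install_dependency, installed_packages

# The first call shells out to `python -m pip install --upgrade onnx==1.13.0`
# and records the version; later calls for the same package are skipped.
install_dependency("onnx", "1.13.0")
assert installed_packages["onnx"] == "1.13.0"
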
7 changes: 6 additions & 1 deletion backend/src/packages/chaiNNer_external/__init__.py
@@ -2,7 +2,12 @@
 
 from api import add_package
 
-package = add_package(__file__, name="chaiNNer_external", dependencies=[])
+package = add_package(
+    __file__,
+    name="External",
+    description="Interact with an external Stable Diffusion API",
+    dependencies=[],
+)
 
 external_stable_diffusion_category = package.add_category(
     name="Stable Diffusion (External)",
18 changes: 16 additions & 2 deletions backend/src/packages/chaiNNer_ncnn/__init__.py
@@ -1,8 +1,22 @@
 from sanic.log import logger
 
-from api import add_package
+from api import MB, Dependency, add_package, is_mac
 
-package = add_package(__file__, name="chaiNNer_ncnn", dependencies=[])
+package = add_package(
+    __file__,
+    name="NCNN",
+    description="NCNN uses .bin/.param models to upscale images. NCNN uses Vulkan for GPU acceleration, meaning it supports any modern GPU. Models can be converted from PyTorch to NCNN.",
+    dependencies=[
+        Dependency(
+            "NCNN",
+            "ncnn-vulkan",
+            "2022.9.12",
+            7 * MB if is_mac else 4 * MB,
+            auto_update=True,
+            import_name="ncnn_vulkan",
+        ),
+    ],
+)
 
 ncnn_category = package.add_category(
     name="NCNN",
81 changes: 79 additions & 2 deletions backend/src/packages/chaiNNer_onnx/__init__.py
@@ -1,8 +1,85 @@
 from sanic.log import logger
 
-from api import add_package
+from api import KB, MB, Dependency, add_package, is_arm_mac
+from gpu import nvidia_is_available
 
-package = add_package(__file__, name="chaiNNer_onnx", dependencies=[])
+
+def get_onnx_runtime():
+    if is_arm_mac:
+        return Dependency(
+            display_name="ONNX Runtime (Silicon)",
+            package_name="onnxruntime-silicon",
+            version="1.13.1",
+            size_estimate=6 * MB,
+            import_name="onnxruntime",
+        )
+    elif nvidia_is_available:
+        return Dependency(
+            display_name="ONNX Runtime (GPU)",
+            package_name="onnxruntime-gpu",
+            version="1.13.1",
+            size_estimate=110 * MB,
+            import_name="onnxruntime",
+        )
+    else:
+        return Dependency(
+            display_name="ONNX Runtime",
+            package_name="onnxruntime",
+            version="1.13.1",
+            size_estimate=5 * MB,
+        )
+
+
+dependencies = [
+    Dependency(
+        display_name="ONNX",
+        package_name="onnx",
+        version="1.13.0",
+        size_estimate=12 * MB,
+    ),
+]
+
+if not is_arm_mac:
+    dependencies.append(
+        Dependency(
+            display_name="ONNX Optimizer",
+            package_name="onnxoptimizer",
+            version="0.3.6",
+            size_estimate=300 * KB,
+        )
+    )
+
+dependencies.extend(
+    [
+        get_onnx_runtime(),
+        Dependency(
+            display_name="Protobuf",
+            package_name="protobuf",
+            version="3.20.2",
+            size_estimate=500 * KB,
+        ),
+        Dependency(
+            display_name="SciPy",
+            package_name="scipy",
+            version="1.9.3",
+            size_estimate=42 * MB,
+        ),
+        Dependency(
+            display_name="Numba",
+            package_name="numba",
+            version="0.56.3",
+            size_estimate=2.5 * MB,
+        ),
+    ]
+)
+
+
+package = add_package(
+    __file__,
+    name="ONNX",
+    description="ONNX uses .onnx models to upscale images. It also helps to convert between PyTorch and NCNN. It is fastest when CUDA is supported. If TensorRT is installed on the system, it can also be configured to use that.",
+    dependencies=dependencies,
+)
 
 onnx_category = package.add_category(
     name="ONNX",
79 changes: 77 additions & 2 deletions backend/src/packages/chaiNNer_pytorch/__init__.py
@@ -1,8 +1,83 @@
+import sys
+
 from sanic.log import logger
 
-from api import add_package
+from api import GB, KB, MB, Dependency, add_package
+from gpu import nvidia_is_available
 
+python_version = sys.version_info
+
-package = add_package(__file__, name="chaiNNer_pytorch", dependencies=[])
+dependencies = []
+if python_version.minor < 10:
+    dependencies.extend(
+        [
+            Dependency(
+                display_name="PyTorch",
+                package_name="torch",
+                version="1.10.2+cu113" if nvidia_is_available else "1.10.2",
+                size_estimate=2 * GB if nvidia_is_available else 140 * MB,
+                extra_index_url="https://download.pytorch.org/whl/cu113"
+                if nvidia_is_available
+                else None,
+            ),
+            Dependency(
+                display_name="TorchVision",
+                package_name="torchvision",
+                version="0.11.3+cu113" if nvidia_is_available else "0.11.3",
+                size_estimate=2 * MB if nvidia_is_available else 800 * KB,
+                extra_index_url="https://download.pytorch.org/whl/cu113"
+                if nvidia_is_available
+                else None,
+            ),
+        ]
+    )
+elif python_version.minor >= 10:
+    dependencies.extend(
+        [
+            Dependency(
+                display_name="PyTorch",
+                package_name="torch",
+                version="1.12.1+cu116" if nvidia_is_available else "1.12.1",
+                size_estimate=2 * GB if nvidia_is_available else 140 * MB,
+                extra_index_url="https://download.pytorch.org/whl/cu116"
+                if nvidia_is_available
+                else None,
+            ),
+            Dependency(
+                display_name="TorchVision",
+                package_name="torchvision",
+                version="0.13.1+cu116" if nvidia_is_available else "0.13.1",
+                size_estimate=2 * MB if nvidia_is_available else 800 * KB,
+                extra_index_url="https://download.pytorch.org/whl/cu116"
+                if nvidia_is_available
+                else None,
+            ),
+        ]
+    )
+
+dependencies.extend(
+    [
+        Dependency(
+            display_name="FaceXLib",
+            package_name="facexlib",
+            version="0.2.5",
+            size_estimate=1.1 * MB,
+        ),
+        Dependency(
+            display_name="Einops",
+            package_name="einops",
+            version="0.5.0",
+            size_estimate=36.5 * KB,
+        ),
+    ]
+)
+
+package = add_package(
+    __file__,
+    name="PyTorch",
+    description="PyTorch uses .pth models to upscale images, and is fastest when CUDA is supported (Nvidia GPU). If CUDA is unsupported, it will install with CPU support (which is very slow).",
+    dependencies=dependencies,
+)
 
 pytorch_category = package.add_category(
     name="PyTorch",