Merged

47 commits
714fff3
add new console frontend to initial model selection, and other improv…
lstein Feb 13, 2023
197e6b9
add missing file
lstein Feb 13, 2023
47f94bd
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 13, 2023
fbbbba2
correct crash on edge case
lstein Feb 13, 2023
9cacba9
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 13, 2023
7545e38
frontend design done; functionality not hooked up yet
lstein Feb 14, 2023
f299f40
convert existing model display to column format
lstein Feb 14, 2023
e87a2fe
model installer frontend done - needs to be hooked to backend
lstein Feb 15, 2023
1bb0779
model installer downloads starter models + user-provided paths and re…
lstein Feb 16, 2023
fe31877
bring in url download bugfix from PR 2630
lstein Feb 16, 2023
07be605
mostly working
lstein Feb 16, 2023
b1341bc
fully functional and ready for review
lstein Feb 16, 2023
314ed7d
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 16, 2023
c5cc832
check maximum value of python version as well as minimum
lstein Feb 16, 2023
6217edc
tweak wording of python version requirements
lstein Feb 16, 2023
5d617ce
rebuild front end
lstein Feb 17, 2023
f3f4c68
fix model download and autodetection bugs
lstein Feb 17, 2023
f3351a5
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 17, 2023
0963bbb
rebuild frontend after merge conflict
lstein Feb 17, 2023
d69156c
remove superseded code
lstein Feb 17, 2023
65a7432
disable xformers if cuda not available
lstein Feb 17, 2023
c55bbd1
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 17, 2023
587faa3
preparation for startup option editor
lstein Feb 17, 2023
e5646d7
both forms functional; need integration
lstein Feb 19, 2023
9d8236c
tested and working on Ubuntu
lstein Feb 19, 2023
e1a85d8
fix incorrect passing of precision to model installer
lstein Feb 19, 2023
ca10d06
show title of add models screen
lstein Feb 19, 2023
5461318
clean up diagnostic messages
lstein Feb 20, 2023
7beebc3
resolved conflicts; ran black and isort
lstein Feb 20, 2023
a4c0dfb
fix broken --ckpt_convert option
lstein Feb 20, 2023
7d77fb9
fixed --default_only behavior
lstein Feb 20, 2023
702da71
swap y/n values for broken model reconfiguration prompt
lstein Feb 20, 2023
e852ad0
fix bug that prevented converted files from being written into models…
lstein Feb 20, 2023
58be915
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 20, 2023
55dce6c
remove more dead code
lstein Feb 20, 2023
3795b40
implemented the following fixes:
lstein Feb 21, 2023
27a2e27
fix crash when installed models < number columns
lstein Feb 21, 2023
fff41a7
merged with main
lstein Feb 21, 2023
d01e239
fix problem that was causing CI failures
lstein Feb 21, 2023
4878c7a
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 21, 2023
5a49675
reformat with black and isort
lstein Feb 21, 2023
9b1a7b5
add "hit any key to exit" pause at end of install
lstein Feb 22, 2023
6b7be4e
remove dangling debug statement
lstein Feb 22, 2023
972aecc
fix responsive resizing
lstein Feb 22, 2023
609bb19
fixes to resizing and init file editing
lstein Feb 22, 2023
168a51c
fix textual inversion output directory path
lstein Feb 22, 2023
16aea1e
Merge branch 'main' into install/refactor-configure-and-model-select
lstein Feb 22, 2023
2 changes: 2 additions & 0 deletions installer/install.bat.in
@@ -67,6 +67,8 @@ del /q .tmp1 .tmp2
 @rem -------------- Install and Configure ---------------
 
 call python .\lib\main.py
+pause
+exit /b
 
 @rem ------------------------ Subroutines ---------------
 @rem routine to do comparison of semantic version numbers
10 changes: 7 additions & 3 deletions installer/install.sh.in
@@ -9,13 +9,16 @@ cd $scriptdir
 function version { echo "$@" | awk -F. '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }
 
 MINIMUM_PYTHON_VERSION=3.9.0
+MAXIMUM_PYTHON_VERSION=3.11.0
 PYTHON=""
-for candidate in python3.10 python3.9 python3 python python3.11 ; do
+for candidate in python3.10 python3.9 python3 python ; do
     if ppath=`which $candidate`; then
         python_version=$($ppath -V | awk '{ print $2 }')
         if [ $(version $python_version) -ge $(version "$MINIMUM_PYTHON_VERSION") ]; then
-            PYTHON=$ppath
-            break
+            if [ $(version $python_version) -lt $(version "$MAXIMUM_PYTHON_VERSION") ]; then
+                PYTHON=$ppath
+                break
+            fi
         fi
     fi
 done
@@ -28,3 +31,4 @@ if [ -z "$PYTHON" ]; then
 fi
 
 exec $PYTHON ./lib/main.py ${@}
+read -p "Press any key to exit"
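The `version` helper above compares semantic versions by zero-padding each dot-separated component into one large integer, and the installer now accepts a Python only when it satisfies both the minimum (`-ge`) and maximum (`-lt`) bound. A rough Python equivalent of that gate, with hypothetical helper names for illustration, might look like:

```python
def version_key(v: str) -> int:
    """Mirror the awk printf("%d%03d%03d%03d") trick: pad up to four
    dot-separated components, e.g. "3.10.6" -> 3010006000."""
    parts = (v.split(".") + ["0", "0", "0", "0"])[:4]
    key = int(parts[0])
    for p in parts[1:]:
        key = key * 1000 + int(p)
    return key


def python_ok(v: str, minimum: str = "3.9.0", maximum: str = "3.11.0") -> bool:
    # Accepted when minimum <= v < maximum, matching the -ge / -lt tests
    # in the shell script above.
    return version_key(minimum) <= version_key(v) < version_key(maximum)
```

Note the maximum is exclusive: a `python3.11` interpreter reports a version `-ge` 3.11.0 and is rejected, which is consistent with dropping `python3.11` from the candidate list.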
32 changes: 20 additions & 12 deletions installer/templates/invoke.bat.in
@@ -11,11 +11,13 @@ echo 1. command-line
 echo 2. browser-based UI
 echo 3. run textual inversion training
 echo 4. merge models (diffusers type only)
-echo 5. re-run the configure script to download new models
-echo 6. update InvokeAI
-echo 7. open the developer console
-echo 8. command-line help
-set /P restore="Please enter 1, 2, 3, 4, 5, 6, 7 or 8: [2] "
+echo 5. download and install models
+echo 6. change InvokeAI startup options
+echo 7. re-run the configure script to fix a broken install
+echo 8. open the developer console
+echo 9. update InvokeAI
+echo 10. command-line help
+set /P restore="Please enter 1-10: [2] "
 if not defined restore set restore=2
 IF /I "%restore%" == "1" (
     echo Starting the InvokeAI command-line..
@@ -25,17 +27,20 @@ IF /I "%restore%" == "1" (
     python .venv\Scripts\invokeai.exe --web %*
 ) ELSE IF /I "%restore%" == "3" (
     echo Starting textual inversion training..
-    python .venv\Scripts\invokeai-ti.exe --gui %*
+    python .venv\Scripts\invokeai-ti.exe --gui
 ) ELSE IF /I "%restore%" == "4" (
     echo Starting model merging script..
-    python .venv\Scripts\invokeai-merge.exe --gui %*
+    python .venv\Scripts\invokeai-merge.exe --gui
 ) ELSE IF /I "%restore%" == "5" (
-    echo Running invokeai-configure...
-    python .venv\Scripts\invokeai-configure.exe %*
+    echo Running invokeai-model-install...
+    python .venv\Scripts\invokeai-model-install.exe
 ) ELSE IF /I "%restore%" == "6" (
-    echo Running invokeai-update...
-    python .venv\Scripts\invokeai-update.exe %*
+    echo Running invokeai-configure...
+    python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
 ) ELSE IF /I "%restore%" == "7" (
+    echo Running invokeai-configure...
+    python .venv\Scripts\invokeai-configure.exe --yes --default_only
+) ELSE IF /I "%restore%" == "8" (
     echo Developer Console
     echo Python command is:
     where python
@@ -47,7 +52,10 @@ IF /I "%restore%" == "1" (
     echo *************************
     echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
     call cmd /k
-) ELSE IF /I "%restore%" == "8" (
+) ELSE IF /I "%restore%" == "9" (
+    echo Running invokeai-update...
+    python .venv\Scripts\invokeai-update.exe %*
+) ELSE IF /I "%restore%" == "10" (
     echo Displaying command line help...
     python .venv\Scripts\invokeai.exe --help %*
     pause
29 changes: 18 additions & 11 deletions installer/templates/invoke.sh.in
@@ -30,12 +30,14 @@ if [ "$0" != "bash" ]; then
     echo "2. browser-based UI"
     echo "3. run textual inversion training"
     echo "4. merge models (diffusers type only)"
-    echo "5. re-run the configure script to download new models"
-    echo "6. update InvokeAI"
-    echo "7. open the developer console"
-    echo "8. command-line help"
+    echo "5. download and install models"
+    echo "6. change InvokeAI startup options"
+    echo "7. re-run the configure script to fix a broken install"
+    echo "8. open the developer console"
+    echo "9. update InvokeAI"
+    echo "10. command-line help "
     echo ""
-    read -p "Please enter 1, 2, 3, 4, 5, 6, 7 or 8: [2] " yn
+    read -p "Please enter 1-10: [2] " yn
     choice=${yn:='2'}
     case $choice in
         1)
@@ -55,19 +57,24 @@ if [ "$0" != "bash" ]; then
             exec invokeai-merge --gui $@
             ;;
         5)
-            echo "Configuration:"
-            exec invokeai-configure --root ${INVOKEAI_ROOT}
+            exec invokeai-model-install --root ${INVOKEAI_ROOT}
             ;;
         6)
-            echo "Update:"
-            exec invokeai-update
+            exec invokeai-configure --root ${INVOKEAI_ROOT} --skip-sd-weights --skip-support-models
             ;;
         7)
-            echo "Developer Console:"
+            exec invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only
+            ;;
+        8)
+            echo "Developer Console:"
             file_name=$(basename "${BASH_SOURCE[0]}")
            bash --init-file "$file_name"
             ;;
-        8)
+        9)
+            echo "Update:"
+            exec invokeai-update
+            ;;
+        10)
             exec invokeai --help
             ;;
         *)
30 changes: 0 additions & 30 deletions invokeai/configs/INITIAL_MODELS.yaml
@@ -56,33 +56,3 @@ trinart-2.0:
   vae:
     repo_id: stabilityai/sd-vae-ft-mse
   recommended: False
-trinart-characters-2_0:
-  description: An SD model finetuned with 19.2M anime/manga style images (ckpt version) (4.27 GB)
-  repo_id: naclbit/trinart_derrida_characters_v2_stable_diffusion
-  config: v1-inference.yaml
-  file: derrida_final.ckpt
-  format: ckpt
-  vae:
-    repo_id: naclbit/trinart_derrida_characters_v2_stable_diffusion
-    file: autoencoder_fix_kl-f8-trinart_characters.ckpt
-  width: 512
-  height: 512
-  recommended: False
-ft-mse-improved-autoencoder-840000:
-  description: StabilityAI improved autoencoder fine-tuned for human faces. Improves legacy .ckpt models (335 MB)
-  repo_id: stabilityai/sd-vae-ft-mse-original
-  format: ckpt
-  config: VAE/default
-  file: vae-ft-mse-840000-ema-pruned.ckpt
-  width: 512
-  height: 512
-  recommended: True
-trinart_vae:
-  description: Custom autoencoder for trinart_characters for legacy .ckpt models only (335 MB)
-  repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
-  config: VAE/trinart
-  format: ckpt
-  file: autoencoder_fix_kl-f8-trinart_characters.ckpt
-  width: 512
-  height: 512
-  recommended: False
16 changes: 11 additions & 5 deletions ldm/generate.py
@@ -211,7 +211,7 @@ def __init__(
         Globals.full_precision = self.precision == "float32"
 
         if is_xformers_available():
-            if not Globals.disable_xformers:
+            if torch.cuda.is_available() and not Globals.disable_xformers:
                 print(">> xformers memory-efficient attention is available and enabled")
             else:
                 print(
@@ -221,9 +221,13 @@ def __init__(
             print(">> xformers not installed")
 
         # model caching system for fast switching
-        self.model_manager = ModelManager(mconfig, self.device, self.precision,
-                                          max_loaded_models=max_loaded_models,
-                                          sequential_offload=self.free_gpu_mem)
+        self.model_manager = ModelManager(
+            mconfig,
+            self.device,
+            self.precision,
+            max_loaded_models=max_loaded_models,
+            sequential_offload=self.free_gpu_mem,
+        )
         # don't accept invalid models
         fallback = self.model_manager.default_model() or FALLBACK_MODEL_NAME
         model = model or fallback
@@ -246,7 +250,7 @@ def __init__(
         # load safety checker if requested
         if safety_checker:
            try:
-                print(">> Initializing safety checker")
+                print(">> Initializing NSFW checker")
                from diffusers.pipelines.stable_diffusion.safety_checker import (
                    StableDiffusionSafetyChecker,
                )
@@ -270,6 +274,8 @@ def __init__(
                 "** An error was encountered while installing the safety checker:"
             )
             print(traceback.format_exc())
+        else:
+            print(">> NSFW checker is disabled")
 
     def prompt2png(self, prompt, outdir, **kwargs):
         """
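The ldm/generate.py change gates xformers on CUDA availability as well as on the user's disable flag, since xformers' memory-efficient attention only helps on CUDA devices. Stripped of the InvokeAI plumbing, the decision reduces to a three-way conjunction; the function name below is hypothetical, for illustration only:

```python
def xformers_enabled(
    xformers_available: bool,
    cuda_available: bool,
    disable_xformers: bool,
) -> bool:
    """xformers attention is used only when the package is installed,
    CUDA is actually present, and the user has not disabled it."""
    return xformers_available and cuda_available and not disable_xformers
```

In the real code, `xformers_available` comes from diffusers' `is_xformers_available()`, `cuda_available` from `torch.cuda.is_available()`, and the disable flag from `Globals.disable_xformers`.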