Conversation

lstein
Collaborator

@lstein lstein commented Feb 28, 2023

All Python code has been moved under invokeai. All vestiges of ldm and ldm.invoke are now gone.

You will need to run `pip install -e .` before the code will work again!

Everything seems to be functional, but extensive testing is advised.

A guide to where the files have gone is forthcoming.

This is the first phase of a major restructuring of files and directories in the source tree.


Here's what's in the current commit:

1) Remove a lot of dead code that dealt with checkpoint and safetensor loading.
2) Entire ckpt_generator hierarchy is now gone!
3) ldm.invoke.generator.*   => invokeai.generator.*
4) ldm.model.*              => invokeai.models.*
5) ldm.invoke.model_manager => invokeai.models.model_manager

6) In addition, a number of frequently-accessed classes can be imported
   from the invokeai.models and invokeai.generator modules:

   from invokeai.generator import ( Generator, PipelineIntermediateState,
                                    StableDiffusionGeneratorPipeline, infill_methods)

   from invokeai.models import ( ModelManager, SDLegacyType,
                                 InvokeAIDiffuserComponent, AttentionMapSaver,
                                 DDIMSampler, KSampler, PLMSSampler,
                                 PostprocessingSettings )
@lstein lstein marked this pull request as draft February 28, 2023 05:46
@lstein lstein marked this pull request as ready for review February 28, 2023 13:38
@psychedelicious
Collaborator

  1. is git history/blame retained through the reorganisation?
  2. i have a large PR [ui]: migrate all styling to chakra-ui theme #2814 WIP for the frontend. i'll probably be finished by friday if not sooner. i would really like to get that into main and then have the reorganisation be after that. rebasing this PR on a different folder structure will be very tedious.

@Kyle0654
Contributor

  1. is git history/blame retained through the reorganisation?
  2. i have a large PR [ui]: migrate all styling to chakra-ui theme #2814 WIP for the frontend. i'll probably be finished by friday if not sooner. i would really like to get that into main and then have the reorganisation be after that. rebasing this PR on a different folder structure will be very tedious.

Sounds like frontend files may not be moving?

@psychedelicious
Collaborator

Sounds like frontend files may not be moving?

oh, you're right. all good

@lstein
Collaborator Author

lstein commented Mar 1, 2023

  • is git history/blame retained through the reorganisation?
  • i have a large PR [ui]: migrate all styling to chakra-ui theme #2814 WIP for the frontend. i'll probably be finished by friday if not sooner. i would really like to get that into main and then have the reorganisation be after that. rebasing this PR on a different folder structure will be very tedious.

The identity of the files remains intact through the reorganization, and git blame stays the same. If a file has been moved you can still often, but not always, merge onto it without conflicts. That said, the frontend and backend/webui files have not moved at all, and aren't going to. Everything else is moving to join them!

@Kyle0654
Contributor

Kyle0654 commented Mar 1, 2023

Looks like some __pycache__ snuck in.

@Kyle0654
Contributor

Kyle0654 commented Mar 1, 2023

It feels like the Stable Diffusion code should probably go in its own module under the backend folder, to keep it separate from app code (e.g. CLI or API). I noticed this most in the /invokeai/backend/models folder, where models here is referring to diffusion models, but models in an API usually refers to the data models used for serializing.

Then you could have /invokeai/backend/app with ~/cli, ~/api, and ~/core (or whatever makes sense).

Two popular templates are roughly organized this way:
https://github.com/Buuntu/fastapi-react
https://github.com/arthurhenrique/cookiecutter-fastapi

@lstein
Collaborator Author

lstein commented Mar 1, 2023

Looks like some __pycache__ snuck in.

Removed

@lstein
Collaborator Author

lstein commented Mar 1, 2023

It feels like the Stable Diffusion code should probably go in its own module under the backend folder, to keep it separate from app code (e.g. CLI or API). I noticed this most in the /invokeai/backend/models folder, where models here is referring to diffusion models, but models in an API usually refers to the data models used for serializing.

Then you could have /invokeai/backend/app with ~/cli, ~/api, and ~/core (or whatever makes sense).

Two popular templates are roughly organized this way: https://github.com/Buuntu/fastapi-react https://github.com/arthurhenrique/cookiecutter-fastapi

How about invokeai/backend/ldm/models ? I don't want to confuse people too much by renaming things too radically. Then the modules would move to invokeai/backend/ldm/modules and the utilities to invokeai/backend/ldm/util.

@Kyle0654
Contributor

Kyle0654 commented Mar 1, 2023

How about invokeai/backend/ldm/models ? I don't want to confuse people too much by renaming things too radically. Then the modules would move to invokeai/backend/ldm/modules and the utilities to invokeai/backend/ldm/util.

Things are moving around a bunch already. No better time to change names. Might be a good idea to separate from the ldm name too? Didn't diffusers supersede that?

@lstein
Collaborator Author

lstein commented Mar 2, 2023

How about invokeai/backend/ldm/models ? I don't want to confuse people too much by renaming things too radically. Then the modules would move to invokeai/backend/ldm/modules and the utilities to invokeai/backend/ldm/util.

Things are moving around a bunch already. No better time to change names. Might be a good idea to separate from the ldm name too? Didn't diffusers supersede that?

The diffusers library implements latent diffusion models, so I think we can legitimately keep the ldm name. The structure is now invokeai/backend/ldm/{models, modules}.

@damian0815
Contributor

as i said in discord, just gonna hit "approve" on this

@lstein lstein changed the title first phase of source tree restructure Final phase of source tree restructure Mar 3, 2023

@damian0815 damian0815 left a comment (Contributor)

👍

@ebr ebr left a comment (Member)

new structure makes a lot of sense 💯 👍

@Kyle0654
Contributor

Kyle0654 commented Mar 3, 2023

Could you undo the auto code formatting on at least the invocations (and add whatever needs to be added to avoid it)? It changed this:

    prompt: Optional[str]     = Field(description="The prompt to generate an image from")
    seed: int                 = Field(default=-1, ge=-1, le=np.iinfo(np.uint32).max, description="The seed to use (-1 for a random seed)")
    steps: int                = Field(default=10, gt=0, description="The number of steps to use to generate the image")
    width: int                = Field(default=512, multiple_of=64, gt=0, description="The width of the resulting image")
    height: int               = Field(default=512, multiple_of=64, gt=0, description="The height of the resulting image")
    cfg_scale: float          = Field(default=7.5, gt=0, description="The Classifier-Free Guidance, higher values may result in a result closer to the prompt")
    sampler_name: SAMPLER_NAME_VALUES = Field(default="k_lms", description="The sampler to use")
    seamless: bool            = Field(default=False, description="Whether or not to generate an image that can tile without seams")
    model: str                = Field(default='', description="The model to use (currently ignored)")
    progress_images: bool     = Field(default=False, description="Whether or not to produce progress images during generation")

Into this:

    prompt: Optional[str] = Field(description="The prompt to generate an image from")
    seed: int = Field(
        default=-1,
        ge=-1,
        le=np.iinfo(np.uint32).max,
        description="The seed to use (-1 for a random seed)",
    )
    steps: int = Field(
        default=10, gt=0, description="The number of steps to use to generate the image"
    )
    width: int = Field(
        default=512,
        multiple_of=64,
        gt=0,
        description="The width of the resulting image",
    )
    height: int = Field(
        default=512,
        multiple_of=64,
        gt=0,
        description="The height of the resulting image",
    )
    cfg_scale: float = Field(
        default=7.5,
        gt=0,
        description="The Classifier-Free Guidance, higher values may result in a result closer to the prompt",
    )
    sampler_name: SAMPLER_NAME_VALUES = Field(
        default="k_lms", description="The sampler to use"
    )
    seamless: bool = Field(
        default=False,
        description="Whether or not to generate an image that can tile without seams",
    )
    model: str = Field(default="", description="The model to use (currently ignored)")
    progress_images: bool = Field(
        default=False,
        description="Whether or not to produce progress images during generation",
    )

IMO I prefer the original formatting, especially since the latter isn't even consistent (e.g. steps has its arguments consolidated onto a single line, whereas cfg_scale splits each parameter onto a separate line).
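For what it's worth, the inconsistency appears deterministic rather than arbitrary: assuming the formatter is Black with its default 88-character line limit, a call whose arguments fit together on a single indented line is kept on one line, and otherwise each argument is exploded onto its own line. A quick length check on the two argument strings from the diff above illustrates the difference:

```python
# Assumes Black's defaults: 88-character lines, with arguments indented
# 8 spaces inside a class-body field definition.
LIMIT = 88
INDENT = " " * 8

steps_args = 'default=10, gt=0, description="The number of steps to use to generate the image"'
cfg_args = (
    'default=7.5, gt=0, description="The Classifier-Free Guidance, '
    'higher values may result in a result closer to the prompt"'
)

for name, args in [("steps", steps_args), ("cfg_scale", cfg_args)]:
    # If the indented argument line fits within the limit, Black keeps the
    # arguments together; otherwise it splits one argument per line.
    print(f"{name}: fits on one line = {len(INDENT + args) <= LIMIT}")
```

The steps arguments squeak in under the limit while the longer cfg_scale description does not, which would account for the two different shapes in the reformatted output.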

@Kyle0654
Contributor

Kyle0654 commented Mar 3, 2023

It's also a nitpick, but a lot of what's in app would typically go in backend in a client-server application (with the client living in frontend). We have a CLI as well (as opposed to the traditional API and web client), which complicates things some, but I do feel like having all the Python code in the same folder (to the extent that you could potentially even split off backend into its own repo) would make sense.

@lstein
Collaborator Author

lstein commented Mar 3, 2023

It's also a nitpick, but a lot of what's in app would typically go in backend in a client-server application (with the client living in frontend). We have a CLI as well (as opposed to the traditional API and web client), which complicates things some, but I do feel like having all the Python code in the same folder (to the extent that you could potentially even split off backend into its own repo) would make sense.

My thinking on this was to do the generate refactor (which means reimplementing nodes to run off of generator classes directly) and to integrate the node modules into the frontend/backend directories once the migration is finished.

@lstein
Collaborator Author

lstein commented Mar 3, 2023

Done. Note that the way to disable autoformatting is like this:

    # fmt: off
    code code code
    # fmt: on
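Applied in context, a minimal runnable sketch (plain assignments stand in for the pydantic fields here; the markers are ordinary comments, so they are inert at runtime):

```python
# fmt: off
# Black leaves everything between the markers untouched, so hand-aligned
# definitions like the invocation fields survive reformatting.
width  = 512   # aligned by hand
height = 512   # Black would normally collapse the extra spaces
# fmt: on

print(width, height)
```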

@lstein lstein merged commit b3dccfa into main Mar 3, 2023
@lstein lstein deleted the refactor/move-models-and-generators branch March 3, 2023 20:05
@Kyle0654
Contributor

Kyle0654 commented Mar 3, 2023

Do you mind hitting the rest of the invocations as well? E.g. the Img2Img one.

Ack looks like I was too late x.x

@lstein
Collaborator Author

lstein commented Mar 3, 2023

Do you mind hitting the rest of the invocations as well? E.g. the Img2Img one.

Ack looks like I was too late x.x

Oh shoot. I thought all the invocations were at the bottom of the files. Will make a PR.

@lstein lstein restored the refactor/move-models-and-generators branch March 3, 2023 22:18
@mickr777
Contributor

mickr777 commented Mar 3, 2023

@lstein fresh install on main, when creating an image in txt2img (works fine on unified canvas and img2img):

Traceback (most recent call last):
  File "/home/invokeuser/InvokeAI/invokeai/backend/web/invoke_ai_web_server.py", line 1313, in generate_images
    self.generate.prompt2image(
  File "/home/invokeuser/InvokeAI/invokeai/backend/generate.py", line 539, in prompt2image
    generator = self.select_generator(
  File "/home/invokeuser/InvokeAI/invokeai/backend/generate.py", line 818, in select_generator
    return self._make_txt2img2img()
  File "/home/invokeuser/InvokeAI/invokeai/backend/generate.py", line 894, in _make_txt2img2img
    return self._load_generator(".txt2img2img", "Txt2Img2Img")
  File "/home/invokeuser/InvokeAI/invokeai/backend/generate.py", line 902, in _load_generator
    module = importlib.import_module(mn)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/invokeuser/InvokeAI/invokeai/backend/generator/txt2img2img.py", line 11, in <module>
    from ..models import PostprocessingSettings
ModuleNotFoundError: No module named 'invokeai.backend.models'

#2856
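Stale paths like this only fail when the module is first imported, so they can slip past a reorganization unnoticed. A small stdlib-only smoke test (a hypothetical helper, not part of the codebase) that reports whether a module path still resolves can catch them before they surface in the web server:

```python
import importlib

def import_ok(module_name: str) -> bool:
    """Return True if the module imports cleanly, False on ModuleNotFoundError."""
    try:
        importlib.import_module(module_name)
        return True
    except ModuleNotFoundError:
        return False

# A real module resolves; a path left behind by the reorganization does not.
print(import_ok("json"))
print(import_ok("invokeai.backend.models"))
```

Running a check like this over every old-to-new path in the mapping list would have flagged `invokeai.backend.models` as missing immediately after the move.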

@lstein lstein deleted the refactor/move-models-and-generators branch April 11, 2023 15:14