[Clean up] Clean unused code #245
Conversation
@@ -390,7 +390,7 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
         )
     except EntryNotFoundError:
         raise EnvironmentError(
-            f"{pretrained_model_name_or_path} does not appear to have a file named {model_file}."
+            f"{pretrained_model_name_or_path} does not appear to have a file named {WEIGHTS_NAME}."
`model_file` is actually not yet defined when this error is hit.
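For context, here is a minimal sketch of the failure mode the reviewer points out. The names below (`hub_download`, `load_weights`) are illustrative stubs, not the actual diffusers code: if `model_file` is only assigned inside the `try` body, it is still unbound when `EntryNotFoundError` propagates, so formatting the old message would itself crash with an `UnboundLocalError` instead of raising the intended `EnvironmentError`.

```python
WEIGHTS_NAME = "diffusion_pytorch_model.bin"  # module-level constant, always defined


class EntryNotFoundError(Exception):
    pass


def hub_download(repo_id, filename):
    # Stub that simulates a repo without the requested weights file.
    raise EntryNotFoundError(filename)


def load_weights(pretrained_model_name_or_path):
    try:
        model_file = hub_download(pretrained_model_name_or_path, WEIGHTS_NAME)
    except EntryNotFoundError:
        # Referencing {model_file} here would raise UnboundLocalError, because
        # the assignment above never completed; {WEIGHTS_NAME} is safe to use.
        raise EnvironmentError(
            f"{pretrained_model_name_or_path} does not appear to have a file "
            f"named {WEIGHTS_NAME}."
        )
```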
import torch
import torch.nn.functional as F
from torch import nn
-class AttentionBlockNew(nn.Module):
+class AttentionBlock(nn.Module):
Let's call it `AttentionBlock` and not `AttentionBlockNew`. Solves #199.
@@ -234,7 +178,7 @@ def forward(self, x, context=None, mask=None):
         h = self.heads

         q = self.to_q(x)
-        context = default(context, x)
death to one-line functions
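The deleted `default` helper presumably looked something like the one-liner below (a reconstruction, not necessarily the exact upstream definition); the PR replaces the indirection with an explicit conditional at the call site.

```python
# Hypothetical reconstruction of the removed one-line helper:
def default(val, d):
    return val if val is not None else d

# Before: the None-fallback is hidden behind an extra function call.
#     context = default(context, x)
# After: the fallback reads directly at the call site.
#     context = context if context is not None else x
```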
@@ -280,155 +224,3 @@ def __init__(self, dim_in, dim_out):
-    def forward(self, x):
-        x, gate = self.proj(x).chunk(2, dim=-1)
-        return x * F.gelu(gate)
All of this code is not used anymore
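For reference, the deleted lines implement a GEGLU-style gated feed-forward projection. A self-contained sketch is below; the `forward` body is taken verbatim from the diff, while the `nn.Linear(dim_in, dim_out * 2)` projection is inferred from the `chunk(2, dim=-1)` call rather than copied from the deleted file.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GEGLU(nn.Module):
    """Gated GELU projection matching the shape of the deleted forward()."""

    def __init__(self, dim_in, dim_out):
        super().__init__()
        # Project to twice the output width so the result can be split
        # into a value half and a gate half.
        self.proj = nn.Linear(dim_in, dim_out * 2)

    def forward(self, x):
        x, gate = self.proj(x).chunk(2, dim=-1)
        return x * F.gelu(gate)


# Example: maps (batch, seq, 64) -> (batch, seq, 128)
y = GEGLU(64, 128)(torch.randn(2, 10, 64))
```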
-from .attention import AttentionBlockNew, SpatialTransformer
-from .resnet import Downsample2D, FirDownsample2D, FirUpsample2D, ResnetBlock, Upsample2D
+from .attention import AttentionBlock, SpatialTransformer
+from .resnet import Downsample2D, FirDownsample2D, FirUpsample2D, ResnetBlock2D, Upsample2D
2D is more in line with all the other imports here
Looks good! Thanks a lot for cleaning this.
@@ -234,7 +178,7 @@ def forward(self, x, context=None, mask=None):
         h = self.heads

         q = self.to_q(x)
-        context = default(context, x)
+        context = context if context is not None else x
Much better :D
* CleanResNet
* refactor more
* correct
This PR finishes the easy refactoring (just deleting old code).
All slow tests pass, and the stable diffusion pipeline was tested for subjective image quality; all looks good 👍
Now the big single-letter variable renaming can begin.