Conversation

@sourcery-ai sourcery-ai bot commented Aug 11, 2022

Branch main refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin:

Review changes via command line

To manually merge these changes, make sure you're on the main branch, then run:

git fetch origin sourcery/main
git merge --ff-only FETCH_HEAD
git reset HEAD^

Help us improve this pull request!

@sourcery-ai sourcery-ai bot requested a review from edson-github August 11, 2022 21:20
@sourcery-ai sourcery-ai bot left a comment


Due to GitHub API limits, only the first 60 comments can be shown.

Comment on lines -60 to +63
"Parameters can be overwritten or added with command-line options of the form `--key value`.",
default=list(),
"Parameters can be overwritten or added with command-line options of the form `--key value`.",
default=[],
)


Function get_parser refactored with the following changes:

Comment on lines -168 to +169
-         self.dataset_configs = dict()
+         self.dataset_configs = {}

Function DataModuleFromConfig.__init__ refactored with the following changes:

Comment on lines -190 to +195
-         self.datasets = dict(
-             (k, instantiate_from_config(self.dataset_configs[k]))
-             for k in self.dataset_configs)
+         self.datasets = {
+             k: instantiate_from_config(self.dataset_configs[k])
+             for k in self.dataset_configs
+         }


Function DataModuleFromConfig.setup refactored with the following changes:
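As a generic illustration of this dict-comprehension rewrite (the config values below are hypothetical stand-ins, and `instantiate_from_config` is mocked rather than the repo's real factory):

```python
def instantiate_from_config(cfg):
    # mocked stand-in for the real factory; it just echoes its input
    return f"instance of {cfg}"

dataset_configs = {"train": "TrainCfg", "validation": "ValCfg"}

# before: dict() consuming a generator of (key, value) tuples
datasets_old = dict(
    (k, instantiate_from_config(dataset_configs[k])) for k in dataset_configs
)

# after: the equivalent dict comprehension
datasets_new = {
    k: instantiate_from_config(dataset_configs[k]) for k in dataset_configs
}

assert datasets_old == datasets_new
```

Both forms build the same mapping; the comprehension just states the key/value relationship directly instead of routing it through tuples.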

Comment on lines -203 to +212
-         return DataLoader(self.datasets["train"], batch_size=self.batch_size,
-                           num_workers=self.num_workers, shuffle=False if is_iterable_dataset else True,
-                           worker_init_fn=init_fn)
+         return DataLoader(
+             self.datasets["train"],
+             batch_size=self.batch_size,
+             num_workers=self.num_workers,
+             shuffle=not is_iterable_dataset,
+             worker_init_fn=init_fn,
+         )

Function DataModuleFromConfig._train_dataloader refactored with the following changes:
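The `shuffle` change is the classic `False if cond else True` → `not cond` simplification; a minimal sketch (the function names here are ours, not the repo's):

```python
def shuffle_flag_old(is_iterable_dataset):
    # before: a conditional expression that spells out both booleans
    return False if is_iterable_dataset else True

def shuffle_flag_new(is_iterable_dataset):
    # after: plain negation says the same thing
    return not is_iterable_dataset

for flag in (True, False):
    assert shuffle_flag_old(flag) == shuffle_flag_new(flag)
```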

Comment on lines -264 to +299
if "callbacks" in self.lightning_config:
if 'metrics_over_trainsteps_checkpoint' in self.lightning_config['callbacks']:
os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True)
if (
"callbacks" in self.lightning_config
and 'metrics_over_trainsteps_checkpoint'
in self.lightning_config['callbacks']
):
os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True)
print("Project config")
print(OmegaConf.to_yaml(self.config))
OmegaConf.save(self.config,
os.path.join(self.cfgdir, "{}-project.yaml".format(self.now)))
OmegaConf.save(
self.config, os.path.join(self.cfgdir, f"{self.now}-project.yaml")
)


print("Lightning config")
print(OmegaConf.to_yaml(self.lightning_config))
OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}),
os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now)))
OmegaConf.save(
OmegaConf.create({"lightning": self.lightning_config}),
os.path.join(self.cfgdir, f"{self.now}-lightning.yaml"),
)

else:
# ModelCheckpoint callback created log directory --- remove it
if not self.resume and os.path.exists(self.logdir):
dst, name = os.path.split(self.logdir)
dst = os.path.join(dst, "child_runs", name)
os.makedirs(os.path.split(dst)[0], exist_ok=True)
try:
os.rename(self.logdir, dst)
except FileNotFoundError:
pass

elif not self.resume and os.path.exists(self.logdir):
dst, name = os.path.split(self.logdir)
dst = os.path.join(dst, "child_runs", name)
os.makedirs(os.path.split(dst)[0], exist_ok=True)
try:
os.rename(self.logdir, dst)
except FileNotFoundError:
pass

Function SetupCallback.on_pretrain_routine_start refactored with the following changes:

This removes the following comments (why?):

# ModelCheckpoint callback created log directory --- remove it

Comment on lines -53 to -57
-         interval = 0
-         for cl in self.cum_cycles[1:]:
+         for interval, cl in enumerate(self.cum_cycles[1:]):
              if n <= cl:
                  return interval
-             interval += 1

Function LambdaWarmUpCosineScheduler2.find_in_interval refactored with the following changes:
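The manual counter is replaced by `enumerate`, which yields the index alongside each value. A dependency-free sketch with made-up cycle boundaries:

```python
def find_in_interval_old(cum_cycles, n):
    # before: counter incremented by hand alongside the loop variable
    interval = 0
    for cl in cum_cycles[1:]:
        if n <= cl:
            return interval
        interval += 1

def find_in_interval_new(cum_cycles, n):
    # after: enumerate() produces the counter and the value together
    for interval, cl in enumerate(cum_cycles[1:]):
        if n <= cl:
            return interval

cum_cycles = [0, 10, 25, 60]  # illustrative cumulative cycle lengths
assert all(
    find_in_interval_old(cum_cycles, n) == find_in_interval_new(cum_cycles, n)
    for n in range(61)
)
```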

Comment on lines -62 to +71
-         if self.verbosity_interval > 0:
-             if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
-                                                        f"current cycle {cycle}")
+         if self.verbosity_interval > 0 and n % self.verbosity_interval == 0:
+             print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
+                   f"current cycle {cycle}")
          if n < self.lr_warm_up_steps[cycle]:
              f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
-             self.last_f = f
-             return f
          else:
              t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle])
              t = min(t, 1.0)
              f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * (
                      1 + np.cos(t * np.pi))
-             self.last_f = f
-             return f
+         self.last_f = f
+         return f

Function LambdaWarmUpCosineScheduler2.schedule refactored with the following changes:
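The schedule rewrite hoists the duplicated `self.last_f = f; return f` tail out of both branches. A single-cycle sketch under simplifying assumptions (the real scheduler indexes `f_start`, `f_max`, etc. per cycle; here state is a plain dict and all parameters are scalars):

```python
import math

def schedule_old(state, n, warm_up, f_start, f_max, f_min, cycle_len):
    # before: each branch stores and returns f separately
    if n < warm_up:
        f = (f_max - f_start) / warm_up * n + f_start
        state["last_f"] = f
        return f
    else:
        t = min((n - warm_up) / (cycle_len - warm_up), 1.0)
        f = f_min + 0.5 * (f_max - f_min) * (1 + math.cos(t * math.pi))
        state["last_f"] = f
        return f

def schedule_new(state, n, warm_up, f_start, f_max, f_min, cycle_len):
    # after: branches only compute f; the shared tail lives once, below
    if n < warm_up:
        f = (f_max - f_start) / warm_up * n + f_start
    else:
        t = min((n - warm_up) / (cycle_len - warm_up), 1.0)
        f = f_min + 0.5 * (f_max - f_min) * (1 + math.cos(t * math.pi))
    state["last_f"] = f
    return f

for n in (5, 100):
    assert schedule_old({}, n, 10, 0.0, 1.0, 0.1, 100) == \
           schedule_new({}, n, 10, 0.0, 1.0, 0.1, 100)
```

Deduplicating the tail means a future change to the bookkeeping (say, logging `last_f`) only has to be made in one place.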

Comment on lines -86 to +91
-         if self.verbosity_interval > 0:
-             if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
-                                                        f"current cycle {cycle}")
+         if self.verbosity_interval > 0 and n % self.verbosity_interval == 0:
+             print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
+                   f"current cycle {cycle}")
          if n < self.lr_warm_up_steps[cycle]:
              f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
-             self.last_f = f
-             return f
          else:
              f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle])
-             self.last_f = f
-             return f
+         self.last_f = f
+         return f

Function LambdaLinearScheduler.schedule refactored with the following changes:

# xc a list of captions to plot
b = len(xc)
-     txts = list()
+     txts = []

Function log_txt_as_img refactored with the following changes:

Comment on lines -42 to +46
-     if not isinstance(x, torch.Tensor):
-         return False
-     return (len(x.shape) == 4) and (x.shape[1] > 3)
+     return (
+         (len(x.shape) == 4) and (x.shape[1] > 3)
+         if isinstance(x, torch.Tensor)
+         else False
+     )

Function ismap refactored with the following changes:
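The guard-plus-return pair collapses into one conditional expression. A dependency-free sketch, using a plain class in place of `torch.Tensor` so it runs without PyTorch:

```python
class FakeTensor:
    """Stand-in for torch.Tensor: only carries a .shape tuple."""
    def __init__(self, shape):
        self.shape = shape

def ismap_old(x):
    # before: early-return guard, then the real check
    if not isinstance(x, FakeTensor):
        return False
    return (len(x.shape) == 4) and (x.shape[1] > 3)

def ismap_new(x):
    # after: one conditional expression covers both cases
    return (
        (len(x.shape) == 4) and (x.shape[1] > 3)
        if isinstance(x, FakeTensor)
        else False
    )

samples = [FakeTensor((2, 4, 8, 8)), FakeTensor((2, 3, 8, 8)), "not a tensor"]
assert [ismap_old(s) for s in samples] == [ismap_new(s) for s in samples]
```

Whether the conditional-expression form is actually clearer than the guard clause is a matter of taste; the two are behaviorally identical.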

Comment on lines -48 to +54
-     if not isinstance(x, torch.Tensor):
-         return False
-     return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
+     return (
+         len(x.shape) == 4 and x.shape[1] in [3, 1]
+         if isinstance(x, torch.Tensor)
+         else False
+     )

Function isimage refactored with the following changes:

Comment on lines -100 to +104
-         if idx_to_fn:
-             res = func(data, worker_id=idx)
-         else:
-             res = func(data)
+         res = func(data, worker_id=idx) if idx_to_fn else func(data)

Function _do_parallel_data_prefetch refactored with the following changes:

Comment on lines -120 to +125
-             f'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.'
+             'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.'
          )
          data = list(data.values())
-     if target_data_type == "ndarray":
-         data = np.asarray(data)
-     else:
-         data = list(data)
+     data = np.asarray(data) if target_data_type == "ndarray" else list(data)

Function parallel_data_prefetch refactored with the following changes:

This removes the following comments (why?):

# order outputs

with open(path_to_yaml) as f:
di2s = yaml.load(f)
-     return dict((v,k) for k,v in di2s.items())
+     return {v: k for k,v in di2s.items()}

Function synset2idx refactored with the following changes:
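Inverting a dict via `dict()` over swapped tuples becomes a comprehension. A standalone check (the synset-style keys below are purely illustrative):

```python
di2s = {"n01440764": "tench", "n01443537": "goldfish"}

# before: dict() over swapped (value, key) tuples
s2di_old = dict((v, k) for k, v in di2s.items())

# after: the equivalent dict comprehension
s2di_new = {v: k for k, v in di2s.items()}

assert s2di_old == s2di_new == {"tench": "n01440764", "goldfish": "n01443537"}
```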

      def __init__(self, config=None):
          self.config = config or OmegaConf.create()
-         if not type(self.config)==dict:
+         if type(self.config) != dict:

Function ImageNetBase.__init__ refactored with the following changes:

  • Simplify logical expression using De Morgan identities (de-morgan)
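The de-morgan family of rewrites pushes an outer `not` into the expression it negates; here `not type(x) == dict` becomes `type(x) != dict`. A standalone sketch (function names are ours):

```python
def needs_conversion_old(config):
    # before: negated equality
    return not type(config) == dict

def needs_conversion_new(config):
    # after: the negation folded into the operator
    return type(config) != dict

for cfg in ({}, {"a": 1}, [1, 2], "text", None):
    assert needs_conversion_old(cfg) == needs_conversion_new(cfg)
```

(An `isinstance` check would usually be more idiomatic than comparing `type()` at all, but that would change behavior for dict subclasses, so the refactor leaves it alone.)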

          for ik in ignore_keys:
              if k.startswith(ik):
-                 print("Deleting key {} from state_dict.".format(k))
+                 print(f"Deleting key {k} from state_dict.")

Function VQModel.init_from_ckpt refactored with the following changes:
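The `.format()` → f-string change is behavior-preserving; a quick check with a hypothetical state-dict key:

```python
k = "model.diffusion_model.out.2.weight"  # made-up key for illustration

old_msg = "Deleting key {} from state_dict.".format(k)
new_msg = f"Deleting key {k} from state_dict."

assert old_msg == new_msg
```

F-strings keep the value next to the text it lands in, which matters more as messages grow multiple placeholders.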

          quant = self.post_quant_conv(quant)
-         dec = self.decoder(quant)
-         return dec
+         return self.decoder(quant)

Function VQModel.decode refactored with the following changes:

Comment on lines -114 to +113
-         dec = self.decode(quant_b)
-         return dec
+         return self.decode(quant_b)

Function VQModel.decode_code refactored with the following changes:

Comment on lines -120 to +118
-         if return_pred_indices:
-             return dec, diff, ind
-         return dec, diff
+         return (dec, diff, ind) if return_pred_indices else (dec, diff)

Function VQModel.forward refactored with the following changes:

Comment on lines -173 to +191
-         aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0,
-                                         self.global_step,
-                                         last_layer=self.get_last_layer(),
-                                         split="val"+suffix,
-                                         predicted_indices=ind
-                                         )
-
-         discloss, log_dict_disc = self.loss(qloss, x, xrec, 1,
-                                             self.global_step,
-                                             last_layer=self.get_last_layer(),
-                                             split="val"+suffix,
-                                             predicted_indices=ind
-                                             )
+         aeloss, log_dict_ae = self.loss(
+             qloss,
+             x,
+             xrec,
+             0,
+             self.global_step,
+             last_layer=self.get_last_layer(),
+             split=f"val{suffix}",
+             predicted_indices=ind,
+         )
+
+         discloss, log_dict_disc = self.loss(
+             qloss,
+             x,
+             xrec,
+             1,
+             self.global_step,
+             last_layer=self.get_last_layer(),
+             split=f"val{suffix}",
+             predicted_indices=ind,
+         )

Function VQModel._validation_step refactored with the following changes:

Comment on lines -234 to +240
-         log = dict()
+         log = {}

Function VQModel.log_images refactored with the following changes:

Comment on lines -281 to +287
-         dec = self.decoder(quant)
-         return dec
+         return self.decoder(quant)

Function VQModelInterface.decode refactored with the following changes:

Comment on lines -319 to +324
print("Deleting key {} from state_dict.".format(k))
print(f"Deleting key {k} from state_dict.")

Function AutoencoderKL.init_from_ckpt refactored with the following changes:

Comment on lines -327 to +332
-         posterior = DiagonalGaussianDistribution(moments)
-         return posterior
+         return DiagonalGaussianDistribution(moments)

Function AutoencoderKL.encode refactored with the following changes:

Comment on lines -332 to +336
-         dec = self.decoder(z)
-         return dec
+         return self.decoder(z)

Function AutoencoderKL.decode refactored with the following changes:

Comment on lines -122 to +133
-         if x_T is None:
-             img = torch.randn(shape, device=device)
-         else:
-             img = x_T
-
+         img = torch.randn(shape, device=device) if x_T is None else x_T
          if timesteps is None:
              timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
-         elif timesteps is not None and not ddim_use_original_steps:
+         elif not ddim_use_original_steps:
              subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
              timesteps = self.ddim_timesteps[:subset_end]

          intermediates = {'x_inter': [img], 'pred_x0': [img]}
-         time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
+         time_range = (
+             reversed(range(timesteps))
+             if ddim_use_original_steps
+             else np.flip(timesteps)
+         )


Function DDIMSampler.ddim_sampling refactored with the following changes:

Comment on lines -194 to +201
print("Deleting key {} from state_dict.".format(k))
print(f"Deleting key {k} from state_dict.")
del sd[k]
missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
sd, strict=False)
missing, unexpected = (
self.model.load_state_dict(sd, strict=False)
if only_model
else self.load_state_dict(sd, strict=False)
)


Function DDPM.init_from_ckpt refactored with the following changes:

Comment on lines -258 to +267
-         for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
+         for i in tqdm(reversed(range(self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
              img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
                                  clip_denoised=self.clip_denoised)
              if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
                  intermediates.append(img)
-         if return_intermediates:
-             return img, intermediates
-         return img
+         return (img, intermediates) if return_intermediates else img

Function DDPM.p_sample_loop refactored with the following changes:
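Two of the changes here are easy to sanity-check in isolation: `range(0, n)` equals `range(n)`, and a conditional expression can replace the trailing if/return pair. The helper names below are ours, not the repo's:

```python
num_timesteps = 5

# reversed(range(0, n)) and reversed(range(n)) walk the same indices
old_order = list(reversed(range(0, num_timesteps)))
new_order = list(reversed(range(num_timesteps)))
assert old_order == new_order == [4, 3, 2, 1, 0]

def finish_old(img, intermediates, return_intermediates):
    # before: explicit branch on what to return
    if return_intermediates:
        return img, intermediates
    return img

def finish_new(img, intermediates, return_intermediates):
    # after: one conditional expression
    return (img, intermediates) if return_intermediates else img

for flag in (True, False):
    assert finish_old("img", ["a"], flag) == finish_new("img", ["a"], flag)
```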

x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
model_out = self.model(x_noisy, t)

loss_dict = {}

Function DDPM.p_losses refactored with the following changes:

Comment on lines -362 to +363
-         loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
+         loss_dict_ema = {f'{key}_ema': loss_dict_ema[key] for key in loss_dict_ema}

Function DDPM.validation_step refactored with the following changes:


sourcery-ai bot commented Aug 11, 2022

Sourcery Code Quality Report

✅  Merging this PR will increase code quality in the affected files by 0.34%.

Quality metrics Before After Change
Complexity 9.96 🙂 9.58 🙂 -0.38 👍
Method Length 98.56 🙂 97.98 🙂 -0.58 👍
Working memory 12.98 😞 12.94 😞 -0.04 👍
Quality 53.09% 🙂 53.43% 🙂 0.34% 👍
Other metrics Before After Change
Lines 9710 9719 9
Changed files Quality Before Quality After Quality Change
main.py 64.57% 🙂 65.23% 🙂 0.66% 👍
notebook_helpers.py 52.38% 🙂 53.90% 🙂 1.52% 👍
ldm/lr_scheduler.py 66.79% 🙂 67.83% 🙂 1.04% 👍
ldm/util.py 50.72% 🙂 49.91% 😞 -0.81% 👎
ldm/data/imagenet.py 55.21% 🙂 56.07% 🙂 0.86% 👍
ldm/data/lsun.py 73.32% 🙂 73.51% 🙂 0.19% 👍
ldm/models/autoencoder.py 70.16% 🙂 69.48% 🙂 -0.68% 👎
ldm/models/diffusion/classifier.py 68.19% 🙂 68.38% 🙂 0.19% 👍
ldm/models/diffusion/ddim.py 43.83% 😞 45.09% 😞 1.26% 👍
ldm/models/diffusion/ddpm.py 43.64% 😞 44.78% 😞 1.14% 👍
ldm/models/diffusion/plms.py 36.31% 😞 37.53% 😞 1.22% 👍
ldm/modules/attention.py 75.13% ⭐ 75.23% ⭐ 0.10% 👍
ldm/modules/ema.py 71.01% 🙂 71.08% 🙂 0.07% 👍
ldm/modules/x_transformer.py 55.04% 🙂 55.64% 🙂 0.60% 👍
ldm/modules/diffusionmodules/model.py 53.29% 🙂 53.12% 🙂 -0.17% 👎
ldm/modules/diffusionmodules/openaimodel.py 44.50% 😞 44.47% 😞 -0.03% 👎
ldm/modules/distributions/distributions.py 78.18% ⭐ 80.40% ⭐ 2.22% 👍
ldm/modules/encoders/modules.py 85.31% ⭐ 85.27% ⭐ -0.04% 👎
ldm/modules/image_degradation/bsrgan.py 48.02% 😞 48.03% 😞 0.01% 👍
ldm/modules/image_degradation/bsrgan_light.py 53.86% 🙂 53.93% 🙂 0.07% 👍
ldm/modules/image_degradation/utils_image.py 60.15% 🙂 60.31% 🙂 0.16% 👍
ldm/modules/losses/contperceptual.py 35.09% 😞 35.43% 😞 0.34% 👍
ldm/modules/losses/vqperceptual.py 43.49% 😞 44.08% 😞 0.59% 👍
scripts/img2img.py 32.73% 😞 32.86% 😞 0.13% 👍
scripts/knn2img.py 42.00% 😞 43.03% 😞 1.03% 👍
scripts/sample_diffusion.py 54.95% 🙂 54.74% 🙂 -0.21% 👎
scripts/train_searcher.py 49.11% 😞 49.66% 😞 0.55% 👍
scripts/txt2img.py 28.77% 😞 29.39% 😞 0.62% 👍

Here are some functions in these files that still need a tune-up:

File Function Complexity Length Working Memory Quality Recommendation
ldm/modules/diffusionmodules/openaimodel.py UNetModel.__init__ 77 ⛔ 831 ⛔ 36 ⛔ 1.01% ⛔ Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions
ldm/modules/image_degradation/bsrgan.py degradation_bsrgan_plus 41 ⛔ 459 ⛔ 25 ⛔ 7.90% ⛔ Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions
ldm/modules/x_transformer.py AttentionLayers.__init__ 29 😞 474 ⛔ 28 ⛔ 11.79% ⛔ Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions
ldm/models/diffusion/ddpm.py LatentDiffusion.progressive_denoising 32 😞 394 ⛔ 25 ⛔ 11.84% ⛔ Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions
ldm/models/diffusion/ddpm.py LatentDiffusion.log_images 28 😞 749 ⛔ 27 ⛔ 11.92% ⛔ Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!
