Sourcery refactored main branch #1
base: main
Conversation
| "Parameters can be overwritten or added with command-line options of the form `--key value`.", | ||
| default=list(), | ||
| "Parameters can be overwritten or added with command-line options of the form `--key value`.", | ||
| default=[], | ||
| ) | ||
|
|
Function `get_parser` refactored with the following changes:
- Replace `list()` with `[]` (`list-literal`)
```diff
-        self.dataset_configs = dict()
+        self.dataset_configs = {}
```
Function `DataModuleFromConfig.__init__` refactored with the following changes:
- Replace `dict()` with `{}` (`dict-literal`)
```diff
-        self.datasets = dict(
-            (k, instantiate_from_config(self.dataset_configs[k]))
-            for k in self.dataset_configs)
+        self.datasets = {
+            k: instantiate_from_config(self.dataset_configs[k])
+            for k in self.dataset_configs
+        }
```
Function `DataModuleFromConfig.setup` refactored with the following changes:
- Replace `list()`, `dict()` or `set()` with comprehension (`collection-builtin-to-comprehension`)
```diff
-        return DataLoader(self.datasets["train"], batch_size=self.batch_size,
-                          num_workers=self.num_workers, shuffle=False if is_iterable_dataset else True,
-                          worker_init_fn=init_fn)
+        return DataLoader(
+            self.datasets["train"],
+            batch_size=self.batch_size,
+            num_workers=self.num_workers,
+            shuffle=not is_iterable_dataset,
+            worker_init_fn=init_fn,
+        )
```
Function `DataModuleFromConfig._train_dataloader` refactored with the following changes:
- Remove unnecessary casts to int, str, float or bool (`remove-unnecessary-cast`)
- Simplify boolean if expression (`boolean-if-exp-identity`)
| if "callbacks" in self.lightning_config: | ||
| if 'metrics_over_trainsteps_checkpoint' in self.lightning_config['callbacks']: | ||
| os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True) | ||
| if ( | ||
| "callbacks" in self.lightning_config | ||
| and 'metrics_over_trainsteps_checkpoint' | ||
| in self.lightning_config['callbacks'] | ||
| ): | ||
| os.makedirs(os.path.join(self.ckptdir, 'trainstep_checkpoints'), exist_ok=True) | ||
| print("Project config") | ||
| print(OmegaConf.to_yaml(self.config)) | ||
| OmegaConf.save(self.config, | ||
| os.path.join(self.cfgdir, "{}-project.yaml".format(self.now))) | ||
| OmegaConf.save( | ||
| self.config, os.path.join(self.cfgdir, f"{self.now}-project.yaml") | ||
| ) | ||
|
|
||
|
|
||
| print("Lightning config") | ||
| print(OmegaConf.to_yaml(self.lightning_config)) | ||
| OmegaConf.save(OmegaConf.create({"lightning": self.lightning_config}), | ||
| os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now))) | ||
| OmegaConf.save( | ||
| OmegaConf.create({"lightning": self.lightning_config}), | ||
| os.path.join(self.cfgdir, f"{self.now}-lightning.yaml"), | ||
| ) | ||
|
|
||
| else: | ||
| # ModelCheckpoint callback created log directory --- remove it | ||
| if not self.resume and os.path.exists(self.logdir): | ||
| dst, name = os.path.split(self.logdir) | ||
| dst = os.path.join(dst, "child_runs", name) | ||
| os.makedirs(os.path.split(dst)[0], exist_ok=True) | ||
| try: | ||
| os.rename(self.logdir, dst) | ||
| except FileNotFoundError: | ||
| pass | ||
|
|
||
| elif not self.resume and os.path.exists(self.logdir): | ||
| dst, name = os.path.split(self.logdir) | ||
| dst = os.path.join(dst, "child_runs", name) | ||
| os.makedirs(os.path.split(dst)[0], exist_ok=True) | ||
| try: | ||
| os.rename(self.logdir, dst) | ||
| except FileNotFoundError: | ||
| pass |
Function `SetupCallback.on_pretrain_routine_start` refactored with the following changes:
- Merge else clause's nested if statement into elif (`merge-else-if-into-elif`)
- Merge nested if conditions (`merge-nested-ifs`)
- Replace call to format with f-string [×2] (`use-fstring-for-formatting`)

This removes the following comments (why?):
`# ModelCheckpoint callback created log directory --- remove it`
```diff
-        interval = 0
-        for cl in self.cum_cycles[1:]:
+        for interval, cl in enumerate(self.cum_cycles[1:]):
             if n <= cl:
                 return interval
-            interval += 1
```
Function `LambdaWarmUpCosineScheduler2.find_in_interval` refactored with the following changes:
- Replace manual loop counter with call to enumerate (`convert-to-enumerate`)
```diff
-        if self.verbosity_interval > 0:
-            if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
-                                                       f"current cycle {cycle}")
+        if self.verbosity_interval > 0 and n % self.verbosity_interval == 0:
+            print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
+                  f"current cycle {cycle}")
         if n < self.lr_warm_up_steps[cycle]:
             f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
-            self.last_f = f
-            return f
         else:
             t = (n - self.lr_warm_up_steps[cycle]) / (self.cycle_lengths[cycle] - self.lr_warm_up_steps[cycle])
             t = min(t, 1.0)
             f = self.f_min[cycle] + 0.5 * (self.f_max[cycle] - self.f_min[cycle]) * (
                 1 + np.cos(t * np.pi))
-            self.last_f = f
-            return f
+        self.last_f = f
+        return f
```
Function `LambdaWarmUpCosineScheduler2.schedule` refactored with the following changes:
- Merge nested if conditions (`merge-nested-ifs`)
- Hoist repeated code outside conditional statement [×2] (`hoist-statement-from-if`)
```diff
-        if self.verbosity_interval > 0:
-            if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
-                                                       f"current cycle {cycle}")
+        if self.verbosity_interval > 0 and n % self.verbosity_interval == 0:
+            print(f"current step: {n}, recent lr-multiplier: {self.last_f}, "
+                  f"current cycle {cycle}")
         if n < self.lr_warm_up_steps[cycle]:
             f = (self.f_max[cycle] - self.f_start[cycle]) / self.lr_warm_up_steps[cycle] * n + self.f_start[cycle]
-            self.last_f = f
-            return f
         else:
             f = self.f_min[cycle] + (self.f_max[cycle] - self.f_min[cycle]) * (self.cycle_lengths[cycle] - n) / (self.cycle_lengths[cycle])
-            self.last_f = f
-            return f
+        self.last_f = f
+        return f
```
Function `LambdaLinearScheduler.schedule` refactored with the following changes:
- Merge nested if conditions (`merge-nested-ifs`)
- Hoist repeated code outside conditional statement [×2] (`hoist-statement-from-if`)
```diff
     # xc a list of captions to plot
     b = len(xc)
-    txts = list()
+    txts = []
```
Function `log_txt_as_img` refactored with the following changes:
- Replace `list()` with `[]` (`list-literal`)
```diff
-    if not isinstance(x, torch.Tensor):
-        return False
-    return (len(x.shape) == 4) and (x.shape[1] > 3)
+    return (
+        (len(x.shape) == 4) and (x.shape[1] > 3)
+        if isinstance(x, torch.Tensor)
+        else False
+    )
```
Function `ismap` refactored with the following changes:
- Swap if/else branches of if expression to remove negation (`swap-if-expression`)
- Lift code into else after jump in control flow (`reintroduce-else`)
- Replace if statement with if expression (`assign-if-exp`)
```diff
-    if not isinstance(x, torch.Tensor):
-        return False
-    return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
+    return (
+        len(x.shape) == 4 and x.shape[1] in [3, 1]
+        if isinstance(x, torch.Tensor)
+        else False
+    )
```
Function `isimage` refactored with the following changes:
- Swap if/else branches of if expression to remove negation (`swap-if-expression`)
- Lift code into else after jump in control flow (`reintroduce-else`)
- Replace if statement with if expression (`assign-if-exp`)
- Replace multiple comparisons of same variable with `in` operator (`merge-comparisons`)
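The merged comparison, in isolation; a tuple (or set) literal is the more common choice for a fixed membership test, though the list works the same:

```python
for c in (1, 2, 3):
    assert (c == 3 or c == 1) == (c in [3, 1]) == (c in (3, 1))
```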
```diff
-    if idx_to_fn:
-        res = func(data, worker_id=idx)
-    else:
-        res = func(data)
+    res = func(data, worker_id=idx) if idx_to_fn else func(data)
```
Function `_do_parallel_data_prefetch` refactored with the following changes:
- Replace if statement with if expression (`assign-if-exp`)
```diff
-            f'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.'
+            'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.'
         )
         data = list(data.values())
-    if target_data_type == "ndarray":
-        data = np.asarray(data)
-    else:
-        data = list(data)
+    data = np.asarray(data) if target_data_type == "ndarray" else list(data)
```
Function `parallel_data_prefetch` refactored with the following changes:
- Swap if/else branches of if expression to remove negation (`swap-if-expression`)
- Replace f-string with no interpolated values with string [×2] (`remove-redundant-fstring`)
- Replace if statement with if expression [×2] (`assign-if-exp`)
- Replace unneeded comprehension with generator (`comprehension-to-generator`)
- Lift code into else after jump in control flow (`reintroduce-else`)

This removes the following comments (why?):
`# order outputs`
```diff
     with open(path_to_yaml) as f:
         di2s = yaml.load(f)
-    return dict((v,k) for k,v in di2s.items())
+    return {v: k for k,v in di2s.items()}
```
Function `synset2idx` refactored with the following changes:
- Replace `list()`, `dict()` or `set()` with comprehension (`collection-builtin-to-comprehension`)
```diff
     def __init__(self, config=None):
         self.config = config or OmegaConf.create()
-        if not type(self.config)==dict:
+        if type(self.config) != dict:
```
Function `ImageNetBase.__init__` refactored with the following changes:
- Simplify logical expression using De Morgan identities (`de-morgan`)
```diff
             for ik in ignore_keys:
                 if k.startswith(ik):
-                    print("Deleting key {} from state_dict.".format(k))
+                    print(f"Deleting key {k} from state_dict.")
```
Function `VQModel.init_from_ckpt` refactored with the following changes:
- Replace call to format with f-string (`use-fstring-for-formatting`)
```diff
         quant = self.post_quant_conv(quant)
-        dec = self.decoder(quant)
-        return dec
+        return self.decoder(quant)
```
Function `VQModel.decode` refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        dec = self.decode(quant_b)
-        return dec
+        return self.decode(quant_b)
```
Function `VQModel.decode_code` refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        if return_pred_indices:
-            return dec, diff, ind
-        return dec, diff
+        return (dec, diff, ind) if return_pred_indices else (dec, diff)
```
Function `VQModel.forward` refactored with the following changes:
- Lift code into else after jump in control flow (`reintroduce-else`)
- Replace if statement with if expression (`assign-if-exp`)
```diff
-        aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0,
-                                        self.global_step,
-                                        last_layer=self.get_last_layer(),
-                                        split="val"+suffix,
-                                        predicted_indices=ind
-                                        )
+        aeloss, log_dict_ae = self.loss(
+            qloss,
+            x,
+            xrec,
+            0,
+            self.global_step,
+            last_layer=self.get_last_layer(),
+            split=f"val{suffix}",
+            predicted_indices=ind,
+        )
-        discloss, log_dict_disc = self.loss(qloss, x, xrec, 1,
-                                            self.global_step,
-                                            last_layer=self.get_last_layer(),
-                                            split="val"+suffix,
-                                            predicted_indices=ind
-                                            )
+        discloss, log_dict_disc = self.loss(
+            qloss,
+            x,
+            xrec,
+            1,
+            self.global_step,
+            last_layer=self.get_last_layer(),
+            split=f"val{suffix}",
+            predicted_indices=ind,
+        )
```
Function `VQModel._validation_step` refactored with the following changes:
- Use f-string instead of string concatenation [×2] (`use-fstring-for-concatenation`)
```diff
-        log = dict()
+        log = {}
```
Function `VQModel.log_images` refactored with the following changes:
- Replace `dict()` with `{}` (`dict-literal`)
```diff
-        dec = self.decoder(quant)
-        return dec
+        return self.decoder(quant)
```
Function `VQModelInterface.decode` refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
| print("Deleting key {} from state_dict.".format(k)) | ||
| print(f"Deleting key {k} from state_dict.") |
Function `AutoencoderKL.init_from_ckpt` refactored with the following changes:
- Replace call to format with f-string (`use-fstring-for-formatting`)
```diff
-        posterior = DiagonalGaussianDistribution(moments)
-        return posterior
+        return DiagonalGaussianDistribution(moments)
```
Function `AutoencoderKL.encode` refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        dec = self.decoder(z)
-        return dec
+        return self.decoder(z)
```
Function `AutoencoderKL.decode` refactored with the following changes:
- Inline variable that is immediately returned (`inline-immediately-returned-variable`)
```diff
-        if x_T is None:
-            img = torch.randn(shape, device=device)
-        else:
-            img = x_T
+        img = torch.randn(shape, device=device) if x_T is None else x_T
         if timesteps is None:
             timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps
-        elif timesteps is not None and not ddim_use_original_steps:
+        elif not ddim_use_original_steps:
             subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1
             timesteps = self.ddim_timesteps[:subset_end]

         intermediates = {'x_inter': [img], 'pred_x0': [img]}
-        time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps)
+        time_range = (
+            reversed(range(timesteps))
+            if ddim_use_original_steps
+            else np.flip(timesteps)
+        )
```
Function `DDIMSampler.ddim_sampling` refactored with the following changes:
- Replace if statement with if expression (`assign-if-exp`)
- Remove redundant conditional (`remove-redundant-if`)
- Replace `range(0, x)` with `range(x)` (`remove-zero-from-range`)
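Both simplifications in isolation: `range(0, x)` equals `range(x)`, and inside the `elif` the `timesteps is not None` test is already implied by the failed `if` above it (toy values):

```python
assert list(range(0, 5)) == list(range(5))

timesteps = [1, 2, 3]          # hypothetical stand-in
ddim_use_original_steps = False

if timesteps is None:
    branch = "default schedule"
elif not ddim_use_original_steps:  # `timesteps is not None` is guaranteed here
    branch = "subset of ddim steps"

assert branch == "subset of ddim steps"
```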
| print("Deleting key {} from state_dict.".format(k)) | ||
| print(f"Deleting key {k} from state_dict.") | ||
| del sd[k] | ||
| missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( | ||
| sd, strict=False) | ||
| missing, unexpected = ( | ||
| self.model.load_state_dict(sd, strict=False) | ||
| if only_model | ||
| else self.load_state_dict(sd, strict=False) | ||
| ) | ||
|
|
Function `DDPM.init_from_ckpt` refactored with the following changes:
- Replace call to format with f-string (`use-fstring-for-formatting`)
- Swap if/else branches of if expression to remove negation (`swap-if-expression`)
```diff
-        for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
+        for i in tqdm(reversed(range(self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
             img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
                                 clip_denoised=self.clip_denoised)
             if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
                 intermediates.append(img)
-        if return_intermediates:
-            return img, intermediates
-        return img
+        return (img, intermediates) if return_intermediates else img
```
Function `DDPM.p_sample_loop` refactored with the following changes:
- Replace `range(0, x)` with `range(x)` (`remove-zero-from-range`)
- Lift code into else after jump in control flow (`reintroduce-else`)
- Replace if statement with if expression (`assign-if-exp`)
```diff
         x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
         model_out = self.model(x_noisy, t)

+        loss_dict = {}
```
Function `DDPM.p_losses` refactored with the following changes:
- Merge dictionary assignment with declaration (`merge-dict-assign`)
- Move assignment closer to its usage within a block (`move-assign-in-block`)
- Add single value to dictionary directly rather than using update() [×3] (`simplify-dictionary-update`)
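The `simplify-dictionary-update` change isn't shown in the excerpt; generically it replaces a one-entry `update()` with direct assignment (hypothetical keys):

```python
loss_dict = {}

loss_dict.update({"train/loss_simple": 0.5})  # before: builds a dict just to merge it
loss_dict["train/loss_vlb"] = 0.4             # after: plain item assignment

assert loss_dict == {"train/loss_simple": 0.5, "train/loss_vlb": 0.4}
```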
```diff
-        loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema}
+        loss_dict_ema = {f'{key}_ema': loss_dict_ema[key] for key in loss_dict_ema}
```
Function `DDPM.validation_step` refactored with the following changes:
- Use f-string instead of string concatenation (`use-fstring-for-concatenation`)
Sourcery Code Quality Report

✅ Merging this PR will increase code quality in the affected files by 0.34%.
Here are some functions in these files that still need a tune-up:
Legend and Explanation

The emojis denote the absolute quality of the code:
The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request. Please see our documentation here for details on how these metrics are calculated. We are actively working on this report - lots more documentation and extra metrics to come! Help us improve this quality report!
Branch `main` refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy. See our documentation here.
Run Sourcery locally
Reduce the feedback loop during development by using the Sourcery editor plugin:
Review changes via command line
To manually merge these changes, make sure you're on the `main` branch, then run:

Help us improve this pull request!