
Sourcery refactored main branch #1

Open

sourcery-ai[bot] wants to merge 1 commit into main from sourcery/main

Conversation

@sourcery-ai (bot) commented on May 30, 2023

Branch main refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Review changes via command line

To manually merge these changes, make sure you're on the main branch, then run:

git fetch origin sourcery/main
git merge --ff-only FETCH_HEAD
git reset HEAD^

The final git reset HEAD^ removes the merge commit but keeps its changes in your working tree, so you can review them before committing.

Help us improve this pull request!

@sourcery-ai (bot) requested a review from Deadstarjay62 on May 30, 2023 at 00:50

@sourcery-ai (bot) left a comment


Due to GitHub API limits, only the first 60 comments can be shown.

Function MergeAndBuildIndex.extract_output refactored:

     logging.info(f"Uploading to GCS with path {self.gcs_output_path}")
     assert os.path.isdir(local_path)
-    for local_file in glob.glob(local_path + "/*"):
+    for local_file in glob.glob(f"{local_path}/*"):

Function main refactored (lines -85 to +89):

-    learn = trainer.learn
-    if args.distributed or args.num_workers is not None:
-        learn = trainer.train_and_evaluate
+    learn = (trainer.train_and_evaluate if args.distributed
+             or args.num_workers is not None else trainer.learn)

     if not args.directly_export_best:
         logging.info("Starting training")
         start = datetime.now()

Function get_params refactored:

-        return parser.parse_args()
-    else:
-        return parser.parse_args(args)
+    return parser.parse_args() if args is None else parser.parse_args(args)
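This if/else-to-conditional-expression rewrite is a recurring Sourcery pattern in this PR. A minimal standalone sketch (the `--epochs` flag is invented for illustration; the real `get_params` builds a project-specific parser):

```python
import argparse

def get_params(args=None):
    # Hypothetical parser: only the --epochs flag is defined for this demo.
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=1)
    # One conditional expression replaces the original two-branch if/else.
    return parser.parse_args() if args is None else parser.parse_args(args)

params = get_params(["--epochs", "3"])
print(params.epochs)  # 3
```

Since `parse_args(None)` already falls back to `sys.argv`, the branch is arguably redundant, but the refactor preserves the original behavior exactly.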

Function _main refactored (lines -89 to +88):

-    if opt.output_checkpoint_dir == "none" or opt.output_checkpoint_dir == opt.warm_start_base_dir:
-        _warm_start_base_dir = os.path.normpath(opt.warm_start_base_dir) + "_backup_warm_start"
+    if opt.output_checkpoint_dir in ["none", opt.warm_start_base_dir]:
+        _warm_start_base_dir = (
+            f"{os.path.normpath(opt.warm_start_base_dir)}_backup_warm_start")

Function ChannelWiseDense.call refactored:

-    output = tf.transpose(transposed_residual, perm=[0, 2, 1])
-
-    return output
+    return tf.transpose(transposed_residual, perm=[0, 2, 1])

Function add_new_user_metrics refactored (lines -102 to +105):

-    new_user_metric_ops = {name + "_new_users": ops for name, ops in new_user_metric_ops.items()}
+    new_user_metric_ops = {
+        f"{name}_new_users": ops
+        for name, ops in new_user_metric_ops.items()
+    }
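Renaming every key of a dict with an f-string comprehension, as above, in a self-contained sketch (metric names and values invented; plain floats stand in for TensorFlow metric ops):

```python
# Stand-ins for TensorFlow metric ops; plain floats keep the sketch runnable.
metric_ops = {"auc": 0.91, "loss": 0.34}

# Suffix every key, mirroring the "_new_users" rename in add_new_user_metrics.
new_user_metric_ops = {f"{name}_new_users": ops for name, ops in metric_ops.items()}
print(new_user_metric_ops)  # {'auc_new_users': 0.91, 'loss_new_users': 0.34}
```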

Function get_meta_learn_single_binary_task_metric_fn refactored (lines -140 to +143):

-    classnames_unweighted = ["unweighted_" + classname for classname in classnames]
+    classnames_unweighted = [f"unweighted_{classname}" for classname in classnames]

Function get_meta_learn_dual_binary_tasks_metric_fn refactored (lines -198 to +201):

-    classnames_unweighted = ["unweighted_" + classname for classname in classnames]
+    classnames_unweighted = [f"unweighted_{classname}" for classname in classnames]

Function get_params refactored:

-        return parser.parse_args()
-    else:
-        return parser.parse_args(args)
+    return parser.parse_args() if args is None else parser.parse_args(args)

Function _sparse_feature_fixup refactored:

     sparse_shape = tf.stack([features.dense_shape[0], sparse_feature_dim])
-    sparse_tf = tf.SparseTensor(features.indices, features.values, sparse_shape)
-    return sparse_tf
+    return tf.SparseTensor(features.indices, features.values, sparse_shape)

Function self_atten_dense refactored (lines -41 to +40):

-    if not base:
-        return base
-    return f"{base}:{suffix}"
+    return base if not base else f"{base}:{suffix}"

Function get_input_trans_func refactored (lines -85 to -94):

-    bn_gw_normalized_dense = tf.layers.batch_normalization(
+    return tf.layers.batch_normalization(
         gw_normalized_dense,
         training=is_training,
         renorm_momentum=0.9999,
         momentum=0.9999,
         renorm=is_training,
         trainable=True,
     )
-
-    return bn_gw_normalized_dense

Function tensor_dropout refactored (lines -120 to +128):

-    if is_training:
-        with tf.variable_scope("sparse_dropout"):
-            values = input_tensor.values
-            keep_mask = tf.keras.backend.random_binomial(
-                tf.shape(values), p=1 - rate, dtype=tf.float32, seed=None
-            )
-            keep_mask.set_shape([None])
-            keep_mask = tf.cast(keep_mask, tf.bool)
-
-            keep_indices = tf.boolean_mask(input_tensor.indices, keep_mask, axis=0)
-            keep_values = tf.boolean_mask(values, keep_mask, axis=0)
-
-            dropped_tensor = tf.SparseTensor(keep_indices, keep_values, input_tensor.dense_shape)
-            return dropped_tensor
-    else:
+    if not is_training:
         return input_tensor
+    with tf.variable_scope("sparse_dropout"):
+        values = input_tensor.values
+        keep_mask = tf.keras.backend.random_binomial(
+            tf.shape(values), p=1 - rate, dtype=tf.float32, seed=None
+        )
+        keep_mask.set_shape([None])
+        keep_mask = tf.cast(keep_mask, tf.bool)
+
+        keep_indices = tf.boolean_mask(input_tensor.indices, keep_mask, axis=0)
+        keep_values = tf.boolean_mask(values, keep_mask, axis=0)
+
+        return tf.SparseTensor(keep_indices, keep_values, input_tensor.dense_shape)
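Beyond the guard-clause rewrite, the technique itself — sampling a Boolean keep-mask over a sparse tensor's values and filtering indices and values with it — can be sketched without TensorFlow. A pure-Python stand-in, using parallel `indices`/`values` lists in place of a `tf.SparseTensor`:

```python
import random

def sparse_dropout(indices, values, rate, is_training, seed=None):
    # Early return mirrors the refactored guard clause: no-op at inference time.
    if not is_training:
        return indices, values
    rng = random.Random(seed)
    # Keep each entry with probability 1 - rate, like random_binomial(p=1-rate).
    keep_mask = [rng.random() < 1 - rate for _ in values]
    keep_indices = [i for i, keep in zip(indices, keep_mask) if keep]
    keep_values = [v for v, keep in zip(values, keep_mask) if keep]
    return keep_indices, keep_values

# Inference: the input passes through untouched.
idx, val = sparse_dropout([(0, 1), (2, 3)], [1.0, 2.0], rate=0.5, is_training=False)
```

With `rate=0.0` every entry survives, with `rate=1.0` none do, and at inference time the input is returned unchanged, matching the early return above.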

Function adaptive_transformation refactored (lines -175 to -184):

-    bn_gw_normalized_dense = tf.layers.batch_normalization(
+    return tf.layers.batch_normalization(
         gw_normalized_dense,
         training=is_training,
         renorm_momentum=0.9999,
         momentum=0.9999,
         renorm=is_training,
         trainable=True,
     )
-
-    return bn_gw_normalized_dense

Function FastGroupWiseTrans.__init__ refactored (lines -200 to +201):

-        name + "_group_weight",
+        f"{name}_group_weight",
         [1, group_num, input_dim, out_dim],
-        initializer=customized_glorot_uniform(
-            fan_in=input_dim * init_multiplier, fan_out=out_dim * init_multiplier
-        ),
+        initializer=customized_glorot_uniform(fan_in=input_dim * init_multiplier,
+                                              fan_out=out_dim * init_multiplier),
         trainable=True,
     )
     self.b = tf.get_variable(
-        name + "_group_bias",
+        f"{name}_group_bias",
         [1, group_num, out_dim],
         initializer=tf.constant_initializer(0.0),
         trainable=True,


Line 89 refactored:

     end = datetime.now()
-    logging.info("Evaluating time: " + str(end - start))
+    logging.info(f"Evaluating time: {str(end - start)}")


Function deepnorm_light_ranking refactored:

-    output_dict = {"output": logits}
-    return output_dict
+    return {"output": logits}

Lines 19-21 refactored (lines -19 to +27):

-PREDICTED_CLASSES = \
-    ["tf_target"] + ["tf_" + label_name for label_name in LABEL_NAMES] + ["tf_timelines.earlybird_score"] + \
-    ["lolly_target"] + ["lolly_" + label_name for label_name in LABEL_NAMES] + ["lolly_timelines.earlybird_score"]
+PREDICTED_CLASSES = (
+    (
+        ["tf_target"]
+        + [f"tf_{label_name}" for label_name in LABEL_NAMES]
+        + ["tf_timelines.earlybird_score"]
+        + ["lolly_target"]
+    )
+    + [f"lolly_{label_name}" for label_name in LABEL_NAMES]
+) + ["lolly_timelines.earlybird_score"]
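The reshaped expression evaluates to the same list; structurally it is just two prefixed label groups concatenated. A runnable sketch with invented label names and a hypothetical `prefixed` helper (neither is from the PR):

```python
# Invented label names; the real LABEL_NAMES comes from the project.
LABEL_NAMES = ["fav", "reply"]

def prefixed(prefix, names):
    # Build ["<prefix>_target", "<prefix>_<name>"..., "<prefix>_timelines.earlybird_score"].
    return ([f"{prefix}_target"]
            + [f"{prefix}_{name}" for name in names]
            + [f"{prefix}_timelines.earlybird_score"])

PREDICTED_CLASSES = prefixed("tf", LABEL_NAMES) + prefixed("lolly", LABEL_NAMES)
print(len(PREDICTED_CLASSES))  # 8
```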

Function get_multi_binary_class_metric_fn refactored (lines -74 to +75):

-    class_metric_name = metric_name + "_" + (classes[i] if classes is not None else str(i))
+    class_metric_name = f"{metric_name}_" + (classes[i] if classes
+                                             is not None else str(i))

Function earlybird_output_fn refactored (lines -111 to -117):

-    export_outputs = {
+    return {
         tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
-            tf.estimator.export.PredictOutput(
-                {"prediction": tf.identity(graph_output["output"], name="output_scores")}
-            )
+            tf.estimator.export.PredictOutput({
+                "prediction":
+                    tf.identity(graph_output["output"], name="output_scores")
+            })
     }
-    return export_outputs

Lines 186-212 refactored (lines -186 to +188):

-    logging.info("Training and Evaluation time: " + str(trainingEndTime - trainingStartTime))
+    logging.info(
+        f"Training and Evaluation time: {str(trainingEndTime - trainingStartTime)}"
+    )

Function get_lolly_logits refactored (lines -13 to +14):

-    lolly_activations = tf.math.subtract(tf.math.log(eb_lolly_scores), tf.math.log(inverse_eb_lolly_scores))
-    return lolly_activations
+    return tf.math.subtract(tf.math.log(eb_lolly_scores),
+                            tf.math.log(inverse_eb_lolly_scores))

Function get_lolly_scores refactored:

     logged_eb_lolly_scores = tf.reshape(labels[:, EB_SCORE_IDX], (-1, 1))
-    eb_lolly_scores = tf.truediv(logged_eb_lolly_scores, 100.0)
-    return eb_lolly_scores
+    return tf.truediv(logged_eb_lolly_scores, 100.0)

Function Parser.parse refactored:

     def parse(self, line):
-        match = re.search(self.pattern(), line)
-        if match:
+        if match := re.search(self.pattern(), line):
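The walrus-operator form requires Python 3.8+: `:=` binds and tests the match in one step, replacing the two-line assign-then-check. In isolation (the pattern and class body below are illustrative; the real Parser builds a feature-line regex):

```python
import re

class Parser:
    def pattern(self):
        # Hypothetical pattern for this demo only.
        return r"id: (\d+)"

    def parse(self, line):
        # := assigns re.search's result to `match` and tests it in one expression.
        if match := re.search(self.pattern(), line):
            return match.group(1)
        return None

print(Parser().parse("Missing feature with id: 42"))  # 42
```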

Function DBv2DataExampleParser._parse_match refactored (lines -141 to +140):

-    print("Missing feature with id: " + str(feature_id))
+    print(f"Missing feature with id: {str(feature_id)}")

Function _get_bias refactored:

     print('Smart bias init to ', smart_bias_value)
-    output_bias = tf.keras.initializers.Constant(smart_bias_value)
-    return output_bias
+    return tf.keras.initializers.Constant(smart_bias_value)

Function get_last_layer refactored (lines -159 to -177):

-        last_layer = tf.keras.layers.Dense(
-            kwargs["num_classes"], activation="softmax", kernel_initializer=glorot,
-            bias_initializer=output_bias, name=layer_name
+        return tf.keras.layers.Dense(
+            kwargs["num_classes"],
+            activation="softmax",
+            kernel_initializer=glorot,
+            bias_initializer=output_bias,
+            name=layer_name,
         )
-
     elif kwargs.get('num_raters', 1) > 1:
         if kwargs.get('multitask', False):
             raise NotImplementedError
-        last_layer = tf.keras.layers.Dense(
-            kwargs['num_raters'], activation="sigmoid", kernel_initializer=glorot,
-            bias_initializer=output_bias, name='probs')
+        return tf.keras.layers.Dense(
+            kwargs['num_raters'],
+            activation="sigmoid",
+            kernel_initializer=glorot,
+            bias_initializer=output_bias,
+            name='probs',
+        )
     else:
-        last_layer = tf.keras.layers.Dense(
-            1, activation="sigmoid", kernel_initializer=glorot,
-            bias_initializer=output_bias, name=layer_name
+        return tf.keras.layers.Dense(
+            1,
+            activation="sigmoid",
+            kernel_initializer=glorot,
+            bias_initializer=output_bias,
+            name=layer_name,
         )
-
-    return last_layer

Function Trainer._init_dirnames refactored:

     if self.test
     else f"..."
     )
     self.logdir = "..."

Function Trainer.get_callbacks refactored (lines -153 to +150):

-    fold_logdir = self.logdir + f"_fold{fold}"
-    fold_checkpoint_path = self.checkpoint_path + f"_fold{fold}/{{epoch:02d}}"
+    fold_logdir = f"{self.logdir}_fold{fold}"
+    fold_checkpoint_path = f"{self.checkpoint_path}_fold{fold}/{{epoch:02d}}"

Function Trainer.get_lr_schedule refactored (lines -238 to -243):

-    warm_up_schedule = WarmUp(
+    return WarmUp(
         initial_learning_rate=self.learning_rate,
         decay_schedule_fn=learning_rate_fn,
         warmup_steps=warm_up_steps,
     )
-    return warm_up_schedule
