09.06.2024 - Version 0.9.0
Maintenance
Updating dependencies. Note that jupyter was removed as a direct optional dependency.
You can always add it via poetry add jupyter.
Adding simple differentiation between t5 and esm tokenizers and models in the embedders module.
Features
Adding new residues_to_value protocol.
Similar to the residues_to_class protocol,
this protocol predicts a value for each sequence, using per-residue embeddings. It might, in some situations, outperform
the sequence_to_value protocol.
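To illustrate the idea only (a minimal sketch, not the project's actual implementation; the class and parameter names below are hypothetical), a residues_to_value model pools per-residue embeddings into a single predicted value per sequence:

```python
import torch
import torch.nn as nn

class ResiduesToValueSketch(nn.Module):
    """Hypothetical sketch: predict one value per sequence from per-residue embeddings."""

    def __init__(self, embedding_dim: int):
        super().__init__()
        self.regression_head = nn.Linear(embedding_dim, 1)

    def forward(self, per_residue_embeddings: torch.Tensor) -> torch.Tensor:
        # per_residue_embeddings: (batch, sequence_length, embedding_dim)
        pooled = per_residue_embeddings.mean(dim=1)  # aggregate over residues
        return self.regression_head(pooled).squeeze(-1)  # one value per sequence
```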
Bug fixes
For huggingface_transformer_embedder.py, all special tokens are now always deleted from the final embedding
(e.g. first/last for esm1b, last for t5)
Possible Bug: The method _early_stop in solver.py uses a decrement operation on _stop_count which might lead to negative values if called repeatedly beyond the patience threshold. Consider resetting _stop_count to self.patience when the early stop condition is not met.
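A minimal sketch of the suggested fix, using the attribute names from the description above (the rest of the solver is omitted and assumed):

```python
def _early_stop(self, current_loss: float) -> bool:
    """Hypothetical sketch: reset the patience counter on improvement."""
    if current_loss < self._min_loss:
        self._min_loss = current_loss
        self._stop_count = self.patience  # reset instead of decrementing further
        return False
    self._stop_count -= 1
    return self._stop_count <= 0  # <= guards against going negative
```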
Performance Concern: The method _do_dropout_iterations in solver.py might be computationally expensive as it involves multiple forward passes with dropout enabled. This could be optimized or parallelized to improve performance.
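One possible optimization, shown as a hedged sketch (the function and parameter names are illustrative, and whether this fits the actual solver is untested): run all dropout iterations in a single batched forward pass instead of looping.

```python
import torch

def mc_dropout_forward(network: torch.nn.Module, x: torch.Tensor,
                       n_forward_passes: int = 30) -> torch.Tensor:
    """Batch all Monte Carlo dropout iterations into one forward pass."""
    network.train()  # keep dropout layers active during inference
    with torch.no_grad():
        repeated = x.repeat_interleave(n_forward_passes, dim=0)
        out = network(repeated)
        # Regroup to (batch, n_forward_passes, ...) for mean/std statistics
        return out.reshape(x.shape[0], n_forward_passes, *out.shape[1:])
```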
Code Clarity: The use of complex list comprehensions and nested functions in fasta.py might reduce code readability and maintainability. Simplifying these constructs could help in making the code more understandable.
Security
Use yaml.safe_load() instead of yaml.load() when parsing YAML files
Suggestion importance[1-10]: 10
Why: This suggestion addresses a significant security concern by replacing yaml.load() with yaml.safe_load(), which is a safer method for loading YAML content.
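For reference, a minimal sketch of the safer pattern (the surrounding config-loading function is an assumption, not taken from the project):

```python
import yaml

def load_config(path: str) -> dict:
    with open(path) as config_file:
        # safe_load restricts parsing to plain data types, so untrusted
        # YAML cannot trigger arbitrary Python object construction.
        return yaml.safe_load(config_file)
```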
Possible bug
Correct the superclass initialization in the DeeperFNN class
In the DeeperFNN class, the superclass initialization incorrectly calls super(FNN, self).__init__(); it should be super(DeeperFNN, self).__init__(). This ensures that the DeeperFNN class correctly initializes its superclass.
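A minimal sketch of the corrected call (class bodies are trimmed to the relevant line; layer definitions are omitted):

```python
import torch.nn as nn

class FNN(nn.Module):
    pass

class DeeperFNN(FNN):
    def __init__(self):
        super(DeeperFNN, self).__init__()  # was: super(FNN, self).__init__()
```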
Use np.where() to remove special-token embeddings more efficiently
```diff
 special_tokens_mask = self._tokenizer.get_special_tokens_mask(input_id, already_has_special_tokens=True)
-embedding = np.delete(embeddings[seq_num],
-                      [index for index, mask in enumerate(special_tokens_mask) if mask != 0], axis=0)
+embedding = np.delete(embeddings[seq_num], np.where(special_tokens_mask)[0], axis=0)
```
Suggestion importance[1-10]: 9
Why: Using np.where() improves both readability and performance. This change makes the code more efficient and easier to understand.
Use torch.no_grad() to optimize inference performance
Consider using torch.no_grad() context manager during inference to disable gradient computation, which can reduce memory consumption and increase computation speed.
```diff
-inference_dict = solver.inference(dataloader, calculate_test_metrics=targets is not None)
+with torch.no_grad():
+    inference_dict = solver.inference(dataloader, calculate_test_metrics=targets is not None)
```
Suggestion importance[1-10]: 9
Why: This suggestion correctly identifies a performance optimization by using torch.no_grad() during inference, which can reduce memory consumption and increase computation speed.
Robustness
Add exception handling for model state loading to manage file-related errors
Implement exception handling for the torch.load function to manage potential errors during the loading of model states, such as file not found or corrupted files.
```diff
-state = torch.load(checkpoint_path, map_location=torch.device(self.device))
+try:
+    state = torch.load(checkpoint_path, map_location=torch.device(self.device))
+except FileNotFoundError:
+    logger.error("Checkpoint file not found.")
+    return
+except Exception as e:
+    logger.error(f"Failed to load checkpoint: {str(e)}")
+    return
```
Suggestion importance[1-10]: 9
Why: Adding exception handling for torch.load is a good practice to manage potential errors such as file not found or corrupted files. This enhances the robustness of the code.
Best practice
Improve type checking by using isinstance()
Replace the direct type checks with isinstance() for better type checking, especially when dealing with inheritance.
-return ("range" in str(self.value) or type(self.value) is list or- (type(self.value) is str and "[" in self.value and "]" in self.value))+return ("range" in str(self.value) or isinstance(self.value, list) or+ (isinstance(self.value, str) and "[" in self.value and "]" in self.value))
Suggestion importance[1-10]: 8
Why: Using isinstance() is a best practice for type checking, especially when dealing with inheritance. This change improves code robustness and readability.
Use more specific exception types for clearer error handling
Use a more specific exception type than the general Exception to provide clearer error handling.
```diff
-except Exception as e:
-    raise Exception(f"Loading {embedder_name} automatically and as {tokenizer_class.__class__.__name__} failed!"
-                    f" Please provide a custom_embedder script for your use-case.") from e
+except ImportError as e:
+    raise ImportError(f"Loading {embedder_name} automatically and as {tokenizer_class.__class__.__name__} failed!"
+                      f" Please provide a custom_embedder script for your use-case.") from e
```
Suggestion importance[1-10]: 7
Why: Using a more specific exception type like ImportError provides clearer error handling and makes the code easier to debug. However, the improvement is minor and context-specific.
Enhancement
Enhance error messages for clarity and debugging
Replace the manual exception raising for unknown split_name with a more informative error message that includes the available splits.
```diff
-raise Exception(f"Unknown split_name {split_name} for given configuration!")
+if split_name not in self.solvers_and_loaders_by_split:
+    available_splits = ', '.join(self.solvers_and_loaders_by_split.keys())
+    raise ValueError(f"Unknown split_name '{split_name}'. Available splits are: {available_splits}")
```
Suggestion importance[1-10]: 8
Why: The suggestion improves the clarity of error messages by including available split names, which aids in debugging and provides more informative feedback to the user.
Apply dropout consistently to both feature and attention convolutions in the LightAttention class
In the LightAttention class, the dropout operation is applied only to the output of feature_convolution but not to attention_convolution. Consistently applying dropout to both could potentially improve model performance by regularizing both features and attention mechanisms.
```diff
 o = self.dropout(o)
+attention = self.dropout(attention)
```
Suggestion importance[1-10]: 8
Why: This suggestion potentially improves model performance by regularizing both features and attention mechanisms, making it a valuable enhancement.
Enhance the _early_stop method by logging the reason for stopping
Modify the _early_stop method to log the reason for stopping, which could be due to achieving a new minimum loss or reaching the patience limit. This enhances debugging and monitoring capabilities.
```diff
 if self._stop_count == 0:
+    logger.info("Early stopping due to patience limit reached.")
```
Suggestion importance[1-10]: 8
Why: Logging the reason for early stopping enhances debugging and monitoring capabilities, making it easier to understand why the training was stopped. This is a useful enhancement for tracking the training process.
Improve variable naming for clarity in the FNN class's forward method
In the FNN class, consider using a more descriptive variable name for the input tensor x in the forward method. Renaming x to input_tensor would improve code readability and make the method's purpose clearer.
Simplify the creation of embeddings_dict
Build embeddings_dict directly from the iterable instead of a manual comprehension, keeping an already-provided dictionary as-is.
```diff
-embeddings_dict = {str(idx): embedding for idx, embedding in enumerate(embeddings)}
+embeddings_dict = dict(enumerate(embeddings)) if not isinstance(embeddings, Dict) else embeddings
```
Suggestion importance[1-10]: 7
Why: This suggestion simplifies the code for creating embeddings_dict from an iterable, improving code readability and maintainability. However, the improvement is minor.
Refactor the mask calculation into a separate method in the LightAttention class
The mask calculation in the forward method of the LightAttention class should be moved to a separate method to improve code readability and maintainability. This change will make the forward method cleaner and focus primarily on the forward pass logic.
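A hypothetical sketch of the extraction; the exact masking logic and tensor shapes are assumptions about the LightAttention implementation, not taken from the source:

```python
import torch

class LightAttentionSketch(torch.nn.Module):
    @staticmethod
    def _mask_padded_positions(attention: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        """Exclude padded sequence positions from the attention softmax."""
        return attention.masked_fill(mask[:, None, :] == 0, -1e9)
```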
Suggestion importance[1-10]: 7
Why: This suggestion improves code readability and maintainability by separating concerns, but it does not address a critical issue.
Refactor to separate training and validation into distinct methods for better modularity
Refactor the train method to separate the training and validation phases into their own methods. This improves code readability and maintainability by modularizing the training process.
```diff
 for epoch in range(self.start_epoch, self.number_of_epochs):
+    self._train_epoch(training_dataloader, epoch)
+    self._validate_epoch(validation_dataloader, epoch)
```
Suggestion importance[1-10]: 7
Why: Refactoring the train method to separate training and validation phases improves code readability and maintainability. However, this suggestion requires additional implementation details for the new methods, which are not provided.
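A hedged sketch of what the extracted methods could look like; the attribute and helper names (network, optimizer, _compute_loss) are assumptions rather than the solver's actual API:

```python
import torch

class SolverSketch:
    def _train_epoch(self, training_dataloader, epoch: int):
        self.network.train()
        for batch in training_dataloader:
            self.optimizer.zero_grad()
            loss = self._compute_loss(batch)
            loss.backward()
            self.optimizer.step()

    def _validate_epoch(self, validation_dataloader, epoch: int):
        self.network.eval()
        with torch.no_grad():  # validation needs no gradients
            for batch in validation_dataloader:
                self._compute_loss(batch)
```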
Simplify dictionary initialization using comprehension
Use dictionary comprehension to simplify the initialization of __DATASETS and __COLLATE_FUNCTIONS.
Why: While dictionary comprehension can make the code more concise, it may also reduce readability for some developers. The improvement is more about code style and maintainability.
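A hypothetical sketch of one such simplification, deriving both lookup tables from a single mapping (the protocol names, placeholder classes, and collate functions below are illustrative only):

```python
from typing import Callable, Dict, Tuple

# Placeholder dataset classes and collate functions for illustration only.
_PROTOCOL_SETUP: Dict[str, Tuple[type, Callable]] = {
    "residue_to_class": (object, lambda batch: batch),
    "sequence_to_value": (object, lambda batch: batch),
}

# One source of truth, two derived lookup tables:
__DATASETS = {protocol: dataset for protocol, (dataset, _) in _PROTOCOL_SETUP.items()}
__COLLATE_FUNCTIONS = {protocol: collate for protocol, (_, collate) in _PROTOCOL_SETUP.items()}
```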