Pull request for PyTorch implementation #8
Conversation
MaikBastian
left a comment
Thanks for the PR! I have just added some comments. Nothing major, just some little cleanups and suggestions. Please let me know what you think! :)
I guess we can remove the two benchmark scripts again. I will add the arguments you used to the README, so that the training can be replicated in the future.
cipherTypeDetection/eval.py
Outdated
    if architecture == "FFNN":
        results.append(model.evaluate(statistics, labels, batch_size=args.batch_size, verbose=1))
    if architecture in ("CNN", "LSTM", "Transformer"):
        if hasattr(model, "evaluate"):  # Keras model
I would prefer to be more explicit when checking which library to use. Maybe we could add a parameter to the function, i.e. backend, an enum with two values: keras and pytorch. This parameter can then be provided when the function is called, and either the FFNN or the LSTM is evaluated.
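A minimal sketch of what the suggested backend enum could look like (the names `Backend` and `evaluate_model` are illustrative, not actual code from the PR):

```python
from enum import Enum


class Backend(Enum):
    KERAS = "keras"
    PYTORCH = "pytorch"


def evaluate_model(model, statistics, labels, batch_size, backend):
    # Dispatch explicitly on the requested backend instead of
    # duck-typing via hasattr(model, "evaluate").
    if backend is Backend.KERAS:
        return model.evaluate(statistics, labels,
                              batch_size=batch_size, verbose=1)
    if backend is Backend.PYTORCH:
        # A custom PyTorch evaluation loop would go here.
        raise NotImplementedError("PyTorch evaluation loop")
    raise ValueError(f"Unknown backend: {backend}")
```

The caller then states the backend explicitly, e.g. `evaluate_model(model, stats, labels, 32, Backend.PYTORCH)`.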
cipherTypeDetection/eval.py
Outdated
    stats_np = statistics.numpy()
    """
    # Normalization: only on the first batch
Here and everywhere else: Please translate comments from Italian to English. :)
cipherTypeDetection/eval.py
Outdated
    model = None

    if architecture in ("FFNN", "CNN", "LSTM", "Transformer"):
        if architecture == "FFNN" and model_path.endswith(".pth"):
A parameter with the enum Backend could be useful here, again.
cipherTypeDetection/eval.py
Outdated
    if architecture in ("FFNN", "CNN", "LSTM", "Transformer"):
        if architecture == "FFNN" and model_path.endswith(".pth"):
            from cipherTypeDetection.train import TorchFFNN
Please move this line to the top of the file, to the other imports.
cipherTypeDetection/train.py
Outdated
    for device in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(device, True)

    class FFNN(nn.Module):
We could move both modules into their own (or a combined) file.
cipherTypeDetection/train.py
Outdated
    def train_torch_ffnn(model, args, train_ds):
This function could be combined with train_torch_lstm. Maybe just as train_torch? If I see it correctly, there is just a small difference, where the inputs are converted into a different type in the feature learning case. Maybe we could add a parameter approach, which would be an enum with the two cases feature_learning or feature_engineering?
If you have better ideas for my suggested names, please feel free to change them! :)
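Roughly what such a combined function could look like; `Approach`, `train_torch`, and the dtype handling are assumptions about the two code paths, not the PR's actual code:

```python
from enum import Enum

import torch
import torch.nn as nn


class Approach(Enum):
    FEATURE_LEARNING = "feature_learning"
    FEATURE_ENGINEERING = "feature_engineering"


def train_torch(model, optimizer, train_ds, approach, epochs=1):
    # One training loop for both models; only the input conversion
    # differs between the two approaches.
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for inputs, labels in train_ds:
            if approach is Approach.FEATURE_LEARNING:
                # raw ciphertext as integer token ids (e.g. for the LSTM)
                inputs = inputs.long()
            else:
                # precomputed float statistics (e.g. for the FFNN)
                inputs = inputs.float()
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels.long())
            loss.backward()
            optimizer.step()
```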
    model.train()
    """
    # --- Early stopping check ---
Could we add the early stopping again, but with an additional boolean parameter on the function to enable / disable it more easily?
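One possible shape for this, framework-agnostic and with illustrative names (`run_epoch` and `compute_val_loss` stand in for the existing training and validation code):

```python
def train_with_early_stopping(model, run_epoch, compute_val_loss,
                              epochs, early_stopping=False, patience=3):
    # Train for `epochs` epochs; if `early_stopping` is enabled, stop once
    # the validation loss has not improved for `patience` consecutive epochs.
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(epochs):
        run_epoch(model)
        if not early_stopping:
            continue
        val_loss = compute_val_loss(model)
        if val_loss < best_loss:
            best_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f"Early stopping after epoch {epoch + 1}")
                break
```

With `early_stopping=False` the function behaves exactly like a plain epoch loop, so the flag makes it easy to switch the behaviour on and off.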
cipherTypeDetection/train.py
Outdated
    def predict_torch_ffnn(model, test_ds, args):
As in the training case, we should be able to combine both prediction functions.
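A sketch of a merged prediction function, again with illustrative names (`predict_torch` and the `input_dtype` parameter are assumptions, not the PR's code):

```python
import torch


def predict_torch(model, test_ds, input_dtype=torch.float32):
    # One prediction loop for both architectures; the caller picks the
    # input dtype (torch.long for the feature-learning case,
    # torch.float32 for the feature-engineering case).
    model.eval()
    predictions = []
    with torch.no_grad():
        for inputs, _labels in test_ds:
            outputs = model(inputs.to(input_dtype))
            predictions.append(outputs.argmax(dim=1))
    return torch.cat(predictions)
```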
cipherTypeDetection/train.py
Outdated
    model = create_model(architecture, extend_model, output_layer_size, max_train_len)
    if architecture in ("FFNN", "CNN", "LSTM", "Transformer") and extend_model is None:
        model.summary()
    if hasattr(model, "summary"):
Here a parameter backend could be useful, again. (See also a few times below.)
Add FFNN and LSTM PyTorch implementation