Add LSTMED + parametrize all variants of DAGMM #83
Conversation
# Conflicts:
#   src/evaluation/evaluator.py
# Conflicts:
#   main.py
#   src/evaluation/evaluator.py
main.py
Outdated
@@ -17,8 +17,8 @@ def main():
 def run_pipeline():
     if os.environ.get("CIRCLECI", False):
         datasets = [SyntheticDataGenerator.extreme_1()]
-        detectors = [RecurrentEBM(num_epochs=2), LSTMAD(num_epochs=5), Donut(max_epoch=5), DAGMM(),
-                     LSTM_Enc_Dec(epochs=2)]
+        detectors = [DAGMM(sequence_length=15, autoencoder_type=LSTMAutoEncoder), Donut(max_epoch=5),
Do you mind adding the three remaining sequence_length/autoencoder_type combinations of DAGMM to the detectors as well?
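A sketch of what the CI detectors list could look like with all four combinations (hypothetical; `autoencoder_type` and `LSTMAutoEncoder` are taken from the diff above, while `NNAutoEncoder` is assumed to be importable from src.algorithms as well):

```python
# Hypothetical CI configuration covering all four DAGMM variants
detectors = [DAGMM(sequence_length=1, autoencoder_type=NNAutoEncoder),
             DAGMM(sequence_length=15, autoencoder_type=NNAutoEncoder),
             DAGMM(sequence_length=1, autoencoder_type=LSTMAutoEncoder),
             DAGMM(sequence_length=15, autoencoder_type=LSTMAutoEncoder),
             Donut(max_epoch=5)]
```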
src/algorithms/autoencoder.py
Outdated
enc = self._encoder(x)
dec = self._decoder(enc)

return dec, enc
Maybe swap the return values; the current order might lead to confusion.
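A minimal sketch of the suggested swap, assuming the snippet above is the body of the autoencoder's forward method (an assumption, not confirmed by the diff):

```python
def forward(self, x):
    enc = self._encoder(x)    # latent representation
    dec = self._decoder(enc)  # reconstruction
    # Return the encoding first, then the reconstruction, to avoid confusion
    return enc, dec
```

Every caller that currently unpacks `dec, enc = model(x)` would of course have to be updated as well.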
df_evaluation = pd.DataFrame(
    columns=["dataset", "algorithm", "accuracy", "precision", "recall", "F1-score", "F0.1-score"])
for _ in range(5):
    evaluator.evaluate()
    df = evaluator.benchmarks()
    df_evaluation = df_evaluation.append(df)
print(df_evaluation.to_string())
Delete the print line or use a logger instead.
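A minimal sketch of the logging alternative, assuming Python's standard logging module and no particular logger setup in this repo:

```python
import logging

logger = logging.getLogger(__name__)

# ... after collecting the benchmark results ...
logger.debug("Evaluation results:\n%s", df_evaluation.to_string())
```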
src/algorithms/lstm_enc_dec_axl.py
Outdated
self.lstmed.batch_size = prediction_batch_size  # (!)
self.lstmed.eval()
errors = [np.nan]*(self.sequence_length-1)
Would have been useful to see a comment here - I was wondering about its meaning, but I've got it now ;)
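For reference, a possible inline comment (this is my reading of the intent: the first sequence_length-1 points have no complete window ending on them, so they cannot be scored):

```python
# The first (sequence_length - 1) points cannot be scored because no
# complete window ends there yet, so pad the error list with NaN.
errors = [np.nan] * (self.sequence_length - 1)
```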
Interesting work ;)
main.py
Outdated
@@ -3,7 +3,7 @@
 import numpy as np
 import pandas as pd

-from src.algorithms import DAGMM, Donut, RecurrentEBM, LSTMAD, LSTM_Enc_Dec
+from src.algorithms import DAGMM, Donut, LSTM_Enc_Dec, LSTMAD, LSTMAutoEncoder, LSTMED, RecurrentEBM
I guess you know what I think about this ^^
Does it outperform the old LSTM in every way? If so, we can drop the old one, since optimizing it won't be worth it.
Please re-review
# Conflicts:
#   src/algorithms/lstm_enc_dec_axl.py
… covariance matrices
…mble
# Conflicts:
#   main.py
DAGMM is now parametrized by a sequence_length parameter and an AutoEncoder class which can be passed via param. E.g.:

DAGMM(sequence_length=1, autoencoder=NNAutoEncoder) => Original DAGMM
DAGMM(sequence_length=15, autoencoder=NNAutoEncoder) => DAGMM w/ window
DAGMM(sequence_length=1, autoencoder=LSTMAutoEncoder) => LSTM-DAGMM w/o window
DAGMM(sequence_length=15, autoencoder=LSTMAutoEncoder) => LSTM-DAGMM w/ window
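A sketch of the four variants side by side (a minimal illustration of the mapping above, not taken from the repo; note the keyword appears as autoencoder in this description but as autoencoder_type in the main.py diff, and NNAutoEncoder is assumed to be exported from src.algorithms):

```python
from src.algorithms import DAGMM, LSTMAutoEncoder, NNAutoEncoder  # NNAutoEncoder export assumed

# Map variant names from the PR description to their constructor calls
variants = {
    "DAGMM (original)":      DAGMM(sequence_length=1,  autoencoder_type=NNAutoEncoder),
    "DAGMM w/ window":       DAGMM(sequence_length=15, autoencoder_type=NNAutoEncoder),
    "LSTM-DAGMM w/o window": DAGMM(sequence_length=1,  autoencoder_type=LSTMAutoEncoder),
    "LSTM-DAGMM w/ window":  DAGMM(sequence_length=15, autoencoder_type=LSTMAutoEncoder),
}
```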