question about your paper's results #15
Hi there, I was just wondering why this question was marked as closed without receiving an answer. Did you happen to figure out whether there was an issue with the POT scores, or is this the expected behaviour?
Hi, dear authors,
I have a question about your reported results.
I tested your OmniAnomaly model on the MSL dataset and obtained a POT-F1 of 89%, which is close to the result reported in your paper.
However, when I set the model's anomaly scores to random numbers drawn from [0, 1], the POT-F1 reached above 89.8833%. This is confusing, since these random "anomaly scores" are not produced by the model at all.
I believe this is caused by the point-adjust approach mentioned in your paper.
I also evaluated a simple RNN with your code and settings (same data, same evaluation), and its best F1 was also above 90%.
Could you help explain this? Many thanks.
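For reference, here is a minimal sketch of the point-adjust effect I am describing. The segment lengths and the 1% random flag rate are hypothetical choices of mine, not the actual MSL labels or your evaluation code; the point is only that once any random hit inside a long anomaly segment counts the whole segment as detected, random scores can yield a very high adjusted F1:

```python
import numpy as np

def point_adjust(pred, label):
    """If any point in a true anomaly segment is flagged,
    mark the entire segment as detected (point-adjust)."""
    pred = pred.copy()
    in_seg, start = False, 0
    for i in range(len(label)):
        if label[i] == 1 and not in_seg:
            in_seg, start = True, i
        if in_seg and (label[i] == 0 or i == len(label) - 1):
            end = i if label[i] == 0 else i + 1
            if pred[start:end].any():
                pred[start:end] = 1
            in_seg = False
    return pred

def f1(pred, label):
    """Plain point-wise F1 score."""
    tp = int(((pred == 1) & (label == 1)).sum())
    fp = int(((pred == 1) & (label == 0)).sum())
    fn = int(((pred == 0) & (label == 1)).sum())
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

rng = np.random.default_rng(0)
n = 100_000
label = np.zeros(n, dtype=int)
# hypothetical labeling: ten anomaly segments of 1,000 points each
for s in range(0, n, 10_000):
    label[s:s + 1_000] = 1

scores = rng.random(n)               # "anomaly scores" drawn uniformly at random
pred = (scores > 0.99).astype(int)   # flag roughly 1% of points

print(f"raw F1:            {f1(pred, label):.3f}")
print(f"point-adjusted F1: {f1(point_adjust(pred, label), label):.3f}")
```

With ~1% of points flagged at random, each 1,000-point segment is almost certain to contain at least one hit, so the adjusted recall is near 1 while the raw point-wise F1 stays very low.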