
question about your paper's results #15

Closed
ResearcherLifeng opened this issue Jan 21, 2020 · 2 comments

Comments

@ResearcherLifeng

Hi, dear authors,

I have a question about your reported results.

I tested your OmniAnomaly model on the MSL dataset and got about 89% POT-F1, which is very close to the result reported in your paper.
However, when I set the model's anomaly scores to random numbers drawn from [0, 1], the POT-F1 still reaches above 89.8833%. This is confusing, since these random "anomaly scores" are not produced by the model at all.

I think this is caused by the point-adjust approach mentioned in your paper; a minimal sketch of the effect follows below.
I also evaluated a simple RNN with your code and settings (same data, same evaluation), and its best F1 was likewise above 90%.
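
To make the effect concrete, here is a minimal, self-contained sketch of the point-adjust step as I understand it from the paper. This is my own reimplementation, not the repository's code, and the series length, segment lengths, and anomaly ratio are invented for illustration:

```python
import numpy as np

def point_adjust(pred, label):
    # Point-adjust rule: if any point inside a true anomaly segment is
    # flagged, count the entire segment as detected.
    pred = pred.copy()
    start = None
    for i, l in enumerate(label):
        if l == 1 and start is None:
            start = i
        if (l == 0 or i == len(label) - 1) and start is not None:
            end = i if l == 0 else i + 1
            if pred[start:end].any():
                pred[start:end] = 1
            start = None
    return pred

def f1_score(pred, label):
    tp = np.sum((pred == 1) & (label == 1))
    fp = np.sum((pred == 1) & (label == 0))
    fn = np.sum((pred == 0) & (label == 1))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

rng = np.random.default_rng(0)
n = 50_000
label = np.zeros(n, dtype=int)
# Invented ground truth: a handful of long anomaly segments, roughly
# mimicking the long contiguous anomalies in the MSL labels.
for s in rng.choice(n - 300, size=15, replace=False):
    label[s:s + 300] = 1

scores = rng.random(n)  # random "anomaly scores" in [0, 1]
# Sweep thresholds and keep the best point-adjusted F1, as the
# evaluation does when reporting best F1.
best = max(
    f1_score(point_adjust((scores > t).astype(int), label), label)
    for t in np.linspace(0.90, 0.999, 50)
)
print(f"best point-adjusted F1 with random scores: {best:.3f}")
```

With long segments, even random scores flag at least one point per segment with high probability, so point-adjust pushes recall toward 1 while precision stays tolerable; on this toy setup the best F1 should come out well above 0.9 regardless of the model.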

Could you help explain this? Many thanks.

@sashastrelnikoff

Hi there, I was just wondering why this question was marked as closed without receiving an answer. Did you happen to figure out whether there was an issue with the POT scores, or if this is the expected behaviour?

@mirmuss

mirmuss commented Apr 9, 2024

+?

