
Some results inconsistent with the original paper of other methods #6

Closed
Conearth opened this issue May 11, 2022 · 4 comments

@Conearth

Hi! Thanks for your attention.
In your paper, I found some results inconsistent with the original papers of other methods, like "OmniAnomaly" and "InterFusion". Is there something different in the experimental details?

@wuhaixu2016
Collaborator

wuhaixu2016 commented May 12, 2022

Hi, this mismatch is due to an inconsistency in the datasets. For example, on SMD we adopt the full dataset, while other methods only use part of it.
You can obtain the benchmark we used from the link in this repo.

@Conearth
Author

Yeah, that makes sense. I also noticed that other methods like 'InterFusion' seem to train and evaluate once per single entity (for SMD, i.e., machine-x-x), while your experiments train and evaluate the model on the whole dataset. Is this the cause of the inconsistency with the original 'InterFusion' and 'OmniAnomaly' papers? Thank you very much.

@wuhaixu2016
Collaborator

Yes, you can check the data source for the data splitting strategy.
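For anyone landing here, the two evaluation protocols discussed above can be sketched in a few lines. This is a toy illustration with synthetic data and a hypothetical threshold "model" (`make_entity`, `fit_threshold`, and the entity names are made up for the example, not the repo's actual code); it only shows why per-entity averaging and pooled evaluation can yield different numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for SMD: a few entities (machine-x-x), each with its
# own train/test series and point-wise anomaly labels. Real loaders differ.
def make_entity(n=200):
    train = rng.normal(size=n)
    test = rng.normal(size=n)
    labels = (rng.random(n) < 0.05).astype(int)
    return train, test, labels

entities = {f"machine-1-{i}": make_entity() for i in range(1, 4)}

def f1(pred, labels):
    tp = int(((pred == 1) & (labels == 1)).sum())
    fp = int(((pred == 1) & (labels == 0)).sum())
    fn = int(((pred == 0) & (labels == 1)).sum())
    return 2 * tp / max(2 * tp + fp + fn, 1)

def fit_threshold(train):
    # Toy "model": flag points far from the training mean.
    mu, sigma = train.mean(), train.std()
    return lambda x: (np.abs(x - mu) > 3 * sigma).astype(int)

# Protocol A (per-entity, as in the OmniAnomaly / InterFusion papers):
# one model per entity, metrics computed per entity and then averaged.
per_entity_f1 = []
for name, (train, test, labels) in entities.items():
    detect = fit_threshold(train)
    per_entity_f1.append(f1(detect(test), labels))
print("per-entity mean F1:", np.mean(per_entity_f1))

# Protocol B (whole-dataset, as described in this thread): one model on the
# concatenated training data, evaluated on the concatenated test data.
all_train = np.concatenate([e[0] for e in entities.values()])
all_test = np.concatenate([e[1] for e in entities.values()])
all_labels = np.concatenate([e[2] for e in entities.values()])
detect = fit_threshold(all_train)
print("pooled F1:", f1(detect(all_test), all_labels))
```

Because F1 is not additive across entities, the average of per-entity scores generally differs from the score on the pooled data, which is one reason numbers from the two protocols are not directly comparable.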

@Conearth
Author

Many thx.
