Hi! Thanks for your attention.
In your paper, I found some results inconsistent with the original papers of other methods, such as "OmniAnomaly" and "InterFusion". Is there something different in the experimental details?
Hi, this mismatch is due to an inconsistency in the dataset. For example, on SMD we adopt the full dataset, while other methods use only part of it.
You can obtain the benchmark we used from the link in this repo.
Yeah, that makes sense. I also noticed that other methods like InterFusion seem to train and evaluate on a single entity at a time (for SMD, i.e. machine-x-x), while your experiments train and evaluate the model on the whole dataset. Is this the cause of the inconsistency with the original InterFusion and OmniAnomaly papers? Thank you very much.
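For anyone else hitting this: the two setups above can give noticeably different scores even with the same anomaly scorer, because per-entity evaluation picks a threshold per machine while pooled evaluation picks one global threshold. Below is a minimal sketch of the difference using synthetic data; the entity names, the placeholder `score` function, and the quantile threshold are all illustrative assumptions, not the actual SMD loader or either paper's detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake per-entity data standing in for SMD machines: (series, binary labels).
# Shapes and the 5% anomaly rate are arbitrary illustrative choices.
entities = {
    f"machine-1-{i}": (
        rng.normal(size=(1000, 38)),
        (rng.random(1000) < 0.05).astype(int),
    )
    for i in range(1, 4)
}

def score(series: np.ndarray) -> np.ndarray:
    """Placeholder anomaly score: mean absolute deviation per timestep."""
    return np.abs(series - series.mean(axis=0)).mean(axis=1)

def f1(pred: np.ndarray, label: np.ndarray) -> float:
    """Point-wise F1 without any point adjustment, for simplicity."""
    tp = int(np.sum((pred == 1) & (label == 1)))
    fp = int(np.sum((pred == 1) & (label == 0)))
    fn = int(np.sum((pred == 0) & (label == 1)))
    if tp == 0:
        return 0.0
    p, r = tp / (tp + fp), tp / (tp + fn)
    return 2 * p * r / (p + r)

# Protocol A (per-entity, as in the original InterFusion/OmniAnomaly setups):
# threshold and evaluate each entity separately, then average the scores.
per_entity_f1 = []
for name, (x, y) in entities.items():
    s = score(x)
    pred = (s > np.quantile(s, 0.95)).astype(int)  # per-entity threshold
    per_entity_f1.append(f1(pred, y))
avg_f1 = float(np.mean(per_entity_f1))

# Protocol B (pooled, as described for this repo's setup):
# concatenate all entities and choose a single global threshold.
all_s = np.concatenate([score(x) for x, _ in entities.values()])
all_y = np.concatenate([y for _, y in entities.values()])
pooled_pred = (all_s > np.quantile(all_s, 0.95)).astype(int)
pooled_f1 = f1(pooled_pred, all_y)

print(f"per-entity avg F1: {avg_f1:.3f}, pooled F1: {pooled_f1:.3f}")
```

With real data the gap is usually larger than on this synthetic example, since different machines have different score scales, so a single global threshold and per-machine thresholds select different points.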