
[Q] Allow error metrics to compare models without anomaly score? #348

breznak opened this issue Jul 10, 2019 · 0 comments

Comments

Projects
None yet
1 participant
@breznak
Copy link
Member

commented Jul 10, 2019

  • on a "strategic" level:
    • can we include metrics that compare models by something other than the anomaly score, i.e. compare by an error metric?
  • implementation:
    • I'd like to add MSE and R2 metrics
    • (optional) extend the API to require algorithms to also provide a prediction (apart from anomalyScore), so we can compute these metrics for all algorithms: E(current, predicted) -> Real (see the sketch below)
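
A minimal sketch of what this could look like. The function and class names (`mse`, `r2`, `PredictingDetector`, `handleRecord`, `predict`) are illustrative assumptions, not the repo's current interface:

```python
import numpy as np

def mse(actual, predicted):
    """Mean squared error of the model's T+1 predictions against the observed values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean((actual - predicted) ** 2))

def r2(actual, predicted):
    """Coefficient of determination (R^2); 1.0 means a perfect prediction."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical API extension: a detector optionally returns a prediction for the
# next timestep in addition to its anomaly score, so E(current, predicted) can
# be computed for any algorithm.
class PredictingDetector:
    def handleRecord(self, inputData):
        """Return the anomaly score for this record (assumed existing behaviour)."""
        raise NotImplementedError

    def predict(self, inputData):
        """Proposed addition: return the predicted value for T+1."""
        raise NotImplementedError
```

If scikit-learn is already a dependency, `sklearn.metrics.mean_squared_error` and `sklearn.metrics.r2_score` could be used instead of hand-rolled implementations.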

Justification:

  • some (most?) time-series algorithms provide a prediction for T+1, but computing an anomaly score is sometimes impossible/difficult. See #347
  • papers more typically report these error scores for time series than an anomaly metric, so this repo could easily serve as a benchmark/experimentation platform for different algorithms & datasets.
  • interesting hypothesis: is the anomaly score really the ideal metric for time series?
  • a relatively simple change would allow new comparisons & functionality