
Ground Truth for Benchmark / Semi-Supervised #583

@LuSchnitt

Description

  • Orion version: 0.2.7
  • Python version: 3.11
  • Operating System: Windows

Question 1:
I want to run a benchmark over some pipelines, but how do I set the ground truth in the input?
In the quickstart, we can use the evaluation function with ground_truth, but how do I use it with the benchmark function?
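To make the question concrete, here is a minimal, self-contained sketch (plain Python, not the Orion API; all names are mine) of what I mean by ground truth: known anomaly intervals that detected intervals are scored against, e.g. point-wise:

```python
def intervals_to_points(intervals, start, end):
    """Expand (start, end) anomaly intervals into a set of integer timestamps."""
    points = set()
    for s, e in intervals:
        points.update(range(max(s, start), min(e, end) + 1))
    return points

def score(detected, ground_truth, start, end):
    """Point-wise precision and recall of detected intervals vs. ground truth."""
    det = intervals_to_points(detected, start, end)
    gt = intervals_to_points(ground_truth, start, end)
    tp = len(det & gt)
    precision = tp / len(det) if det else 0.0
    recall = tp / len(gt) if gt else 0.0
    return precision, recall

# Example: one detected interval partially overlapping the true anomaly.
print(score([(10, 19)], [(15, 24)], 0, 99))  # -> (0.5, 0.5)
```

So the part I am missing is simply where intervals like these are passed in when calling the benchmark function instead of the evaluation function.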

Question 2:

As far as I understand, all of these models work in an unsupervised way: they are all used in a regression-based manner, trying to predict values of the time series from the time series itself. Models like auto-encoders could also work in a semi-supervised way, using some labeled data (anomaly or not) to find a better threshold that separates the distribution of normal data from that of anomalous data.
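To illustrate what I mean by using labels to find a better threshold, a minimal sketch (pure NumPy; the function is my own, not an Orion primitive) that picks the error threshold maximizing F1 on a small labeled set:

```python
import numpy as np

def best_threshold(errors, labels):
    """Pick the error threshold that best separates labeled normal (0) from
    anomalous (1) points, by maximizing F1 over all candidate thresholds."""
    best_t, best_f1 = 0.0, -1.0
    for t in np.unique(errors):
        pred = errors >= t                      # flag everything at/above t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

errors = np.array([0.1, 0.2, 0.15, 0.9, 1.1, 0.25])  # e.g. reconstruction errors
labels = np.array([0, 0, 0, 1, 1, 0])                # known anomaly labels
print(best_threshold(errors, labels))                # -> 0.9
```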

Are all models limited to unsupervised use? And do I have access to the threshold value you use in https://sintel.dev/Orion/api_reference/api/orion.primitives.timeseries_anomalies.find_anomalies.html?

It is not explained exactly what this threshold is or how it is computed. Can you explain, or point me to a reference where this is stated?
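For reference, my current reading (which may well be wrong, please correct me) is that the threshold follows the dynamic error threshold idea of Hundman et al. (2018): epsilon = mean(e) + z * std(e), with z chosen so that pruning the errors above epsilon gives the largest drop in mean and std per pruned point. A simplified sketch of that idea, not the actual Orion implementation:

```python
import numpy as np

def find_threshold(errors, z_range=np.arange(2.0, 10.0, 0.5)):
    """Simplified dynamic error threshold (after Hundman et al. 2018):
    epsilon = mean + z * std, choosing the z that maximizes the drop in
    mean and std after pruning errors above epsilon, per pruned point."""
    mean, std = errors.mean(), errors.std()
    if std == 0:
        return mean                      # degenerate case: constant errors
    best_eps, best_score = mean + z_range[0] * std, -np.inf
    for z in z_range:
        eps = mean + z * std
        below = errors[errors <= eps]    # errors kept after pruning
        pruned = int((errors > eps).sum())
        if pruned == 0 or len(below) == 0:
            continue
        delta_mean = mean - below.mean()
        delta_std = std - below.std()
        score = (delta_mean / mean + delta_std / std) / pruned
        if score > best_score:
            best_eps, best_score = eps, score
    return best_eps

errors = np.array([1.0] * 20 + [10.0])   # 20 small errors and one spike
print(find_threshold(errors))            # epsilon falls between 1.0 and 10.0
```

If that is roughly what find_anomalies does internally, being able to read out (or override) the chosen epsilon would answer my question.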

Best, Lukas
