
Metrics calculated with filters end up conflicting with each other #25455

Open
marcoruggine opened this issue Sep 16, 2022 · 12 comments

@marcoruggine
Describe the bug
We are currently introducing Metrics in our environment, and we have discovered that when more than one Metric is applied to a question (e.g. a pivot with two or more Metrics based on the same model), the Metrics conflict with each other and produce wrong results.
From some manual analysis, the conflict appears to arise between the filters applied within each calculated Metric.

Example:
Completed Payments = distinct(payment_id) filtered by is_refunded = false
Refunded Payments = distinct(payment_id) filtered by is_refunded = true

If we add both to a chart/table to see the evolution of both over time, we get no results, because no payment simultaneously satisfies both is_refunded = false and is_refunded = true.
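To illustrate the conflict (a sketch of the SQL shape, not the exact query Metabase generates; table and column names follow the example above):

```sql
-- Both Metrics end up sharing a single WHERE clause,
-- so the two filters contradict each other and no row qualifies.
SELECT
  COUNT(DISTINCT payment_id) AS completed_payments,
  COUNT(DISTINCT payment_id) AS refunded_payments
FROM payments
WHERE is_refunded = false
  AND is_refunded = true; -- contradictory predicates => zero rows
```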

To Reproduce
Steps to reproduce the behavior:

  1. Go to https://metabase.playtomic.io/admin/datamodel/metric/create
  2. Create a basic Metric with a simple filter on is_canceled = true
  3. Create the mirror Metric with a simple filter on is_canceled = false
  4. Add them in a new question without any other filter

Expected behavior
What we would expect is that a filter explicitly applied to a Metric stays within that Metric and is not applied at the whole-query level.

One of the most important points of having Metrics is that they let us calculate, in a single Notebook question, different things that would otherwise require SQL because of these case-when filters.

Screenshots
[Four screenshots attached, timestamped 2022-09-16 12:53–12:54]

Information about your Metabase Installation:

v0.44.3

Severity
This is actually blocking the whole process of introducing Metabase to the company.
We need a single source of truth, and Metrics are what we need to avoid endless repetition (and mistakes) by people re-adding the same Metric to a dashboard or analysis.

@marcoruggine
Author

On further inspection, the drill-down to row level is also not working.
Usually, when you click on a figure and then click 'View these XXX', it opens the underlying rows of the table, keeping the overall filters applied in the question.
Doing this with a calculated Metric does not give the same experience: the filters applied to the Metric are not applied to the row-level view, making it impossible for the end user to understand how a figure is calculated.

@flamber
Contributor

flamber commented Sep 16, 2022

@marcoruggine There's a separate issue for the drill-through #6010

@rcronin

rcronin commented Sep 29, 2022

@marcoruggine - we're running into the same issue. The issue @flamber mentioned has been fixed and merged to master.

Upon inspecting the generated SQL, it seems the filters specified on the Metric are simply appended to the WHERE clause of the query. I was a little surprised it would do that. I had assumed it would generate some sort of CASE statement around the column being aggregated and then, when drilling, actually apply the filter.
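For reference, the conditional-aggregation shape described above would keep each Metric's filter inside its own aggregate (a sketch using the column names from the original example; this is not what Metabase currently generates):

```sql
-- Each filter moves into a CASE expression inside its aggregate,
-- so the query-level WHERE clause no longer has to hold both predicates.
SELECT
  COUNT(DISTINCT CASE WHEN is_refunded = false THEN payment_id END) AS completed_payments,
  COUNT(DISTINCT CASE WHEN is_refunded = true  THEN payment_id END) AS refunded_payments
FROM payments;
```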

@flamber - is this limitation known?

@flamber
Contributor

flamber commented Sep 29, 2022

@rcronin The idea was to completely revamp Metrics https://www.metabase.com/roadmap - I cannot give a timeline.

@ccaldicott1

Really could use this fix. Can someone please share a timeline?

@paoliniluis
Contributor

@ccaldicott1 I would suggest creating the metrics in a model or in a view in the database.
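One way to set that up is a database view that pre-computes both metrics, so questions built on it need no conflicting filters (a hypothetical sketch; the payments table and created_at column are assumptions based on the example at the top of the thread):

```sql
-- Hypothetical view exposing both metrics per day; adjust names to your schema.
CREATE VIEW payment_metrics AS
SELECT
  CAST(created_at AS DATE) AS payment_day, -- created_at is an assumed timestamp column
  COUNT(DISTINCT CASE WHEN is_refunded = false THEN payment_id END) AS completed_payments,
  COUNT(DISTINCT CASE WHEN is_refunded = true  THEN payment_id END) AS refunded_payments
FROM payments
GROUP BY CAST(created_at AS DATE);
```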

@lbrdnk
Contributor

lbrdnk commented Nov 9, 2023

Paused as per this comment.

@luizarakaki
Contributor

In the current state, Metrics aren't supposed to be composed or combined. Users shouldn't be able to create Metrics from other Metrics (composition) or add multiple Metrics to the same query (combination).

We run the aggregation pipeline in a single stage, meaning we have only a single WHERE clause, which leads to conflicts.

Given our approach to Metrics and the desire to rework the feature into something much more powerful, the right fix today is to disable Metric combination and composition for end users.

As we work on Metrics v2, supporting these cases will be a requirement.

@lbrdnk
Contributor

lbrdnk commented Dec 13, 2023

Unassigning myself. Context in this Slack thread.

@lbrdnk lbrdnk removed their assignment Dec 13, 2023
@darksciencebase
Contributor

Removing this issue from the board, as there's nothing left to do on the BE, but not closing it since it will exist until #36422 is fixed.

@darksciencebase darksciencebase removed the .Team/QueryProcessor and .Wanted: MLv2 labels Dec 14, 2023
@marcoruggine
Author

Well then, should we expect a new, more powerful version of Metrics?
Have you considered leveraging the dbt Semantic Layer or Cube?

@darksciencebase
Contributor

@marcoruggine Metrics 2.0 is on the public roadmap, yes!
