
fix: use expected label in metrics map #15707

Merged: 3 commits into apache:master on Jul 16, 2021

Conversation

@zhaoyongjie (Member) commented Jul 15, 2021

SUMMARY

Ensure that the keys of the metrics expression hashmap are consistent with the expected labels.

closes: #15693
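
For illustration, a minimal sketch of the mismatch this fixes (not Superset code; the label values are hypothetical): make_sqla_column_compatible aliases a metric with the engine-mutated label but stores the expected label on key, so a map keyed by name never matches a lookup by the expected label.

import sqlalchemy as sa

label_expected = "SUM(value)"        # label the query expects to reference
label_mutated = "SUM_value__c0ffee"  # engine-compatible alias (e.g. BigQuery)

sqla_col = sa.func.sum(sa.column("value")).label(label_mutated)
sqla_col.key = label_expected  # what make_sqla_column_compatible does

metrics_exprs = [sqla_col]

# Before the fix: the map is keyed by the mutated name, so a lookup
# by the expected label misses and ORDER BY handling breaks.
by_name = {m.name: m for m in metrics_exprs}
assert label_expected not in by_name

# After the fix: keying by .key makes the lookup consistent.
by_key = {m.key: m for m in metrics_exprs}
assert label_expected in by_key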

BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF

Before: [screenshot]

After: [screenshot]

TESTING INSTRUCTIONS

ADDITIONAL INFORMATION

@@ -1086,7 +1086,7 @@ def get_sqla_query( # pylint: disable=too-many-arguments,too-many-locals,too-ma

     # To ensure correct handling of the ORDER BY labeling we need to reference the
     # metric instance if defined in the SELECT clause.
-    metrics_exprs_by_label = {m.name: m for m in metrics_exprs}
+    metrics_exprs_by_label = {m.key: m for m in metrics_exprs}
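
For context, a hedged sketch of the lookup this map serves (simplified; resolve_orderby and its wiring are illustrative names, not the Superset codebase's): ORDER BY expressions are matched back to selected metrics by their expected label, so the generated SQL reuses the metric's alias instead of re-emitting the expression.

from sqlalchemy.sql.elements import Label

def resolve_orderby(col, metrics_exprs_by_label):
    # If the ORDER BY column is a labeled expression whose expected
    # label matches a metric in the SELECT clause, reuse that metric
    # instance so the ORDER BY references the alias.
    if isinstance(col, Label) and col.key in metrics_exprs_by_label:
        return metrics_exprs_by_label[col.key]
    return col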
zhaoyongjie (Member, Author) commented on the diff:

refer to:

if db_engine_spec.allows_alias_in_select:
    label = db_engine_spec.make_label_compatible(label_expected)
    sqla_col = sqla_col.label(label)
sqla_col.key = label_expected
return sqla_col
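
For context, a hedged reconstruction of the enclosing helper, make_sqla_column_compatible (treat it as approximate rather than a verbatim copy of the method):

def make_sqla_column_compatible(self, sqla_col, label=None):
    label_expected = label or sqla_col.name
    db_engine_spec = self.db_engine_spec
    if db_engine_spec.allows_alias_in_select:
        # the visible SQL alias may be mutated for engine compatibility
        label = db_engine_spec.make_label_compatible(label_expected)
        sqla_col = sqla_col.label(label)
    # ...while the expected, unmutated label is preserved on .key;
    # this is why the metrics map must be keyed by .key
    sqla_col.key = label_expected
    return sqla_col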

@suddjian (Member) commented Jul 15, 2021:


I'd like to have more documentation in this code. It's a critical piece of Superset that is currently quite cryptic for the uninitiated. Would you mind adding a comment at line 942 describing the purpose of key, and also one here explaining why we use the key attribute?

zhaoyongjie (Member, Author) replied:

Thanks for the review! I have added comments here. Note that BigQuery's db_engine_spec mutates the metric label:

def _mutate_label(label: str) -> str:
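
For reference, an illustrative approximation of BigQuery's label mutation (the real implementation lives in superset/db_engine_specs/bigquery.py; the details below are a sketch, not the authoritative code). BigQuery aliases must start with a letter or underscore and contain only alphanumerics and underscores, so incompatible labels are rewritten and hash-suffixed to avoid collisions:

import hashlib
import re

def _mutate_label(label: str) -> str:
    # hash of the original label, used to disambiguate rewritten labels
    label_hashed = "_" + hashlib.md5(label.encode("utf-8")).hexdigest()
    # labels may not start with a digit; prefix with an underscore
    mutated = "_" + label if re.match(r"^\d", label) else label
    # replace characters BigQuery disallows with underscores
    mutated = re.sub(r"[^\w]+", "_", mutated)
    if mutated != label:
        # only rewritten labels get the hash suffix
        mutated += label_hashed
    return mutated

# e.g. _mutate_label("SUM(value)") -> "SUM_value__<md5>"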

@codecov (bot) commented Jul 15, 2021:

Codecov Report

Merging #15707 (15547e0) into master (b489cff) will increase coverage by 0.08%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master   #15707      +/-   ##
==========================================
+ Coverage   76.82%   76.91%   +0.08%     
==========================================
  Files         983      983              
  Lines       51585    51600      +15     
  Branches     6974     6974              
==========================================
+ Hits        39632    39686      +54     
+ Misses      11729    11690      -39     
  Partials      224      224              
Flag       Coverage Δ
hive       81.28% <100.00%> (+<0.01%) ⬆️
mysql      81.54% <100.00%> (+<0.01%) ⬆️
postgres   81.56% <100.00%> (+<0.01%) ⬆️
presto     81.28% <100.00%> (?)
python     82.10% <100.00%> (+0.15%) ⬆️
sqlite     81.18% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                        Coverage Δ
superset/connectors/sqla/models.py    89.90% <100.00%> (+1.40%) ⬆️
superset/utils/mock_data.py           24.81% <0.00%> (-0.37%) ⬇️
superset/reports/api.py               87.78% <0.00%> (ø)
superset/models/reports.py            100.00% <0.00%> (ø)
superset/config.py                    91.24% <0.00%> (+0.02%) ⬆️
superset/reports/schemas.py           98.71% <0.00%> (+0.06%) ⬆️
superset/models/core.py               90.05% <0.00%> (+0.26%) ⬆️
superset/utils/webdriver.py           79.48% <0.00%> (+0.82%) ⬆️
superset/db_engine_specs/presto.py    90.31% <0.00%> (+5.89%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update b489cff...15547e0.

@suddjian (Member) commented:

/testenv up

@github-actions (Contributor) commented:

@suddjian Ephemeral environment spinning up at http://54.149.175.38:8080. Credentials are admin/admin. Please allow several minutes for bootstrapping and startup.

@suddjian (Member) left a review:

Tested, and investigated the code. All looks good as far as I can tell, but my experience in this part of the codebase is limited.

@zhaoyongjie merged commit 0721f54 into apache:master on Jul 16, 2021
@github-actions (Contributor) commented:

Ephemeral environment shutdown and build artifacts deleted.

@rosemarie-chiu (Contributor) commented:

🏷 2021.27

@junlincc added the labels rush! (Requires immediate attention) and #bug:blocking! (Blocking issues with high priority) on Jul 16, 2021
henryyeh pushed a commit to preset-io/superset that referenced this pull request Jul 19, 2021
* fix: use expected label in metrics map

* added comments

* fix type

(cherry picked from commit 0721f54)
cccs-RyanS pushed a commit to CybercentreCanada/superset that referenced this pull request Dec 17, 2021
* fix: use expected label in metrics map

* added comments

* fix type
QAlexBall pushed a commit to QAlexBall/superset that referenced this pull request Dec 29, 2021
* fix: use expected label in metrics map

* added comments

* fix type
cccs-rc pushed a commit to CybercentreCanada/superset that referenced this pull request Mar 6, 2024
* fix: use expected label in metrics map

* added comments

* fix type
@mistercrunch added the labels 🏷️ bot (a label used by `supersetbot` to track which PRs were auto-tagged with release labels) and 🚢 1.3.0 on Mar 12, 2024
Labels
🏷️ bot (a label used by `supersetbot` to track which PRs were auto-tagged with release labels), #bug:blocking! (Blocking issues with high priority), preset:2021.27, rush! (Requires immediate attention), 🚢 1.3.0
Projects: none yet
Development

Successfully merging this pull request may close these issues.

[chart] Chart that's using a virtual dataset with an orderby breaks
5 participants