
[Bug Fix] Making timeseries_limit not required for phase 2 #4581

Merged (1 commit) on Mar 13, 2018

Conversation

michellethomas
Contributor

We have an issue with time series group-by queries: if the chart doesn't have a limit, the data is incorrect. The query gets run as a phase 1 query and shows only a single datapoint for each group-by value instead of the full time series data.

It looks like this was added here, but I don't think queries without a limit should automatically be phase 1. I changed this, though I'm not quite sure why it was added originally. Is this needed for deck.gl viz types?

I tested this on time series phase 2 queries and on phase 1 queries without a limit, and verified the bar and pie chart visualizations.

Fixes #4208
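The bug described above comes down to how the Druid connector chooses between a one-phase and a two-phase groupBy query. A minimal sketch of the corrected dispatch, with hypothetical function and argument names (the actual logic lives in `superset/connectors/druid/models.py`, and this is an illustration, not the real implementation):

```python
# Illustrative sketch only: names are hypothetical, not Superset's API.
def choose_query_phase(groupby, is_timeseries, timeseries_limit):
    """Decide which kind of Druid query to run for a chart."""
    if not groupby:
        # No group-by dimensions: a plain timeseries query suffices.
        return "timeseries"
    if not is_timeseries:
        # Non-timeseries groupBy (e.g. a pie chart): one phase is enough.
        return "phase-1"
    # Before this fix, a missing timeseries_limit also forced phase-1,
    # which collapsed each group to a single datapoint. After the fix,
    # timeseries group-bys always take the two-phase path.
    return "phase-2"

print(choose_query_phase(["country"], True, None))  # phase-2
```

The point of the fix is the last branch: `timeseries_limit` being unset no longer short-circuits a timeseries group-by into a single-phase query.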

@mistercrunch @john-bodley

@codecov-io

Codecov Report

Merging #4581 into master will increase coverage by <.01%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #4581      +/-   ##
==========================================
+ Coverage   71.17%   71.17%   +<.01%     
==========================================
  Files         187      187              
  Lines       14809    14810       +1     
  Branches     1085     1085              
==========================================
+ Hits        10540    10541       +1     
  Misses       4266     4266              
  Partials        3        3
Impacted Files                          Coverage Δ
superset/connectors/druid/models.py     76.39% <100%> (+0.03%) ⬆️


Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@john-bodley
Member

@mistercrunch would you mind reviewing this PR? It seems like you have the most context here.

@mistercrunch mistercrunch merged commit 1d0ec9f into apache:master Mar 13, 2018
mistercrunch pushed a commit to lyft/incubator-superset that referenced this pull request Mar 13, 2018
mistercrunch pushed a commit that referenced this pull request Mar 13, 2018
john-bodley pushed a commit to john-bodley/superset that referenced this pull request Mar 13, 2018
@michellethomas michellethomas changed the title Making timeseries_limit not required for phase 2 [Bug Fix] Making timeseries_limit not required for phase 2 Mar 19, 2018
hughhhh pushed a commit to lyft/incubator-superset that referenced this pull request Apr 1, 2018
* Cherry pick apache#4581

* Add flask-compress cherry

* Add shortner fix

* Add Return __time in Druid scan apache#4504

* Picking cherry Fixing regression from apache#4500 (apache#4549)

* [bugfix] SQL Lab 'MySQL has gone away'

It appears the 'MySQL has gone away' is triggered by the line of code
I wrapped in a try block here.

This is a temporary fix, there will be another PR shortly getting to the
bottom of this.

Related:
https://github.com/lyft/druidstream/issues/40
michellethomas added a commit to michellethomas/panoramix that referenced this pull request May 24, 2018
wenchma pushed a commit to wenchma/incubator-superset that referenced this pull request Nov 16, 2018
@mistercrunch mistercrunch added 🍒 0.23.3 🏷️ bot A label used by `supersetbot` to keep track of which PRs were auto-tagged with release labels 🚢 0.24.0 labels Feb 27, 2024
cccs-rc pushed a commit to CybercentreCanada/superset that referenced this pull request Mar 6, 2024
Development

Successfully merging this pull request may close these issues.

Druid run_query executes phase 1 query with no limit
4 participants