
Merge integration to master version 0.9.7 #790

Merged: 50 commits merged into master from integration on Jun 15, 2020

Conversation

broneill
Contributor

No description provided.

broneill and others added 30 commits May 1, 2020 08:07
…-query (#741)

The QuerySession object can now be used to store per-query lock state, trace information, etc.
Preparing to guard block memory reclaims with shard-level locks. This PR introduces only shared/read locks; subsequent PRs will introduce write locks.
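A minimal sketch of the per-query lock bookkeeping this enables (class and method names here are hypothetical, not FiloDB's actual API):

```scala
import java.util.concurrent.locks.{Lock, ReentrantReadWriteLock}
import scala.collection.mutable.ArrayBuffer

// Hypothetical sketch: the session records every shard read lock it takes
// and releases them all when the query finishes.
class QuerySessionSketch {
  private val heldLocks = ArrayBuffer.empty[Lock]

  // Take the shared side of a shard's reclaim lock; while held, block
  // memory for that shard cannot be reclaimed out from under the query.
  def acquireSharedLock(shardLock: ReentrantReadWriteLock): Unit = {
    val readLock = shardLock.readLock()
    readLock.lock()
    heldLocks += readLock
  }

  // Called exactly once when the query completes, on success or failure.
  def close(): Unit = {
    heldLocks.foreach(_.unlock())
    heldLocks.clear()
  }
}
```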
Since raw data retention is finite (say, 3 days), long-lookback queries (> 3d)
cannot be handled by the raw cluster; they are delegated to the downsample cluster.

However, downsample data is populated with a delay: the most recent data is not
available immediately, so window aggregations for the latest time instants may
be incomplete.

This PR omits instants with incomplete window aggregations from query results.
Performant window aggregations spanning the raw and downsample clusters are left
for future work.

The thinking here is that week-over-week (WoW) and month-over-month (MoM) comparisons
can wait 6-12 hours; instant queries can apply an offset of 6 or 12 hours depending on the delay.
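As a sketch of the rule described above (all names and the delay parameter are illustrative, not FiloDB's actual code), an instant is kept only when its whole lookback window lies in the already-downsampled region:

```scala
// Hypothetical sketch: drop instants whose lookback window [t - window, t]
// may still be missing downsampled data.
def completeInstants(instants: Seq[Long],      // step timestamps, millis
                     downsampleDelayMs: Long,  // worst-case publication delay
                     nowMs: Long): Seq[Long] = {
  val latestComplete = nowMs - downsampleDelayMs
  // The window ends at t, so the whole window is complete iff t is old enough.
  instants.filter(_ <= latestComplete)
}
```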
We now estimate data size from the schema, sample size, and time range,
and block queries that exceed the limit. If this occurs, the user guidance
is to reduce the time range or the number of time series queried.
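A rough sketch of such an estimate (field names and the limit are illustrative, not FiloDB's actual planner code):

```scala
// Hypothetical sketch: estimate bytes scanned from schema-derived sample
// size, series count, and time range, and reject over-limit queries.
final case class SizeEstimate(numTimeSeries: Long,
                              samplesPerHourPerSeries: Long,
                              bytesPerSample: Long) { // derived from the schema
  def estimatedBytes(rangeHours: Long): Long =
    numTimeSeries * samplesPerHourPerSeries * rangeHours * bytesPerSample
}

object QuerySizeGuard {
  val LimitBytes: Long = 512L * 1024 * 1024 // example limit only

  def check(est: SizeEstimate, rangeHours: Long): Unit =
    require(est.estimatedBytes(rangeHours) <= LimitBytes,
      "Query would scan too much data; reduce the time range or the number of time series queried")
}
```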
…ies (#765)

Allows us to simulate the case where data is spread over many shards and to explore query issues with that setup.
Merge develop to integration 0.9.7
… downsampled data (#770)

Other fixes:
* Fixed DownsampleMainSpec to use synthetic timestamps, bypassing the
  Query Planner restriction on data TTL
* Fixed the timeout check to reply with QueryError instead of failing the actor
broneill and others added 20 commits June 4, 2020 16:30
For full ODP, part key (PK) bytes are looked up using Lucene. However, the BytesRef length
was not being taken into consideration when forming the on-heap part key
byte array. This fix copies the array, taking the length into consideration.
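The essence of the fix: a Lucene BytesRef is a view (bytes, offset, length) into a possibly shared, over-sized buffer, so the on-heap part key must be copied out using offset and length rather than taking the backing array wholesale. A sketch (the helper name is illustrative):

```scala
import org.apache.lucene.util.BytesRef

// Copy exactly the referenced slice; ref.bytes may be longer than ref.length
// and may not start at index 0.
def partKeyOnHeap(ref: BytesRef): Array[Byte] = {
  val out = new Array[Byte](ref.length)
  System.arraycopy(ref.bytes, ref.offset, out, 0, ref.length)
  out
}
```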
Merge develop to integration 0.9.7 Take 3
fix(memory,core): Add a Latch class and use it instead of StampedLock.
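FiloDB's actual Latch isn't reproduced here, but as a minimal illustration of the shared/exclusive idea it replaces StampedLock with, a latch can be built on intrinsic locks (a sketch only; the wakeup logic is exactly the subtle part the later "wasn't always waking up blocked shared wait…" fix touches):

```scala
// Minimal shared/exclusive latch sketch, not FiloDB's implementation.
final class LatchSketch {
  private var state = 0 // 0 = free, > 0 = shared holders, -1 = exclusive holder

  def acquireShared(): Unit = synchronized {
    while (state < 0) wait()      // block while an exclusive holder exists
    state += 1
  }

  def releaseShared(): Unit = synchronized {
    state -= 1
    if (state == 0) notifyAll()   // wake any waiting exclusive acquirer
  }

  def acquireExclusive(): Unit = synchronized {
    while (state != 0) wait()     // block while any holder exists
    state = -1
  }

  def releaseExclusive(): Unit = synchronized {
    state = 0
    notifyAll()                   // wake all blocked shared and exclusive waiters
  }
}
```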
FiloDB metrics could be lost on shutdown, since the last time interval's
metrics may never be flushed. This commit adds a shutdown hook that drains
Kamon metrics to the reporters.
Added to both the server and the Spark jobs.
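A sketch of such a hook (assuming Kamon 2.x, where stopModules() flushes and stops the configured reporters and returns a Future; the timeout is illustrative):

```scala
import kamon.Kamon
import scala.concurrent.Await
import scala.concurrent.duration._

// Drain pending metrics to reporters before the JVM exits; bound the wait
// so a misbehaving reporter cannot hang shutdown.
sys.addShutdownHook {
  Await.ready(Kamon.stopModules(), 10.seconds)
}
```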
fix(memory): Latch wasn't always waking up blocked shared wait…
…784)

* The cardinality buster could not bust cardinality for a tag filter combined with a time range filter. This PR adds that capability.
* Also adds Kamon init back for Spark executors
Cherry pick latch commits back to the develop branch
…eturn execPlan head in materialize when only one execPlan is present (#778)
* Removed the additional Monix task introduced by querySession.close and folded it inline into QueryActor.
* Kamon instrumentation of the query scheduler for metrics (see the sketch below)
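For the scheduler instrumentation mentioned above, the kamon-executors module can wrap a thread pool so Kamon reports its queue and task metrics; a sketch (pool size and metric name are illustrative):

```scala
import java.util.concurrent.Executors
import kamon.instrumentation.executor.ExecutorInstrumentation

// Wrap the query scheduler's pool; Kamon then tracks submitted/completed
// tasks and queue depth under the given name (assumes Kamon 2.x).
val queryScheduler = ExecutorInstrumentation.instrument(
  Executors.newFixedThreadPool(8), "query-scheduler")
```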
Merge develop to integration 0.9.7 Take 3
broneill self-assigned this Jun 15, 2020
@tjackpaul (Member) left a comment

thank you @broneill 👍

broneill merged commit 7388950 into filodb:master on Jun 15, 2020
Labels: None yet
Projects: None yet
Linked issues: None yet
4 participants