Improve continuous agg user messages #1592

Merged: 1 commit merged into timescale:master on Jan 2, 2020

Conversation

Contributor

@cevian cevian commented Dec 18, 2019

Switch from using internal timestamps to more user-friendly
timestamps in our log messages and clean up some messages.
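
The hunks below swap raw internal time values (printed with INT64_FORMAT) for human-readable timestamps. A minimal sketch of the idea for a timestamptz time column, using PostgreSQL's timestamptz_to_str(); the helper below is an illustrative assumption, not the actual utility this PR adds to src/utils.c:

#include "postgres.h"
#include "utils/timestamp.h"

/*
 * Illustrative sketch only: render the internal int64 time value
 * (microseconds since the PostgreSQL epoch for timestamp-type columns)
 * as a readable string before logging, instead of printing the raw
 * number with INT64_FORMAT. The helper name and signature here are
 * assumptions, not the function added in src/utils.c.
 */
static void
report_no_new_materialization_range(const char *schema, const char *table,
									const char *time_col, int64 completion_threshold)
{
	const char *threshold_str = timestamptz_to_str((TimestampTz) completion_threshold);

	elog(INFO,
		 "new materialization range not found for %s.%s (time column %s): "
		 "not enough new data past completion threshold as of %s",
		 schema, table, time_col, threshold_str);
}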

@@ -421,11 +421,11 @@ get_materialization_end_point_for_table(int32 raw_hypertable_id, int32 materiali
 	if (verbose)
 		elog(INFO,
 			 "new materialization range not found for %s.%s (time column %s): "
-			 "not enough new data past completion threshold (" INT64_FORMAT ")",
+			 "not enough new data past completion threshold as of %s",
Contributor

@gayyappan gayyappan Dec 19, 2019


I am confused by this message. I expect "as of %s" to show now(), but we use "as of" with reference to the different thresholds. Rewording it to indicate what the displayed value is would help.

Contributor Author


Fixed

@@ -396,11 +396,11 @@ get_materialization_end_point_for_table(int32 raw_hypertable_id, int32 materiali
 	if (verbose)
 		elog(INFO,
 			 "new materialization range not found for %s.%s (time column %s): not enough data "
-			 "in table (" INT64_FORMAT ")",
+			 "past minimum time value as of %s",
Contributor


This should be reworded. I suggest we use some terms from the docs for the messages, e.g. "past materialization threshold as of %s".

Contributor Author


This is actually the correct message, since it deals with underflow, not any of the thresholds.
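
As a rough illustration of the underflow case being referred to (the names and the guard below are illustrative assumptions, not the actual materialize.c code):

#include "postgres.h"	/* PG_INT64_MIN and Assert come from c.h via postgres.h */

/*
 * Illustrative sketch only: when stepping a time value backwards by a
 * non-negative bucket width, guard against underflowing the minimum
 * representable int64 time value, which is the situation the
 * "past minimum time value" message reports.
 */
static bool
step_back_would_underflow(int64 time_value, int64 bucket_width)
{
	Assert(bucket_width >= 0);
	return time_value < PG_INT64_MIN + bucket_width;
}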

Contributor


How about rewording it as "new materialization range not found ...: not enough data that satisfies the refresh lag criterion as of ..."? ("minimum time value" does not make much sense to an end user.)

Contributor

@gayyappan gayyappan left a comment


Could you reword the informational messages?


codecov bot commented Dec 30, 2019

Codecov Report

Merging #1592 into master will increase coverage by 2.38%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master    #1592      +/-   ##
==========================================
+ Coverage   89.52%   91.91%   +2.38%     
==========================================
  Files         144      142       -2     
  Lines       21593    20997     -596     
==========================================
- Hits        19332    19299      -33     
+ Misses       2261     1698     -563
Flag Coverage Δ
#cron ?
#pr 91.91% <100%> (?)
Impacted Files Coverage Δ
src/utils.h 100% <ø> (+14.28%) ⬆️
src/utils.c 85.52% <100%> (+5.6%) ⬆️
tsl/src/continuous_aggs/materialize.c 92.79% <100%> (+0.7%) ⬆️
src/plan_add_hashagg.c 43.33% <0%> (-41.67%) ⬇️
src/planner_import.c 58.65% <0%> (-17.04%) ⬇️
src/telemetry/uuid.c 84.61% <0%> (-15.39%) ⬇️
src/cache_invalidate.c 78.12% <0%> (-2.44%) ⬇️
src/bgw_policy/chunk_stats.c 83.92% <0%> (-1.32%) ⬇️
src/histogram.c 89.02% <0%> (-1.22%) ⬇️
src/cache.c 84.82% <0%> (-0.65%) ⬇️
... and 111 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6122e08...d2c2078.

Contributor

@gayyappan gayyappan left a comment


I have another suggestion related to rewording: I think "minimum time value" doesn't mean much from an end-user perspective.

@cevian cevian force-pushed the cont_agg_better_msgs branch 3 times, most recently from 7ef57af to 9c612df on January 2, 2020 18:16
Switch from using internal timestamps to more user-friendly
timestamps in our log messages and clean up some messages.
@cevian cevian merged commit ef77c2a into timescale:master Jan 2, 2020
@cevian cevian added this to the 1.6.0 milestone Jan 8, 2020
svenklemm added a commit to svenklemm/timescaledb that referenced this pull request Jan 15, 2020
This release adds major new features and bugfixes since the 1.5.1 release.
We deem it moderate priority for upgrading.

The major new feature in this release allows users to keep the aggregated
data in a continuous aggregate while dropping the raw data with drop_chunks.
This allows users to save storage by keeping only the aggregates.

The semantics of the refresh_lag parameter for continuous aggregates have
been changed to be relative to the current timestamp instead of the maximum
value in the table. This change requires that an integer_now func be set on
hypertables with integer-based time columns in order to use continuous
aggregates on those tables.

We added a timescaledb.ignore_invalidation_older_than parameter for continuous
aggregates. This parameter accepts a time interval (e.g. 1 month). If set,
it limits the amount of time for which invalidations are processed. Thus, if
timescaledb.ignore_invalidation_older_than = '1 month', then any modifications
for data older than 1 month from the current timestamp at modification time may
not cause the continuous aggregate to be updated. This limits the amount of work
that a backfill can trigger. By default, all invalidations are processed.

**Major Features**
* timescale#1589 Allow drop_chunks while keeping continuous aggregates

**Minor Features**
* timescale#1568 Add ignore_invalidation_older_than option to continuous aggs
* timescale#1575 Reorder group-by clause for continuous aggregates
* timescale#1592 Improve continuous agg user messages

**Bugfixes**
* timescale#1565 Fix partial select query for continuous aggregate
* timescale#1591 Fix locf treat_null_as_missing option
* timescale#1594 Fix error in compression constraint check
* timescale#1603 Add join info to compressed chunk
* timescale#1606 Fix constify params during runtime exclusion
* timescale#1607 Delete compression policy when drop hypertable
* timescale#1608 Add jobs to timescaledb_information.policy_stats
* timescale#1609 Fix bug with parent table in decompression

**Thanks**
* @optijon for reporting an issue with locf treat_null_as_missing option
* @acarrera42 for reporting an issue with constify params during runtime exclusion
* @ChristopherZellermann for reporting an issue with the compression constraint check
* @SimonDelamare for reporting an issue with joining hypertables with compression
svenklemm added a commit that referenced this pull request Jan 15, 2020
This release adds major new features and bugfixes since the 1.5.1 release.
We deem it moderate priority for upgrading.

The major new feature in this release allows users to keep the aggregated
data in a continuous aggregate while dropping the raw data with drop_chunks.
This allows users to save storage by keeping only the aggregates.

The semantics of the refresh_lag parameter for continuous aggregates have
been changed to be relative to the current timestamp instead of the maximum
value in the table. This change requires that an integer_now func be set on
hypertables with integer-based time columns in order to use continuous
aggregates on those tables.

We added a timescaledb.ignore_invalidation_older_than parameter for continuous
aggregates. This parameter accepts a time interval (e.g. 1 month). If set,
it limits the amount of time for which invalidations are processed. Thus, if
timescaledb.ignore_invalidation_older_than = '1 month', then any modifications
for data older than 1 month from the current timestamp at modification time may
not cause the continuous aggregate to be updated. This limits the amount of work
that a backfill can trigger. By default, all invalidations are processed.

**Major Features**
* #1589 Allow drop_chunks while keeping continuous aggregates

**Minor Features**
* #1568 Add ignore_invalidation_older_than option to continuous aggs
* #1575 Reorder group-by clause for continuous aggregates
* #1592 Improve continuous agg user messages

**Bugfixes**
* #1565 Fix partial select query for continuous aggregate
* #1591 Fix locf treat_null_as_missing option
* #1594 Fix error in compression constraint check
* #1603 Add join info to compressed chunk
* #1606 Fix constify params during runtime exclusion
* #1607 Delete compression policy when drop hypertable
* #1608 Add jobs to timescaledb_information.policy_stats
* #1609 Fix bug with parent table in decompression
* #1624 Fix drop_chunks for ApacheOnly
* #1632 Check for NULL before dereferencing variable

**Thanks**
* @optijon for reporting an issue with locf treat_null_as_missing option
* @acarrera42 for reporting an issue with constify params during runtime exclusion
* @ChristopherZellermann for reporting an issue with the compression constraint check
* @SimonDelamare for reporting an issue with joining hypertables with compression