
release-20.2: sql: implement sequence caching and cached sequence serial normalization #64690

Closed

Conversation

jordanlewis
Member

@jordanlewis jordanlewis commented May 4, 2021

Backport 4/4 commits from #56954.
Backport 1/1 commits from #63548.

/cc @cockroachdb/release


sql: implement sequence caching

Previously, incrementing sequences at a high throughput
would result in many distributed writes to the KV layer
due to MVCC. This has caused garbage collection problems
in the past. This would occur in situations such as
bulk importing data while using the sequence number as an
id for each new row being added.

This change allows clients to cache sequence numbers in their local
session data. When the cache is empty, the sequence will be
incremented once by the cache size * increment amount, which are
both sequence options. Then, all the intermediate values will be
cached locally on a node to be given out whenever the sequence is
incremented.
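
The refill-then-serve behavior described above can be sketched with a toy model (illustrative only; the types and field names are invented, not CockroachDB's actual implementation):

```go
package main

import "fmt"

// sharedSeq stands in for the KV-backed sequence counter; each bump of
// val models one distributed write.
type sharedSeq struct {
	val       int64
	increment int64
	cacheSize int64
	kvWrites  int
}

// seqCache models the per-session cache: when empty, it reserves
// cacheSize*increment values from the shared counter in a single write,
// then serves the intermediate values locally.
type seqCache struct {
	next int64 // next cached value to hand out
	left int64 // cached values remaining
}

func (c *seqCache) nextVal(s *sharedSeq) int64 {
	if c.left == 0 {
		s.val += s.cacheSize * s.increment // one write reserves the whole batch
		s.kvWrites++
		c.next = s.val - (s.cacheSize-1)*s.increment
		c.left = s.cacheSize
	}
	v := c.next
	c.next += s.increment
	c.left--
	return v
}

func main() {
	s := &sharedSeq{increment: 1, cacheSize: 256}
	c := &seqCache{}
	for i := 0; i < 300; i++ {
		c.nextVal(s)
	}
	// 300 increments cost only 2 distributed writes with a cache of 256.
	fmt.Println("KV writes:", s.kvWrites)
}
```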

To accommodate schema changes, cached sequence values will be
invalidated when new descriptor versions are seen by the cache.
This invalidation can occur when old versions are seen as well
to accommodate schema change rollbacks.

Release note (sql change): Using the CACHE sequence option no longer
results in an "unimplemented" error. The CACHE option is now fully
implemented and will allow nodes to cache sequence numbers. A cache
size of 1 means that there is no cache, and cache sizes of less than 1
are not valid.
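
The sizing rule from the release note can be sketched as follows (hypothetical helper for illustration, not the actual validation code):

```go
package main

import (
	"errors"
	"fmt"
)

// validateCacheSize applies the rule from the release note: CACHE 1 means
// no caching, and sizes below 1 are rejected. Illustrative only.
func validateCacheSize(size int64) (cached bool, err error) {
	switch {
	case size < 1:
		return false, errors.New("CACHE sizes less than 1 are not valid")
	case size == 1:
		return false, nil // behaves like an uncached sequence
	default:
		return true, nil
	}
}

func main() {
	for _, size := range []int64{0, 1, 256} {
		cached, err := validateCacheSize(size)
		fmt.Printf("CACHE %d: cached=%t err=%v\n", size, cached, err)
	}
}
```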

sql: create benchmark for concurrent sequence increments

Previously, there was no benchmark that tested the performance
of concurrent increments to sequences. There was also no
benchmark which compared sequence performance based on
different cache sizes. This change adds a benchmark to measure
performance based on the above criteria.

Release note: None

sql: add serial normalization setting for cached sequences

Closes: #51259

Release note (sql change): The serial_normalization session
variable can now be set to the value sql_sequence_cached.
Cached sequences will allow nodes to cache 256 sequence numbers
locally. The underlying sequence will only be incremented once (by
256 increments) when the cache is empty. Using sql_sequence_cached
will result in better performance than sql_sequence because the
former will perform fewer distributed calls to increment sequences.
However, cached sequences may contribute to large gaps between
sequence numbers if cached values are lost due to errors or
node outages.
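
The gap behavior mentioned above can be seen in a toy model (invented helper, not CockroachDB code): each session reserves a block of 256 values up front, so any values left unused when a session ends are simply skipped.

```go
package main

import "fmt"

const cacheSize = 256

// reserveBlock bumps the shared counter by a full cache's worth of values
// and returns the first value of the newly reserved block.
func reserveBlock(shared *int64) int64 {
	*shared += cacheSize
	return *shared - cacheSize + 1
}

func main() {
	var shared int64
	// Session A reserves 1..256 but only consumes 1, 2, 3 before exiting.
	fmt.Println("session A starts at:", reserveBlock(&shared))
	// Session B reserves the next block: it starts at 257, so 4..256
	// become a permanent gap in the sequence.
	fmt.Println("session B starts at:", reserveBlock(&shared))
}
```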

sql: make cached sequences bounds-related semantics match pg semantics

Previously, cached sequences did not obey pg semantics
with respect to exceeding bounds. For example, if there
were a sequence with a cache size of 5 and max value of 2,
calling nextval(...) would immediately cause an error due
to exceeded bounds. According to postgres, nextval(...)
should return 1, then 2, then finally return an error due
to the max value of 2 being reached. This change alters
sequence caching behavior to match the above semantics.

Additionally, this change now makes it possible for sequences
to exceed the bounds set by MAXVALUE and MINVALUE. This is
because calling nextval(...) should be as fast as possible,
and the fastest way to do this is to let nextval(...) always
succeed on incrementing a sequence. Despite this, a user
will never see any values that exceed a sequence's bounds.
To make a sequence incrementable again after exceeding its
bounds, there are two options:

  1. The user changes the sequence's value by calling setval(...).
  2. The user performs a schema change to alter the sequence's MinValue
     or MaxValue. If the value of a sequence exceeds its bounds,
     it must be restored to the original MinValue or MaxValue in
     the same transaction as the schema change.

This change adds code to handle case 2 above.
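
The MAXVALUE example above (cache size 5, max value 2) can be modeled like this; the types are invented for illustration, and the underlying counter is allowed to overshoot the bound, matching the design described:

```go
package main

import "fmt"

// boundedSeq is an illustrative model (not the real implementation) of the
// pg-compatible bounds behavior: the shared counter may overshoot
// maxValue, but cached values past the bound are never handed out.
type boundedSeq struct {
	shared    int64 // underlying counter; allowed to exceed maxValue
	maxValue  int64
	cacheSize int64
	next      int64 // next cached value
	left      int64 // cached values remaining
}

func (s *boundedSeq) nextVal() (int64, error) {
	if s.left == 0 {
		s.shared += s.cacheSize // always succeeds, may overshoot the bound
		s.next = s.shared - s.cacheSize + 1
		s.left = s.cacheSize
	}
	if s.next > s.maxValue {
		return 0, fmt.Errorf("reached maximum value of sequence (%d)", s.maxValue)
	}
	v := s.next
	s.next++
	s.left--
	return v, nil
}

func main() {
	s := &boundedSeq{maxValue: 2, cacheSize: 5}
	// Returns 1, then 2, then an error, even though the underlying
	// counter jumped straight to 5.
	for i := 0; i < 3; i++ {
		if v, err := s.nextVal(); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("nextval:", v)
		}
	}
}
```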

Release note: None

Benchmark

Using make bench PKG='./pkg/sql' BENCHES='BenchmarkSequenceIncrement' TESTFLAGS="--count=5 --benchmem" |& tee c{size}.txt:

Caching 256 values

name                     old time/op    new time/op    delta
SequenceIncrement/P-1-8     313µs ± 1%     107µs ± 1%  -65.85%  (p=0.008 n=5+5)
SequenceIncrement/P-2-8     327µs ± 2%      82µs ± 2%  -75.04%  (p=0.008 n=5+5)
SequenceIncrement/P-4-8     347µs ± 1%      70µs ± 2%  -79.89%  (p=0.008 n=5+5)
SequenceIncrement/P-8-8     389µs ± 2%      65µs ± 3%  -83.40%  (p=0.008 n=5+5)

name                     old alloc/op   new alloc/op   delta
SequenceIncrement/P-1-8     130kB ± 1%      57kB ± 1%  -55.76%  (p=0.008 n=5+5)
SequenceIncrement/P-2-8     131kB ± 0%      49kB ± 1%  -62.56%  (p=0.008 n=5+5)
SequenceIncrement/P-4-8     132kB ± 0%      46kB ± 0%  -65.26%  (p=0.008 n=5+5)
SequenceIncrement/P-8-8     134kB ± 0%      44kB ± 1%  -67.09%  (p=0.008 n=5+5)

name                     old allocs/op  new allocs/op  delta
SequenceIncrement/P-1-8       807 ± 0%       406 ± 0%  -49.75%  (p=0.000 n=5+4)
SequenceIncrement/P-2-8       812 ± 1%       363 ± 0%  -55.32%  (p=0.008 n=5+5)
SequenceIncrement/P-4-8       826 ± 1%       346 ± 0%  -58.10%  (p=0.008 n=5+5)
SequenceIncrement/P-8-8       863 ± 1%       341 ± 0%  -60.44%  (p=0.008 n=5+5)

For the other cache sizes I tested, the performance improvement is virtually the same as above.
c1.txt
c32.txt
c64.txt
c128.txt
c256.txt
c512.txt
c1024.txt

@cockroach-teamcity
Member

This change is Reviewable

@jordanlewis
Member Author

@rafiss @RichardJCai any concerns about backporting this? Not sure if you were working with Jay when he made it. Andrew did but he's out this week.

@rafiss
Collaborator

rafiss commented May 5, 2021

Richard is out until tomorrow, and I am out tomorrow and Thursday.

I hadn't really looked into the implementation of this before. I don't know of a specific reason not to, apart from it being a new feature so not something that we'd generally backport. Are we supposed to be rigorously following this proposal? https://docs.google.com/document/d/1icfBB-WlzyStvl4t5cTYAV79RQGXvtCHOAvRV7EeC7g/edit

If we are backporting, we'd need this too: #63548

jayshrivastava and others added 4 commits May 10, 2021 11:37
Closes: cockroachdb#51259

Release note (sql change): The `serial_normalization` session
variable can now be set to the value `sql_sequence_cached`.
If this value is set, the cluster setting
`sql.defaults.serial_sequences_cache_size` can be used to
control the number of values to cache in a user's session with a
default of 256. When the cache is empty, the underlying
sequence will only be incremented once to populate it. Using
`sql_sequence_cached` will result in better performance than
`sql_sequence` because the former will perform fewer distributed
calls to increment sequences. However, cached sequences may
result in large gaps between serial sequence numbers if a session
terminates before using all the values in its cache.
In 21.1 we added support for `CACHE` to sequences but we didn't add the logic
to render that clause.

Release note (bug fix): Render `CACHE` clause for sequences which use a cache.
@jordanlewis
Copy link
Member Author

Added the last commit.

In terms of risk, maybe I should add a version gate on top of all of this stuff, so it's only active when explicitly enabled.

Contributor

@ajwerner ajwerner left a comment


:lgtm:

In terms of risk, maybe I should add a version gate on top of all of this stuff, so it's only active when explicitly enabled.

I don't think that's allowed. That version wouldn't exist on other releases.

Reviewed 11 of 11 files at r1, 1 of 1 files at r2, 6 of 6 files at r3, 3 of 3 files at r4, 3 of 3 files at r5.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @jordanlewis)

@jordanlewis
Member Author

Err, I really meant cluster setting, not version gate.

@ajwerner
Contributor

Err, I really meant cluster setting, not version gate.

I like that idea quite a bit.

@jordanlewis
Member Author

I don't think that this needs to get in at this point, I'm just going to close it.

@jordanlewis jordanlewis closed this Oct 8, 2021
@jordanlewis jordanlewis deleted the backport20.2-56954 branch October 29, 2021 03:20