
Tombstone gc with 'immediate' mode deletes all records when flushing to sstable #13572

Closed · glenn-kim opened this issue Apr 19, 2023 · 13 comments
Tables with TOMBSTONE_GC = {'mode': 'immediate'} delete all records with TTLs, regardless of whether they have expired.
During the flush from memtable to sstable (compaction of the memtable-originated sstable), I could see in the logs that all the data disappeared (compacted to 0 bytes).
After I changed the mode to 'timeout', flushes behaved normally again.

https://github.com/scylladb/scylladb/blob/scylla-5.1.6/tombstone_gc.cc#L76-L79

According to the code above, if tombstone_gc is 'immediate', gc_before is set to 'clock max'. This gc_before value is used to determine whether an sstable is fully expired. My guess is that the expiration time of an unexpired record is treated as "current time + remaining TTL" (a point in the future), and since gc_before is very far in the future (clock max), the record is treated as expired anyway.

I think the gc_before value should be 'query time', not 'clock max'.

To Reproduce
Steps to reproduce the behavior:

  1. Create a table with tombstone_gc mode as immediate
CREATE TABLE test_table
(
  partition_key  TEXT,
  request_ts     BIGINT,
  attrs          MAP<TEXT,TEXT>,
  PRIMARY KEY ( partition_key, request_ts)
) WITH CLUSTERING ORDER BY (request_ts DESC)
 AND BLOOM_FILTER_FP_CHANCE = 0.01
 AND CACHING = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND COMMENT = ''
 AND GC_GRACE_SECONDS = 86400 -- 24 hours
 AND TOMBSTONE_GC = {'mode': 'immediate'}
 AND DEFAULT_TIME_TO_LIVE = 2678400 -- 31 days
 AND COMPACTION = {
          'compaction_window_size': '1',
          'compaction_window_unit': 'DAYS',
          'class': 'TimeWindowCompactionStrategy'
          };
  2. Insert many records with TTL
    INSERT INTO test_table JSON ? USING TTL ?
  3. Memtable flush occurred and no data remained

Expected behavior

  • In step 3, unexpired records should be flushed to the sstable.

Logs
Compaction logs with 'immediate' mode

INFO  2023-04-18 06:00:10,908 [shard 11] compaction - [Compact ******.************* 4753ddc0-ddae-11ed-93d3-0f1fc92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-8820-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 06:00:14,123 [shard 11] compaction - [Compact ******.************* 4753ddc0-ddae-11ed-93d3-0f1fc92fd26a] Compacted 1 sstables to []. 308MB to 0 bytes (~0% of original) in 0ms = 0 bytes/s. ~479872 total partitions merged to 0.
INFO  2023-04-18 06:00:17,194 [shard 19] compaction - [Compact ******.************* 4b1308a0-ddae-11ed-8389-0f28c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-8828-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 06:00:20,195 [shard 19] compaction - [Compact ******.************* 4b1308a0-ddae-11ed-8389-0f28c92fd26a] Compacted 1 sstables to []. 304MB to 0 bytes (~0% of original) in 0ms = 0 bytes/s. ~473216 total partitions merged to 0.
INFO  2023-04-18 06:00:41,037 [shard  3] compaction - [Compact ******.************* 59492fd0-ddae-11ed-b6f3-0f15c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-8766-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 06:00:41,508 [shard 15] compaction - [Compact ******.************* 59910e40-ddae-11ed-9c94-0f21c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-8847-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 06:00:43,740 [shard  3] compaction - [Compact ******.************* 59492fd0-ddae-11ed-b6f3-0f15c92fd26a] Compacted 1 sstables to []. 310MB to 0 bytes (~0% of original) in 0ms = 0 bytes/s. ~483200 total partitions merged to 0.
INFO  2023-04-18 06:00:44,176 [shard 15] compaction - [Compact ******.************* 59910e40-ddae-11ed-9c94-0f21c92fd26a] Compacted 1 sstables to []. 305MB to 0 bytes (~0% of original) in 0ms = 0 bytes/s. ~475648 total partitions merged to 0.
INFO  2023-04-18 06:00:49,613 [shard  5] compaction - [Compact ******.************* 5e65c7d0-ddae-11ed-b6ea-0f1ac92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-8768-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 06:00:50,407 [shard 13] compaction - [Compact ******.************* 5edeef70-ddae-11ed-95fb-0f20c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-8799-big-Data.db:level=0:origin=memtable]

After changing to 'timeout' mode

INFO  2023-04-18 11:57:48,685 [shard 15] compaction - [Compact ******.************* 3d28cbd0-dde0-11ed-9c94-0f21c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9261-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9238-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9215-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9192-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:57:52,100 [shard 16] compaction - [Compact ******.************* 3f31e240-dde0-11ed-bf85-0f24c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9239-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9216-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9193-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9170-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:58:19,261 [shard 11] compaction - [Compact ******.************* 4f6252d0-dde0-11ed-93d3-0f1fc92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9234-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9211-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9188-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9165-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:58:22,868 [shard 15] compaction - [Compact ******.************* 3d28cbd0-dde0-11ed-9c94-0f21c92fd26a] Compacted 4 sstables to [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9284-big-Data.db:level=0]. 1GB to 1GB (~100% of original) in 34181ms = 34MB/s. ~1806592 total partitions merged to 1804036.
INFO  2023-04-18 11:58:26,982 [shard 16] compaction - [Compact ******.************* 3f31e240-dde0-11ed-bf85-0f24c92fd26a] Compacted 4 sstables to [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9262-big-Data.db:level=0]. 1GB to 1GB (~100% of original) in 34879ms = 34MB/s. ~1816832 total partitions merged to 1814371.
INFO  2023-04-18 11:58:28,185 [shard 17] compaction - [Compact ******.************* 54b40490-dde0-11ed-a925-0f27c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9240-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9217-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9194-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9171-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:58:42,061 [shard 13] compaction - [Compact ******.************* 5cf953d0-dde0-11ed-95fb-0f20c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9213-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9190-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9167-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9144-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:58:45,943 [shard 19] compaction - [Compact ******.************* 5f49ac70-dde0-11ed-8389-0f28c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9242-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9219-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9196-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9173-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:58:47,302 [shard 20] compaction - [Compact ******.************* 60190a60-dde0-11ed-9f3e-0f25c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9266-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9243-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9197-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9220-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:58:53,698 [shard 11] compaction - [Compact ******.************* 4f6252d0-dde0-11ed-93d3-0f1fc92fd26a] Compacted 4 sstables to [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9257-big-Data.db:level=0]. 1GB to 1GB (~100% of original) in 34434ms = 34MB/s. ~1808640 total partitions merged to 1805928.
INFO  2023-04-18 11:58:56,153 [shard 18] compaction - [Compact ******.************* 655f9890-dde0-11ed-817d-0f29c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9241-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9218-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9172-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9195-big-Data.db:level=0:origin=memtable]
INFO  2023-04-18 11:59:00,592 [shard 14] compaction - [Compact ******.************* 6804ef00-dde0-11ed-a8c6-0f22c92fd26a] Compacting [/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9283-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9260-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9214-big-Data.db:level=0:origin=memtable,/var/lib/scylla/data/******.*************-2d03c2c0d1f411edbc1e4ea1925a907b/me-9237-big-Data.db:level=0:origin=memtable]

Installation details

  • Platform: EKS (i3en.6xlarge * 4)
  • Kubernetes version: 1.24
  • Scylla version: 5.1.6
  • Scylla-operator version: 1.8
asias added a commit to asias/scylla that referenced this issue May 8, 2023
The immediate mode is similar to timeout mode with a gc_grace_seconds of
zero. Thus, the gc_before returned should be the query_time instead of
gc_clock::time_point::max in immediate mode.

With gc_before set to gc_clock::time_point::max, a row could be dropped
by compaction even if its TTL has not expired yet.

The following procedure reproduces the issue:

- Start 2 nodes

- Insert data

```
CREATE KEYSPACE ks2a WITH REPLICATION = { 'class' : 'SimpleStrategy',
'replication_factor' : 2 };
CREATE TABLE ks2a.tb (pk int, ck int, c0 text, c1 text, c2 text, PRIMARY
KEY(pk, ck)) WITH tombstone_gc = {'mode': 'immediate'};
INSERT into ks2a.tb (pk,ck, c0, c1, c2) values (10 ,1, 'x', 'y', 'z')
USING TTL 1000000;
INSERT into ks2a.tb (pk,ck, c0, c1, c2) values (20 ,1, 'x', 'y', 'z')
USING TTL 1000000;
INSERT into ks2a.tb (pk,ck, c0, c1, c2) values (30 ,1, 'x', 'y', 'z')
USING TTL 1000000;
```

- Run nodetool flush and nodetool compact

- Compaction drops all data

```
~128 total partitions merged to 0.
```

Fixes scylladb#13572
asias commented May 8, 2023

@glenn-kim Thanks for the report. A fix is posted here #13800.

asias added a commit to asias/scylla that referenced this issue May 9, 2023
asias added a commit to asias/scylla that referenced this issue May 11, 2023
yaronkaikov pushed a commit to yaronkaikov/scylla that referenced this issue May 11, 2023

Closes scylladb#13800
denesb commented May 15, 2023

While searching for when this feature was introduced (to determine backport candidate versions), I found that this feature is either not documented at all or very hard to find. I tried looking in our CQL extensions page with no luck, and searched for "tombstone_gc" and "tombstone immediate" to no avail. Please add documentation for this @asias (/cc @annastuchlik).

denesb pushed a commit that referenced this issue May 15, 2023
(cherry picked from commit 7fcc403)
denesb pushed a commit that referenced this issue May 15, 2023
(cherry picked from commit 7fcc403)
denesb commented May 15, 2023

Backported to 5.1, 5.2 and 2022.1.

mykaul commented May 15, 2023

> While searching for when this feature was introduced (to determine backport candidate versions), I found that this feature is either not documented at all or it is very hard to find. I tried looking in our CQL extensions page with no luck. Tried searching for tombstone_gc, tombstone immediate to no avail. Please add documentation for this @asias (/cc @annastuchlik).

https://www.scylladb.com/2022/06/30/preventing-data-resurrection-with-repair-based-tombstone-garbage-collection/ (not a replacement for proper documentation of course)

tzach commented May 23, 2023

@mykaul Tombstone GC options are documented here: https://docs.scylladb.com/stable/cql/ddl.html#tombstones-gc-options

asias commented May 23, 2023

> While searching for when this feature was introduced (to determine backport candidate versions), I found that this feature is either not documented at all or it is very hard to find. I tried looking in our CQL extensions page with no luck. Tried searching for tombstone_gc, tombstone immediate to no avail. Please add documentation for this @asias (/cc @annastuchlik).

I worked with the doc team on this option a long time ago. The link is here: https://docs.scylladb.com/stable/cql/ddl.html#tombstones-gc-options

mykaul commented May 23, 2023

So is our docs search engine not that great? Perhaps we can improve it by asking Google to index it, or something similar?

tzach commented May 23, 2023

What search term did you use?
I found it with the docs' internal search, looking for "tombstone_gc".

mykaul commented May 23, 2023

> What search term did you use? I found it with the doc internal search looking for "tombstone_gc"

Indeed - now I can find it (tried 'tombstone immediate' and got multiple results right away!). Strange.

denesb commented May 23, 2023

I can find it too now, although it helps that I now know which page to look for. The first result often talks about gc_grace_seconds instead. Also, on the right page, one has to search with the browser, because the explanation is quite far down.

tzach commented May 23, 2023

@annastuchlik please open an issue for the doc search problems

annastuchlik commented

Sure, I can open an issue, but I'll need to better understand the problem you were facing.
@denesb Could you tell me what search term or phrase you used?
I've tried "tombstone garbage collection", "tombstone immediate", and "tombstone mode", and the first page had the answer.
I need to understand what exactly the user may be looking for.

denesb commented May 24, 2023

For "tombstone gc", the right page is the 5th result.

@DoronArazii DoronArazii added this to the 5.4 milestone Jun 20, 2023