
"Reader cannot read before start of the log" logging endlessly on enabling TS #17086

Closed
daisukebe opened this issue Mar 14, 2024 · 2 comments · Fixed by #17112


daisukebe commented Mar 14, 2024

Version & Environment

Redpanda version: (use rpk version): v23.3.7

Under a certain condition, brokers end up logging the warning below forever after cloud_storage_enabled is enabled:

WARN  2024-03-14 14:10:40,229 [shard 1:main] raft - [group_id:1, {kafka/foo/0}] state_machine_manager.cc:419 - exception thrown from background apply fiber for archival_metadata_stm - std::runtime_error (Reader cannot read before start of the log 0 < 11)

What went wrong?

What works

  • Produce
  • Show / alter topic configs

What doesn't work

  • Consume
  • List partitions

What should have happened instead?

The warning should not persist forever, and there should be no user-visible impact.

How to reproduce the issue?

  1. Have a cluster with TS disabled
  2. Create a topic and produce some records
  3. Let retention kick in and move start offset forward
  4. Produce some more records, so that end_offset > start_offset (both non-zero)
  5. Enable TS and restart the broker. Here are the configs that were changed:
cloud_storage_enabled: true
cloud_storage_enable_remote_write: true
cloud_storage_enable_remote_read: true
cloud_storage_azure_container: givencontainer
cloud_storage_azure_storage_account: givenaccount
cloud_storage_azure_shared_key: [redacted]
cloud_storage_segment_max_upload_interval_sec: 10
  6. You'll hit the issue. A scripted version of these steps is sketched below.
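
For convenience, here is a scripted version of the steps above. This is a minimal sketch rather than part of the original report: the topic name foo, the tiny retention.bytes value, the sleep, and the systemctl restart are illustrative assumptions, and the Azure credentials have to be filled in by hand.

#!/usr/bin/env python3
# Rough sketch of the reproduction steps above (not from the original report).
# Assumptions: rpk is on PATH and pointed at the cluster; the topic name "foo",
# the retention.bytes value, the sleep, and the systemctl restart are all
# illustrative; fill in real Azure credentials before running.
import subprocess
import time

def rpk(*args, stdin=None):
    print("+ rpk " + " ".join(args))
    subprocess.run(["rpk", *args], input=stdin, text=True, check=True)

# Steps 1-2: topic on a cluster with TS disabled, then produce some records.
rpk("topic", "create", "foo", "--partitions", "1")
rpk("topic", "produce", "foo",
    stdin="".join(f"record-{i}\n" for i in range(1000)))

# Step 3: let retention trim the log so the start offset moves forward.
rpk("topic", "alter-config", "foo", "--set", "retention.bytes=1")
time.sleep(120)  # wait for retention/housekeeping to kick in

# Step 4: produce again so end_offset > start_offset (both non-zero).
rpk("topic", "produce", "foo",
    stdin="".join(f"more-{i}\n" for i in range(100)))

# Step 5: enable Tiered Storage (values from the report) and restart the broker.
for key, value in {
    "cloud_storage_enabled": "true",
    "cloud_storage_enable_remote_write": "true",
    "cloud_storage_enable_remote_read": "true",
    "cloud_storage_azure_container": "givencontainer",
    "cloud_storage_azure_storage_account": "givenaccount",
    "cloud_storage_azure_shared_key": "REDACTED",
    "cloud_storage_segment_max_upload_interval_sec": "10",
}.items():
    rpk("cluster", "config", "set", key, value)
# Restart method is environment-specific; systemd is only an example.
subprocess.run(["systemctl", "restart", "redpanda"], check=True)

# Step 6: consuming should now fail while the broker logs the warning repeatedly.
rpk("topic", "consume", "foo", "--num", "1")

The important ingredients are steps 3 and 4, which leave the partition with a non-zero log start offset before Tiered Storage is switched on.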

Additional information

Please attach any relevant logs, backtraces, or metric charts.

daisukebe added the kind/bug and area/cloud-storage labels Mar 14, 2024

Lazin commented Mar 14, 2024

On step 3, do we need retention to remove all data?

Lazin self-assigned this Mar 14, 2024
daisukebe commented

Not necessarily. The problem arises in either of the cases below at step 3 (see the sketch after this list for checking which case a partition is in).

  • LOG-START-OFFSET == HIGH-WATERMARK
  • LOG-START-OFFSET < HIGH-WATERMARK
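
For reference, a small sketch (not part of the original discussion) of how one might check which of these two cases a partition is in, assuming the topic is named foo and that LOG-START-OFFSET and HIGH-WATERMARK appear as columns in the output of rpk topic describe --print-partitions:

#!/usr/bin/env python3
# Sketch for checking which case a partition is in at step 3.
# Assumptions: topic name "foo"; LOG-START-OFFSET and HIGH-WATERMARK are
# whitespace-separated columns in `rpk topic describe --print-partitions`.
import subprocess

out = subprocess.run(
    ["rpk", "topic", "describe", "foo", "--print-partitions"],
    capture_output=True, text=True, check=True,
).stdout

rows = [line.split() for line in out.splitlines() if line.strip()]
header = next(r for r in rows if "LOG-START-OFFSET" in r)
start_col = header.index("LOG-START-OFFSET")
hwm_col = header.index("HIGH-WATERMARK")

for row in rows[rows.index(header) + 1:]:
    start, hwm = int(row[start_col]), int(row[hwm_col])
    case = "==" if start == hwm else "<"
    print(f"partition {row[0]}: LOG-START-OFFSET {start} {case} HIGH-WATERMARK {hwm}")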
