Redo log: data was lost or damaged in some test cases, and sometimes changefeed failed: "redo log flush fail" #5486
Labels
affects-5.3
affects-5.4
affects-6.1
area/ticdc
Issues or PRs related to TiCDC.
found/automation
Bugs found by automation cases
severity/major
This is a major bug.
type/bug
This is a bug.
Closed by #5621
What did you do?
Run sysbench: sysbench oltp_insert prepare --tables=10 --table-size=500 --threads=10 && sysbench oltp_insert run --tables=10 --table-size=500 --threads=10
Run upstream cluster chaos step by step:
What did you expect to see?
No response
What did you see instead?
Sometimes the changefeed failed. Case log: http://rms.pingcap.net:31714/artifacts/testground/plan-exec-840037/plan-exec-840037-493892576/main-logs
{
  "id": "redo-apply-cdc-all-node-restart-sync",
  "summary": {
    "state": "failed",
    "tso": 433315158137241606,
    "checkpoint": "2022-05-19 13:15:48.900",
    "error": {
      "addr": "upstream-ticdc-1.upstream-ticdc-peer.cdc-testbed-tps-840037-1-931.svc:8301",
      "code": "CDC:ErrProcessorUnknown",
      "message": "[CDC:ErrS3StorageAPI]s3 storage api: RequestCanceled: request context canceled\ncaused by: context deadline exceeded"
    }
  }
}
cdc.log ERROR:
[2022/05/19 13:20:33.310 +00:00] [ERROR] [file.go:199] ["redo log flush fail"] [namespace=default] [changefeed=redo-apply-cdc-all-node-restart-sync] [error="[CDC:ErrS3StorageAPI]s3 storage api: RequestCanceled: request context canceled\ncaused by: context deadline exceeded"]
Versions of the cluster

Upstream TiDB cluster version (execute SELECT tidb_version(); in a MySQL client):
(paste TiDB cluster version here)

Upstream TiKV version (execute tikv-server --version):

TiCDC version (execute cdc version):