
[NEW] AOF persistence #13186

Closed
BarackYoung opened this issue Apr 2, 2024 · 7 comments

Comments

@BarackYoung

The problem/use-case that the feature addresses

In fields such as finance and e-commerce, pursuing high throughput with Redis carries certain risks, because even enabling the `always` AOF fsync policy cannot guarantee that data will not be lost. We hope to add an additional option to ensure that every command that returns success has been saved.

Description of the feature

After enabling the new flush-to-disk option for AOF, it would be guaranteed that every command that returned success has already been flushed to disk.

@BarackYoung BarackYoung changed the title [NEW] AOF persistence Apr 2, 2024
@BarackYoung BarackYoung changed the title AOF persistence [NEW] AOF persistence Apr 2, 2024
@sundb
Collaborator

sundb commented Apr 2, 2024

You can use the config `appendfsync always` to fsync after every write to the append-only log.
However, there is no guarantee that data won't be lost, e.g. in the case of hardware failure or power loss.
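For reference, the redis.conf directive under discussion (a real Redis setting; the comment is ours) is:

```conf
# redis.conf: fsync the append-only file after every write command
appendfsync always
```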

@BarackYoung
Author

You can use the config `appendfsync always` to fsync after every write to the append-only log. However, there is no guarantee that data won't be lost, e.g. in the case of hardware failure or power loss.

Yeah. So I suggest adding an option to ensure no loss.

@sundb
Collaborator

sundb commented Apr 2, 2024

@BarackYoung no database can guarantee that it won't lose data, only that it will lose as little as possible. As I suggested, use `appendfsync always`, which writes to the AOF file after each command is executed; but we can still lose the last command if the server shuts down due to a power outage.

@BarackYoung
Author

@BarackYoung no database can guarantee that it won't lose data, only that it will lose as little as possible. As I suggested, use `appendfsync always`, which writes to the AOF file after each command is executed; but we can still lose the last command if the server shuts down due to a power outage.

I mean, like MySQL or Kafka: when the command or SQL finishes executing and returns success, it is guaranteed that the data won't be lost, because the data has been written to disk correctly. So I'm wondering whether Redis can support such a feature.

@BarackYoung BarackYoung reopened this Apr 14, 2024
@sundb
Collaborator

sundb commented Apr 14, 2024

If the premise is that the command or SQL has finished executing and returned success, `appendfsync always` already guarantees that the data has been written to disk correctly.
Without this premise, MySQL and Kafka have no guarantee against data loss either; any physical failure between command execution and the disk write can result in data loss.

@BarackYoung
Author

If the premise is that the command or SQL has finished executing and returned success, `appendfsync always` already guarantees that the data has been written to disk correctly. Without this premise, MySQL and Kafka have no guarantee against data loss either; any physical failure between command execution and the disk write can result in data loss.

I don't think so. MySQL can guarantee that the data won't be lost via the redo log: if the SQL executes successfully and the machine then fails, the data will be rewritten to disk correctly on recovery. But Redis with `appendfsync always` doesn't sync data to disk after every command, only once per event loop. If a command finishes executing and returns success but the event loop hasn't completed yet because the server powers off, the business layer believes the command has been done correctly and continues with tasks such as delivery, and an inconsistency occurs. So there is no guarantee against data loss even though the command finished executing and returned success, and we have to use other components such as Kafka and MySQL to guarantee the data is not lost.

So I think we can add a new `appendfsync` option to make sure the data is synced to disk correctly before the command returns success. Writing to disk sequentially, as Kafka does, has very high throughput, so I think it is worth considering.

@sundb
Collaborator

sundb commented Apr 15, 2024

AFAIK, Redis writes to the AOF before replying to clients, ref the following code:

void beforeSleep(struct aeEventLoop *eventLoop) {
    ....
    handleClientsWithPendingReadsUsingThreads(); <- read and process commands
    ....

    if (server.aof_state == AOF_ON || server.aof_state == AOF_WAIT_REWRITE)
        flushAppendOnlyFile(0); <- fsync if AOF is enabled

    handleClientsWithPendingWritesUsingThreads(); <- write replies to clients
    ....
}
