binlogs->Put(buf, slice(val));
binlogs->add_log(log_type, BinlogCommand::KSET, buf);

Inside add_log there is another write to LevelDB, except its key is not buf but tran_seq. What is the purpose of this?
Hi, SSDB's write path puts the actual data operations (Put, Delete) together with the corresponding binlog entry into a single WriteBatch, which is written to disk as one atomic operation.
So the binlog is written into LevelDB alongside the actual data, right?
Isn't this design somewhat problematic? During master-slave replication, the usual approach is to sync binlog files: the initial sync just transfers the files over TCP, which is very fast, and incremental sync follows. But with the current design, the binlog also lives inside LevelDB's .ldb files, so during sync you have to iterate over all the binlog entries and replicate them to the slave one by one.