Identical files still produce a computed diff #25
Comments
That is the current behavior (i.e. even completely identical data may still require downloading a small number of blocks); the algorithm gives up a small probability of matching in order to optimize for speed.
Wouldn't changing that significantly increase the computation time?
Yes, this is a deliberate trade-off. In some cases the time could double or more; you can modify it and try for yourself.
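The trade-off described above can be illustrated with a minimal rsync-style sketch. This is not the project's actual code: `weak_sum` is a deliberately collision-prone stand-in for a real rolling checksum, and `make_index`, `match_blocks`, and the `max_probes` cap are hypothetical names used only to show how limiting the search per weak-checksum bucket can skip a true match, so identical data still gets flagged for download.

```python
import hashlib

BLOCK = 2  # tiny block size so the example stays readable


def weak_sum(chunk: bytes) -> int:
    # Deliberately weak, collision-prone checksum (illustrative only);
    # real sync tools use a rolling checksum such as Adler-32.
    return sum(chunk) & 0xFF


def make_index(old: bytes, block: int = BLOCK) -> dict:
    """Index each block of the old file by weak checksum, keeping the
    strong (SHA-256) hash to resolve weak-checksum collisions."""
    index: dict = {}
    for off in range(0, len(old) - block + 1, block):
        chunk = old[off:off + block]
        index.setdefault(weak_sum(chunk), []).append(
            (off, hashlib.sha256(chunk).digest()))
    return index


def match_blocks(new: bytes, index: dict, block: int = BLOCK,
                 max_probes: int = 1) -> list:
    """Return offsets of new-file blocks found in the index.

    Capping probes per weak-checksum bucket (max_probes) speeds up the
    scan, but when several old blocks share one weak checksum a true
    match can be skipped -- so identical data may still be reported
    as needing download."""
    matched = []
    for off in range(0, len(new) - block + 1, block):
        chunk = new[off:off + block]
        strong = hashlib.sha256(chunk).digest()
        for _, candidate in index.get(weak_sum(chunk), [])[:max_probes]:
            if candidate == strong:
                matched.append(off)
                break
    return matched


data = b"ba" + b"ab"  # old == new; both blocks collide on weak_sum
index = make_index(data)
print(match_blocks(data, index, max_probes=1))  # [0]     -- block at 2 missed
print(match_blocks(data, index, max_probes=2))  # [0, 2]  -- full probing finds it
```

Raising `max_probes` (or removing the cap entirely) recovers the missed match, which is exactly why fixing this "costs time": every extra probe is another strong-hash comparison on the hot path.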
Is in-place updating supported, like in hdiffpatch? The file is very large and rewriting it once takes a long time; if just a few blocks could be updated in place it would be much faster.
In-place updating is not currently supported, and HDiffPatch does not support it either!
I saw this comment a few years ago and assumed it had been implemented by now.
If the computed ratio of blocks needing sync is extremely low, could you re-compare the hash of the old file at the corresponding positions against the index? That should be very fast.
Thank you for the feedback on this (identical data still being downloaded). I tried modifying the matching and block-skipping logic, but in the end I could not find a better scheme;
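The re-verification pass suggested above could be sketched roughly as follows. This is only an illustration of the idea, not the tool's API: `reverify_flagged` and the `new_strong` index shape are hypothetical, and SHA-256 stands in for whatever strong hash the tool actually uses. When few blocks are flagged, a second local hash comparison is cheap and can reclassify spurious "needs download" blocks as local copies.

```python
import hashlib


def reverify_flagged(old: bytes, new_strong: dict, flagged: list,
                     block: int) -> list:
    """Second pass over blocks flagged as 'needs download': hash the
    old file at the same offset and compare against the new file's
    strong hash from the index. A match means the block can be copied
    locally instead of downloaded. Cheap when few blocks are flagged."""
    still_needed = []
    for off in flagged:
        local = hashlib.sha256(old[off:off + block]).digest()
        if local != new_strong.get(off):
            still_needed.append(off)
    return still_needed


BLOCK = 4
old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"  # middle block genuinely changed
new_strong = {off: hashlib.sha256(new[off:off + BLOCK]).digest()
              for off in range(0, len(new), BLOCK)}

# Offset 4 really differs; offset 8 was a spurious flag and is dropped.
print(reverify_flagged(old, new_strong, [4, 8], BLOCK))  # [4]
```

The pass only touches the flagged offsets of the old file, so its cost scales with the (by assumption small) number of flagged blocks rather than with file size.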
I have two identical files (30GB+) whose separately computed hashes are exactly the same, yet the diff step still reports differences.