
Error handling for ELOOP #591

Open
masiulaniec opened this issue Apr 13, 2019 · 2 comments

@masiulaniec (Contributor)

Dominator (subd) takes great care to maximize the probability of a successful convergence. Is it prepared to deal with a malicious user playing games with cycles in the file hierarchy? I don't have the time to investigate right now but am filing this lest I forget.

See ELOOP in https://pubs.opengroup.org/onlinepubs/009695399/functions/rename.html
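For context, here is a minimal Go sketch (hypothetical, not subd code) showing how a symlink loop in an intermediate path component causes rename(2) to fail with ELOOP, which is the error an updater like subd would need to handle:

```go
// Demonstrates rename(2) returning ELOOP when the destination path
// traverses a self-referencing symlink. Not subd code; just a sketch.
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	dir, err := os.MkdirTemp("", "eloop-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Create a self-referencing symlink: resolving any path that goes
	// through it loops until the kernel gives up with ELOOP.
	loop := dir + "/loop"
	if err := os.Symlink(loop, loop); err != nil {
		panic(err)
	}

	src := dir + "/file"
	if err := os.WriteFile(src, []byte("data"), 0o644); err != nil {
		panic(err)
	}

	// The destination path goes through the looping symlink.
	err = os.Rename(src, loop+"/file")
	if errors.Is(err, syscall.ELOOP) {
		fmt.Println("rename failed with ELOOP:", err)
		// A robust updater would treat this as "parent is not a real
		// directory" and recreate the parent before retrying.
	} else if err != nil {
		fmt.Println("rename failed:", err)
	}
}
```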

@rgooch (Contributor) commented Apr 13, 2019

I presume the scenario you're considering is one where someone replaces an intermediate parent directory (for the new path) with a looping symlink? There is some protection against this, as subd will replace the parent symlink with a real directory first.

If the attack is well-timed, a directory could be replaced by a symlink in the window between the intermediate inode being scanned and the leaf inode being replaced. It would have to be a sustained attack, though: the next scan cycle will see that the intermediate is a symlink and fix it in the subsequent update phase, so the attacker would have to keep replacing the intermediate directory with a symlink.

This type of persistent attack does not seem significantly different from any other persistent attack in which someone is fighting with subd to maintain a deviation from the required image. In either case, convergence would be blocked, which ideally would raise a flag and lead to an investigation.
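As an illustration of the mitigation described above (replacing a parent symlink with a real directory before installing the leaf), here is a hedged Go sketch. It is not subd's actual implementation; `ensureRealDir` and `InstallFile` are hypothetical names, and a racing attacker can still swap the directory after the check, in which case the rename fails (e.g. with ELOOP) and the next scan/update cycle repairs it:

```go
// Sketch of "make sure the parent is a real directory before renaming
// into it". Hypothetical code, not subd's implementation.
package safemove

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

// ensureRealDir makes sure path is an actual directory, not a symlink
// (or other non-directory) planted in its place.
func ensureRealDir(path string, perm os.FileMode) error {
	fi, err := os.Lstat(path)
	switch {
	case err == nil && fi.IsDir():
		return nil // already a real directory
	case err == nil:
		// A symlink or other non-directory is squatting on the path:
		// remove it and fall through to Mkdir.
		if err := os.Remove(path); err != nil {
			return err
		}
	case !os.IsNotExist(err):
		return err
	}
	return os.Mkdir(path, perm)
}

// InstallFile renames tmp into dir/name after verifying dir is real.
func InstallFile(tmp, dir, name string) error {
	if err := ensureRealDir(dir, 0o755); err != nil {
		return err
	}
	err := os.Rename(tmp, dir+"/"+name)
	if errors.Is(err, syscall.ELOOP) {
		// The parent was swapped for a symlink loop after the check.
		return fmt.Errorf("parent %q replaced by a symlink loop: %w", dir, err)
	}
	return err
}
```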

@rgooch (Contributor) commented Apr 13, 2019

We may make it harder for an attacker to essentially "win the race" against subd with another improvement I've been considering: an improved scanner that looks for metadata changes first and, for regular files whose metadata has changed, computes their checksums. If there are no metadata changes, a slower complete checksum scan would still catch any data-only changes (as the current scanner does). This would make detection of most changes much faster and cheaper.

The intent behind this change is to let people reduce the overhead of subd scanning further while still getting fast detection and correction of most changes.
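To make the idea concrete, here is a rough Go sketch of a metadata-first scan under assumed types: compare cheap inode metadata against a cached record and hash only the files whose metadata changed, leaving data-only corruption to a slower full-checksum pass. Names like `cachedMeta` and `quickScan` are hypothetical, not subd APIs:

```go
// Hypothetical metadata-first scanner sketch; not subd's scanner.
package scanner

import (
	"crypto/sha512"
	"io"
	"os"
	"time"
)

type cachedMeta struct {
	Size    int64
	ModTime time.Time
	Mode    os.FileMode
	Hash    [sha512.Size]byte
}

// quickScan hashes only files whose metadata differs from the cache and
// returns the paths whose contents actually changed. Files that look
// unchanged are left for a later, slower full-checksum pass to verify.
func quickScan(paths []string, cache map[string]cachedMeta) ([]string, error) {
	var changed []string
	for _, p := range paths {
		fi, err := os.Lstat(p)
		if err != nil {
			return nil, err
		}
		old, ok := cache[p]
		if ok && fi.Size() == old.Size && fi.ModTime().Equal(old.ModTime) &&
			fi.Mode() == old.Mode {
			continue // metadata unchanged: defer to the full pass
		}
		sum, err := hashFile(p)
		if err != nil {
			return nil, err
		}
		if !ok || sum != old.Hash {
			changed = append(changed, p)
		}
		cache[p] = cachedMeta{fi.Size(), fi.ModTime(), fi.Mode(), sum}
	}
	return changed, nil
}

// hashFile computes the SHA-512 checksum of a regular file.
func hashFile(p string) ([sha512.Size]byte, error) {
	var sum [sha512.Size]byte
	f, err := os.Open(p)
	if err != nil {
		return sum, err
	}
	defer f.Close()
	h := sha512.New()
	if _, err := io.Copy(h, f); err != nil {
		return sum, err
	}
	copy(sum[:], h.Sum(nil))
	return sum, nil
}
```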
