Efficient file change detection inside containers #1429
@noizwaves thanks for creating this issue! We chose polling as the way to go because it works universally (some volume types do not support file watching correctly) and does not have problems with setting …. The drawbacks of polling are sometimes-delayed change notifications and increased CPU usage when there is a large number of files to scan. If you do not need devspace to watch all files remotely, the best approach is to exclude those files, which also decreases CPU usage. If that's not possible, we also have an option called …. In the future, I think we could also add an option to enable actual file watching in the container via inotify.
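For anyone tuning this, path exclusions are configured per sync entry in devspace.yaml. A minimal sketch, assuming a devspace v5-style sync section (the image name and excluded paths below are hypothetical placeholders; check the sync configuration docs for your devspace version):

```yaml
dev:
  sync:
    - imageName: app           # hypothetical reference into the images section
      excludePaths:            # paths the poller never scans or syncs
        - node_modules/
        - .git/
        - log/
        - tmp/
```

Every path excluded here is a path the remote poller no longer has to re-scan on each interval, which directly reduces idle CPU usage.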
Thanks for the reply @FabianKramm. I suspect there is some room for optimization in our current rules & configuration. We have noticed the slow change detection with our current rules (it sometimes takes ~10 seconds for a change to be downloaded); faster detection would definitely lead to a better developer experience. In terms of file watching as a supported technology, it does not seem to be required by everyone, which could speak to some kind of optional / opt-in expert mode setting within devspace. I'll optimize our rules/configuration and get back to you with the reduced CPU usage numbers. Is inotify-based file watching on your roadmap at all? It would be great to know when it might be available.
Coincidentally, I just learned that Garden has shifted to Mutagen as its sync implementation. This could be an alternative to Watchman.
@noizwaves I see. Adding a new option that lets you use inotify as the remote watcher instead of file polling would be fine for me, and it shouldn't be hard to implement since we already use inotify on the local side. However, we currently have a lot going on with vcluster, and another new open-source project is coming up this week as well, so I'll need to delay this for around a month, after which we can add it.
Hey @FabianKramm, many thanks for being open to this request. That timeline works really well for us. Thanks again.
@FabianKramm I think this can be marked resolved now that #1439 is merged 🙌 |
Is your feature request related to a problem?
When devspace is used to synchronize a large number of files (in the thousands, for our current application it's ~15,000), the file synchronization process running inside the container uses a large amount of CPU resources (~35% for our ~15,000 files).
When running on a remote cluster, this sync load makes up the majority of the system's CPU usage, and it forces us to scale up purely to support idle file sync operations.
When running on a local cluster, it causes unnecessary resource utilization that slows down other development activities (IDEs, screen sharing, browsing the web).
Which solution do you suggest?
From conversations in Slack, it sounds like the inefficient file poller could be replaced with an efficient file watcher. These are known to use much less CPU.
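To illustrate why poll-based detection is expensive on large trees: every interval, the poller must walk and re-stat the entire tree, so the cost scales with the total file count even when nothing has changed. A minimal sketch of this pattern (illustrative only; not devspace's actual implementation):

```python
import os

def snapshot(root):
    """Map every file under root to its mtime: one full tree walk per poll."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime
            except FileNotFoundError:
                pass  # file vanished between listing and stat
    return state

def diff(old, new):
    """Compare two snapshots; the snapshots themselves cost O(total files)."""
    added = new.keys() - old.keys()
    removed = old.keys() - new.keys()
    modified = {p for p in new.keys() & old.keys() if new[p] != old[p]}
    return added, removed, modified
```

A poller calls `snapshot()` on every tick and diffs against the previous result, so an idle tree of ~15,000 files is re-stat'ed ~15,000 times per interval; an inotify-based watcher instead receives kernel events only for the files that actually change.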
FWIW, we've had success installing the Watchman file watcher within our `ubuntu:18.04`-based containers. This was required by the Sorbet gem to run the LSP typechecking process. Watchman is able to power the change detection for the containerized `sorbet` process with minimal CPU usage. Installation of Watchman within our container was straightforward too.
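The original install steps are not reproduced here, but a representative Dockerfile fragment for an ubuntu:18.04 image might look like the following. This is a sketch under assumptions: the Watchman release version, URL, and archive layout are placeholders — pin and verify whatever release you actually use:

```dockerfile
FROM ubuntu:18.04

# Hypothetical pinned release; substitute a version you have verified.
ARG WATCHMAN_VERSION=v2021.05.31.00

RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates curl unzip \
 && curl -fsSL -o /tmp/watchman.zip \
      "https://github.com/facebook/watchman/releases/download/${WATCHMAN_VERSION}/watchman-${WATCHMAN_VERSION}-linux.zip" \
 && unzip /tmp/watchman.zip -d /tmp \
 && cp /tmp/watchman-${WATCHMAN_VERSION}-linux/bin/* /usr/local/bin/ \
 && cp /tmp/watchman-${WATCHMAN_VERSION}-linux/lib/* /usr/local/lib/ \
 && ldconfig \
 && mkdir -p /usr/local/var/run/watchman \
 && rm -rf /tmp/watchman* /var/lib/apt/lists/*
```

Building from source is the other common route on older Ubuntu images, at the cost of a longer image build.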
Which alternative solutions exist?
I'm no expert in efficient file-watching technologies, but I'm sure there are other options out there, especially ones not created by Facebook. We'd be very willing to install a different one specifically for devspace.
Additional context
In addition to this app (with ~15k files), we will soon be looking to containerize our biggest app (~60k files). Making file sync efficient will be essential for the success of that application.
/kind feature