max open file limit is always hit #70
I have a system that exports small files in great amounts: one folder has 62,000 files, and I currently export 16 of those folders to a mounted NFS directory. max-open-files="1024" is set in vdedup1tb2-volume-cfg.xml.
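For a quick sanity check on a setup like this, the per-process descriptor limit and the current usage can be inspected on Linux; a minimal sketch (the shell's own PID `$$` stands in for the SDFS/dedup process, which is an assumption for illustration):

```shell
# Soft limit on open file descriptors for the current shell
ulimit -Sn

# Count of descriptors currently open by a given PID
# (here: this shell; substitute the PID of the dedup/SDFS process)
ls /proc/$$/fd | wc -l
```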
In the XML, set safe-close to true.
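The two settings mentioned in this thread would sit together in the volume configuration file; a hypothetical excerpt (the surrounding element names are illustrative guesses, only the attribute names and the filename come from this thread):

```xml
<!-- vdedup1tb2-volume-cfg.xml (illustrative structure) -->
<volume-config>
  <io max-open-files="1024" safe-close="true" />
</volume-config>
```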
…On Tue, Apr 3, 2018 at 4:27 AM, richman1000000 ***@***.*** wrote:
I have a system that exports small files in great amounts: one folder has 62,000 files, and I currently export 16 of those folders to a mounted NFS directory.
If the NFS share is exported from ext4 there is no problem, but exported from SDFS I get this error.
max-open-files="1024" is set in vdedup1tb2-volume-cfg.xml.
You can try it yourself, by the way.
I've also checked the Datish Cloud Storage Gateway; it has the same problem.
I found the resolution of my problem! The source of the problem is the interaction of SDFS + an NFSv3 server on top + an NFSv4 client.
There is always this "too many open files" error.
Here is a screenshot of the running dedup process during the backup.
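The "too many open files" symptom above is the kernel's EMFILE error, which can be reproduced in isolation on Linux by shrinking the per-process descriptor limit; a minimal sketch (the limit value 64 is arbitrary, and this only mimics the error, it is not SDFS code):

```python
import errno
import resource

# Reproduce the "Too many open files" (EMFILE) error by lowering the
# soft RLIMIT_NOFILE, the same limit that ulimit/max-open-files governs.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

files = []
reproduced = False
try:
    for _ in range(1000):
        files.append(open("/dev/null"))
except OSError as exc:
    reproduced = exc.errno == errno.EMFILE  # errno 24: Too many open files
finally:
    for f in files:
        f.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print("EMFILE reproduced:", reproduced)
```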