
Network timeout for federated shares causes repeated partial download of large files #917

Closed
Derkades opened this issue Nov 15, 2023 · 7 comments

@Derkades

Describe the bug
Nextcloud seems to have a 30-second timeout for downloading files from federated shares. The Nextcloud developers don't consider this a bug, saying it is required for responsiveness.

This is also an issue during Memories indexing. Indexing fails with the error: "Failed to index file ...: Failed to get local file: cURL error 28: Operation timed out after 30000 milliseconds with ... out of ... bytes received". The error is visible when running a manual index with occ, but the issue also occurs with only the default Nextcloud cron job.
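For context, error 28 is cURL's operation-timed-out error. A standalone PHP snippet that reproduces the same class of failure (illustrative only, with a placeholder URL; this is not Memories' actual download code):

```php
<?php
// Illustration only: a 30-second cURL timeout on a slow download
// fails with error 28, producing the same message as reported above.
$ch = curl_init('https://cloud.example.com/remote.php/dav/large-file'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 30000); // the 30000 ms limit from the error message
curl_exec($ch);
if (curl_errno($ch) === 28) { // CURLE_OPERATION_TIMEDOUT
    echo 'cURL error 28: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);
```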

Because of the error, Memories does not consider the file indexed and will happily try again. This generates heavy network traffic every 15 minutes as it repeatedly attempts, and fails, to download the file.

Ideally I would like Memories to download the file and generate a thumbnail for it, but that is probably not going to happen. Instead, it would be nice if Memories marked the file as indexed without a thumbnail.

Screenshots
(screenshot omitted)

Platform:

  • Memories Version: 6.0.1
  • Nextcloud Version: 27.1.3
  • PHP Version: 8.1
@Derkades Derkades added the needs triage To be triaged label Nov 15, 2023
@pulsejet pulsejet added bug Something isn't working and removed needs triage To be triaged labels Nov 15, 2023
@pulsejet pulsejet added this to the 6.2 milestone Nov 15, 2023
@pulsejet
Owner

This is potentially a general problem with external storage. Memories needs the entire file to be available locally for indexing, which doesn't play nicely with such a timeout.

So instead, it would be nice if Memories marked this file as indexed without a thumbnail.

We can't really mark it as "indexed" because no information can be inferred without the file itself. Essentially, this would cause a bunch of empty thumbs at the top of the timeline (since we don't know the date either). I think the best solution here is to have some configuration to limit the size of files to index from external storage.
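A rough sketch of what such a limit could look like; the config key and helper name are hypothetical, not an existing Memories option:

```php
<?php
// Hypothetical sketch: skip indexing external-storage files above a
// configurable size. 'memories.index.external_max_size' is an invented
// config key used only for illustration.
function shouldIndexExternalFile(\OCP\Files\File $file, \OCP\IConfig $config): bool {
    // Default cap of 100 MiB, chosen arbitrarily for this sketch.
    $maxBytes = (int)$config->getSystemValue('memories.index.external_max_size', 100 * 1024 * 1024);
    return $file->getSize() <= $maxBytes;
}
```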

@pulsejet
Owner

The Nextcloud developers don't consider this a bug,

BTW, I wouldn't consider this a bug either. When we reach the 30s mark, it's hard to say whether we're actually receiving the file we want or whether something has gone wrong. If downloads were allowed to run beyond 30s, all the available bandwidth could stay clogged up indefinitely by large files that never finish downloading.

@pulsejet
Owner

Related: #933. We probably need another table here to track files that failed to be indexed, plus a flag in the index command to retry them.
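A sketch of what that table could look like as a Nextcloud migration step; the table name, columns, and class name are invented for illustration, not the actual implementation:

```php
<?php
// Hypothetical migration sketch for the proposed failure-tracking table.
use OCP\DB\ISchemaWrapper;
use OCP\Migration\IOutput;
use OCP\Migration\SimpleMigrationStep;

class Version000000Date00000000 extends SimpleMigrationStep {
    public function changeSchema(IOutput $output, \Closure $schemaClosure, array $options): ?ISchemaWrapper {
        /** @var ISchemaWrapper $schema */
        $schema = $schemaClosure();
        if (!$schema->hasTable('memories_failures')) {
            $table = $schema->createTable('memories_failures');
            $table->addColumn('fileid', 'bigint', ['notnull' => true]);   // file that failed to index
            $table->addColumn('reason', 'string', ['notnull' => false, 'length' => 255]); // last error
            $table->addColumn('retry', 'boolean', ['notnull' => false, 'default' => false]); // retry flag
            $table->setPrimaryKey(['fileid']);
        }
        return $schema;
    }
}
```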

@pulsejet
Owner

We can't really mark it as "indexed" because no information can be inferred without the file itself.

I just realised this isn't true. The preview generator might already have a preview for the file (maybe previews are, or could be, shared over federation?), and it's always possible to fall back to the modification time for the date/time. This isn't great by any means, but it seems like a reasonable compromise. We still need a flag on the index entry to allow "retrying".
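A minimal sketch of that fallback; the helper name and returned array shape are hypothetical, though the OCP calls are real Nextcloud APIs:

```php
<?php
// Hypothetical helper illustrating the fallback described above.
use OCP\Files\File;
use OCP\IPreview;

function indexWithoutContents(File $file, IPreview $previews): array {
    return [
        'fileid'      => $file->getId(),
        // Fall back to mtime, since EXIF can't be read without the file contents.
        'datetaken'   => $file->getMTime(),
        // The preview app may already have a thumbnail cached for this file.
        'has_preview' => $previews->isAvailable($file),
    ];
}
```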

@pulsejet pulsejet modified the milestones: 6.2, 6.3 Jan 10, 2024
@simonspa

As far as I can see, they indeed did not consider it a bug, but they added a config parameter anyway. Does setting davstorage.request_timeout to something larger solve your issue?
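For reference, that setting goes in config/config.php; the value is in seconds and defaults to 30, matching the 30000 ms in the error above. The 300 here is just an example value:

```php
<?php
// config/config.php — raise the WebDAV/federated-share client timeout.
$CONFIG = [
    // ... existing settings ...
    'davstorage.request_timeout' => 300, // seconds; default is 30
];
```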

@Derkades
Author

Thanks, that indeed solves the issue.

@pulsejet
Owner

Fixed on master
