[Nextcloud 28] New metadata background job can trigger error on 32-bit systems #2185
Comments
Thanks for the report, @MichaIng.
Reading the code again, I doubt that we can truly support 32-bit. Currently, the code expects an int, so any value that does not fit will cause an issue. How do we handle out-of-range mtimes in our files code? @come-nc maybe?
You have to catch the error and log it in this case; you cannot do much more if the timestamp does not fit in an int. But for the original date this should not happen until 2038. I am not sure we handle that correctly in all places, but incorrect mtimes are rare, since the mtime is usually set by Nextcloud itself. For the calendar, we officially do not support events past 2038 on 32-bit.
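For illustration, here is a minimal sketch of that catch-and-skip approach in PHP; the helper name and the EXIF date handling are assumptions for this sketch, not the actual server code:

```php
<?php
declare(strict_types=1);

// Hypothetical helper, not the actual Nextcloud code: parse an EXIF-style
// date ("YYYY:MM:DD HH:MM:SS") and return a Unix timestamp, or null when
// the epoch would not fit in a native int -- i.e. past 2038-01-19 on
// 32-bit builds, where PHP_INT_MAX is 2147483647.
function exifDateToEpoch(string $exifDate): ?int
{
    $dt = \DateTimeImmutable::createFromFormat('Y:m:d H:i:s', $exifDate);
    if ($dt === false) {
        return null; // unparseable EXIF date
    }
    // format('U') returns the epoch as a string, so it stays exact even
    // when the value exceeds PHP_INT_MAX on a 32-bit system.
    $epoch = $dt->format('U');
    if ((float) $epoch > (float) \PHP_INT_MAX || (float) $epoch < (float) \PHP_INT_MIN) {
        return null; // would overflow a 32-bit int: caller logs and skips
    }
    return (int) $epoch;
}
```

A caller would log at debug or info level and skip the file when `null` is returned, instead of letting the exception bubble up as an error.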
It is
I saw this error on every job execution, because the job failed at the same point every time, right?
Good point. Here is a fix: nextcloud/server#42198
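For context, the gist of such a fix (a rough sketch under assumed names, not the actual patch in nextcloud/server#42198) is to handle each file independently, so one bad timestamp is logged and skipped instead of aborting the whole job and making it fail at the same point on every run:

```php
<?php
declare(strict_types=1);

use Psr\Log\LoggerInterface;

// Illustrative sketch only; $generate stands in for the real per-file
// metadata extraction, which is not shown here.
function generateMetadataForAll(iterable $paths, callable $generate, LoggerInterface $logger): void
{
    foreach ($paths as $path) {
        try {
            $generate($path);
        } catch (\Throwable $e) {
            // Info, not error: invalid EXIF data in large galleries is
            // expected and no reason to alarm the admin daily.
            $logger->info("Skipping metadata generation for {$path}: {$e->getMessage()}", [
                'exception' => $e,
            ]);
        }
    }
}
```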
Describe the bug
I face an "Epoch doesn't fit in a PHP integer" error when the new metadata background job runs. It appears only once per run, so it is probably caused by a faulty timestamp in a particular file, but it would be good to verify/debug this.
To Reproduce
Steps to reproduce the behavior:
Let the `GenerateMetadataJob` background job run.
Expected behavior
No errors when the job runs.
Additional context
The client is irrelevant, so I provide server info here instead:
Nextcloud log entry:
In case my conclusion above is correct, the question is rather whether such a case should be handled gracefully: not throwing an error but at most a debug or info log entry, since invalid EXIF data in large image galleries is probably not uncommon and no reason to worry about the Nextcloud instance or the data itself. A daily error log entry, however, would concern any serious admin.
Another (probably off-topic) question: does this job really need to check all files every day (which is currently the case, as far as I can see)? On my Raspberry Pi 2 with quite a lot of data, this took 1.5 hours of continuous disk reads. If I understand correctly, generating metadata for all existing files is basically a one-time step, and afterwards it only needs to be done for new files. It could probably be handled like the general file cache table: scan/generate for new files only, but provide a CLI command like
`files:scan --all` => `photos:scan --all`
to trigger a manual rescan in case files were added manually.