The download system should record the URL of every download it performs (the actual download URL, not one parsed out of metadata) and map it to the file ID it uses internally.
This way known URLs can be skipped much more easily: the system could skip files the user already has, and if you want to verify whether the file is actually in place, you could do that too.
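A minimal sketch of what such a URL-to-file-ID mapping could look like, assuming a SQLite-backed store; the `url_map` table and both function names are hypothetical, not the project's actual schema:

```python
import sqlite3

def init_url_map(db: sqlite3.Connection) -> None:
    # Hypothetical schema: one row per downloaded URL, pointing at the internal file ID.
    db.execute(
        "CREATE TABLE IF NOT EXISTS url_map (url TEXT PRIMARY KEY, file_id INTEGER NOT NULL)"
    )

def record_download(db: sqlite3.Connection, url: str, file_id: int) -> None:
    """Remember which internal file ID a downloaded URL produced."""
    db.execute("INSERT OR REPLACE INTO url_map (url, file_id) VALUES (?, ?)", (url, file_id))
    db.commit()

def known_file_id(db: sqlite3.Connection, url: str) -> int | None:
    """Return the mapped file ID if this URL was downloaded before, else None."""
    row = db.execute("SELECT file_id FROM url_map WHERE url = ?", (url,)).fetchone()
    return row[0] if row else None
```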
This could also enable a way to re-download missing or corrupted files.
If a file fails to open for whatever reason, that could trigger a re-download of it.
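A sketch of that recovery path, building on the hypothetical `url_map` table above; `download_file` here is a bare-bones stand-in for the project's real downloader:

```python
from urllib.request import urlopen

def download_file(url: str, path: str) -> None:
    """Bare-bones stand-in for the real downloader: fetch url and write it to path."""
    with urlopen(url) as resp, open(path, "wb") as out:
        out.write(resp.read())

def open_or_redownload(db, file_id: int, path: str) -> bytes:
    """Try to read the file; if that fails, re-fetch it from its recorded URL."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except OSError:
        row = db.execute("SELECT url FROM url_map WHERE file_id = ?", (file_id,)).fetchone()
        if row is None:
            raise  # no recorded source URL, so nothing to fall back on
        download_file(row[0], path)
        with open(path, "rb") as f:
            return f.read()
```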
But doing it this way would speed up processing of already-imported files, instead of downloading the full file only to realise it already has it in place, like it does now.
Looking the URL up in the URL database/list is faster than downloading the file again, and it is also lighter on the server.
Even looking the URL up AND verifying that the file is in its correct location is faster than downloading the file again.
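Putting the two checks together, that fast path could look roughly like this (again with hypothetical names: `file_path_for` stands in for however the system maps file IDs to disk locations, and `known_file_id` comes from the first sketch):

```python
import os

def should_download(db, url: str, file_path_for) -> bool:
    """Return True only when a download is actually needed."""
    file_id = known_file_id(db, url)
    if file_id is None:
        return True  # unknown URL: download it
    # Known URL: re-fetch only if the file is missing from its expected location.
    return not os.path.isfile(file_path_for(file_id))
```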
I think this is the way to go.
I've been toying with a browser extension à la Hydrus Companion and do kind of need something like this -- I initially planned to just use the Source Finder script that's already bundled, but since it parses metadata for source: tags on every run, it's just super slow.
I don't like adding extra stuff to the database since it means more maintenance and migration pains, but a basic URL<->ID index would indeed be useful here.