Feature request: custom file path templates and detecting/skipping duplicates #93
Comments
Not sure which OS you're using, but on Ubuntu I have a script that runs an instance of the bulk downloader (which downloads everything, user duplicates included), then uses fdupes once it's done to remove the duplicate files. Here's a part of that script for users:
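Something along these lines (a sketch of the idea rather than my exact script; DOWNLOAD_DIR is a placeholder):

```bash
#!/usr/bin/env bash
# Run the bulk downloader first, with whatever invocation you normally use;
# it will download everything, duplicates included.

DOWNLOAD_DIR="$HOME/reddit-downloads"  # placeholder: wherever your downloads land

# Delete duplicate files, keeping one copy of each set (no prompts!).
fdupes -rdN "$DOWNLOAD_DIR"

# Remove any directories left empty by the deduplication (also no prompts).
find "$DOWNLOAD_DIR" -type d -empty -delete
```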
This approach should work on macOS/Linux, if that's what you're using. Test these things out first, as both fdupes and the find command will remove files without asking you (I've warned you). Not experienced with Windows, sorry.
Since creating the issue, I've cloned the repo and got it running from source. I have a few ideas, and I'd be happy to submit pull requests if I get things written. The first idea is to use MD5 sums to detect duplicates: keep a dict in memory keyed by the MD5 sums, and after getting each file, store its MD5 there. After finishing, save it to the target directory, and when starting, load that MD5 file if it exists. For now I've just hard-coded the target directory path to how I'd like it. Some sort of user customizability would be sweet, but at least for now it's saving things where I want them :)
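Roughly what I have in mind (a minimal sketch; the file name and function names here are mine, not anything actually in the repo):

```python
import hashlib
import json
from pathlib import Path

HASH_FILE = ".downloaded_md5s.json"  # hypothetical name, kept in the target directory

def file_md5(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large videos never sit in memory whole."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

def load_hashes(target_dir):
    """Load previously recorded hashes, or start fresh if this is the first run."""
    store = Path(target_dir) / HASH_FILE
    return set(json.loads(store.read_text())) if store.exists() else set()

def save_hashes(target_dir, hashes):
    """Persist the hash set so the next run can skip already-seen files."""
    (Path(target_dir) / HASH_FILE).write_text(json.dumps(sorted(hashes)))
```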
Hey, it's really pleasing to see people using my tool :) I have been really busy for a few months. Now that I have free time, I can work on the project and implement those brilliant features.
It looks like a really useful tool! I'm happy to contribute if I can, even if it's just feature requests ;) I'm also pretty short on time these days, but could probably make a few code changes.
That's exactly why I use fdupes on Linux! It recognizes duplicates by comparing MD5 signatures between files, followed by a byte-by-byte comparison. It's really handy. But my technique requires everything to be downloaded first, and only then removes the duplicates. It would be awesome if the script could check for duplicates before downloading; that way you'd get varied content instead of 10 duplicates. Especially if you set it to download only 10 posts and there are 10 duplicates: it'll download them all instead of downloading 9 other unique posts.
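In other words, something like this (download() and the submission objects are hypothetical stand-ins for the tool's internals; the helper functions come from the sketch above):

```python
def download_unique(submissions, target_dir, limit):
    """Keep downloading until `limit` unique files exist, skipping duplicates."""
    seen = load_hashes(target_dir)
    downloaded = 0
    for post in submissions:
        if downloaded >= limit:
            break
        path = download(post, target_dir)  # hypothetical: returns the saved file's Path
        digest = file_md5(path)
        if digest in seen:
            path.unlink()  # duplicate: delete it and don't count it toward the limit
            continue
        seen.add(digest)
        downloaded += 1
    save_hashes(target_dir, seen)
```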
@ymgenesis I have just tested that Python can easily generate an MD5 hash of a 100 MB video file in milliseconds. I will implement this feature directly into the code. Thanks for bringing this up!
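For anyone who wants to check this on their own machine, a quick timing sketch (the path is a placeholder):

```python
import hashlib
import time

path = "some_large_video.mp4"  # placeholder: any big local file

start = time.perf_counter()
md5 = hashlib.md5()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        md5.update(chunk)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{md5.hexdigest()}  ({elapsed_ms:.1f} ms)")
```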
No worries! @ahawks had the MD5 idea; I had just been using it externally. Thanks!
The feature is up on the develop branch, and it will soon make it to a new release with some other cool features such as custom file names and custom file paths. You can check it out!
It seems that the develop branch uses a YAML config file instead of JSON, which means having to re-authenticate with Reddit and Imgur, right?
Nope. It still uses the config.json file, but the structure of the file is slightly different, and the program now uses a different Reddit API. The setup process is still a one-time thing, though.
The requested features are live in version 1.8.0.
A lot of posters post the same content to multiple subs. If you're trying to download everything from one person, you end up with 50 folders and the same 10 pictures in each folder.
In this case, I'd much rather be able to specify a path of "USERNAME/TITLE" instead of the default "SUBREDDIT/TITLE".
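For illustration, a minimal sketch of how such a template could be expanded (the placeholder names and template syntax are hypothetical, not an existing option of the tool):

```python
from pathlib import Path
from string import Template

PATH_TEMPLATE = Template("${username}/${title}")  # instead of the default ${subreddit}/${title}

def build_path(base_dir, post):
    """Expand the template with a post's fields; real code would also sanitize the title."""
    relative = PATH_TEMPLATE.substitute(
        username=post["author"],
        subreddit=post["subreddit"],
        title=post["title"],
    )
    return Path(base_dir) / relative

# build_path("downloads", {"author": "someuser", "subreddit": "pics", "title": "sunset"})
# -> downloads/someuser/sunset
```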