
Feature request: custom file path templates and detecting/skipping duplicates #93

Closed
ahawks opened this issue May 26, 2020 · 11 comments

ahawks commented May 26, 2020

A lot of posters post the same content to multiple subs. If you're trying to download everything from 1 person, you end up with 50 folders and the same 10 pictures in each folder.

In this case, I'd much rather be able to specify a path template of "USERNAME/TITLE" instead of the default "SUBREDDIT/TITLE".


ymgenesis commented May 26, 2020

Not sure which OS you're using, but on Ubuntu I have a script that runs an instance of the bulk downloader (so it downloads everything, duplicates included), then uses fdupes once it's done to remove the duplicate files.

Here's a part of that script for users:

# The script command
python3.6 /path/to/bulk-downloader-for-reddit/script.py -d /path/to/download/folder --submitted --user USER --sort SORT --limit NUMBER --time TIME

# Search for and remove duplicates; be careful with -d (removes duplicates without prompting you)
fdupes /path/to/download/folder -SArNd

# Remove empty folders that fdupes may have created by removing files
find /path/to/download/folder -empty -type d -exec rm -r {} +

This approach should work on macOS/Linux, if that's what you're using. Test these commands first, as both fdupes and find will remove things without asking you (you've been warned). I'm not experienced with Windows, sorry.


ahawks commented May 26, 2020

Since creating the issue, I've cloned the repo and gotten it running from source. I have a few ideas, and I'd be happy to submit pull requests if I get things written.

First idea is to use MD5 sums to detect duplicates. We could keep a dict in memory keyed by MD5 sum and, after downloading each file, store its MD5 there. After finishing, save the dict to the target directory; on startup, load that file if it exists.
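A minimal sketch of that idea, assuming nothing about the project's internals — the file name `.downloaded_hashes.json` and all function names here are hypothetical, purely for illustration:

```python
import hashlib
import json
import os

# Hypothetical registry file kept inside the download directory.
HASH_FILE = ".downloaded_hashes.json"

def load_hashes(target_dir):
    """Load the md5 -> filename map saved by a previous run, if any."""
    path = os.path.join(target_dir, HASH_FILE)
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def save_hashes(target_dir, hashes):
    """Persist the map so the next run can skip already-seen content."""
    with open(os.path.join(target_dir, HASH_FILE), "w") as f:
        json.dump(hashes, f)

def md5_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks to avoid loading it fully into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def register(path, hashes):
    """Return True if the file is new; delete it and return False if it duplicates a known hash."""
    digest = md5_of(path)
    if digest in hashes:
        os.remove(path)  # same content already saved elsewhere
        return False
    hashes[digest] = path
    return True
```

Call `load_hashes` once at startup, `register` after each download, and `save_hashes` on exit; the map survives across runs, so re-posts to other subreddits get dropped automatically.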

And for now I've just hard-coded the target directory path to how I'd like it. Some sort of user-customizability would be sweet, but at least for now it's saving things where I want them :)

aliparlakci (Owner) commented

Hey, it is really pleasing to see people use my tool :)

I have been really busy for a few months. Now that I have free time I can work on the project and implement those brilliant features.


ahawks commented May 26, 2020

It looks like a really useful tool! I'm happy to contribute if I can, even if it's just in feature requests ;) I'm also pretty short on time these days, but could probably make a few code changes.


ymgenesis commented May 26, 2020

> First idea is to use MD5 sums to detect duplicates.

That's exactly why I use fdupes on Linux! It recognizes duplicates by comparing MD5 signatures between files, followed by a byte-by-byte comparison. It's really handy.

But my technique requires everything to be downloaded first; only then are the duplicates removed. It would be awesome if the script could check before downloading, so you get varied content instead of 10 duplicates. In particular, if you set it to download only 10 posts and there are 10 duplicates, it will download all of them instead of 9 other unique posts.
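The in-flight check described above could be sketched like this — `posts` and `download_bytes` are hypothetical stand-ins for the downloader's post list and fetch function, not the project's actual API. Duplicates are skipped without counting against the limit, so the limit fills with unique posts:

```python
import hashlib

def download_unique(posts, limit, download_bytes):
    """Fetch posts until `limit` unique files are kept, skipping duplicate content by MD5."""
    seen = set()
    kept = []
    for post in posts:
        if len(kept) >= limit:
            break
        data = download_bytes(post)
        digest = hashlib.md5(data).hexdigest()
        if digest in seen:
            continue  # duplicate content: don't count it against the limit
        seen.add(digest)
        kept.append(post)
    return kept
```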

aliparlakci (Owner) commented

@ymgenesis I have just tested that Python can generate an MD5 hash of a 100 MB video file in milliseconds. I will implement this feature directly in the code. Thanks for bringing this up!
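The timing claim is easy to reproduce with the standard library alone; this is a generic sketch (the path is a placeholder), not the project's code:

```python
import hashlib
import time

def timed_md5(path):
    """Return (hex digest, elapsed seconds) for a file, hashed in 1 MiB chunks."""
    start = time.perf_counter()
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest(), time.perf_counter() - start
```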

ymgenesis commented

> @ymgenesis I have just tested that Python can generate an MD5 hash of a 100 MB video file in milliseconds. I will implement this feature directly in the code. Thanks for bringing this up!

No worries! @ahawks had the MD5 idea, I had just been using it externally. Thanks!


aliparlakci commented May 30, 2020

@ymgenesis @ahawks

The feature is up on the develop branch and will soon make it into a new release, along with some other cool features such as custom file names and custom file paths. You can check it out!


ahawks commented Jun 1, 2020

It seems that the develop branch uses a YAML config file instead of JSON, which means having to re-authenticate with Reddit and Imgur, right?


aliparlakci commented Jun 1, 2020 via email

aliparlakci (Owner) commented

The requested features are live in version 1.8.0.
