
Do we need special consideration for large file downloads? #24

Closed
homebysix opened this issue Oct 4, 2015 · 3 comments

Comments

@homebysix
Owner

Word on the street is that urllib2 isn't great at downloading large files. Our inspect_download_url() function is almost exclusively going to be handling large files, so is this something we need to consider?

One example of a "chunking" method: https://gist.github.com/gourneau/1430932

@sheagcraig
Collaborator

I think that makes sense. Why this isn't just built in, I'm not sure.
But urllib2 opens a file-like object, so you can iterate over it in a while loop and write out chunks. There's an example in the urllib2 docs.

But yes, this would keep Recipe Robot from having to hold the entire download in memory.
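
Something along these lines, roughly (a sketch only; the function name, chunk size, and paths are placeholders, not Recipe Robot's actual code):

```python
import urllib2

CHUNK_SIZE = 16 * 1024  # arbitrary; tune as needed


def download_in_chunks(url, dest_path):
    """Stream a download to disk in fixed-size chunks instead of
    reading the whole response into memory at once."""
    response = urllib2.urlopen(url)
    with open(dest_path, "wb") as output_file:
        while True:
            chunk = response.read(CHUNK_SIZE)
            if not chunk:  # empty string signals end of stream
                break
            output_file.write(chunk)
```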

Alternate approaches would be similar to the SSL handshake suggestions. Any one of those solutions has a way to handle this too, which might just come along for the ride if/when we switch things up to handle tricky TLS.

@homebysix
Owner Author

Yes, let's solve #19 first.

@homebysix
Owner Author

I think I accidentally solved this with #84.
