Word on the street is that urllib2 isn't great at downloading large files. Our inspect_download_url() function is almost exclusively going to be handling large files, so is this something we need to consider?
I think that makes sense. I'm not sure why this isn't just built in.
But urllib2 opens a file-like object, so you can just iterate over it in a while loop and write chunks. There's an example in the urllib2 docs.
But yes, this would avoid Recipe Robot having to hold the entire download in memory. Something like the sketch below would do it.
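A minimal sketch of that approach, assuming Python 2 (where urllib2 lives); the chunk size and function name are placeholders, not Recipe Robot's actual values:

```python
import urllib2

CHUNK_SIZE = 16 * 1024  # 16 KB per read; tune as needed


def download_file(url, dest_path):
    """Stream a URL to disk without holding the whole file in memory."""
    response = urllib2.urlopen(url)  # returns a file-like object
    with open(dest_path, "wb") as outfile:
        while True:
            chunk = response.read(CHUNK_SIZE)
            if not chunk:
                break  # empty read means EOF
            outfile.write(chunk)
```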
Alternate approaches would be similar to the SSL handshake suggestions. Any one of those solutions has ways to handle this too, which might just come along for the ride if/when we switch things up to handle tricky TLS.
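For illustration only: if the switch were to the requests library (one of the options typically raised for the TLS problem), chunked downloading comes along for free via `stream=True`:

```python
import requests


def download_file(url, dest_path, chunk_size=16 * 1024):
    """Stream a URL to disk in chunks using requests."""
    response = requests.get(url, stream=True)  # defer downloading the body
    response.raise_for_status()
    with open(dest_path, "wb") as outfile:
        for chunk in response.iter_content(chunk_size=chunk_size):
            if chunk:  # skip keep-alive chunks
                outfile.write(chunk)
```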
One example of a "chunking" method: https://gist.github.com/gourneau/1430932