Avoid reading the whole uploaded file into memory at once in the WireUpload class #233

Closed
BitPoet opened this Issue Sep 28, 2018 · 7 comments

BitPoet commented Sep 28, 2018

Short description of the enhancement

Currently, when using Ajax uploads, WireUpload slurps the whole uploaded file into memory at once. This is a problem in scenarios where very large files need to be uploaded; retrieving the data in chunks instead would make the script much more memory efficient.

Current vs. suggested behavior

Currently, everything happens in the following line:

		file_put_contents($tmpName, file_get_contents('php://input')); 

It would be nice to cut down on memory use by reading and writing the contents in chunks of a fixed size, balancing memory usage against CPU and disk cycles. Here is a suggested solution using a chunk size of 8 MB:

		$inputHandle = fopen("php://input", "rb");
		$outputHandle = fopen($tmpName, "wb");
		while(!feof($inputHandle)) {
			// Read and write at most 8 MB at a time
			$filedata = fread($inputHandle, 8192 * 1024);
			fwrite($outputHandle, $filedata);
		}
		fclose($outputHandle);
		fclose($inputHandle);
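As an aside (not part of the original suggestion), PHP's standard library already offers a one-call equivalent of the loop above: `stream_copy_to_stream()` copies between two open streams in internal chunks, so peak memory stays small regardless of the upload size. A minimal sketch, with `$tmpName` standing in for whatever temporary target path WireUpload would supply:

```php
<?php
// Hypothetical target path; in WireUpload, $tmpName comes from the class itself.
$tmpName = tempnam(sys_get_temp_dir(), 'wu');

$inputHandle  = fopen("php://input", "rb");
$outputHandle = fopen($tmpName, "wb");

// stream_copy_to_stream() buffers internally, so the whole request
// body is never held in memory at once. Returns the number of bytes copied.
$bytesCopied = stream_copy_to_stream($inputHandle, $outputHandle);

fclose($inputHandle);
fclose($outputHandle);
```

The explicit `fread()`/`fwrite()` loop keeps the chunk size under your control, whereas `stream_copy_to_stream()` leaves it to PHP; either way the memory ceiling stays flat.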

Why would the enhancement be useful to users?

I have a practical case where movies (MP4) of more than 100 MB need to be uploaded to pagefiles. These uploads sometimes exceed memory limits or seriously slow down the server. Others will likely experience similar issues when uploading large files (like videos or compressed archives).

gmclelland commented Sep 28, 2018

FYI...I think this request sounds similar to #215 in case you want to add a note there.

Thanks for sharing your suggested solution

jmartsch commented Sep 28, 2018

Yes, this seems to be a duplicate of #215

BitPoet commented Sep 28, 2018

Nope, the similarity in the issue titles is a bit misleading: #215 is about chunked network transfers, while this one is about transferring the already uploaded file from memory to disk. I'll try and find a better title :-)

A chunked upload implementation would avoid the problem, since it would bring its own routine for writing the target file. But as long as that isn't PW's default and guaranteed to work under all circumstances, I believe my solution still has value.

gmclelland commented Sep 28, 2018

Thanks for the clarification

BitPoet changed the title from Make WireUpload read and write uploaded file contents in chunks to Avoid reading the whole uploaded file into memory at once in the WireUpload class Sep 28, 2018

BitPoet commented Sep 28, 2018

No problem :-) I changed the title to avoid confusion. It really looked quite similar at first glance.

ryancramerdesign added a commit to processwire/processwire that referenced this issue Nov 8, 2018

Add @BitPoet processwire/processwire-requests#233 to make ajax file upload use chunked method for potential reduced memory usage and support of larger file uploads
ryancramerdesign (Contributor) commented Nov 8, 2018

Thanks, this seems like a good idea. The suggested code helps a lot; I have added it.

BitPoet commented Nov 8, 2018

Brilliant, thank you for the quick adoption!

BitPoet closed this Nov 8, 2018
