I have an issue that occurs frequently, but I can't seem to replicate it consistently. When uploading a batch of files, I occasionally end up with an extra copy or two of a file in the upload directory. For example, if I upload ten files in a row, one or two of them may leave behind an extra copy or two that never gets removed by processing.
Let's say one of the uploaded files is called xxx.pdf. After all the uploading and processing, I will end up with something like 000000222 and 000000223 in my upload directory. Each of those files is identical to the uploaded file (xxx.pdf). The successfully processed version of the file is 000000224. So in this case, three copies of the uploaded file were placed in the upload directory; the first two were never processed and the third one was. The other nine files in that ten-file group processed without any issues.
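To confirm that the leftover numbered files really are byte-identical to the original upload, a quick checksum comparison works; a minimal sketch (the file names and directory are stand-ins created just for the demo):

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with throwaway files standing in for xxx.pdf and the numbered
# copies in the upload directory (paths and names are examples only).
tmp = tempfile.mkdtemp()
data = b"%PDF-1.4 example payload"
for name in ("xxx.pdf", "000000222", "000000223"):
    with open(os.path.join(tmp, name), "wb") as f:
        f.write(data)

original = file_sha256(os.path.join(tmp, "xxx.pdf"))
matches = [n for n in ("000000222", "000000223")
           if file_sha256(os.path.join(tmp, n)) == original]
print(matches)  # both leftover copies match the original upload
```

If the digests ever differ, the leftovers would be partial uploads rather than full duplicates, which would point at a different cause.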
My processing script (doc_upload.php) simply checks the file for validity, places a processing job in a job queue so that processing happens asynchronously in a separate script controlled by a daemon (not nginx), and then terminates. The uploaded file is moved out of the upload directory by that separate daemon, not by doc_upload.php. I do not appear to have cases where a file is uploaded but never processed at all.
doc_upload.php (launched by the upload module to process the file) --> sets up a processing job and places it in a queue for processing by the daemon (does not remove the file from the upload directory)
daemon --> reads the job queue, processes the uploaded file, removes it from the upload directory
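Since the daemon is the only thing that removes files, one low-risk mitigation while this is being diagnosed is a periodic sweep that flags files in the upload store that no queued job references. A minimal sketch, assuming a list of queued file paths can be obtained somehow; the queue representation below is invented for illustration, not how the author's daemon actually works:

```python
import os
import tempfile
import time

def find_orphans(upload_dir, queued_paths, min_age_seconds=3600):
    """Return names of files in upload_dir that no queued job references
    and that have been sitting there longer than min_age_seconds."""
    queued = {os.path.abspath(p) for p in queued_paths}
    now = time.time()
    orphans = []
    for name in os.listdir(upload_dir):
        path = os.path.abspath(os.path.join(upload_dir, name))
        if path not in queued and now - os.path.getmtime(path) > min_age_seconds:
            orphans.append(name)
    return sorted(orphans)

# Demo: three uploaded copies, only one of which is still queued.
tmp = tempfile.mkdtemp()
backdated = time.time() - 7200  # pretend they arrived two hours ago
for name in ("000000222", "000000223", "000000224"):
    path = os.path.join(tmp, name)
    open(path, "w").close()
    os.utime(path, (backdated, backdated))

orphans = find_orphans(tmp, [os.path.join(tmp, "000000224")])
print(orphans)  # the two copies no job will ever pick up
```

The age threshold keeps the sweep from touching files that are mid-upload or still waiting in the queue.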
This is the only entry in the nginx error.log that seems relevant (I get these occasionally):
2012/03/08 13:27:52 [alert] 12907#0: *92047 aborted uploading file "xxx.pdf" to "/usr/local/nginx/uploads/0000000181", dest file removed, client: 0.0.0.0, server: www.xxxx.com, request: "POST /xxxx HTTP/1.1", host: "www.xxxx.com"
I am running nginx 1.1.11, with the uploads done over SSL, and version 2.2 of the upload module. Here is the section of my conf involving the uploading.
client_max_body_size 50m; # maximum file upload size
# upload module specific parameters
upload_pass /doc_upload.php;
upload_store /usr/local/nginx/html/uploads;
upload_pass_args on;
upload_limit_rate 1m; # limit the upload rate
# set specified fields in the request body
upload_set_form_field $upload_field_name.name "$upload_file_name";
upload_set_form_field $upload_field_name.content_type "$upload_content_type";
upload_set_form_field $upload_field_name.path "$upload_tmp_path";
upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
upload_aggregate_form_field "$upload_field_name.number" "$upload_file_number";
upload_aggregate_form_field "$upload_field_name.external" "$1";
upload_aggregate_form_field "$upload_field_name.args" "$args";
upload_pass_form_field "^name$"; # this is for the unique_name field in routing
upload_cleanup 400 404 499 500-505;
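One small aside on the config: the module's upload_store directive also accepts hashing levels (in the same way proxy_temp_path does), which spreads stored files across subdirectories instead of one flat directory. A sketch, assuming the subdirectories are created before nginx starts; whether this has any bearing on the duplicates is not established:

```nginx
# upload_store with one hashing level; the subdirectories under the
# store root must already exist when nginx starts.
upload_store /usr/local/nginx/html/uploads 1;
```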
Any suggestions?