Can't upload a gcode file larger than 100MB #455

Closed
makerthink opened this Issue Apr 26, 2014 · 8 comments

makerthink commented Apr 26, 2014

Upload failed

Could not upload the file. Make sure that it is a GCODE file and has the extension ".gcode" or ".gco" or that it is an STL file with the extension ".stl" and Cura support is enabled and configured.

Server reported:

undefined

I have a big printer, so I often end up with large files to print, and I run into this every time.
Has anyone else had a similar problem?

I tried a Raspberry Pi and a Cubietruck, same result on both.
Both the stable and the experimental version have this issue.

nhfoley commented Jul 27, 2014

Yes, I discovered this issue as well... I was trying to upload a 190 MB gcode file today and got the same error.

I was able to work around the problem by manually uploading the file to the Pi using PuTTY, though there were a few non-obvious steps:

  1. You have to create a new upload folder in the OctoPrint settings, as the standard one seems to be inaccessible from PuTTY. Or I suck at Linux and couldn't find it. Either way, I created a new upload folder under /home/pi.

  2. Your manually uploaded gcode can't contain any spaces or weird characters in the filename, or it doesn't show up in the browser interface.

  3. You have to reboot your Pi after changing the upload folder location.

foosel commented Jul 27, 2014

If you are running the devel branch, a quick workaround is to scp large files to ~/.octoprint/watched, or optionally to another folder configured as the watched folder in config.yaml, see here (I just realized I apparently didn't yet add this to the settings UI, shame on me).
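
Roughly something like the sketch below; the hostname and paths are placeholders, and the config.yaml key shown for the watched folder is an assumption, so double-check it against your config documentation:

```yaml
# Copy the file straight into the watched folder over SSH, e.g.:
#   scp huge_part.gcode pi@octopi.local:~/.octoprint/watched/
#
# or point OctoPrint at a different watched folder in config.yaml
# (key below is an assumption, verify for your version):
folder:
  watched: /home/pi/watched-uploads
```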

foosel added a commit that referenced this issue Aug 5, 2014

Finalizing upload streaming support
Major refactoring of octoprint.server.util (divided into smaller submodules), extended Tornado to allow for request-specific max content lengths, introduced settings parameters to configure maximum upload size, maximum request body size and file suffixes

See #455

foosel commented Aug 5, 2014

So... I created a couple of custom implementations of Tornado functionality to circumvent this stupid problem and allow for proper file streaming.

OctoPrint now basically has the functionality of the nginx upload module built in: when Tornado encounters a file upload in a multipart request body, it buffers the file to disk and then supplies the wrapped Flask application with a couple of new form parameters containing filename, path, content type and such instead of the actual file. The upload API then just moves the file from the path supplied by Tornado to the upload folder, with the usual checks. This is all implemented in a way that should also be compatible with an nginx installation in front of the OctoPrint server with the nginx upload module in action (I have yet to test this, but it should work in theory).
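
To illustrate the idea, a minimal sketch (not the actual OctoPrint code; the route and the form field names here are made-up placeholders):

```python
# Minimal sketch of the pattern described above -- NOT OctoPrint's actual code.
# Assumption: the Tornado layer has already streamed the upload to a temp file
# and passes hypothetical form fields "file.name" and "file.path" to Flask.
import os
import shutil

from flask import Flask, request, abort

app = Flask(__name__)
UPLOAD_FOLDER = "/home/pi/.octoprint/uploads"  # example path

@app.route("/api/files/local", methods=["POST"])
def upload_local():
    name = request.form.get("file.name")
    temp_path = request.form.get("file.path")
    if not name or not temp_path:
        abort(400)  # upload was not pre-buffered to disk by the server layer
    # the usual extension/name checks would go here, then the file is simply
    # moved from the temp location into the upload folder -- no large reads
    shutil.move(temp_path, os.path.join(UPLOAD_FOLDER, name))
    return "", 201
```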

I've also rewritten the max content length checks for request body validation, so it's now possible to define a separate max content length for file uploads only (currently set to 1GB by default; can be set to 0 or -1 for unlimited size via server.uploads.maxSize in config.yaml) while keeping the rest of the server on a different default max content length, so everything should be a bit more robust against overly large requests.
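
In config.yaml that looks something like this (key path taken from above, value is just an example):

```yaml
server:
  uploads:
    maxSize: 0   # 0 or -1 = no limit; the default is 1GB
```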

I've only pushed this stuff into the dev/largeFileUpload branch for now since I had to refactor and change a lot on the basic server layer. I'll test it a bit more (you are invited to help here :)) and then merge it onto devel ASAP.

CapnBry commented Aug 5, 2014

I've always wondered how this was going to work out (getting in-place upload streaming to work with tornado) and I've been sitting back, tenting my fingers, and saying "excellent" as you've been slogging through it, Gina.

I'm not sure if this is intended behavior: using the dev/largeFileUpload branch, when a large file is uploaded, the API request for /api/files at upload completion returns 0 bytes of content. This means the file doesn't show up in the file list at all. A page refresh, waiting until processing is complete and the refresh notification arrives, or directly calling the /api/files URL does show it. For some reason the /api/files request right after the upload "fails" (returns 0 bytes) every time.

  • Make sure destination file isn't on list
  • Drag & drop a 20MB file onto the upload space
  • File uploads and progress bar clears
  • File does not appear in list
  • Refresh page: file does appear in list

[screenshot]

Test system is a Raspberry Pi Model A @ 700 MHz.
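
A quick way to poke at the endpoint from outside the browser (host and API key below are placeholders):

```python
# Hit /api/files directly, the way the UI does right after an upload finishes.
import requests

resp = requests.get(
    "http://octopi.local/api/files",
    headers={"X-Api-Key": "YOUR_API_KEY"},
)
print(resp.status_code, len(resp.content))  # the bug shows up as 0 bytes of content
```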

foosel commented Aug 5, 2014

I first wanted to complain, I really did, because:

[screenshot]

But you know what? On the Pi I observed the same issue (the above one was my development machine):

[screenshot]

Gna...

foosel added a commit that referenced this issue Aug 6, 2014

Stop customized RequestHandler from sending another set of headers after the response

This was causing the GET request for the list of files (sent directly after a successful upload response) to get associated with the second, empty response sent directly after the upload, and thus led to a funny timing issue where the file list did not update correctly, since the response to THAT request -- while received by the client -- could then not be processed.

See #455

foosel commented Aug 6, 2014

@CapnBry I currently can't decide if I owe you a beer for spotting this or should hit you for spotting this ;)

It just cost me most of this day to figure out what was going on there, discovering IntelliJ's/PyCharm's wonderful support for remote debugging and logging breakpoints in the process, and also discovering that two HTTP responses sent back to back on the same connection with a slight pause in between cause jQuery to use that second response as the response to a totally unrelated GET request that follows directly after the double-response-causing POST... Yikes, that was an evil one.

So apparently you need to make bloody sure that self.request._headers_written is set to True somewhere in your RequestHandler's get/put/post/whatever method, either by setting the headers on your handler directly and then just calling finish, or -- what I did not do until the fixing commit -- by just setting that damned variable to True if you already took care of everything yourself. Otherwise Tornado will happily send out another set of default headers, breaking things in the process.
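
Roughly what that means in a handler (an illustration only, not the actual OctoPrint code, and the attribute in question depends on the Tornado version in play):

```python
# Rough illustration only -- not OctoPrint's actual handler. Assumes an
# old-style (2014-era) Tornado RequestHandler where the response has already
# been written out by hand.
import tornado.web

class StreamedUploadHandler(tornado.web.RequestHandler):
    def post(self):
        # ... response headers and body have already been written manually ...
        # Tell Tornado the headers went out already, so it does not append a
        # second set of default headers after the response (the behaviour
        # that confused jQuery's response matching, as described above).
        self.request._headers_written = True
        self.finish()
```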

tldr: Please pull, that should be fixed now too ;)

CapnBry commented Aug 6, 2014

            _
           /(|
          (  :
         __\  \  _____
       (____)  `|
      (____)|   |
       (____).__|
        (___)__.|_____
                   SSt

Story checks out, file this as "works for me" now. Large files upload with lickedy speed* and no giant memory usage on the Pi. I mean the memory usage doesn't even change (Virtual or Reserved) during the upload. I love to love this branch.

* Lickedy speed(tm) constrained by the wireless network on the Raspberry Pi writing to a class 6 SD card; actual throughput ~18 Mbit/s.

foosel commented Aug 7, 2014

Declaring this closed then, merged onto devel :)
