
Large file uploads cause socket to disconnect and have a "transport error" #38

Closed

ewanwalk opened this issue Jan 28, 2016 · 4 comments

@ewanwalk

I can't seem to get this package to work with anything larger than 50 MB. The upload essentially starts off strong, then the socket disconnects and the client keeps trying to transmit on a closed connection.

@ewanwalk (Author)

This seemed to be a bug in socket.io v1.3.7 and is resolved as of 1.4.5. However, it still cannot seem to handle more than one or two large files at a time.

@vote539 (Collaborator) commented Jan 28, 2016

It might be worthwhile fiddling with the chunkSize parameter. The server writes the buffers directly to disk as soon as it receives them, so memory shouldn't be a problem. Bandwidth is most likely going to be the bottleneck when handling multiple simultaneous uploads. It would also be useful to do stress-testing on Socket.IO directly to see how much data it can handle being passed through Web Sockets at once.
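For reference, a minimal sketch of tuning that parameter on the client, assuming the SocketIOFileUpload client API (the element ID is illustrative, not from this thread):

```js
// Client side: smaller chunks mean smaller individual socket.io packets
// per emit, at the cost of more round trips per file.
var socket = io.connect();
var uploader = new SocketIOFileUpload(socket);

uploader.chunkSize = 1024 * 10; // bytes per chunk; tune experimentally

// "file_input" is an illustrative element ID.
uploader.listenOnInput(document.getElementById("file_input"));
```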

@ewanwalk (Author)

It actually seemed to be the CPU that was the bottleneck. I'm trying a method where I use sticky-sessions to cluster socket.io and then upload to the workers, essentially giving me more throughput altogether.
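A rough sketch of that clustering setup, assuming the `sticky-session` package (the port and handlers are placeholders, not from this thread):

```js
// Master forks one worker per core; each client IP hashes to the same
// worker, which socket.io needs so all packets of a session land together.
var http = require("http");
var sticky = require("sticky-session");
var socketio = require("socket.io");

var server = http.createServer();

if (!sticky.listen(server, 3000)) {
  // Master process: sticky-session forks the workers itself.
  server.once("listening", function () {
    console.log("master listening on :3000");
  });
} else {
  // Worker process: each worker runs its own socket.io instance,
  // spreading the upload CPU cost across cores.
  var io = socketio(server);
  io.on("connection", function (socket) {
    // attach socketio-file-upload handlers here
  });
}
```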

The problem I seem to face is not being able to set a limit on upload speed (currently it maxes out at about 30 MB/s across my network). This works for up to approximately 2-3 simultaneous files before the server stops being able to handle it and errors when reading a file's size. I'd suspect limiting the rate would greatly reduce overall CPU usage, since less data would be sent at once.

I will note that small files are no issue.

Another note: adding a built-in queue might be optimal, e.g. maxParallelUploads = 2. A sketch of what that could look like is below.
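A hypothetical client-side queue along those lines; maxParallelUploads is not an existing option, so this simply wraps the uploader's submitFiles() call:

```js
// Hypothetical wrapper: queues File objects and feeds at most `limit`
// of them to the uploader at a time.
function createUploadQueue(uploader, limit) {
  var pending = [];
  var active = 0;

  function next() {
    active = Math.max(0, active - 1);
    drain();
  }

  function drain() {
    while (active < limit && pending.length > 0) {
      active++;
      uploader.submitFiles([pending.shift()]);
    }
  }

  // Free a slot whenever an upload finishes or fails.
  uploader.addEventListener("complete", next);
  uploader.addEventListener("error", next);

  return {
    enqueue: function (file) {
      pending.push(file);
      drain();
    },
  };
}

// Usage: allow at most two concurrent uploads.
var queue = createUploadQueue(uploader, 2);
// queue.enqueue(someFile);
```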

@raulrene commented Nov 9, 2021

> It might be worthwhile fiddling with the chunkSize parameter. The server writes the buffers directly to disk as soon as it receives them, so memory shouldn't be a problem. Bandwidth is most likely going to be the bottleneck when handling multiple simultaneous uploads. It would also be useful to do stress-testing on Socket.IO directly to see how much data it can handle being passed through Web Sockets at once.

I actually managed to get it working using this suggestion. I was using a 64 KB chunk size and reduced it way down to 5 KB, and now I'm no longer getting the "transport closed" error. It used to happen on large files, and especially when more than one large file was uploaded at once.
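In terms of the client API, the change amounted to (a sketch, assuming the uploader instance from above):

```js
// Before: uploader.chunkSize = 1024 * 64;
uploader.chunkSize = 1024 * 5; // 5 KB chunks stopped the "transport closed" errors
```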
