Error: the request entity is too large #18

Closed
mojoaxel opened this Issue Jan 5, 2018 · 7 comments

mojoaxel commented Jan 5, 2018

Copying from replicate.npmjs.com results in a too_large error:

ohnoes! errorz! { [Error: {"error":"too_large","reason":"the request entity is too large"}

Who throws this error?
Is there anything that could be done to handle this better?

asaf400 commented Jan 9, 2018

Hello mojoaxel,

I am a fellow replicator, and I stumbled upon the same error just a few days ago as well.
Since the error is a generic HTTP 413 status code, it was challenging to find the culprit, but luckily for you, I have found it.

It's the CouchDB HTTP server responding with the 413.

Using the following resources:
http://docs.couchdb.org/en/2.1.1/config/http.html#httpd/max_http_request_size
http://docs.couchdb.org/en/2.1.1/config/couchdb.html#couchdb/max_document_size

I was able to overcome the issue by raising both values from their defaults to 4 GB (4294967296 bytes) in /opt/couchdb/etc/default.ini.
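
For reference, a sketch of what that change could look like, assuming the CouchDB 2.1 section names from the docs linked above; the same overrides can also go in local.ini, which survives package upgrades. 4294967296 bytes is 4 GB.

[httpd]
; cap on the size of a single HTTP request body, in bytes
max_http_request_size = 4294967296

[couchdb]
; cap on the size of a single document (including inline attachments), in bytes
max_document_size = 4294967296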

mojoaxel commented Jan 9, 2018

Thx @asaf400

mojoaxel closed this Jan 9, 2018

asaf400 commented Jan 12, 2018

As promised in my last comment,

I am having issues replicating change 'actor-sdk-357997' and its attachments...

sigsegv13 commented May 3, 2018

@asaf400 I ran into an issue replicating 'actor-sdk' as well. My CouchDB request size was set to 4 GB when it initially failed; I ratcheted that up to 8 GB... no joy... then up to 16 GB before it finally replicated. I didn't bother to find out whether a value somewhere between 8 GB and 16 GB would have worked.

For reference, and for anyone who's interested, here is the error that was thrown:
-> actor-sdk-1.0.385.tgz
ERROR { Error: {"error":"too_large","reason":"the request entity is too large"}

at IncomingMessage.<anonymous> (/data/npm-fullfat-registry/node_modules/parse-json-response/parse.js:38:12)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9) statusCode: 413 }

/data/npm-fullfat-registry/node_modules/npm-fullfat-registry/bin/fullfat.js:80
throw er
^

Error: {"error":"too_large","reason":"the request entity is too large"}

at IncomingMessage.<anonymous> (/data/npm-fullfat-registry/node_modules/parse-json-response/parse.js:38:12)
at emitNone (events.js:91:20)
at IncomingMessage.emit (events.js:185:7)
at endReadableNT (_stream_readable.js:974:12)
at _combinedTickCallback (internal/process/next_tick.js:80:11)
at process._tickCallback (internal/process/next_tick.js:104:9)

fullfat.js crashed with exit code 1. Respawning
START npm FullFat/1.1.4 node/v6.14.0 pid=23413
357997: actor-sdk
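
A possible alternative to editing the ini files by hand is CouchDB 2.x's per-node configuration API; a rough sketch, assuming a single node named couchdb@localhost, an admin user, and the default port (node name and credentials will differ per install; 17179869184 bytes is 16 GB):

# read the current limits
curl -u admin:password http://127.0.0.1:5984/_node/couchdb@localhost/_config/httpd/max_http_request_size
curl -u admin:password http://127.0.0.1:5984/_node/couchdb@localhost/_config/couchdb/max_document_size

# raise both limits to 16 GB (the value that eventually worked for actor-sdk above)
curl -u admin:password -X PUT -d '"17179869184"' http://127.0.0.1:5984/_node/couchdb@localhost/_config/httpd/max_http_request_size
curl -u admin:password -X PUT -d '"17179869184"' http://127.0.0.1:5984/_node/couchdb@localhost/_config/couchdb/max_document_size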
