[11.0] Truncated/malformed long responses in multiprocessing mode with Python 3.5 #20158

Closed
fiddler0305 opened this issue Oct 14, 2017 · 34 comments

@fiddler0305

fiddler0305 commented Oct 14, 2017

Impacted versions:
11.0 Community
Ubuntu 17.04 package install

Steps to reproduce:
Add
workers = 8
at the bottom of /etc/odoo/odoo.conf
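
In INI form, a minimal sketch of the relevant part of the config file (the [options] header is Odoo's standard config section; only the workers line comes from this report):

    [options]
    workers = 8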

Current behavior:
After the change above, I get lots of errors (as shown in the screenshots below).

Expected behavior:
Works normally.

Video/Screenshot link (optional):

  1. Login (screenshot: 01_login)
  2. After login (screenshot: 02_logedin)

@fiddler0305 (Author)

The Debian 9.2 package install has the same problem.

@ejprice

ejprice commented Oct 16, 2017

I'm having the same issue on Debian 9.2 with the following setup:

Base Debian 9.2 install
odoo_11.0.20171013_all.deb as well as 20171004 (and dependencies)

Additional deb packages installed:
python3-pyldap
python3-gevent
python3-setuptools
python3-pip

Plus these from pip3:
num2words wheel psycogreen

I have also tried git cloning the source and using pip3 to install dependencies from requirements.txt with the same results.

The workers don't want to work ;-)

@gustavovalverde
Contributor

@Yenthe666, I can confirm this. I closed #20176, a duplicate issue I had opened with more details, in favor of this one.

Installed Odoo with these specifications:

Current behavior:

20:59:27.057 (index):37 GET https://v11.iterativo.do/web/content/240-ba36516/web.assets_backend.0.css net::ERR_INCOMPLETE_CHUNKED_ENCODING
20:59:27.289 (index):38 GET https://v11.iterativo.do/web/content/241-ba36516/web.assets_backend.1.css net::ERR_INCOMPLETE_CHUNKED_ENCODING
20:59:27.311 (index):35 GET https://v11.iterativo.do/web/content/239-0a9c277/web.assets_common.0.css net::ERR_INCOMPLETE_CHUNKED_ENCODING
20:59:27.435 (index):47 GET https://v11.iterativo.do/web/content/245-ba36516/web.assets_backend.js net::ERR_INCOMPLETE_CHUNKED_ENCODING
20:59:27.491 (index):45 GET https://v11.iterativo.do/web/content/244-0a9c277/web.assets_common.js net::ERR_INCOMPLETE_CHUNKED_ENCODING
20:59:27.675 (index):49 GET https://v11.iterativo.do/web/content/246-ab351a2/web_editor.summernote.js net::ERR_INCOMPLETE_CHUNKED_ENCODING
20:59:27.676 (index):51 GET https://v11.iterativo.do/web/content/247-a129a5f/web_editor.assets_editor.js net::ERR_INCOMPLETE_CHUNKED_ENCODING
20:59:27.684 (index):61 Uncaught TypeError: odoo.define is not a function
    at (index):61

Debug log

2017-10-16 00:59:30,500 364 INFO pruebas odoo.modules.loading: Modules loaded.
2017-10-16 00:59:30,502 364 INFO pruebas odoo.addons.base.ir.ir_http: Generating routing map
2017-10-16 00:59:30,512 363 INFO pruebas odoo.modules.registry: At least one model cache has been invalidated, signaling through the database.
2017-10-16 00:59:30,526 359 INFO pruebas odoo.modules.loading: Modules loaded.
2017-10-16 00:59:30,527 363 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/241-ba36516/web.assets_backend.1.css HTTP/1.0" 200 -
2017-10-16 00:59:30,531 364 INFO pruebas odoo.modules.registry: At least one model cache has been invalidated, signaling through the database.
2017-10-16 00:59:30,532 364 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/239-0a9c277/web.assets_common.0.css HTTP/1.0" 200 -
2017-10-16 00:59:30,533 359 INFO pruebas odoo.addons.base.ir.ir_http: Generating routing map
2017-10-16 00:59:30,538 366 INFO pruebas odoo.modules.loading: Modules loaded.
2017-10-16 00:59:30,540 366 INFO pruebas odoo.addons.base.ir.ir_http: Generating routing map
2017-10-16 00:59:30,596 365 INFO pruebas odoo.modules.loading: Modules loaded.
2017-10-16 00:59:30,599 365 DEBUG pruebas odoo.modules.registry: Multiprocess signaling check: [Registry - 2 -> 2] [Cache - 3 -> 5]
2017-10-16 00:59:30,599 365 INFO pruebas odoo.modules.registry: Invalidating all model caches after database signaling.
2017-10-16 00:59:30,602 365 INFO pruebas odoo.addons.base.ir.ir_http: Generating routing map
2017-10-16 00:59:30,629 363 DEBUG pruebas odoo.modules.registry: Multiprocess signaling check: [Registry - 2 -> 2] [Cache - 4 -> 5]
2017-10-16 00:59:30,629 363 INFO pruebas odoo.modules.registry: Invalidating all model caches after database signaling.
2017-10-16 00:59:30,640 363 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/245-ba36516/web.assets_backend.js HTTP/1.0" 200 -
2017-10-16 00:59:30,700 366 INFO pruebas odoo.modules.registry: At least one model cache has been invalidated, signaling through the database.
2017-10-16 00:59:30,703 366 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/242-ab351a2/web_editor.summernote.0.css HTTP/1.0" 200 -
2017-10-16 00:59:30,712 359 INFO pruebas odoo.modules.registry: At least one model cache has been invalidated, signaling through the database.
2017-10-16 00:59:30,713 359 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/244-0a9c277/web.assets_common.js HTTP/1.0" 200 -
2017-10-16 00:59:30,724 365 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/243-a129a5f/web_editor.assets_editor.0.css HTTP/1.0" 200 -
2017-10-16 00:59:30,846 366 DEBUG pruebas odoo.modules.registry: Multiprocess signaling check: [Registry - 2 -> 2] [Cache - 6 -> 7]
2017-10-16 00:59:30,846 366 INFO pruebas odoo.modules.registry: Invalidating all model caches after database signaling.
2017-10-16 00:59:30,850 366 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/246-ab351a2/web_editor.summernote.js HTTP/1.0" 200 -
2017-10-16 00:59:30,875 364 DEBUG pruebas odoo.modules.registry: Multiprocess signaling check: [Registry - 2 -> 2] [Cache - 5 -> 7]
2017-10-16 00:59:30,875 364 INFO pruebas odoo.modules.registry: Invalidating all model caches after database signaling.
2017-10-16 00:59:30,879 364 INFO pruebas werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:30] "GET /web/content/247-a129a5f/web_editor.assets_editor.js HTTP/1.0" 200 -
2017-10-16 00:59:31,489 365 INFO ? werkzeug: 192.168.2.201 - - [16/Oct/2017 00:59:31] "GET /web_enterprise/static/src/img/mobile-icons/android-192x192.png HTTP/1.0" 200 -

In the console with ?debug=assets:

21:08:23.786 ?debug=assets:70 GET https://v11.iterativo.do/web/static/src/less/web.assets_backend/import_bootstrap.less.css net::ERR_INCOMPLETE_CHUNKED_ENCODING

@gustavovalverde
Contributor

This one could also be related: #20175

@Yenthe666 Yenthe666 added 11.0 Framework General frontend/backend framework issues labels Oct 16, 2017
@xmo-odoo xmo-odoo assigned pimodoo and unassigned xmo-odoo Oct 19, 2017
@gustavovalverde
Contributor

Ticket 777033

@mlaitinen
Contributor

I encountered the same issue. The weird thing is that if you connect via the longpolling port (8072), all the assets work just fine; with 8069 the UI is broken.

@kirca
Contributor

kirca commented Oct 24, 2017

I also ran into this while testing with the odoo:11.0 Docker image. The problem seems to be that the assets are not fetched completely, i.e. they are truncated. When running behind nginx, this error is shown:

2017/10/24 21:48:52 [error] 4354#4354: *1901 upstream prematurely closed connection while reading upstream, client: 192.168.99.1, server: _, request: "GET /web/content/668-7aa3962/web.assets_frontend.0.css HTTP/1.1", upstream: "http://172.17.0.10:8069/web/content/668-7aa3962/web.assets_frontend.0.css", host: "192.168.99.100"

It seems the workers close the connection before sending all data.

@xmo-odoo xmo-odoo assigned odony and unassigned xmo-odoo and pimodoo Oct 25, 2017
@xmo-odoo
Collaborator

So… according to research by @odony and @julienlegros, it seems to be an issue specific to Python 3.5 on Linux, above a certain quantity of data sent over a non-blocking socket:

  • Non-Linux: apparently not an issue (not reproducible on OSX, which explains why I couldn't repro it on my box)
  • Python 3.6: apparently not an issue
  • Small files (e.g. debug=assets): not an issue
  • Blocking socket (the default threaded server): not an issue; only workers= triggers the problem.

@xmo-odoo
Collaborator

xmo-odoo commented Oct 25, 2017

The issue is likely https://bugs.python.org/issue24291: it is indeed a problem with non-blocking sockets, fixed in 3.6. It can apparently happen on any platform, but it will depend on when the platform's sockets do partial writes (so probably on the kernel's buffers, tuning, etc.).
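
To illustrate the failure mode (a sketch only, not Odoo's actual patch): per bpo-24291, a large write through the buffered file wrapper of a non-blocking socket could stop after a partial send without reporting it, so a careful server has to loop itself:

    import select
    import socket

    def send_all(sock: socket.socket, data: bytes) -> None:
        """Keep sending until every byte is out.

        On Python 3.5 a partial send() on a non-blocking socket could
        silently drop the remainder (bpo-24291), which is exactly what
        truncated the large asset responses reported here.
        """
        view = memoryview(data)
        while view:
            try:
                sent = sock.send(view)
            except BlockingIOError:
                # Kernel send buffer is full: wait until writable, retry.
                select.select([], [sock], [])
                continue
            view = view[sent:]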

@odony
Contributor

odony commented Oct 25, 2017

You can try 253d49f7a404ab5c889c90446bf6e87efd4ce026 as a possible fix, but it is not fully validated yet.
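
For a source install, one way to try the candidate fix (a sketch; assumes a git checkout with odoo/odoo as origin, on the 11.0 branch):

    git fetch origin
    git cherry-pick 253d49f7a404ab5c889c90446bf6e87efd4ce026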

@odony
Contributor

odony commented Oct 25, 2017

@gustavovalverde #20175 looks unrelated

@gustavovalverde
Contributor

I'm wondering, based on @xmo-odoo's remark:

(so probably on the kernel's buffers, tuning, etc.)

Could containerization be a contributing factor in this issue? I'm using LXC containers for Odoo and #20158 (comment) is using Docker; maybe kernel behavior (or its limits) could also trigger this.

@gustavovalverde
Contributor

@odony, tested 253d49f and it's working great 👍

@kirca
Contributor

kirca commented Oct 26, 2017

@odony, patch 253d49f works 👍.

@gustavovalverde it seems to be an issue with Python's socket library, as referenced in the commit:
https://bugs.python.org/issue24291

@odony odony closed this as completed in d9b721c Oct 26, 2017
@odony
Contributor

odony commented Oct 26, 2017

Thanks for the feedback! 253d49f has been merged in 11.0 at d9b721c.

Hopefully it will have no adverse effects. I did what I could to limit its impact to the specific combination of Python 3.5 and multi-process mode, the known conditions required to trigger the bug.
Since we're unsure which OS settings could make the bug appear under different conditions, the fix is not platform-specific.

(See the commit message of d9b721c for more details)

@odony odony changed the title Multiprocessing not working [11.0] Truncated/malformed long responses in multiprocessing mode with Python 3.5 Oct 26, 2017
fw-bot pushed a commit to odoo-dev/odoo that referenced this issue May 27, 2020
As indicated in the comment, it's much preferred to perform response
buffering at the reverse proxy level than to increase the socket
timeout. It will free up HTTP workers for other requests faster, while
the proxy does the work of buffering the stream on disk as needed.

/!\ The timeout is also used to protect from accidental DoS effects
in situations of low worker availability, due to idle connections
caused e.g. by wkhtmltopdf's connection pooling.
Setting a high timeout will make the protection less effective, so
ensuring you have enough free HTTP workers at all times becomes critical.

In our tests with nginx's default buffering on typical hardware with
SSD storage, buffering up to 1GB responses did not require any change
of the socket timeout on the Odoo side, though your mileage may vary.
See also nginx's `proxy_buffering` and `proxy_max_temp_file_size` config
directives.

OPW-2247730
See also: odoo#20158

X-original-commit: d78ea12
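
For reference, a minimal nginx sketch of the proxy-side buffering the commit message recommends (a fragment for the server {} block; the upstream address and size value are illustrative):

    # Buffer Odoo responses at the proxy so HTTP workers are freed quickly.
    location / {
        proxy_pass http://127.0.0.1:8069;
        proxy_buffering on;                  # nginx's default
        proxy_max_temp_file_size 1024m;      # let large responses spill to disk
    }
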
fw-bot pushed 4 more commits to odoo-dev/odoo that referenced this issue May 27, 2020, each with the same commit message as above.
robodoo pushed 7 commits that referenced this issue May 27, 2020, each with the same commit message as above and signed off by Olivier Dony (odo), closing #51978, #51974, #51971, #51968, #51965, #51982 and #51972.
@mamiu

mamiu commented Dec 7, 2020

Edit

This seems to be fixed with this PR: #51824 (Thanks @odony!)

To summarize: it is indeed recommended to perform response buffering at the reverse proxy level (as described in @ejprice's comment), but the socket timeout environment variable ODOO_HTTP_SOCKET_TIMEOUT can be increased as well (in cases where response buffering isn't possible).

Here is Olivier's comment on it:

The ODOO_HTTP_SOCKET_TIMEOUT environment variable allows controlling the socket timeout for extreme latency situations. It's generally better to use a good buffering reverse proxy to quickly free workers rather than increasing this timeout to accommodate high network latencies & b/w saturation. This timeout is also essential to protect against accidental DoS due to idle HTTP connections.

It would be great if this were mentioned in the deployment docs.
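
For instance (a sketch; the timeout value is illustrative, and the variable is read from the environment when the server starts):

    export ODOO_HTTP_SOCKET_TIMEOUT=60   # seconds
    odoo -c /etc/odoo/odoo.conf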

Original comment

@odony This issue is still present in the latest version of Odoo v14. I debugged it for three full days until I figured out that it's due to the workers configuration. If workers is set to a value greater than or equal to 1, I can reproduce this issue reliably.

The environment

Odoo version: v14.0 release 20201123 (CE and EE)
Hosting environment: Kubernetes with Traefik Proxy (so basically the official Odoo Docker container)
Odoo config:

[options]
db_maxconn = 128
db_name = my-db-name
dbfilter = ^my-db-name$
limit_time_cpu = 600
limit_time_real = 1800
proxy_mode = True
workers = 3

Reproduction steps

  1. Run the latest Odoo version in a Docker container on a REMOTE host. (The remote part is important, since localhost has some special buffering behavior. Don't install it on localhost or in a VM running on localhost, even if it uses the bridge network adapter; run it on another machine. It doesn't matter whether that's a server, a laptop or a Raspberry Pi, as long as it is a different machine.)
  2. Install the website addon.
  3. curl a file like web.assets_backend.js with a limited rate and output only the last two lines (so the terminal doesn't print the whole file but we can still see whether we got everything or only part of it). E.g.:
curl 'http://<IP address or URL>/path/to/the/web.assets_backend.js' \
    -H 'Cookie: session_id=2a572f8fba3d721bdbb18a5de6cd0536d5f8e3f5' \
    --limit-rate 10K | tail -n 2

The easiest way to get the full curl command is to go to the Chrome DevTools → Network tab → right-click web.assets_backend.js → Copy → Copy as cURL.

  4. If the file fetching doesn't break in the middle, decrease --limit-rate until it breaks before the full content is delivered. (This can take 10 to 15 minutes but usually happens earlier, and it is independent of the limit_time_cpu and limit_time_real configuration values. It only takes that long in an ideal debug setting; in my real situation, sitting in Australia with our Odoo servers in Europe, the connection tends to break after 2-3 seconds, which happens for the web.assets_backend.js file in 50% of cases. Once loaded it is cached by the browser, but on first load it often breaks.)

The solution suggested by @ejprice works, but it is only a workaround and doesn't fix the Odoo issue. Anyway, thanks for it!

jvandri pushed a commit to Niboo/odoo that referenced this issue Dec 10, 2020, with the same commit message as above (closes odoo#51824; signed off by Olivier Dony).
AdriaGForgeFlow pushed a commit to ForgeFlow/odoo that referenced this issue Jan 28, 2021, with the same commit message as above (closes odoo#51968).
BT-tbaechle referenced this issue in brain-tec/odoo Jul 13, 2022, with the same commit message as above (closes odoo#51824; cherry picked from commit d78ea12).