
Edits fail if internet connection is spotty #24038

Open
svenssonaxel opened this issue Jul 16, 2023 · 5 comments
Labels
Bug Report/Open Bug report/issue

Comments

@svenssonaxel

Summary

When using the desktop client with a spotty internet connection, edits will fail frequently and silently.

Steps to reproduce

  1. Get on an internet connection with frequent packet drops and failed/dropped connections.
  2. Use the desktop client to type out and then send messages.
  3. Some will succeed and some will fail.

Expected behavior

When sending a message fails, it does so visibly, allowing the user to retry and/or edit.

Observed behavior

When sending a message fails, it does so silently, losing the content of the message.
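
For illustration only, here is a minimal sketch of the behavior being asked for: retry a failed send a few times, then surface the error to the caller instead of silently dropping the message. This is not Mattermost's actual client code; the endpoint URL and payload are placeholders.

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// sendMessage posts a message, retries a couple of times on failure, and
// returns the final error instead of swallowing it.
func sendMessage(url string, body []byte) error {
	client := &http.Client{Timeout: 10 * time.Second}
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Post(url, "application/json", bytes.NewReader(body))
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode < 300 {
				return nil // delivered
			}
			lastErr = fmt.Errorf("server returned %s", resp.Status)
		} else {
			lastErr = err // e.g. timeout or dropped connection
		}
		time.Sleep(time.Duration(attempt) * time.Second) // simple backoff between attempts
	}
	// The caller (the UI) should show this error and keep the draft so the
	// user can retry or edit, instead of discarding the message.
	return fmt.Errorf("sending message failed after retries: %w", lastErr)
}

func main() {
	// Placeholder endpoint; in reality this would be the server's create-post API.
	err := sendMessage("https://example.invalid/api/v4/posts", []byte(`{"message":"hi"}`))
	if err != nil {
		fmt.Println("visible failure:", err)
	}
}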

@amyblais
Member

amyblais commented Jul 17, 2023

Hi @svenssonaxel,

  1. What Mattermost server version and Mattermost desktop app version are you currently on?
  2. Do you see any errors either in the server logs or in the developer console when the issue happens?
  3. Do you see the same issue on webapp/browser?

@svenssonaxel
Author

1. What Mattermost server version and Mattermost desktop app version are you currently on?

Server

  • Mattermost Version: 7.1.5
  • Database Schema Version: 95
  • Database: postgres

Desktop app
Mattermost desktop client for macOS, version 5.4.0

2. Do you see any errors either in the server logs or in the developer console when the issue happens?

The server logs accessible via the web admin UI contain hundreds of errors of the following form:

{"timestamp":"2023-DATETIME Z","level":"error","msg":"websocket.NextReader: closing websocket","caller":"app/web_conn.go:828","user_id":"3jumREDACTED","error":"read tcp 172.24.0.3:8065->172.24.0.4:PORT: i/o timeout"}

The user_id indeed matches the affected user.
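
For context, "i/o timeout" is how Go's net package reports a read whose deadline expired. A minimal, standalone illustration of that error class (not the Mattermost code path):

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

func main() {
	// An in-memory connection pair; the "server" end reads while the
	// "client" end stays silent, mimicking a peer that has gone quiet
	// on a spotty link.
	client, server := net.Pipe()
	defer client.Close()
	defer server.Close()

	// Give up on the read after one second.
	server.SetReadDeadline(time.Now().Add(1 * time.Second))

	buf := make([]byte, 1024)
	_, err := server.Read(buf)

	var nerr net.Error
	if errors.As(err, &nerr) && nerr.Timeout() {
		fmt.Println("read timed out:", err) // prints e.g. "read pipe: i/o timeout"
	}
}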

The Mattermost Docker container's STDOUT contains errors similar to the above; I assume they are the same. STDOUT also contains a lot of other output, the latest being:


goroutine 11552 [IO wait]:
runtime.gopark(0xc004ee7040?, 0xc000054f00?, 0x90?, 0xb9?, 0x484be2?)
        runtime/proc.go:361 +0xd6 fp=0xc0068fb920 sp=0xc0068fb900 pc=0x43ccd6
runtime.netpollblock(0xc00538a000?, 0x1000?, 0x0?)
        runtime/netpoll.go:522 +0xf7 fp=0xc0068fb958 sp=0xc0068fb920 pc=0x435957
internal/poll.runtime_pollWait(0x7fbc203cef48, 0x72)
        runtime/netpoll.go:302 +0x89 fp=0xc0068fb978 sp=0xc0068fb958 pc=0x4679c9
internal/poll.(*pollDesc).wait(0xc008a26280?, 0xc00538a000?, 0x0)
        internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc0068fb9a0 sp=0xc0068fb978 pc=0x4a6612
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc008a26280, {0xc00538a000, 0x1000, 0x1000})
        internal/poll/fd_unix.go:167 +0x25a fp=0xc0068fba20 sp=0xc0068fb9a0 pc=0x4a797a
net.(*netFD).Read(0xc008a26280, {0xc00538a000?, 0x4a6b06?, 0x7fbc203cf030?})
        net/fd_posix.go:55 +0x29 fp=0xc0068fba68 sp=0xc0068fba20 pc=0x606549
net.(*conn).Read(0xc001ac2060, {0xc00538a000?, 0x0?, 0x0?})
        net/net.go:183 +0x45 fp=0xc0068fbab0 sp=0xc0068fba68 pc=0x61a3a5
net/http.(*connReader).Read(0xc01031e180, {0xc00538a000, 0x1000, 0x1000})
        net/http/server.go:780 +0x16d fp=0xc0068fbb00 sp=0xc0068fbab0 pc=0x713f4d
bufio.(*Reader).fill(0xc001880540)
        bufio/bufio.go:106 +0x103 fp=0xc0068fbb38 sp=0xc0068fbb00 pc=0x58c103
bufio.(*Reader).Peek(0xc001880540, 0x4)
        bufio/bufio.go:144 +0x5d fp=0xc0068fbb58 sp=0xc0068fbb38 pc=0x58c25d
net/http.(*conn).serve(0xc0028ac500, {0x2e297a0, 0xc001e6f1a0})
        net/http/server.go:1992 +0x716 fp=0xc0068fbfb8 sp=0xc0068fbb58 pc=0x719c76
net/http.(*Server).Serve.func3()
        net/http/server.go:3071 +0x2e fp=0xc0068fbfe0 sp=0xc0068fbfb8 pc=0x71f48e
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1 fp=0xc0068fbfe8 sp=0xc0068fbfe0 pc=0x46d7c1
created by net/http.(*Server).Serve
        net/http/server.go:3071 +0x4db

goroutine 13558 [IO wait]:
runtime.gopark(0xc00c47c000?, 0xc00004e000?, 0x90?, 0x19?, 0x484be2?)
        runtime/proc.go:361 +0xd6 fp=0xc005a61920 sp=0xc005a61900 pc=0x43ccd6
runtime.netpollblock(0xc005d9f000?, 0x1000?, 0x0?)
        runtime/netpoll.go:522 +0xf7 fp=0xc005a61958 sp=0xc005a61920 pc=0x435957
internal/poll.runtime_pollWait(0x7fbbec36f0d0, 0x72)
        runtime/netpoll.go:302 +0x89 fp=0xc005a61978 sp=0xc005a61958 pc=0x4679c9
internal/poll.(*pollDesc).wait(0xc004a70180?, 0xc005d9f000?, 0x0)
        internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc005a619a0 sp=0xc005a61978 pc=0x4a6612
internal/poll.(*pollDesc).waitRead(...)
        internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc004a70180, {0xc005d9f000, 0x1000, 0x1000})
        internal/poll/fd_unix.go:167 +0x25a fp=0xc005a61a20 sp=0xc005a619a0 pc=0x4a797a
net.(*netFD).Read(0xc004a70180, {0xc005d9f000?, 0x4a6b06?, 0x7fbbec36f1b8?})
        net/fd_posix.go:55 +0x29 fp=0xc005a61a68 sp=0xc005a61a20 pc=0x606549
net.(*conn).Read(0xc001434330, {0xc005d9f000?, 0x0?, 0x0?})
        net/net.go:183 +0x45 fp=0xc005a61ab0 sp=0xc005a61a68 pc=0x61a3a5
net/http.(*connReader).Read(0xc00a04b740, {0xc005d9f000, 0x1000, 0x1000})
        net/http/server.go:780 +0x16d fp=0xc005a61b00 sp=0xc005a61ab0 pc=0x713f4d
bufio.(*Reader).fill(0xc0057d48a0)
        bufio/bufio.go:106 +0x103 fp=0xc005a61b38 sp=0xc005a61b00 pc=0x58c103
bufio.(*Reader).Peek(0xc0057d48a0, 0x4)
        bufio/bufio.go:144 +0x5d fp=0xc005a61b58 sp=0xc005a61b38 pc=0x58c25d
net/http.(*conn).serve(0xc0024c2dc0, {0x2e297a0, 0xc001e6f1a0})
        net/http/server.go:1992 +0x716 fp=0xc005a61fb8 sp=0xc005a61b58 pc=0x719c76
net/http.(*Server).Serve.func3()
        net/http/server.go:3071 +0x2e fp=0xc005a61fe0 sp=0xc005a61fb8 pc=0x71f48e
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1 fp=0xc005a61fe8 sp=0xc005a61fe0 pc=0x46d7c1
created by net/http.(*Server).Serve
        net/http/server.go:3071 +0x4db

goroutine 11685 [select]:
runtime.gopark(0xc004919f28?, 0x2?, 0x4?, 0x0?, 0xc004919f14?)
        runtime/proc.go:361 +0xd6 fp=0xc004919da0 sp=0xc004919d80 pc=0x43ccd6
runtime.selectgo(0xc004919f28, 0xc004919f10, 0x7fbc494923c8?, 0x0, 0xc000a00000?, 0x1)
        runtime/select.go:328 +0x772 fp=0xc004919ee0 sp=0xc004919da0 pc=0x44c9b2
github.com/hashicorp/yamux.(*Session).AcceptStream(0xc005681290)
        github.com/hashicorp/yamux@v0.0.0-20211028200310-0bc27b27de87/session.go:237 +0x6f fp=0xc004919f58 sp=0xc004919ee0 pc=0x104d40f
github.com/hashicorp/go-plugin.(*MuxBroker).Run(0xc0023f03c0)
        github.com/hashicorp/go-plugin@v1.4.4/mux_broker.go:139 +0x30 fp=0xc004919fc8 sp=0xc004919f58 pc=0x1082270
github.com/hashicorp/go-plugin.NewRPCClient.func1()
        github.com/hashicorp/go-plugin@v1.4.4/rpc_client.go:88 +0x26 fp=0xc004919fe0 sp=0xc004919fc8 pc=0x1083086
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1 fp=0xc004919fe8 sp=0xc004919fe0 pc=0x46d7c1
created by github.com/hashicorp/go-plugin.NewRPCClient
        github.com/hashicorp/go-plugin@v1.4.4/rpc_client.go:88 +0x225

goroutine 11589 [chan receive]:
runtime.gopark(0x23b1940?, 0xc00b1e8768?, 0xa8?, 0x59?, 0x7fbc494931d8?)
        runtime/proc.go:361 +0xd6 fp=0xc00b1e8658 sp=0xc00b1e8638 pc=0x43ccd6
runtime.chanrecv(0xc006702f00, 0xc00b1e8780, 0x1)
        runtime/chan.go:577 +0x56c fp=0xc00b1e86e8 sp=0xc00b1e8658 pc=0x40862c
runtime.chanrecv2(0xc00548de80?, 0x0?)
        runtime/chan.go:445 +0x18 fp=0xc00b1e8710 sp=0xc00b1e86e8 pc=0x408098
github.com/mattermost/mattermost-server/v6/app.(*WebConn).pluginPostedConsumer(0xc001c7bc00, 0xc00b1e87b8?)
        github.com/mattermost/mattermost-server/v6/app/web_conn.go:236 +0xa5 fp=0xc00b1e87c0 sp=0xc00b1e8710 pc=0x1ada0a5
github.com/mattermost/mattermost-server/v6/app.(*WebConn).Pump.func3()
        github.com/mattermost/mattermost-server/v6/app/web_conn.go:318 +0x2a fp=0xc00b1e87e0 sp=0xc00b1e87c0 pc=0x1adad4a
runtime.goexit()
        runtime/asm_amd64.s:1571 +0x1 fp=0xc00b1e87e8 sp=0xc00b1e87e0 pc=0x46d7c1
created by github.com/mattermost/mattermost-server/v6/app.(*WebConn).Pump
        github.com/mattermost/mattermost-server/v6/app/web_conn.go:318 +0xf0

I'm using mattermost/docker. The nginx mm.error.log seems to contain nothing pertinent, while error.log only warns about:

DATETIME [warn] 28#28: no resolver defined to resolve r3.o.lencr.org while requesting certificate status, responder: r3.o.lencr.org, certificate: "/cert.pem"

Also note that I have adjusted the nginx config as follows:

diff --git a/nginx/conf.d/default.conf b/nginx/conf.d/default.conf
index d24a624..8263115 100644
--- a/nginx/conf.d/default.conf
+++ b/nginx/conf.d/default.conf
@@ -93,7 +93,7 @@ server {
     location ~ /api/v[0-9]+/(users/)?websocket$ {
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection "upgrade";
-        client_max_body_size 50M;
+        client_max_body_size 10000M;
         proxy_set_header Host $http_host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
@@ -102,18 +102,18 @@ server {
         proxy_set_header Early-Data $ssl_early_data;
         proxy_buffers 256 16k;
         proxy_buffer_size 16k;
-        client_body_timeout 60;
-        send_timeout 300;
-        lingering_timeout 5;
+        client_body_timeout 6000;
+        send_timeout 7200;
+        lingering_timeout 60;
         proxy_connect_timeout 90;
-        proxy_send_timeout 300;
-        proxy_read_timeout 90s;
+        proxy_send_timeout 7200;
+        proxy_read_timeout 7200s;
         proxy_http_version 1.1;
         proxy_pass http://backend;
     }

     location / {
-        client_max_body_size 50M;
+        client_max_body_size 10000M;
         proxy_set_header Connection "";
         proxy_set_header Host $http_host;
         proxy_set_header X-Real-IP $remote_addr;
@@ -123,7 +123,7 @@ server {
         proxy_set_header Early-Data $ssl_early_data;
         proxy_buffers 256 16k;
         proxy_buffer_size 16k;
-        proxy_read_timeout 600s;
+        proxy_read_timeout 7200s;
         proxy_cache mattermost_cache;
         proxy_cache_revalidate on;
         proxy_cache_min_uses 2;

3. Do you see the same issue on webapp/browser?

Haven't tried that, and unfortunately am no longer able to.

@amyblais
Member

@svenssonaxel Would you be open to, or do you have the ability to, upgrade to a more recent version (v7.8 or newer) to see if the issue reproduces on a supported version? v7.1 is currently an unsupported version.

@svenssonaxel
Author

I'd love to do that, but it will be difficult for us to reproduce this any time soon, as we no longer have access to the problematic connection.

@amyblais amyblais added the Bug Report/Open Bug report/issue label Jul 17, 2023