Websocket failed to deliver big json obj when compression = true #2287

Closed
luan007 opened this issue Jun 17, 2021 · 8 comments · Fixed by #2300

Comments

@luan007

luan007 commented Jun 17, 2021

Hi,
First of all, thank you for such an awesome open source project; it's simply brilliant!

  • Large JSON silently fails to transfer.
    When testing NATS directly from the browser (using nats.ws), sending a JSON object that's a bit large (>3000 bytes, consisting of random numbers) causes the other party (the browser) to raise an "invalid frame header" error and disconnect.

  • No visible error in the server log.
    Watching the nats-server log, the initial message (PUB) is successfully delivered to the server, but then something goes wrong, and there is no visible error on the server side.

  • ws-to-tcp and other transparent ws-to-tcp proxies work.
    Upon further testing: when using ws-to-tcp (npm i ws-to-tcp -g) instead of the built-in WebSocket support, with the same code and the same nats-server (the only difference being that the WebSocket layer is supplied by a 3rd-party proxy connecting directly to 4222), everything works.

  • compression = false fixes the problem.
    Thus I came to suspect that something might be improperly handled inside the WebSocket-related logic of nats-server.
    Finally, I tried messing around with the conf and found that when compression = false, the problem disappears.

My config & log are attached; a minimal repro sketch follows.
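
A minimal sketch of the kind of publish that triggers it (illustrative, not the exact code I ran: the subject name is hypothetical, and it assumes the websocket listener from the config below):

  import { connect, JSONCodec } from "nats.ws";

  const jc = JSONCodec();

  // assumes the websocket listener configured below (port 8000, no TLS)
  const nc = await connect({ servers: "ws://127.0.0.1:8000" });

  // ~3000 random floats; random data compresses poorly, so the
  // compressed message still spans multiple WebSocket frames
  const payload = Array.from({ length: 3000 }, () => Math.random());

  // subject name is illustrative
  const sub = nc.subscribe("test.big", { max: 1 });
  (async () => {
    for await (const m of sub) {
      console.log("received", jc.decode(m.data).length, "floats");
    }
  })();

  nc.publish("test.big", jc.encode(payload));
  await nc.drain(); // flush and wait for the message to be processed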

Logs

compression-off.log
compression-on.log

Snapshot when one side fails

[screenshot: snap]

Config

listen: "0.0.0.0:4222"
accounts: {
    base: { }
}
system_account: base

websocket {
    port: 8000
    no_tls: true
    same_origin: false
    compression: false #or true
}

Versions of nats-server and affected client libraries used:

nats-server: v2.2.6
nats.ws: ^1.1.4

OS/Container environment:

macOS Catalina 10.15.7
Darwin Kernel Version 19.6.0: Thu May 6 00:48:39 PDT 2021;

Steps or code to reproduce the issue:

Attached above.

Expected result:

Large JSON (smaller than max_payload) should be delivered via WebSocket when compression = true

Actual result:

Fails to deliver when compression = true

@aricart
Member

aricart commented Jun 17, 2021

@luan007 are you by any chance using Datadog's dd-trace?

@aricart
Member

aricart commented Jun 17, 2021

@luan007 I did a small test on the nats.ws side, encoding a message that fills the available payload, with both raw data and JSON, and was not running into the issue:

  const sc = StringCodec();
  const jc = JSONCodec();
  
  // generate a config for the server that uses compression
  const conf = wsConfig();
  conf.websocket.compression = true;
  const ns = await NatsServer.start(conf);
  
  // connect a client
  let nc = await connect({ servers: `ws://127.0.0.1:${ns.websocket}` });
  
  // create a payload that is the max size allowed
  const buf = new Uint8Array(nc.info.max_payload);
  for (let i = 0; i < buf.length; i++) {
    buf[i] = 97; // fill with "a"
  }
  
  // create a subscription, and publish it
  const subj = createInbox();
  nc.subscribe(subj, {max: 1, callback: (err, msg) => {
    t.log("checking values", msg.data.length)
    t.deepEqual(msg.data, buf);
  }});
  nc.publish(subj, buf);

  // try the same with JSON
  // remove characters reserved for the quotes or we exceed the payload
  const ddbuf = jc.encode(sc.decode(buf).slice(2))
  const jsubj = createInbox();
  nc.subscribe(jsubj, {max: 1, callback: (err, msg) => {
      t.log("checking values", msg.data.length)
      t.deepEqual(msg.data, ddbuf);
    }});
  nc.publish(jsubj, ddbuf);

  await nc.flush();
  await nc.close();
  await ns.stop();
> node node_modules/.bin/ava --verbose -T 6500000 --match "ws - compression and payload"
  ✔ jetstream › ws - compression and payload (1.6s)
    ℹ checking values 1048576
    ℹ checking values 1048576
  ─

  1 test passed

@aricart
Member

aricart commented Jun 17, 2021

I am using the latest released server 2.2.6

@kozlovic
Member

@luan007 I am seeing an issue where the server is unable to receive compressed data sent by a client (Go gorilla/websocket), but I am not seeing the issue on the consumer side. This may all be related; not sure yet.

@luan007
Author

luan007 commented Jun 18, 2021

@aricart sorry, I've not used any tracer yet but will try soon. As for the data itself: when sending uniform (repeating) data it sometimes works; what I did was simply generate a big [] with 3000 random floats in it.

For now I'm keeping compression: false, and the system has been working fine for 20hr+.
Will do more tests when needed.

@kozlovic
Member

@luan007 Yes, disabling compression is a workaround for now. I think I have found what the issue is and am working on a fix. Thank you for your patience.

kozlovic added a commit that referenced this issue Jun 21, 2021
…rames

For compression, continuation frames had the compress bit set, which is
wrong since only the first frame should have it.
For decompression, continuation frames were decompressed individually
instead of assembling the full payload and then decompressing.

Resolves #2287

Signed-off-by: Ivan Kozlovic <ivan@synadia.com>
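
For context, this matches RFC 7692 (permessage-deflate): the RSV1 bit marks a compressed message and may be set only on its first frame, and the fragments of that message form a single deflate stream. An illustrative sketch of the rule (not actual nats-server code, which is Go):

  const FIN = 0x80;           // final fragment of the message
  const RSV1 = 0x40;          // "this message is compressed"
  const OP_BINARY = 0x02;
  const OP_CONTINUATION = 0x00;

  // first header byte of fragment `index` of `total` for a compressed message
  function firstHeaderByte(index: number, total: number): number {
    const fin = index === total - 1 ? FIN : 0;
    const rsv1 = index === 0 ? RSV1 : 0; // the bug: RSV1 was set on every frame
    const opcode = index === 0 ? OP_BINARY : OP_CONTINUATION;
    return fin | rsv1 | opcode;
  }

  // On receive, fragments must be concatenated and inflated once as a whole;
  // inflating each fragment individually (the other half of the bug) fails.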
@aricart
Member

aricart commented Jun 21, 2021

@luan007 yes, I was able to reproduce it with your data. All browsers except Safari seem to perform the frame check; Safari completes successfully. Ivan has a fix for the server that I have verified.

@luan007
Author

luan007 commented Jun 22, 2021

thanks!

@bruth bruth removed the 🐞 bug label Aug 18, 2023