Napi::Error #18

Closed
mkliu opened this issue Jul 22, 2018 · 22 comments

@mkliu

mkliu commented Jul 22, 2018

I started 5 clients and immediately I'm seeing this error:

libc++abi.dylib: terminating with uncaught exception of type Napi::Error
Abort trap: 6

Any suggestions?

@IlyaSemenov
Contributor

I am having this too. If I start 3 clients at once, it always crashes. If I start them one after another, it works, but eventually it crashes anyway (the more clients, the faster it fails).

The ffi-napi documentation says: "It is recommended to avoid any multi-threading usage of this library if possible." So chances are it's not really avoidable other than by going multi-process.

@IlyaSemenov
Contributor

IlyaSemenov commented Aug 6, 2018

I added a wrapper around tdl that forks and transfers updates/events/invoke requests via process messages, and it all started to work reliably: 5 clients start simultaneously without problems. I can't share it, as I built it application-specific rather than reusable (shame on me), but it was pretty straightforward.

@Bannerets Honestly, I believe this should go into the library itself (or ffi-napi should be replaced with something else), or at the very least the README should document that you need to fork in order to maintain several Telegram connections.

@eilvelia
Owner

eilvelia commented Aug 6, 2018

@IlyaSemenov I think creating a process for each client is a bad idea.

in the Telegram Bot API, each TDLib instance handles more than 19000 active bots simultaneously.

If you create 19000 processes, you'll get a lot of overhead.
We have to find a workaround, like creating a microservice in another language that works with tdlib.

Also, maybe using a node addon instead of ffi solves this problem.
Like this: https://github.com/wfjsw/node-tdlib

@IlyaSemenov
Contributor

I think creating a process for each client is a bad idea.

Sure it is; what I was trying to say is that it's still much better than crashing.

I just thought perhaps it would be enough to add a global lock on invoke and the other napi calls? That is, if it's crashing on calls and not on callbacks.
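The global-lock idea can be sketched as a promise-chain mutex. `createLock` is a hypothetical helper, not part of tdl; the point is only that every wrapped call waits for the previous one to finish, so the native library is never entered from two calls at once.

```javascript
// A minimal promise-chain mutex: wrapped functions run strictly one
// after another, in call order.
function createLock () {
  let tail = Promise.resolve()
  return function withLock (fn) {
    const run = tail.then(() => fn())
    // Keep the chain alive even if fn rejects, so later calls still run.
    tail = run.catch(() => {})
    return run
  }
}

// Usage: wrap each (hypothetical) native call in the same lock.
const lock = createLock()
const order = []
lock(() => { order.push(1) })
lock(() => { order.push(2) })
lock(() => { order.push(3) }).then(() => console.log(order)) // → [ 1, 2, 3 ]
```

Whether this actually helps depends on whether the crash happens on outgoing calls (which this serializes) or on callbacks arriving from ffi threads (which it does not).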

@ip

ip commented Aug 8, 2018

I noticed that this crash happens in different cases, for reasons that aren't always clear. One reproducible example is sending a message consisting only of spaces or tabs. I use this code:

exports.sendMessage = function sendMessage(tdlClient, text) {
  tdlClient.invoke({
    _: 'sendMessage',
    chat_id: config.telegram.chatId,
    input_message_content: {
      _: 'inputMessageText',
      text: {
        _: 'formattedText',
        text,
      }
    }
  })
}

The tdl client's error event isn't fired before the crash, nor do I see any log messages. I guess TDLib signals the error in some other way in that case. How can we fix this? Thanks.

@eilvelia
Owner

eilvelia commented Aug 9, 2018

@ip If you use only one client, you shouldn't get a crash (I received the { _: 'error', code: 3, message: 'Message must be non-empty' } error).

@eilvelia
Owner

eilvelia commented Aug 9, 2018

@IlyaSemenov I tested with ffi instead of ffi-napi. It is also very unstable with multiple clients.
Sometimes I see strange errors like this:

node(4701,0x700013e1c000) malloc: *** error for object 0x102413e00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6

Sometimes I just get random bytes instead of a response from tdlib.

After several tries, the tdlib database just broke and I can't use it anymore.

@IlyaSemenov
Contributor

IlyaSemenov commented Sep 22, 2018

@Bannerets I see you published 5.0 and some of the commits in the history refer to ffi. Have there been attempts to fix the multi-threading issue? Is it worth upgrading / getting rid of forking per connected client?

@eilvelia
Owner

@IlyaSemenov This issue has not been fixed, and I think it's not possible to fix with node-ffi.

@mnb3000

mnb3000 commented Sep 23, 2018

@IlyaSemenov Can you provide an example with one process per client?

@IlyaSemenov
Contributor

I'll update my code to use tdl v5 and extract the multi-process client as a reusable library.

@eilvelia
Owner

In version 5.1.0 I added pause() and resume() functions.
You can use multiple clients, but not at the same time.
Example:

const client1 = new Client(...)
const client2 = new Client(...)

;(async () => {
  await client1.connect()
  await client1.login(...)

  console.log('Client 1', await client1.invoke({ _: 'getMe' }))
  client1.pause()

  await client2.connect()
  await client2.login(...)

  console.log('Client 2', await client2.invoke({ _: 'getMe' }))
  client2.pause()

  client1.resume()
  console.log('Client 1 again', await client1.invoke({ _: 'getMe' }))
})()

@CryptoSharon

CryptoSharon commented Oct 13, 2018

How feasible is it to poll updates from all clients at 1-second intervals for each? Would I get flood-waited by Telegram for connecting too much? Would this slow down the rate at which I can perform actions with each account?

Edit: The current situation certainly doesn't allow for an Electron multi-account client with concurrent notification polling.

@eilvelia
Owner

Would I get flood-waited by Telegram for connecting too much?

client.pause() does not disconnect the client, so no.

You can do something like this, but it looks like an ugly hack.

const client1 = new Client(...)
const client2 = new Client(...)

client1.on('update', u => console.log('From client #1', u))
client2.on('update', u => console.log('From client #2', u))

;(async () => {
  await client1.connect()
  await client1.login()
  client1.pause()
  await client2.connect()
  await client2.login()

  let cl1 = false
  setInterval(() => {
    if (cl1) { client1.pause(); client2.resume() }
    else { client2.pause(); client1.resume() }
    cl1 = !cl1
  }, 1000)
})()

@mnb3000

mnb3000 commented Oct 14, 2018

@IlyaSemenov So, are there any examples of the multiprocess approach?

@david-benes

@mnb3000 @IlyaSemenov I just published a package that makes this library multiprocess. You can use it right away without any extra coding. The library is a wrapper for tdl with an identical interface, meant as a drop-in replacement for tdl in your code. However, you must provide tdl in your own package.json to satisfy the peer dependency.

https://www.npmjs.com/package/tdl-multiprocess

@eilvelia
Owner

eilvelia commented Dec 2, 2018

Just tested with node addons instead of ffi and ran into the same errors I described in #18 (comment).

[screenshot of the td_json_client documentation]

It seems the problem is in this part: "but shouldn't be called simultaneously from two different threads".
So you can't do Promise.all([ tdlib.receive(), tdlib.receive() ]). Internally, libuv calls receive() in different threads.

You can use some kind of mutex, as in #18 (comment).

upd: This may also be solved by using the more low-level ClientActor API instead of td_json_client.
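In practice the restriction means each client needs its own serialized receive loop rather than overlapping receive() calls. A minimal sketch, where `client.receive()` stands in for the native binding (not a real tdl API):

```javascript
// Serializes receive() for a fixed client: only one call is ever in
// flight, and its result is consumed before the next call starts.
// Separate clients can each run their own loop concurrently.
async function receiveLoop (client, onUpdate) {
  while (!client.closed) {
    const update = await client.receive() // one call at a time per client
    if (update != null) onUpdate(update)
  }
}

// Hypothetical usage with two independent clients:
//   receiveLoop(tdlibClient1, handleUpdate)
//   receiveLoop(tdlibClient2, handleUpdate) // fine: different client
```

Because `await` sequences the calls, this is effectively the per-client mutex the td_json_client documentation asks for, without an explicit lock object.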

@eilvelia
Owner

It turned out that "shouldn't be called simultaneously from two different threads" applies only to a fixed client:

Different instances are completely independent. You shouldn't run receive simultaneously only for a fixed client.

You should consume the result of td_json_client_receive and copy it somewhere or parse it before the next call to td_json_client_receive in the same thread. There are no other restrictions.

So you can use multiple clients with a node addon (addon example), but not with ffi, as that requires modifying the node-ffi (or node-ffi-napi) library code.
But there is a limitation: every td_receive call consumes a thread in the libuv threadpool, so you can't use more than UV_THREADPOOL_SIZE + 1 clients in parallel (by default UV_THREADPOOL_SIZE is 4; the maximum is 128).
Ideally, ClientActor should be used instead of td_json_client somehow.
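A small sanity check derived from the limit stated above (the `+ 1` figure is from this comment, not from libuv documentation). Note that the pool size can only be set via the environment before node starts, e.g. `UV_THREADPOOL_SIZE=32 node app.js`; it cannot be changed from JS once the pool is in use.

```javascript
// Compute the parallel-client ceiling from the threadpool size.
const poolSize = Number(process.env.UV_THREADPOOL_SIZE) || 4 // libuv default is 4
const maxParallelClients = poolSize + 1 // per the limit quoted in this thread
console.log(maxParallelClients)
```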

@denisakov

denisakov commented Mar 31, 2019

@Bannerets, great stuff! Thank you for all the efforts.
Could you please clarify the installation order of all the modules?

npm install tdl tdl-tdlib-addon
results in:
npm WARN tdl-tdlib-addon@0.4.0 requires a peer of tdl@>= 6.0.0-rc.0 but none is installed. You must install peer dependencies yourself.
while tdl is at 5.2.0.

If I try to run node, I get:
Error: Cannot find module './dist'
which seems to come from here:
tdl/tdl-tdlib-addon/index.js

A bit lost here, I am :) Ah, and the error is actually the same as the original.

@eilvelia
Owner

eilvelia commented Apr 1, 2019

@denisakov You need tdl v6 (npm install tdl@6.0.0-rc.2).
tdl-tdlib-addon is mostly just a proof of concept, and it most likely won't work from npm. You can copy it from the repository into your app.

Also, there is no big benefit in using it, as it is limited to the libuv threadpool (a better implementation could create a thread per client without using the libuv threadpool).
You can just create a process for each client instance (as described above in this issue).
Otherwise, someone needs to write a tdlib ClientActor <-> Node.js binding.


8 participants