[RPC] Add RPC long poll notifications #7949

Open · wants to merge 2 commits

Conversation

@jonasschnelli (Member) commented Apr 26, 2016

Reasons for another notification interface

  • Currently there is no interface that could be extended to carry "private" notifications secured behind authorization (like peers connecting/disconnecting, or a new wallet-relevant transaction notification)
  • HTTP long poll notifications are very easy to set up and require almost no dependencies
  • HTTP long poll notifications can easily be pushed over the internet using an httpd reverse proxy with a proper authentication method (certs or HTTP digest auth) together with TLS
  • HTTP long poll would allow connecting applications to do all kinds of things over a single communication channel (currently you need RPC & ZMQ for most use cases, which would require a VPN or a fancy multi-port stunnel setup to broadcast the notifications over the internet)

How does it work

  • The listener calls the pollnotifications RPC command.
  • If no notifications are available, the RPC thread idles for a given timeout (30s by default).
  • If a notification fires during those 30 seconds, the long poll call is answered with the new notification(s).
  • The client/listener can immediately reconnect and wait again.
  • If notifications are already in the queue, the pollnotifications command responds immediately.
  • Notifications can't get lost under normal operation (they can only be lost during the HTTP transfer or if the queue limit is exceeded).

(See the listener sketch after the RPC list below.)

Downsides

  • JSON encoding overhead

New RPC calls

setregisterednotifications [<notificationtype>] (possible types are hashtx and hashblock)
getregisterednotifications
pollnotifications
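Putting the pieces together, a listener is just an HTTP POST in a retry loop. Below is a minimal Python sketch against a local bitcoind with this PR applied; the RPC names are the ones listed above, while the JSON-RPC parameter and result shapes (and the credentials) are my assumptions for illustration:

    #!/usr/bin/env python3
    # Hypothetical listener loop for a local bitcoind with this PR applied.
    # RPC names come from the PR; parameter and result shapes are assumptions.
    import requests

    RPC_URL = "http://127.0.0.1:8332/"
    AUTH = ("rpcuser", "rpcpassword")  # placeholder -rpcuser/-rpcpassword values

    def rpc(method, params=None):
        payload = {"jsonrpc": "1.0", "id": "listener",
                   "method": method, "params": params or []}
        # The server may hold pollnotifications for up to ~30s,
        # so allow a generous HTTP timeout.
        r = requests.post(RPC_URL, json=payload, auth=AUTH, timeout=60)
        r.raise_for_status()
        return r.json()["result"]

    rpc("setregisterednotifications", [["hashtx", "hashblock"]])
    print(rpc("getregisterednotifications"))  # confirm what we subscribed to

    while True:
        # Blocks until a notification fires or the ~30s server timeout elapses,
        # then we immediately reconnect and wait again.
        for note in rpc("pollnotifications") or []:
            print(note)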

Missing

  • More tests
  • Documentation

I'd like to use such an interface to work on a remote GUI (use case: GUI on your local desktop, node on a VPS).

@laanwj (Member) commented Apr 26, 2016

I like the concept of being able to listen for events through HTTP; however, I think this is severely limited by having server-side state, limiting the number of listeners to only one.

What I'd personally prefer is, instead of long polling, to subscribe to a 'stream' of events (e.g. websocket or just chunked encoding), where the set of events to listen to is in the request. This avoids having to store any client state in the server - at least for longer than the request lasts.
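For illustration, such a stateless stream could look like this from the client side. This is purely a sketch of the proposal: the /events endpoint, the subscribe body, and the one-JSON-object-per-line framing are all invented here, not part of the PR:

    # Hypothetical stateless alternative: the subscription set travels in the
    # request, and the server streams events back via chunked encoding.
    import requests

    with requests.post("http://127.0.0.1:8332/events",
                       json={"subscribe": ["hashtx", "hashblock"]},
                       auth=("rpcuser", "rpcpassword"), stream=True) as r:
        for line in r.iter_lines():
            if line:
                print(line.decode())  # one event per line; no state outlives the request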

@jonasschnelli (Member) commented Apr 26, 2016

[...] having server-side state, limiting the number of listeners to only one

Right. The current implementation is limited to a single listener. Extending this PR to support a client-chosen UUID would not be very complicated (a set of queues and a set of registered notification types per client). Clients could then register notification types along with a client-chosen UUID.
I might extend this PR to support multiple listeners.

@jonasschnelli (Member) commented Apr 26, 2016

Added a commit that allows multiple clients at the same time.

The new RPC commands now require a clientUUID parameter (a per-client unique string, ideally a UUID per RFC 4122). Bitcoind keeps a queue, sequence numbers, and registered types per client.

There is currently no max client limit and no way to remove clients (you can unregister all notification types, but you cannot empty the current queue).
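Reusing the rpc() helper from the sketch above, the multi-client variant would look roughly like this; the UUID-first parameter order is my assumption:

    import uuid

    CLIENT_ID = str(uuid.uuid4())  # client-chosen RFC 4122 UUID, per the PR

    # Assumed parameter order: client UUID first, then the method's own arguments.
    rpc("setregisterednotifications", [CLIENT_ID, ["hashtx", "hashblock"]])
    while True:
        for note in rpc("pollnotifications", [CLIENT_ID]) or []:
            print(note)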

@jonasschnelli (Member) commented May 6, 2016

Rebased.
Would be nice to get some concept NACKs/ACKs.

@MarcoFalke reviewed:

qa/rpc-tests/rpcsignals.py
@@ -0,0 +1,64 @@
+#!/usr/bin/env python2

@MarcoFalke (Member) May 6, 2016

Nit: python3


@jonasschnelli (Member) May 6, 2016

Fixed.

@jonasschnelli (Member) commented May 12, 2016

Rebased.

@laanwj (Member) commented Jan 11, 2017

Gah, we need to take a look at this again after 0.14 is released.

@jonasschnelli (Member) commented Jan 11, 2017

Yes, sure. I'll try to rebase and overhaul this soon.

@laanwj added this to the 0.15.0 milestone Mar 5, 2017

@laanwj modified the milestones: 0.16.0, 0.15.0 Jul 11, 2017

@TheBlueMatt (Contributor) commented Sep 28, 2017

Plan on rebasing this, or should it just be closed?

@jonasschnelli (Member) commented Sep 28, 2017

I'm currently rewriting this... will be ready soon.

@jonasschnelli (Member) commented Oct 20, 2017

Overhauled and rebased.

This is still server-based (the server keeps track of what each client has). The queue max size is currently 1024^2, and the queue only contains hashes of blocks or transactions.
Each notification comes with a sequence number to detect lost notifications (which should then trigger a full client re-sync).

jonasschnelli added some commits Sep 28, 2017

@TheBlueMatt

Concept ACK, though this should definitely get more general concept review. I think this could use a more explicit register/deregister process, e.g. registernewclient [ThingsClientCaresAbout] -> provides a UUID (instead of registration taking a client-provided UUID), then an explicit deregister which removes the queues for that client, instead of setting notifications to 0 and letting the queues for that client simply leak. (A sketch of this flow follows.)
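A sketch of that lifecycle, reusing the rpc() helper from the earlier example. Every RPC name here is hypothetical - none of them exist in the PR; this only illustrates the suggested shape:

    import threading

    def listen(handle, stop: threading.Event):
        # Hypothetical flow: the server mints the UUID and ties the queue to it.
        client_id = rpc("registernewclient", [["hashtx", "hashblock"]])
        try:
            while not stop.is_set():
                for note in rpc("pollnotifications", [client_id]) or []:
                    handle(note)
        finally:
            # Explicit deregistration lets the server drop this client's queue
            # instead of leaking it.
            rpc("deregisterclient", [client_id])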

+ void UpdatedBlockTip(const CBlockIndex* pindexNew, const CBlockIndex* pindexFork, bool fInitialDownload) override
+ {
+ LOCK(m_cs_queue_manager);
+ BOOST_FOREACH (NotificationQueue* queue, m_map_sequence_numbers | boost::adaptors::map_values) {

@TheBlueMatt (Contributor) Nov 6, 2017

Ugh, can we not use more BOOST_FOREACH garbage? Should be really easy to rewrite this without, no?


+ }
+ }
+
+ void UpdatedBlockTip(const CBlockIndex* pindexNew, const CBlockIndex* pindexFork, bool fInitialDownload) override

@TheBlueMatt (Contributor) Nov 6, 2017

I don't think this is the notification we want here - don't we want to use BlockConnected/BlockDisconnected to notify clients of all connected blocks, not just new tips after reorgs, which potentially connect multiple blocks?


+{
+public:
+ std::deque<NotificationEntry> m_queue;
+ std::map<NotificationType, int32_t> m_map_sequence_numbers;

@TheBlueMatt (Contributor) Nov 6, 2017

I generally prefer 64-bit ints here - sure, it's unlikely you'd overflow 32 bits, but if you're online for 3 or 4 years you may start getting close-ish.

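For scale (my arithmetic, not from the thread): a signed 32-bit counter overflows at 2^31 ≈ 2.1 billion, and four years is about 1.26 * 10^8 seconds, so a sustained rate of roughly 17 notifications per second would get there. That is high for today's on-chain transaction rates, but hashtx fires on every mempool acceptance, so the headroom is thinner than it looks.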

+ size_t queueSize = m_queue.size();
+ if (queueSize > MAX_QUEUE_SIZE) {
+ m_queue.pop_front();
+ LogPrintf("RPC Notification limit has been reached, dropping oldest element\n");

@TheBlueMatt (Contributor) Nov 6, 2017

I'd be somewhat worried we'd fill someone's drive with debug log entries if they forget to deregister a listener here.


+{
+public:
+ CCriticalSection m_cs_queue_manager;
+ std::map<clientUUID_t, NotificationQueue*> m_map_sequence_numbers;

@TheBlueMatt (Contributor) Nov 6, 2017

It'd be great if we could de-duplicate the queues here - no need for a queue per client; just have a global queue, keep track of how far behind each client is in terms of sequence numbers, and clean up only past the furthest-back client.

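The shared-queue idea in miniature, in Python for brevity; the names are invented here, not the PR's:

    class SharedNotificationQueue:
        """One global queue; each client only stores a sequence-number cursor."""
        def __init__(self):
            self.entries = []   # list of (seq, payload), oldest first
            self.next_seq = 0
            self.cursors = {}   # client uuid -> next seq that client has not seen

        def push(self, payload):
            self.entries.append((self.next_seq, payload))
            self.next_seq += 1

        def poll(self, client_id):
            cursor = self.cursors.get(client_id, 0)
            out = [p for (s, p) in self.entries if s >= cursor]
            self.cursors[client_id] = self.next_seq
            return out

        def prune(self):
            # Drop entries that every registered client has already consumed.
            if not self.cursors:
                self.entries.clear()
                return
            low = min(self.cursors.values())
            self.entries = [(s, p) for (s, p) in self.entries if s >= low]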

@TheBlueMatt (Contributor) Nov 6, 2017

Also, please use unique_ptr here instead of manual management and maybe remove the queue when there are no registered types (and, I suppose, the client is caught up) instead of keeping around a null queue.


+ }
+ if (startTime + timeOut + (500 / 1000.0) < GetTime())
+ break;
+ MilliSleep(500);

@TheBlueMatt (Contributor) Nov 6, 2017

Should probably use a CV and a way for Interrupt to interrupt it instead of calling ShutdownRequested in a 500ms loop.

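The same idea expressed in Python terms (the PR's code is C++; this only illustrates the wait/notify/interrupt shape a condition variable gives you over a 500ms sleep loop):

    import threading

    class NotificationWaiter:
        def __init__(self):
            self.cv = threading.Condition()
            self.queue = []
            self.interrupted = False

        def push(self, item):
            with self.cv:
                self.queue.append(item)
                self.cv.notify_all()    # wakes pollers immediately, no 500ms lag

        def interrupt(self):            # called on shutdown instead of polling a flag
            with self.cv:
                self.interrupted = True
                self.cv.notify_all()

        def poll(self, timeout):
            with self.cv:
                self.cv.wait_for(lambda: self.queue or self.interrupted,
                                 timeout=timeout)
                out, self.queue = list(self.queue), []
                return out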

@TheBlueMatt (Contributor) commented Nov 6, 2017

Also, looks like the test is failing.

@NicolasDorier (Member) commented Jan 10, 2018

Strong Concept ACK for this one!
@jonasschnelli any idea if you will bring this one back from the dead?

@jonasschnelli (Member) commented Jan 10, 2018

@NicolasDorier
I think there is no consensus about an additional push channel... also, @sipa brought up the idea of having a push channel (could be long poll) that acts similarly to listsinceblock, where the server doesn't need to keep track of clients (i.e., keep a queue).
I haven't looked closer at that approach.

@laanwj modified the milestones: 0.16.0, 0.17.0 Jan 11, 2018

@NicolasDorier (Member) commented Jan 14, 2018

I implemented a similar solution in NBXplorer. Basically there is a GetUTXOs(xPub) call; it replays all the transactions of the xpub in topological order to build the current UTXO set for that xpub. While replaying the transactions, it hashes them along the way (the hash after each transaction is effectively a bookmark). Then the bookmark plus the UTXO set is sent back to the client.

The client processes the result, then calls GetUTXOs(xPub, bookmark) again. The server performs the same operation, replaying all transactions and calculating bookmarks along the way; when it reaches the bookmark passed by the client, it knows that everything after it is a differential against that bookmark. If there is no differential, it just long polls. If there is a differential, it sends it back to the client.

If the client's bookmark was never reached, the full UTXO set is sent back to the client again, with a flag indicating it is not a differential.

This solution does not involve server-side state.
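A toy version of that bookmark scheme; the shapes here are assumptions, and NBXplorer's actual encoding will differ:

    import hashlib

    def with_bookmarks(txs):
        """Rolling hash over raw transactions in topological order.
        The hash after each transaction is that position's bookmark."""
        h = hashlib.sha256()
        marks = []
        for tx in txs:
            h.update(tx)
            marks.append(h.hexdigest())
        return marks

    def get_utxos(txs, client_bookmark=None):
        marks = with_bookmarks(txs)
        tip = marks[-1] if marks else None
        if client_bookmark in marks:
            i = marks.index(client_bookmark)
            # Everything after the client's bookmark is a differential.
            return {"differential": True, "bookmark": tip, "txs": txs[i + 1:]}
        # Bookmark unknown (e.g. after a reorg): resend the full set.
        return {"differential": False, "bookmark": tip, "txs": txs}

In the real server, an empty differential would hold the request open (long poll) rather than return immediately.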

@NicolasDorier referenced this pull request in nopara73/MagicalCryptoWallet Mar 8, 2018

Merged

Add waitfor RPC methods #65

@jonasschnelli modified the milestones: 0.17.0, 0.18.0 Jul 19, 2018
