[RPC] Add RPC long poll notifications #7949
Conversation
I like the concept of being able to listen for events over HTTP, however I think this is severely limited by having server-side state, limiting the number of listeners to only one. What I'd personally prefer is, instead of longpolling, to subscribe to a 'stream' of events (e.g. websocket or just chunked encoding), where the set of events to listen to is in the request. This avoids having to store any client state in the server - at least for longer than the request lasts.
Right. The current implementation limits to only one listener. Extending this PR so it would support a client-chosen UUID would not be very complicated (a set of queues and a set of registered notification types). Clients could register notification types along with a client-chosen UUID.
Added a commit that allows multiple clients at the same time. The new RPC commands now require a client-chosen UUID. There is currently no max client limit and no way to remove clients (though you can unregister all notification types but not empty the current queue).
Rebased.
@@ -0,0 +1,64 @@
#!/usr/bin/env python2
MarcoFalke
May 6, 2016
Member
Nit: python3
jonasschnelli
May 6, 2016
Author
Member
Fixed.
Rebased.
Gah we need to take a look at this again after 0.14 is released.
Yes. Sure. I'll try to rebase and overhaul this soon.
Plan on rebasing this, or should this just be closed?
I'm currently rewriting this... will be ready soon.
Overhauled and rebased. This is still server-based (the server keeps track of what the client has); the queue max size is currently 1024^2 and it only contains hashes of blocks or transactions.
Concept ACK, though should definitely get more general concept review. I think this could use a more explicit register/deregister process, e.g. registernewclient [ThingsClientCaresAbout] -> provides UUID (instead of registration taking a client-provided UUID), then an explicit deregister which removes the queues for that client, instead of setting notifications to 0 and simply leaking that client's queues.
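The register/deregister flow suggested above could look roughly like the following sketch, where the server generates and hands out the client id and an explicit deregister removes the queue so nothing leaks. All names here (QueueManager, ClientState, etc.) are illustrative, not from the PR:

```cpp
#include <cstdint>
#include <deque>
#include <map>
#include <set>
#include <string>

enum class NotificationType { HASHBLOCK, HASHTX };

struct ClientState {
    std::set<NotificationType> registered_types;  // ThingsClientCaresAbout
    std::deque<std::string> queue;                // queued hashes for this client
};

class QueueManager {
public:
    // registernewclient [ThingsClientCaresAbout] -> server provides the id
    uint64_t RegisterNewClient(const std::set<NotificationType>& types) {
        uint64_t id = m_next_id++;
        m_clients[id] = ClientState{types, {}};
        return id;
    }
    // explicit deregister removes the client's queue instead of leaking it
    bool DeregisterClient(uint64_t id) { return m_clients.erase(id) == 1; }
    size_t ClientCount() const { return m_clients.size(); }

private:
    uint64_t m_next_id{0};
    std::map<uint64_t, ClientState> m_clients;
};
```

A server-generated id also avoids clients colliding on a shared UUID.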
void UpdatedBlockTip(const CBlockIndex* pindexNew, const CBlockIndex* pindexFork, bool fInitialDownload) override
{
LOCK(m_cs_queue_manager);
BOOST_FOREACH (NotificationQueue* queue, m_map_sequence_numbers | boost::adaptors::map_values) {
TheBlueMatt
Nov 6, 2017
Contributor
Ugh, can we not use more BOOST_FOREACH garbage? Should be really easy to rewrite this without, no?
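For reference, the same map-values iteration can be written with a plain C++11 range-for, no BOOST_FOREACH or boost::adaptors needed. The types below are placeholders standing in for the PR's NotificationQueue map:

```cpp
#include <map>
#include <string>

struct NotificationQueue { int pending{0}; };

int CountPending(const std::map<std::string, NotificationQueue*>& queues) {
    int total = 0;
    for (const auto& entry : queues) {            // entry is pair<const key, value>
        NotificationQueue* queue = entry.second;  // the "map_values" part
        total += queue->pending;
    }
    return total;
}
```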
}
}

void UpdatedBlockTip(const CBlockIndex* pindexNew, const CBlockIndex* pindexFork, bool fInitialDownload) override
TheBlueMatt
Nov 6, 2017
Contributor
I don't think this is the notification we want here - don't we want to use BlockConnected/Disconnected to notify clients of all connected blocks, not just new tips after reorgs which potentially connect multiple blocks?
{
public:
std::deque<NotificationEntry> m_queue;
std::map<NotificationType, int32_t> m_map_sequence_numbers;
TheBlueMatt
Nov 6, 2017
Contributor
I generally prefer 64-bit ints here - sure, it's unlikely you'd overflow 32 bits, but if you're online for 3 or 4 years you may start getting closeish.
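Back-of-the-envelope arithmetic behind that concern (the ~20 notifications per second rate is an assumption for illustration): a signed 32-bit sequence number wraps after roughly 3.4 years at that rate, while an int64_t is effectively inexhaustible.

```cpp
#include <cstdint>

// years until a monotonically increasing sequence number of the given
// maximum value overflows, at a sustained event rate
double YearsToOverflow(int64_t max_value, double events_per_second) {
    return static_cast<double>(max_value) / events_per_second / (86400.0 * 365.25);
}
```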
size_t queueSize = m_queue.size();
if (queueSize > MAX_QUEUE_SIZE) {
m_queue.pop_front();
LogPrintf("RPC Notification limit has been reached, dropping oldest element\n");
TheBlueMatt
Nov 6, 2017
Contributor
I'd be somewhat worried we'd fill someone's drive with debug log entries in case they forget to deregister a listener, here.
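One possible mitigation (not proposed in the PR, just a sketch): count the drops and log only on the first and then every Nth drop, so an abandoned listener produces bounded log output. The 100000 threshold is arbitrary:

```cpp
#include <cstdint>

// Decides whether a dropped queue entry should produce a log line;
// stands in for the condition around the PR's LogPrintf call.
class DropLogger {
public:
    // returns true for drop number 0, 100000, 200000, ...
    bool ShouldLogDrop() {
        uint64_t n = m_drops++;
        return n % 100000 == 0;
    }
    uint64_t TotalDrops() const { return m_drops; }

private:
    uint64_t m_drops{0};
};
```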
{
public:
CCriticalSection m_cs_queue_manager;
std::map<clientUUID_t, NotificationQueue*> m_map_sequence_numbers;
TheBlueMatt
Nov 6, 2017
Contributor
It'd be great if we could de-duplicate the queues here - no need to have a queue per client, just have a global queue and keep track of how far delayed all the clients are in terms of the sequence number and just clean things up to the furthest-back client.
TheBlueMatt
Nov 6, 2017
Contributor
Also, please use unique_ptr here instead of manual management and maybe remove the queue when there are no registered types (and, I suppose, the client is caught up) instead of keeping around a null queue.
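A sketch of the de-duplicated design from the two comments above: one global queue plus a per-client sequence-number cursor, trimmed up to the furthest-back client. Storing the deque by value (or via std::unique_ptr, per the second comment) also removes the manual memory management. All names are illustrative, not from the PR:

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>
#include <map>
#include <string>

class SharedNotificationQueue {
public:
    void Push(const std::string& hash) { m_queue.push_back(hash); ++m_next_seq; }

    // hand the client everything past its cursor, advance it, then trim
    std::deque<std::string> Poll(uint64_t client_id) {
        uint64_t& cursor = m_cursors[client_id];  // creates cursor on first poll
        uint64_t first_seq = m_next_seq - m_queue.size();
        std::deque<std::string> out;
        for (uint64_t s = std::max(cursor, first_seq); s < m_next_seq; ++s) {
            out.push_back(m_queue[s - first_seq]);
        }
        cursor = m_next_seq;
        Trim();
        return out;
    }

    size_t QueueSize() const { return m_queue.size(); }

private:
    // drop entries that every registered client has already seen
    void Trim() {
        uint64_t min_cursor = m_next_seq;
        for (const auto& c : m_cursors) min_cursor = std::min(min_cursor, c.second);
        uint64_t first_seq = m_next_seq - m_queue.size();
        while (first_seq < min_cursor && !m_queue.empty()) {
            m_queue.pop_front();
            ++first_seq;
        }
    }

    std::deque<std::string> m_queue;            // the single global queue
    uint64_t m_next_seq{0};                     // sequence number of next Push
    std::map<uint64_t, uint64_t> m_cursors;     // client id -> next unseen seq
};
```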
}
if (startTime + timeOut + (500 / 1000.0) < GetTime())
break;
MilliSleep(500);
TheBlueMatt
Nov 6, 2017
Contributor
Should probably use a CV and a way for Interrupt to interrupt it instead of calling ShutdownRequested in a 500ms loop.
Also, looks like the test is failing.
Strong Concept ACK for this one!
@NicolasDorier
I implemented a similar solution in NBXplorer. Basically there is a call which returns the changes to the UTXO set since a given bookmark. The client processes the result, then calls again with the returned bookmark. If the bookmark in the parameter was not reached, then the full UTXO set is sent back again to the client, with a flag indicating it is not a differential. This solution does not involve server-side state.
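The bookmark scheme described above can be sketched as follows (names assumed for illustration, not NBXplorer's actual API): the client sends the last bookmark it saw; if the server recognizes it, the reply is a differential, otherwise the full set is resent with the flag cleared:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct SyncResult {
    bool differential;                 // false => full snapshot was resent
    std::vector<std::string> entries;  // new (or all) entries
    uint64_t bookmark;                 // client passes this back next call
};

class BookmarkServer {
public:
    void Add(const std::string& entry) { m_log.push_back(entry); }

    SyncResult Sync(uint64_t client_bookmark) {
        SyncResult res;
        res.bookmark = m_log.size();
        if (client_bookmark <= m_log.size()) {
            // known bookmark: send only what the client hasn't seen
            res.differential = true;
            res.entries.assign(
                m_log.begin() + static_cast<std::ptrdiff_t>(client_bookmark),
                m_log.end());
        } else {
            // unknown bookmark: resend everything, flagged as non-differential
            res.differential = false;
            res.entries = m_log;
        }
        return res;
    }

private:
    std::vector<std::string> m_log;  // append-only for simplicity of the sketch
};
```

Because the bookmark travels with each request, the server keeps no per-client state between calls.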
static const char* MSG_HASHBLOCK = "hashblock";
static const char* MSG_HASHTX = "hashtx";

/* keep the max queue size large becase we don't |
practicalswift
Sep 2, 2018
Contributor
Typo found by codespell: becase
// populates a json object with all notifications in the queue
// returns a range to allow removing the elements from the queue
// after successfull transmitting
practicalswift
Sep 2, 2018
Contributor
Typo found by codespell: successfull
@jonasschnelli do you plan to pick this up again?
"\" ,...\n"
"\"]\"\n"
"\nExamples:\n"
"\nCreate a transaction\n" +
Reasons for another notification interface

How does it work
Clients fetch queued notifications through the new pollnotification RPC command. If notifications are queued, the pollnotification command will respond immediately.

Downsides

New RPC calls
- setregisterednotifications [<notificationtype>] (possible types are hashtx and hashblock)
- getregisterednotifications
- pollnotifications

Missing

I'd like to use such an interface to work on a remote GUI (use case: GUI on your local desktop, node on a VPS).