
[RPC] Add RPC long poll notifications #7949

Closed

Conversation

jonasschnelli
Contributor

Reasons for another notification interface

  • Currently there is no interface that could be extended to "private" notifications secured behind the authorization (like peers connected/disconnected or a new wallet-relevant transaction notification)
  • HTTP long poll notifications are very easy to set up and require almost no dependencies
  • HTTP long poll notifications can easily be pushed over the internet using an httpd reverse proxy with a proper authentication method (certs or HTTP digest auth) plus TLS
  • HTTP long poll would allow connecting applications to do all kinds of things over a single communication channel (currently you need RPC & ZMQ for most use cases, which requires a VPN or a fancy multi-port stunnel setup to broadcast the notifications over the internet)

How does it work

  • The listener calls the pollnotifications RPC command.
  • If no notifications are available, the RPC thread idles for a given timeout (30s by default)
  • If a notification fires during those 30 seconds, the long poll call is answered with the new notification(s)
  • The client/listener can immediately reconnect and wait again
  • If notifications are already in the queue, the pollnotifications command responds immediately.
  • Notifications can't get lost on the server side (they can still be lost during HTTP transfer or when the queue limit is exceeded)

Downsides

  • JSON encoding overhead

New RPC calls

setregisterednotifications [<notificationtype>] (possible types are hashtx and hashblock)
getregisterednotifications
pollnotifications
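
For illustration, a minimal client for the flow described under "How does it work" might look like the following. This is a sketch only: the endpoint URL, credentials, and exact parameter shapes are assumptions, not the PR's documented interface; only the RPC method names come from the list above.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Collect the HTTP response body into a std::string.
static size_t CollectBody(char* data, size_t size, size_t nmemb, void* out)
{
    static_cast<std::string*>(out)->append(data, size * nmemb);
    return size * nmemb;
}

// Send one JSON-RPC request to bitcoind and return the raw response.
static std::string RpcCall(CURL* curl, const std::string& request)
{
    std::string response;
    curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8332/");  // assumed endpoint
    curl_easy_setopt(curl, CURLOPT_USERPWD, "rpcuser:rpcpassword"); // assumed credentials
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, request.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, CollectBody);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, 60L); // must outlast the ~30s long poll
    if (curl_easy_perform(curl) != CURLE_OK) response.clear();
    return response;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Register interest once, then long poll repeatedly; each poll blocks
    // on the server until a notification fires or the timeout elapses.
    RpcCall(curl, R"({"method":"setregisterednotifications","params":[["hashblock","hashtx"]]})");
    for (int i = 0; i < 10; ++i) {
        std::cout << RpcCall(curl, R"({"method":"pollnotifications","params":[]})") << std::endl;
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}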

Missing

  • More tests
  • Documentation

I'd like to use such an interface to work on a remote GUI (use case: GUI on your local desktop, node on a VPS).

@laanwj
Member

laanwj commented Apr 26, 2016

I like the concept of being able to listen for events through HTTP; however, I think this is severely limited by having server-side state, limiting the number of listeners to only one.

What I'd personally prefer is, instead of longpolling, to subscribe to a 'stream' of events (e.g. websocket or just chunked encoding), where the set of events to listen to is in the request. This avoids having to store any client-state in the server - at least for longer than the request lasts.

@jonasschnelli
Contributor Author

[...] having server-side state, limiting the number of listeners to only one

Right. The current implementation is limited to a single listener. Extending this PR to support a client-chosen UUID would not be very complicated (a set of queues and a set of registered notification types per client). Clients could register notification types along with a client-chosen UUID.
I might extend this PR to support multiple listeners.

@jonasschnelli
Contributor Author

jonasschnelli commented Apr 26, 2016

Added a commit that allows multiple clients at the same time.

The new RPC commands now require a clientUUID parameter (a per-client unique string, ideally a UUID per RFC 4122). Bitcoind keeps a queue, sequence numbers and registered types per client.

There is currently no max client limit and no way to remove clients (you can unregister all notification types, but not empty the current queue).
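
A rough sketch of the per-client bookkeeping this describes (identifiers are illustrative, not the PR's actual code):

#include <cstdint>
#include <deque>
#include <map>
#include <set>
#include <string>

enum class NotificationType { HASHTX, HASHBLOCK };

struct NotificationEntry {
    NotificationType type;
    int64_t sequence;    // per-type sequence number, delivered with the payload
    std::string payload; // hex-encoded tx or block hash
};

struct ClientState {
    std::set<NotificationType> registered_types;          // set via setregisterednotifications
    std::map<NotificationType, int64_t> sequence_numbers; // next sequence to assign per type
    std::deque<NotificationEntry> queue;                  // drained by pollnotifications
};

// Keyed by the client-chosen UUID (ideally RFC 4122).
std::map<std::string, ClientState> g_clients;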

@jonasschnelli force-pushed the 2016/04/rpc_signals branch 2 times, most recently from e5734a6 to 6958509 on April 26, 2016 17:47
@jonasschnelli
Contributor Author

Rebased.
Would be nice to get some concept NACKs/ACKs.

@@ -0,0 +1,64 @@
#!/usr/bin/env python2
Member

Nit: python3

Contributor Author

Fixed.

@jonasschnelli force-pushed the 2016/04/rpc_signals branch 2 times, most recently from df5034a to 6dce696 on May 12, 2016 09:01
@jonasschnelli
Contributor Author

Rebased.

@laanwj
Member

laanwj commented Jan 11, 2017

Gah, we need to take a look at this again after 0.14 is released.

@jonasschnelli
Contributor Author

Yes, sure. I'll try to rebase and overhaul this soon.

@laanwj added this to the 0.15.0 milestone on Mar 5, 2017
@laanwj modified the milestones: 0.16.0, 0.15.0 on Jul 11, 2017
@TheBlueMatt
Contributor

Plan on rebasing this, or should it just be closed?

@jonasschnelli
Contributor Author

I'm currently rewriting this... will be ready soon.

@jonasschnelli force-pushed the 2016/04/rpc_signals branch 2 times, most recently from 7258d7c to 2813628 on October 20, 2017 05:59
@jonasschnelli
Contributor Author

Overhauled and rebased.

This is still server-based (the server keeps track of what each client has); the queue max size is currently 1024^2 and it only contains hashes of blocks or transactions.
Each notification comes with a sequence number so lost notifications can be detected (which should then trigger a "full client sync").
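
On the client side, those sequence numbers can be used roughly like this (hypothetical sketch; the delivered field names are assumed):

#include <cstdint>
#include <map>

enum class NotificationType { HASHTX, HASHBLOCK };

std::map<NotificationType, int64_t> g_last_seen;

// Returns true if this notification directly follows the last one seen for
// its type; false means something was dropped (e.g. the server queue
// overflowed) and the client should fall back to a full re-sync.
bool CheckSequence(NotificationType type, int64_t sequence)
{
    auto it = g_last_seen.find(type);
    const bool in_order = (it == g_last_seen.end()) || (sequence == it->second + 1);
    g_last_seen[type] = sequence;
    return in_order;
}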

@TheBlueMatt left a comment
Contributor

Concept ACK, though this should definitely get more general concept review. I think this could use a more explicit register/deregister process, e.g. registernewclient [ThingsClientCaresAbout] -> provides a UUID (instead of registration taking a client-provided UUID), then an explicit deregister which removes the queues for that client, instead of setting notifications to 0 and the queues for that client simply leaking.
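
A sketch of the lifecycle being suggested (every name here is hypothetical; none of it exists in the PR):

#include <map>
#include <memory>
#include <random>
#include <sstream>
#include <string>
#include <vector>

enum class NotificationType { HASHTX, HASHBLOCK };
struct ClientState { std::vector<NotificationType> registered_types; };

std::map<std::string, std::unique_ptr<ClientState>> g_clients;

// Placeholder for a real RFC 4122 UUID generator.
std::string GenerateUUID()
{
    static std::mt19937_64 rng{std::random_device{}()};
    std::ostringstream ss;
    ss << std::hex << rng();
    return ss.str();
}

// registernewclient: the server generates and returns the ID instead of
// accepting a client-chosen one.
std::string RegisterNewClient(const std::vector<NotificationType>& types)
{
    std::string uuid = GenerateUUID();
    g_clients[uuid] = std::make_unique<ClientState>(ClientState{types});
    return uuid;
}

// Explicit deregister: removes the queue along with the registration, so
// nothing leaks when a client goes away.
void DeregisterClient(const std::string& uuid)
{
    g_clients.erase(uuid);
}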

void UpdatedBlockTip(const CBlockIndex* pindexNew, const CBlockIndex* pindexFork, bool fInitialDownload) override
{
    LOCK(m_cs_queue_manager);
    BOOST_FOREACH (NotificationQueue* queue, m_map_sequence_numbers | boost::adaptors::map_values) {
Contributor

Ugh, can we not use more BOOST_FOREACH garbage? Should be really easy to rewrite this without, no?
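
The same loop as a plain C++11 range-based for over the map (assuming the map type from the diff):

for (auto& entry : m_map_sequence_numbers) {
    NotificationQueue* queue = entry.second;
    // ... same body as before, no BOOST_FOREACH or boost::adaptors needed ...
}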

}
}

void UpdatedBlockTip(const CBlockIndex* pindexNew, const CBlockIndex* pindexFork, bool fInitialDownload) override
Contributor

I don't think this is the notification we want here - don't we want to use BlockConnected/Disconnected to notify clients of all connected blocks, not just new tips after reorgs, which potentially connect multiple blocks?
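
For comparison, hooking BlockConnected might look roughly like this (signature approximated from the CValidationInterface of that era; verify against the actual header before relying on it):

void BlockConnected(const std::shared_ptr<const CBlock>& block,
                    const CBlockIndex* pindex,
                    const std::vector<CTransactionRef>& txn_conflicted) override
{
    LOCK(m_cs_queue_manager);
    // Enqueue block->GetHash() for every client registered for hashblock.
    // Unlike UpdatedBlockTip, this fires once per connected block, so a
    // multi-block reorg produces one notification per block.
}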

{
public:
    std::deque<NotificationEntry> m_queue;
    std::map<NotificationType, int32_t> m_map_sequence_numbers;
Contributor

I generally prefer 64-bit ints here - sure, it's unlikely you'd overflow 32 bits, but if you're online for 3 or 4 years you may start getting close-ish.

size_t queueSize = m_queue.size();
if (queueSize > MAX_QUEUE_SIZE) {
    m_queue.pop_front();
    LogPrintf("RPC Notification limit has been reached, dropping oldest element\n");
Contributor

I'd be somewhat worried that we'd fill someone's drive with debug log entries if they forget to deregister a listener here.
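
One way to keep the log quiet (illustrative sketch; m_overflow_logged is a hypothetical per-queue flag, cleared when the client drains the queue):

if (m_queue.size() > MAX_QUEUE_SIZE) {
    m_queue.pop_front();
    if (!m_overflow_logged) {
        LogPrintf("RPC notification queue full, dropping oldest elements\n");
        m_overflow_logged = true; // log the transition once, not every drop
    }
}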

{
public:
    CCriticalSection m_cs_queue_manager;
    std::map<clientUUID_t, NotificationQueue*> m_map_sequence_numbers;
Contributor

It'd be great if we could de-duplicate the queues here - no need to have a queue per client, just have a global queue and keep track of how far delayed all the clients are in terms of the sequence number and just clean things up to the furthest-back client.
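
A sketch of that de-duplicated shape (names hypothetical):

#include <algorithm>
#include <cstdint>
#include <deque>
#include <limits>
#include <map>
#include <string>

struct NotificationEntry {
    int64_t sequence;
    std::string payload;
};

std::deque<NotificationEntry> g_queue;          // one global queue for all clients
std::map<std::string, int64_t> g_client_cursor; // last sequence delivered, per client UUID

// Drop entries that every client has already consumed. With no clients
// registered, everything is dropped.
void TrimQueue()
{
    int64_t min_cursor = std::numeric_limits<int64_t>::max();
    for (const auto& client : g_client_cursor) {
        min_cursor = std::min(min_cursor, client.second);
    }
    while (!g_queue.empty() && g_queue.front().sequence <= min_cursor) {
        g_queue.pop_front();
    }
}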

Contributor

Also, please use unique_ptr here instead of manual management and maybe remove the queue when there are no registered types (and, I suppose, the client is caught up) instead of keeping around a null queue.
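
Concretely, something like this (sketch; m_registered_types is an assumed member name for the client's registered notification types):

std::map<clientUUID_t, std::unique_ptr<NotificationQueue>> m_queues;

void MaybeRemoveClient(const clientUUID_t& uuid)
{
    auto it = m_queues.find(uuid);
    if (it == m_queues.end()) return;
    if (it->second->m_registered_types.empty() && it->second->m_queue.empty()) {
        m_queues.erase(it); // the unique_ptr frees the queue; no null entry lingers
    }
}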

}
if (startTime + timeOut + (500 / 1000.0) < GetTime())
    break;
MilliSleep(500);
Contributor

Should probably use a CV and a way for Interrupt to interrupt it instead of calling ShutdownRequested in a 500ms loop.
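
A minimal sketch of that pattern (standalone names, not the PR's):

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex g_wait_mutex;
std::condition_variable g_wait_cv;
bool g_interrupted = false; // set by Interrupt()/shutdown
bool g_have_work = false;   // set when a notification is queued

// Blocks until a notification arrives, the timeout elapses, or Interrupt()
// is called; no periodic wake-ups needed. Returns true if there is work.
bool WaitForNotification(std::chrono::seconds timeout)
{
    std::unique_lock<std::mutex> lock(g_wait_mutex);
    g_wait_cv.wait_for(lock, timeout, [] { return g_have_work || g_interrupted; });
    return g_have_work && !g_interrupted;
}

void Interrupt()
{
    {
        std::lock_guard<std::mutex> lock(g_wait_mutex);
        g_interrupted = true;
    }
    g_wait_cv.notify_all(); // wakes the long poll immediately
}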

@TheBlueMatt
Contributor

Also, looks like the test is failing.

@NicolasDorier
Contributor

NicolasDorier commented Jan 10, 2018

Strong Concept ACK for this one!
@jonasschnelli any idea if you will bring this one back from the dead?

@jonasschnelli
Contributor Author

@NicolasDorier
I think there is no consensus about an additional push channel... also, @sipa brought up the idea of a push channel (could be long poll) that acts similarly to listsinceblock, where the server doesn't need to keep track of clients (keep a queue).
I haven't looked closer at this approach.

@laanwj modified the milestones: 0.16.0, 0.17.0 on Jan 11, 2018
@NicolasDorier
Contributor

NicolasDorier commented Jan 14, 2018

I implemented a similar solution in NBXplorer. Basically there is a GetUTXOs(xPub) call; it replays all the transactions of the xpub in topological order to build the current UTXO set for that xpub. While replaying the transactions, it hashes them along the way (the hash after each transaction is effectively a bookmark). Then the bookmark plus the UTXO set is sent back to the client.

The client processes the result, then calls GetUTXOs(xPub, bookmark) again. The server does the same operation, replaying all transactions and calculating bookmarks along the way; when it reaches the bookmark passed by the client, it knows that whatever follows is a differential against that bookmark. If there is no differential, it just long polls. If there is a differential, it sends it back to the client.

If the bookmark in the parameter was never reached, the full UTXO set is sent back to the client again, with a flag indicating it is not a differential.

This solution does not involve server-side state.
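
The bookmark amounts to a running hash over the xpub's transaction history, roughly like this (hypothetical sketch, not NBXplorer's actual code; a real implementation would use a cryptographic hash, not std::hash):

#include <functional>
#include <string>
#include <vector>

using Bookmark = size_t;

// Chain the previous bookmark with the next txid; the value after each
// transaction is a resumable position in the replay.
Bookmark ChainBookmark(Bookmark prev, const std::string& txid)
{
    return std::hash<std::string>{}(txid + std::to_string(prev));
}

// Replay the history; return the index just after the client's bookmark,
// or 0 ("send the full UTXO set, flagged as non-differential") if the
// bookmark is never reached.
size_t FindResumePoint(const std::vector<std::string>& txids_in_topo_order,
                       Bookmark client_bookmark)
{
    Bookmark running = 0;
    for (size_t i = 0; i < txids_in_topo_order.size(); ++i) {
        running = ChainBookmark(running, txids_in_topo_order[i]);
        if (running == client_bookmark) return i + 1; // differential starts here
    }
    return 0;
}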

static const char* MSG_HASHBLOCK = "hashblock";
static const char* MSG_HASHTX = "hashtx";

/* keep the max queue size large becase we don't
Contributor

Typo found by codespell: becase


// populates a json object with all notifications in the queue
// returns a range to allow removing the elements from the queue
// after successfull transmitting
Contributor

Typo found by codespell: successfull

@promag left a comment
Member

@jonasschnelli do you plan to pick this again?

"\" ,...\n"
"\"]\"\n"
"\nExamples:\n"
"\nCreate a transaction\n" +
Member

👀
