docs: describe protocol version 2.0 #90

Open · wants to merge 9 commits into master
Conversation

@SomberNight (Member) commented Dec 20, 2020

This PR describes an updated Electrum Protocol, version 2.0 (formerly named version 1.5).

Some of these ideas have already been discussed and implemented as part of #80, however this PR includes even more changes, most notably pagination of long histories (taking into consideration ideas from kyuupichan/electrumx#348 and kyuupichan/electrumx#82).

While the changes described in #80 are already non-trivial and would be useful in themselves, I have realised I would also like to fix the long-standing issue of serving addresses with long histories, and IMO it would be better to only bump the protocol version once (and to minimise the number of db upgrades/times operators have to resync).

This PR is documentation only; I would like feedback before implementing it.

@romanz, @chris-belcher, @shesek, @cculianu, @ecdsa, @kyuupichan


Compared to the existing Electrum Protocol (1.4.2), the changes are:

  • Breaking change for the version negotiation: we now mandate that the server.version message must be the first message sent by the client.
    That is, version negotiation must happen before any other messages.
  • The status of a scripthash has been re-defined. The new definition is recursive and makes it possible to avoid redoing all hashing for most updates.
  • blockchain.scripthash.get_history changed significantly to allow pagination of long histories.
  • new method: blockchain.outpoint.subscribe to subscribe to a transaction outpoint, look up its spender tx, and get a notification when it gets spent.
  • new method: blockchain.outpoint.unsubscribe to unsubscribe from a TXO.
  • The previously required height argument for blockchain.transaction.get_merkle is now optional. (related: Compressed block headers & merkle proofs over low bandwidth communications #43 )
  • Optional mode argument added to blockchain.estimatefee. (see allow passing estimate_mode to estimatefee kyuupichan/electrumx#1001 )
  • blockchain.scripthash.get_mempool previously did not define an order for mempool transactions. We now mandate a specific ordering.
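To illustrate the recursive status idea from the list above: this is a sketch, not the PR's normative definition. The genesis value (32 zero bytes here) and the entry encoding are assumptions; the point is that appending one tx costs a single hash, and any cached intermediate status can be resumed from, instead of rehashing the entire history.

```python
from hashlib import sha256

def fold_entry(prev_status: bytes, entry: bytes) -> bytes:
    # Fold one history entry into the running status: O(1) per new tx.
    return sha256(prev_status + entry).digest()

def status_of_history(entries, start_status=b"\x00" * 32):
    # Can resume from any cached intermediate status instead of
    # recomputing over the whole history from scratch.
    status = start_status
    for entry in entries:
        status = fold_entry(status, entry)
    return status
```

Under the old (flat) definition, the whole concatenated history had to be rehashed on every update; with the recursive form the server only needs the previous status plus the new entries.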

Re pagination of long histories:

  • The definition of the status hash and the get_history methods changed to accommodate serving long histories.
  • The existing Electrum Protocol is extremely inefficient for addresses with long histories; to mitigate DoS, ElectrumX (e-x) by default refuses to serve the history of an address containing more than ~10k transactions.
  • Ideally, it should not be more expensive to serve a client that has a single address with 1 million txs than a client that has 10000 addresses each with 100 txs. Distribution of txs over addresses should not matter.
  • The main goal is to optimise for well-behaved non-malicious clients. The server can ban/disconnect clients that use too much resources. With this design, I believe, well-behaved clients under typical scenarios should use similar resources at runtime regardless of their tx distribution over addresses.
  • As for implementation, I think the server could store in the db the status hash of a scripthash every ~10k transactions.
    • note that if a scripthash has fewer txs than that, it uses no extra disk space
    • to compute the partial status up to any block height, at most 10k hashes would need to be calculated
    • the server could calculate and store these partial statuses when requests come in, so not during block processing
      • only store a status if it is reorg-safe, e.g. at least 200 blocks deep
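The checkpointing scheme sketched in the bullets above could look roughly like this. All names and the checkpoint store are hypothetical, and the reorg-safety condition (only persisting checkpoints buried at least ~200 blocks deep) is reduced to a comment:

```python
from hashlib import sha256

def compute_status(history, checkpoints, fold, interval=10_000):
    """Compute the status of `history` (an ordered list of entry bytes),
    reusing and lazily storing a checkpoint every `interval` txs.
    `checkpoints` maps a tx count -> status at that point;
    `fold(status, entry)` folds one entry into a running status."""
    start, status = 0, b"\x00" * 32
    # Resume from the deepest stored checkpoint that still applies.
    for idx in sorted(checkpoints):
        if idx <= len(history):
            start, status = idx, checkpoints[idx]
    for i in range(start, len(history)):
        status = fold(status, history[i])
        if (i + 1) % interval == 0:
            # A real server would only persist this once it is
            # reorg-safe, e.g. at least 200 blocks deep.
            checkpoints[i + 1] = status
    return status

fold = lambda s, e: sha256(s + e).digest()  # illustrative folding step
```

A scripthash with fewer txs than the interval stores no checkpoints at all, matching the "no extra disk space" property noted above.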

@chris-belcher

I read the definition of the new status hash and I think EPS should be able to implement it. The changes to blockchain.scripthash.get_history seem fine from the point of view of EPS.

@cculianu (Collaborator) commented Dec 21, 2020

@SomberNight Thank you so much for taking the time to think about this deeply, research it, and do PoC implementations to test feasibility. After having discussed this with you on IRC I am pretty optimistic about these changes, and the new status_hash mechanism, which solves a long-standing problem. Note that there is a small performance hit the first time you compute the status hash for a huge history of, say, 1 million items (in Python, 300 msec versus 2 seconds or something like that, on my machine).

But this cost is paid only once, since the 10k tx checkpointing thing will solve having to compute too long a set of hashes.

That cost can also be mitigated with additional design (such as pushing off the response to such a request to some low priority background task)... or it can be simply paid upfront since it will only be incurred once. And there aren't that many 500k or 1 million tx addresses (although there are more than one would expect).

Anyway, I'm a fan of these changes. I haven't yet tried a PoC implementation to see if there are any gotchas but reading the new spec it seems very sane and reasonable.

@cculianu (Collaborator) commented Dec 21, 2020

One additional comment and/or question: I know for BTC you guys definitely need blockchain.outpoint.subscribe -- but it may not be needed for BCH immediately.

On Fulcrum, one thought I had was that in the BCH case (but definitely not in the BTC case), I may opt to not offer that by default (or maybe do offer it by default but make it potentially opt-out).

This makes it less painful for server admins to update to the new version since the (very slow to build) spent_txo index won't need to be built for them in that case the first time they run the updated version.

Now, maybe I am overcomplicating things -- and maybe I should just make them eat the cost. But aside from them having to wait X hours for it to build that index, it may also be unpopular due to the additional space requirements.

So, my thinking is that maybe in the BCH case I will "extend" this protocol to add additional keys to server.features, so that a server can advertise if it lacks the index (if the key is missing, one can assume it has the index).

What are your thoughts on this? I know this is not your primary concern, and since this is mostly a BCH issue, I know you have plenty to do already -- but I was wondering if you had recommendations on what to call this key. I was thinking, as a BCH extension, of having an optional additional key in the server.features map: "optional_flags" or something, with values such as "no_spent_txo" for a server where that index is missing...

@shesek commented Dec 21, 2020

blockchain.outpoint.subscribe introduces some challenges for personal servers (eps/bwt):

  • Bitcoin Core does not allow subscribing to outpoints, only addresses. This means that watching an output using the Bitcoin Core wallet functionality requires importing its associated address.

  • This will cause Bitcoin Core to import everything related to that address, which isn't ideal. There's also currently no way to remove an imported address. This could be a DoS vector if the server is exposed to the world, but should be workable as long as the server is kept private.

  • If blockchain.outpoint.subscribe is issued before the outpoint is funded, the address is still unknown and therefore cannot be imported. The only way to watch for the funding transaction is to occasionally poll gettxout until the output appears, but there's a chance that you'll miss it if it gets spent quickly enough.

    (edited following @chris-belcher's comment that this could be done with gettxout instead of getrawtransaction)

    The only way to watch for the funding transaction is to occasionally poll the mempool with getrawtransaction <txid> until a transaction appears (or alternatively, via ZMQ push notifications, which would complicate the setup). This can work OK if you catch the tx while it's still in the mempool. If you miss it (or if it goes straight into a block without going through the mempool), the client will also have to occasionally check recent blocks with getrawtransaction <txid> <blockhash>. (I'm assuming pruning and no txindex, which means that you can only search within specific blocks.)

  • If blockchain.outpoint.subscribe is issued after the outpoint was already spent, there isn't really an easy way to tell which transaction spent it, or to distinguish whether it's spent or non-existent. One would have to check every tx in every block to find out (which also wouldn't work if it got pruned in the meantime).


The status of a scripthash has been re-defined. The new definition is recursive and makes it possible not to redo all hashing for most updates.

Something very similar could be achieved with the current protocol using SHA256 midstate. But making it recursive would make things easier on the implementation's side.

@chris-belcher commented Dec 21, 2020

My thoughts on your post @shesek

  • EPS/BWT servers can already never be safely exposed to the public, because otherwise an attacker can figure out which addresses are being watched by the server (by slowly querying every address on the blockchain and seeing which ones the server sends a reply for). Also, importing very many addresses isn't much of a DoS attack; I've experimented with importing one million addresses into Core and it works fine, except that the wallet file becomes very large.
  • Querying the UTXO set with gettxout is much better than getrawtransaction <txid>. That continues to work even if the tx gets mined straight away. It also solves the shutting down problem because gettxout will still return the correct result after a shutdown.
  • Regarding blockchain.outpoint.subscribe after the outpoint is spent, when EPS/BWT servers receive that message they should immediately obtain the corresponding address using gettxout and import it to the Core wallet. Then if the outpoint becomes spent the Core wallet will store the spending transaction too. From my understanding of Lightning and how Electrum is likely to work, it also seems pretty unlikely that Electrum will send blockchain.outpoint.subscribe too late after the LN outpoint is spent. Electrum should request blockchain.outpoint.subscribe immediately before or after it broadcasts the lightning funding transaction.

Edit: Another thought on the new status hash, I think EPS/BWT servers won't even take advantage of the possibility of caching hashes, but just recalculate them from the start each time. This should be fine because the client would just be attacking themselves if they DoS their own personal server. Plus it helps keep the server stateless.

@shesek commented Dec 21, 2020

EPS/BWT servers can already never be safely exposed to the public

Agreed, of course. But this still adds another vector of attack for users that have an insecure setup (I suspect there are quite a few of these, unfortunately) that should be taken into account.

Querying the UTXO set with gettxout is much better than getrawtransaction <txid>.

Oh, yes, nice! That is much better. The electrum server will still have to occasionally poll for this, but it doesn't require checking each block separately.

But what happens if the output gets funded then spent immediately after, before the electrum server had a chance to poll gettxout? This could happen if the funding and spending transactions show up in the same block, but also for mempool transactions if the spend happens quickly enough.

From my understanding of Lightning and how Electrum is likely to work, it also seems pretty unlikely that Electrum...

I would consider that this RPC command could be used in the future for other things too, either by Electrum itself or by third party software that leverages Electrum servers.

But I agree that if it's expected that the Electrum Lightning implementation wouldn't normally subscribe to spent outpoints, then it could be good enough for now.

@ecdsa (Member) commented Dec 21, 2020

But I agree that if it's expected that the Electrum Lightning implementation wouldn't normally subscribe to spent outpoints, then it could be good enough for now.

You cannot expect that. Electrum needs to know if an outpoint has been spent, so the server needs to distinguish between 3 different cases: utxo does not exist, utxo exists and is unspent, and utxo was spent.
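As a sketch of how a personal server backed by Bitcoin Core might map node data onto the three cases ecdsa lists: the function name and inputs are invented for illustration, and (as discussed above) under pruning the spent-vs-nonexistent distinction may be unknowable for arbitrary outpoints.

```python
def classify_outpoint(gettxout_result, funding_tx_known, spender_txid=None):
    """Map node data to the three states a server must distinguish.
    gettxout_result: Core's `gettxout` output (None if not in the UTXO set).
    funding_tx_known: whether the funding tx was found at all.
    spender_txid: the spending txid, if the wallet/index knows it."""
    if gettxout_result is not None:
        return "unspent"          # in the UTXO set
    if spender_txid is not None:
        return "spent"            # spender known (e.g. via wallet/index)
    if funding_tx_known:
        return "spent"            # funded, but gone from the UTXO set
    # Never seen funded; under pruning this may actually be
    # "spent long ago", which is the hard case raised above.
    return "nonexistent"
```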

@chris-belcher commented Dec 22, 2020

Here are two possible ways to solve the edge case of an outpoint being immediately spent after it is created:

First, the server could check getrawtransaction <txid> or getmempoolentry <txid> in the case that gettxout returns nothing. Those two RPC calls will still find the transaction even if the outpoint was immediately spent, and from there the server can import its address into Core's wallet. The server can also obtain the lightning channel address from the method blockchain.transaction.broadcast, because the client will broadcast the lightning funding transaction via the server (unless the other peer broadcasts the funding transaction, which I think happens with open_channel --push_amount). This leaves another rare edge case: if the transaction is broadcast by the other peer instead of our client, and it is immediately spent in the mempool before the server has a chance to see it, and the node is running blocksonly, and the user is pruning, and the user shuts down their server and node before the transaction is confirmed, and then starts them up again after enough time that the node prunes the relevant block, then the server won't be able to find the funding transaction.

Another way is to add a method to this protocol which does nothing but notify the server about what address will be later requested in blockchain.outpoint.subscribe. Call it something like blockchain.outpoint.address_notify and the client sends it immediately before subscribing to the outpoint. EPS/BWT servers will import that address into Core which will be able to keep track and know if the outpoint was created and then immediately spent. I believe that would completely solve the edge-case.

@shesek commented Dec 22, 2020

the server needs to distinguish between 3 different cases: utxo does not exist, utxo exists and is unspent, and utxo was spent.

Distinguishing between spent txos and non-existent txos in a generic manner that works for any txo is inherently incompatible with pruning.

It seems to me that this could only work with pruning if we loosen the requirements by making some assumptions about electrum's specific usage patterns, and tailoring the electrum server-side solution to work specifically for this.

The server can obtain the lightning channel address from the method blockchain.transaction.broadcast

How could it tell that it's a lightning transaction? Wouldn't it have to import all p2wsh addresses to be sure?

and the node is running blocksonly

blocksonly isn't necessarily a condition, this could happen if the funding and spending transaction appear for the first time in a block, or even if they appear briefly in the mempool but get mined before polling manages to catch it.

the user is pruning and the user shuts down their server and node before the transaction is confirmed, and then starts them up again after enough time that the user's node prunes the relevant block

For the pruning / no txindex case, is this assuming that the electrum server is also checking individual blocks with getrawtransaction <txid> <blockhash>?

Another way is to add a method to this protocol which does nothing but notify the server about what address will be later requested

This would indeed make things easier. The server will simply have to import the addresses, and all the information for the relevant txos will be available in the Bitcoin Core wallet, without any specialized logic for tracking txos.

If we can guarantee that the address notification is always sent before the funding transaction confirms, then this becomes trivial. But even if not, because the address is known, the server could more easily issue a rescan to look for recent funding/spending transactions (say, in the last 144 blocks or so?), without having to check individual blocks manually.

@SomberNight (Member, Author)

Another way is to add a method to this protocol which does nothing but notify the server about what address will be later requested in blockchain.outpoint.subscribe. Call it something like blockchain.outpoint.address_notify and the client sends it immediately before subscribing to the outpoint. EPS/BWT servers will import that address into Core which will be able to keep track and know if the outpoint was created and then immediately spent. I believe that would completely solve the edge-case.

This would indeed make things easier. The server will simply have to import the addresses, and all the information for the relevant txos will be available in the Bitcoin Core wallet, without any specialized logic for tracking txos.

I quite like that the protocol no longer uses addresses (only script hashes). I refuse to reintroduce them! :D
Anyway, it seems pointless to send an extra request.
If you think it would be helpful, maybe we could add an optional arg spk_hint to blockchain.outpoint.subscribe, which should be set to the scriptPubKey corresponding to the outpoint.
Electrum could then always set this.
e-x could just ignore the field completely.
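To make the proposal concrete, a subscribe request carrying the suggested optional spk_hint might look like the following on the wire. The parameter names and shape are taken from this discussion, not a finalized spec, and the txid/scriptPubKey values are placeholders:

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "blockchain.outpoint.subscribe",
    "params": {
        "tx_hash": "00" * 32,        # placeholder funding txid (hex)
        "txout_idx": 1,
        # Optional hint so personal servers (EPS/BWT) can import the
        # address; index-backed servers (e-x) may ignore it entirely.
        "spk_hint": "0020" + "ab" * 32,  # placeholder p2wsh scriptPubKey
    },
}
wire = json.dumps(request) + "\n"  # Electrum transport is newline-delimited JSON
```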

@chris-belcher

How could it tell that it's a lightning transaction? Wouldn't it have to import all p2wsh addresses to be sure?

Yes, or rather than importing straight away, it could save every p2wsh address in a 'txid' -> 'address' map or dict.

If you think it would be helpful, maybe we could add an optional arg spk_hint to blockchain.outpoint.subscribe, which should be set to the scriptPubKey corresponding to the outpoint.

Yes(!) This is a much better idea than a separate protocol method. That should totally solve the edge case.

docs/protocol-basics.rst (outdated inline review threads, resolved)
If the history contains ``n`` txs, the status is ``status_n``. The ``tx_n``
series consists of, first the confirmed txs touching the script hash,
followed by the mempool txs touching it; formatted as described above, as
bytearrays of length 40 or 48.
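The 40/48-byte entries quoted above could be built like this. The field widths and endianness here are assumptions for illustration (a 32-byte txid plus an 8-byte height, with mempool entries additionally committing to an 8-byte fee); the normative encoding is in the PR text:

```python
import struct

def confirmed_entry(tx_hash: bytes, height: int) -> bytes:
    # 32-byte txid + height -> 40 bytes (integer encoding assumed LE i64).
    assert len(tx_hash) == 32
    return tx_hash + struct.pack("<q", height)

def mempool_entry(tx_hash: bytes, height: int, fee: int) -> bytes:
    # Mempool entries also commit to the fee -> 48 bytes.
    return confirmed_entry(tx_hash, height) + struct.pack("<q", fee)
```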
Contributor:

I see long-term problems mixing mempool and confirmed history here; I would recommend moving to separate tracking of mempool and confirmed histories. Of course that is more complexity in the short-term.

Member (Author):

Do you mean specifically for the status, for get_history, or in general?

Contributor:

In general

docs/protocol-methods.rst (outdated inline review thread, resolved)
**Signature**

.. function:: blockchain.scripthash.get_history(scripthash)
.. function:: blockchain.scripthash.get_history(scripthash, from_height=0, to_height=-1,
client_statushash=None, client_height=None)
Contributor:

It seems wrong to be putting the burden of figuring out where the client is (presumably the point of the client_statushash and client_height) on the server. I would have preferred to see something where client state doesn't enter the protocol.

Member (Author):

Right. This is the best I could come up with though. There are too many constraints when trying to support long histories...
Still, I think it is possible to implement this server-side in a way where it is relatively cheap to figure out the state of the client using these fields for well-behaving clients. By properly crediting resource usage cost to sessions, malicious or buggy clients can be banned.
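A well-behaved stateful client's happy path with the paginated signature might look like the sketch below. The transport and all response field names (`history`, `more`, `next_from_height`) are illustrative assumptions, not the PR's wire format:

```python
def sync_history(call, scripthash, local_height, local_statushash):
    """Fetch only the part of the history the client doesn't have yet.
    `call(method, params) -> result` is whatever JSON-RPC transport the
    client uses; the response field names here are illustrative."""
    new_txs = []
    from_height = local_height + 1
    while True:
        res = call("blockchain.scripthash.get_history", {
            "scripthash": scripthash,
            "from_height": from_height,
            # Lets the server cheaply verify where the client is:
            "client_statushash": local_statushash,
            "client_height": local_height,
        })
        new_txs.extend(res["history"])
        if not res.get("more"):           # illustrative pagination flag
            return new_txs
        from_height = res["next_from_height"]
```

If the server cannot reconcile the client's claimed state, the client would fall back to a full (paginated) re-download.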

docs/protocol-methods.rst (outdated inline review thread, resolved)
@sunnyking (Contributor) commented Feb 15, 2021

Regarding a new get_history: from a wallet's point of view, what we really need is to get the most recent history. The Gemmer wallet already runs into 'history too large' errors on multiple occasions with normal use. But from Gemmer's point of view, it only needs to fetch the most recent 3 transactions (including unconfirmed) from the electrumx server, regardless of how many historic transactions there are for a given address. So if the API allowed a parameter for how many recent transactions the client wants, that would actually reduce a lot of server burden for long histories. In fact, we would have no problem with the server hard-limiting the parameter to a max of e.g. 1k for performance considerations. So basically what we are suggesting is something like

get_history(scripthash, limit=100, last_tx=None) # returns tx summary from most recent

(last_tx maybe for possible pagination purpose, but for our use case we really don't need it)

Then for typical wallet and explorer, it would not burden the server on long history addresses. For example, Gemmer would do

get_history(scripthash, limit=3)

which could be very efficient on the server for long-history addresses, compared to the current situation where they run into 'history too large'.

@SomberNight (Member, Author)

from wallet point of view what we really need is to get the most recent history
it only needs to fetch the most recent 3 transactions(including unconfirmed) from electrumx server, regardless of how many historic transactions there are for a given address

Maybe your altcoin needs that but it is useless for Bitcoin.
At the very least, for Bitcoin, we would need to know about all UTXOs, and they can be arbitrarily old.

There are many considerations to keep in mind:

  • a public server might be malicious, obviously
  • the client can SPV confirmed txs it knows about but is vulnerable to lying by omission
  • if a client is lied to by omission, it should recover when changing servers

One way to achieve this is what the pre-1.5 protocol does: define the status hash in a way that the client will notice it is missing txs. This proposal does the same. However, the client will then necessarily have had to download all txs to calculate the status hash itself.

So I see no point in designing an API that allows fetching just the most recent txs. If you want to know whether there are txs you are missing, that's already how it works: you compare the status hashes. Then, to be able to recalculate the status hash yourself, trustlessly, you need to obtain all missing txs.

@cculianu (Collaborator) commented Feb 15, 2021

So I see no point in designing an API that allows fetching just the most recent txs. If you want to know whether there are txs you are missing, that's already how it works: you compare the status hashes. Then, to be able to recalculate the status hash yourself, trustlessly, you need to obtain all missing txs.

One can imagine an optimization in the "happy" case where you are able to download only the txs you believe you are missing. You don't actually need the entire history to calculate the status hash in the case where everything lines up... only as a fallback, if you cannot reconcile, would you go ahead and try the full download; and then, if that fails, decide which server you believe and try again.

Most of the time nobody is lying to anybody -- and being able to detect omission is already captured by the status hash. The "download the last few txs" approach would probably be enough 99% of the time... and may save some load...

Although, I believe that with the changes in 1.5 it should already be possible to retrieve a partial history towards the "end"... right?

@SomberNight (Member, Author)

Although, I believe that with the changes in 1.5 it should already be possible to retrieve a partial history towards the "end"... right?

Yes. You can call blockchain.scripthash.get_history with a recent from_height param.

So I see no point in designing an API that allows fetching just the most recent txs

One can imagine an optimization in the "happy" case where you just are able to download only the tx's you believe you are missing.

The issue is that you have no idea how many txs you are missing: the status either matches or differs.
If you wanted to, for the happy path, with the current proposal, you could set from_height to the last block you believe you have covered.
An API that allowed "get most recent 100 txs" seems less useful.

@cculianu (Collaborator)

If you wanted to, for the happy path, with the current proposal, you could set from_height to the last block you believe you have covered.

Yeah, just start with what you last saw, and if that fails to reconcile, get a full history.

Anyway, I was merely pointing out that it's currently possible to do this with the 1.5 spec, and that it is a useful optimization for wallet implementors to consider in their interaction with the server...

@sunnyking (Contributor) commented Feb 16, 2021

The issue is that you have no idea how many txs you are missing: the status either matches or differs.
If you wanted to, for the happy path, with the current proposal, you could set from_height to the last block you believe you have covered.
An API that allowed "get most recent 100 txs" seems less useful.

In most of our usage scenarios, including wallets and explorers, servers are generally trusted, and applications don't try to remember the transaction history they have downloaded from the server. I see where the design is coming from; however, it seems to me that the focus on SPV has really made an impact on its usability in general.

The problem with the from_height parameter is that the server expects the client to be stateful and store past transaction history, possibly even the full transaction history of the address. Otherwise it's quite tricky for the client to know which height it should supply to the get_history API. It appears to me that with this proposed API, large-history addresses would continue to plague non-SPV uses of ElectrumX.

@SomberNight (Member, Author) commented Feb 16, 2021

In most of our usage scenarios, including wallets and explorers, servers are generally trusted

applications don't try to remember the transaction history they have downloaded from the server

I see. Indeed if you wanted to create a block explorer (using a trusted Electrum server as backend) that can efficiently show the (e.g.) 100 most recent txs for an address that would need a different get_history API. (and as a continuation of the idea, I guess you might want the 100 txs before that, etc) Scripthash status is not even needed for this "trusted block explorer" use case.

Indeed the use case we had in mind here is different. get_history and the scripthash status hash have been redesigned together; specifically so that a well-behaving stateful client can use get_history in an efficient way and recalculate the status hash itself. I guess this could be called the "stateful SPV client" use case.

I am not sure if the currently proposed get_history could be changed in a way that made it useful for both.
Having implemented get_history I have the impression it is already of significant complexity. I think it might even be reasonable to add a separate history RPC that handles the block explorer use case. (which then could be done later, in a future PR)

I suppose one thing the proposed get_history RPC is missing for the block explorer use case is (1) to allow requesting txs in reverse order (most recent first); and another is (2) that you would want to paginate based on number of txs and not blocks...

@SomberNight (Member, Author)

Ok, sure, we can call it 2.0. We can try to use semver for the protocol going forward.

@SomberNight SomberNight changed the title docs: describe protocol version 1.5 docs: describe protocol version 2.0 Dec 19, 2022
@cculianu (Collaborator)

Ok, sure, we can call it 2.0. We can try to use semver for the protocol going forward.

Wow thank you so much man! I really appreciate it. This means I can rename my BCH-specific extensions that I'm about to push out as 1.5.0 :)

@cculianu (Collaborator) commented Dec 19, 2022

IMO non-upstream extensions have no reason to be versioned. Those should go into the features field.

Yeah, except we do some version negotiation to change the "personality" of the server depending on what the client is (in order to behave as older clients expect, without limiting things for newer clients).

@Kixunil commented Dec 19, 2022

I'm not sure if I understand your goal but maybe protocol extensions themselves should be versioned?

@cculianu (Collaborator)

I'm not sure if I understand your goal but maybe protocol extensions themselves should be versioned?

Correct. That's what version negotiation is for ....

@Kixunil commented Dec 19, 2022

Then I suggest you name the extensions my-extension-1.2. Is there a value in standardizing the format?

@cculianu (Collaborator)

Dude don't worry about it seriously. This is handled correctly and doesn't need to be discussed here. :)

@RCasatta (Contributor) commented Feb 6, 2023

For each mempool tx, form a bytearray: tx_hash+height+fee, where:

Why is the fee used here?

@SomberNight (Member, Author)

For each mempool tx, form a bytearray: tx_hash+height+fee, where:

Why is the fee used here?

Please ask such specific questions in-line (comment on a specific line, instead of in the main thread), so that discussions can be tracked better.

from it, and those funding it), both confirmed and unconfirmed (in mempool).

2. Order confirmed transactions by increasing height (and position in the
block if there are more than one in a block).
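The ordering rule quoted above amounts to a sort key like the following sketch. Field names are illustrative, and the mempool-side ordering (mandated separately in the new spec) is reduced to a comment:

```python
def history_sort_key(tx):
    """Order for status calculation: confirmed txs by increasing height,
    then by position in the block; mempool txs (height <= 0 in the
    Electrum convention) sort after all confirmed ones."""
    if tx["height"] > 0:
        return (0, tx["height"], tx["pos"])
    # Mempool txs: their relative ordering follows the separately
    # mandated mempool ordering, not modeled here.
    return (1, 0, 0)
```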


The script hash status is dependent on the transaction ordering.
If I order transactions incorrectly, I get the wrong script status hash, and I'll do some extra requests, which increases the server load.
This is more likely to happen if I have multiple transactions involving a script in the same block.

How should the client know the (relative) position of a transaction in the block?
Should the client infer it from the order in the returned list by previous calls to get_history?

Does it make sense to use the same ordering that is used for mempool transactions for confirmed transactions too?

AFAICT it would be simpler for client libraries to implement correctly, and indirectly it should help the server too.

output of :func:`blockchain.scripthash.get_mempool` appended to the
list. Each confirmed transaction is a dictionary with the following
keys:
The output is a dictionary, always containing all three of the following items:
Contributor:

Has it been considered to also return the status hash computed by the server?

@@ -499,6 +581,172 @@ Unsubscribe from a script hash, preventing future notifications if its :ref:`sta
Note that :const:`False` might be returned even for something subscribed to earlier,
because the server can drop subscriptions in rare circumstances.

blockchain.outpoint.subscribe


If I do not want to subscribe but only get the status once, I would have to call blockchain.outpoint.subscribe and then blockchain.outpoint.unsubscribe. Are there any plans on adding a blockchain.outpoint.status (or a similar name) that would return the same as blockchain.outpoint.subscribe, but without the notifications and the subscription?

@Kixunil commented Mar 28, 2023

One thing I found kind of annoying when implementing an electrum client is that there's no way to know which request a message is for without accessing internal state, which makes deserialization annoying. It doesn't seem to be solvable, but just in case: is there any way to modify the protocol to allow it?

@torkelrogstad

One thing I found kind of annoying when implementing an electrum client is that there's no way to know which request a message is for without accessing internal state, which makes deserialization annoying. It doesn't seem to be solvable, but just in case: is there any way to modify the protocol to allow it?

If the name of the type of the response were included somehow, you could know how to deserialize the message without looking up internal state.


A suggestion/wish-list for the new protocol version: some way of detecting if the address is unused or not. We all want to practice good address hygiene and avoid address reuse. A protocol method for checking if an address is unused would help with this.

My use case: I collect XPUBs from users, allowing them to make withdrawals from my service to new addresses each time they withdraw. These XPUBs could also receive funds from other services, so I need to check if the next address in the derivation path is unused, before sending to it.

I can do this by looking up the history. However, this is inefficient if the address has received lots of transactions. Checking if the address is unused is really just a special case of looking up the history, where we short-circuit on finding the first history element and return early.

I originally brought this up in romanz/electrs#920
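For what it's worth, under the existing protocol an "is this scripthash used?" check can already be expressed via the status: a scripthash with no history has a null status. A sketch follows (the `call` transport is a placeholder); note this does not address the efficiency concern above, since for a heavily used scripthash the server still computes the full status, but for unused addresses it is trivially cheap:

```python
def is_scripthash_unused(call, scripthash):
    # In the Electrum protocol, a scripthash with no history at all has
    # a null status, so a status query answers "ever used?" directly.
    status = call("blockchain.scripthash.subscribe", [scripthash])
    return status is None
```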

@SomberNight SomberNight mentioned this pull request Jan 17, 2024