Possible loss? of funds bug when using htlc_accepted hook #6045
Is the channel still being listed? Also, if you didn't settle the HTLC using the preimage before the HTLC timeout hit, then the HTLC value still belongs to the sender, not you, so they'll sweep it eventually.
No, they are no longer in listpeers, and since these are rebalances I'm the sender, right?
To be clear: I looked for this because I'm missing around 600k sats, and this fits too perfectly for it to be a coincidence.
I managed to reproduce this on my testnet node using the htlc_accepted hook version of the plugin, which I modified to also not delete pays immediately. I'll post this as an example of one of the force closed channels:
and the force close tx:
Here are the logs, filtered by the pubkey of the force closed channel (is that enough?): With my layman knowledge of reading the logs, it looks like the HTLC got fully accepted, but then lightningd decided to force close the channel anyway because it hit a deadline (which it hadn't actually). Also: the same might have happened with the invoice version of my plugin (I have to wait for the timeout to see), so maybe this is not connected to the hook, but I can't say for sure yet.
On testnet I was using 23.02, and on mainnet the hook-related force closes were on 22.11 and the invoice one on 23.02 as well. On testnet I was using much smaller amounts, and they already got swept to my wallet, including the HTLCs. So maybe this got fixed in 23.02. But these force closes shouldn't have happened in the first place.
The HTLC on the invoice-related force close is also not getting swept by my node: https://mempool.space/address/bc1qpc5uxfaykrpa7dvx7kk99md0yv7jgefj9lzpeqe4pl7p07wfg0pq7k80lq
If I understand this correctly (lightning/lightningd/peer_htlcs.c line 2646 in bfc6fed),
it makes sense why a force close was triggered, since my cltv-delta was 140 and the last delay on the route was 50. So it was probably always a race between accepting the HTLC and the deadline check. What I don't understand yet is why on mainnet they did not get swept, but on testnet they did. Also, why do getroute and pay get away with routes that have 9 as the default last delay?
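To illustrate the race, the deadline logic can be sketched roughly like this. This is a simplified sketch, not CLN's actual code: the function, the constant name, and the margin value of 14 blocks are illustrative assumptions. The point is only that a small final-hop delay leaves very few blocks between "HTLC accepted" and "deadline reached":

```rust
// Simplified sketch of an incoming-HTLC deadline check (NOT CLN's actual code).
// A node force-closes if the current block height gets too close to the HTLC's
// absolute CLTV expiry, so it can still claim the HTLC on chain.

/// Hypothetical safety margin in blocks before the absolute expiry (assumption).
const DEADLINE_MARGIN: u32 = 14;

/// Returns true if the HTLC must be resolved now or the channel force-closed.
fn past_deadline(block_height: u32, cltv_expiry: u32) -> bool {
    block_height + DEADLINE_MARGIN >= cltv_expiry
}

fn main() {
    let height: u32 = 780_000;
    // With only 10 blocks left before expiry, the margin is already breached:
    assert!(past_deadline(height, height + 10));
    // With 100 blocks left there is plenty of room:
    assert!(!past_deadline(height, height + 100));
    println!("deadline sketch ok");
}
```

Under a sketch like this, a final-hop delay of 20 blocks gives the plugin only a handful of blocks to resolve the HTLC before the check fires, which matches the "race" described above.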
I'm currently missing about 600k sats from listfunds. Please look at these force close txs:
https://mempool.space/address/bc1q78n64w0q5ec6m86j7m9fupz0fqjt03kzlj7zsjgyuezfp00tmres9fp720
https://mempool.space/address/bc1qts3g5rfsf0ance3tx072vm0qwe5q2uf6gvkeafu9kzw0hue9yh2qx0u972
https://mempool.space/address/bc1qacsdf9npstwuaz3qgqvlx7trwv9ek8dndgslj8r30w6pyed49zmsh2lq4z
The stuck HTLCs with around 200k sats are still on that script address and are missing from my listfunds despite the timelock being over. Any help to recover these would be highly appreciated!
This is what happened on all 3 of these:
I wrote a plugin for rebalancing and used my htlc_accepted hook instead of invoices. I craft the routes myself, and on the final hop I set a 20-block delay. This worked fine for a while, and I probably had 1000+ successful rebalances without any problems. But then these force closes happened. When I looked in the logs, this is what I saw in the normal case where my plugin resolved a rebalance (from memory):
```
lightningd: calling plugin hook
plugin: log line with payment_hash and preimage
plugin returned from hook with result "resolved"
```
In the force close cases I saw this, all within 1s:

```
lightningd: calling plugin hook
plugin: log line with payment_hash and preimage
lightningd: htlc about to expire, force closing (???)
```
So right before the
`Ok(json!({"result":"resolve","payment_key":pi}))`
return of my plugin, CLN instead force closed the channel, and has now apparently lost track (?) of these HTLCs.
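For anyone hitting the same race, one defensive shape for the hook handler is to check how many blocks remain before the HTLC's expiry and fail the HTLC (so the sender can retry) rather than resolve it too close to the deadline. This is a hedged sketch only: `handle_htlc`, `MIN_BLOCKS_LEFT`, and the failure code are illustrative assumptions, not the real cln-plugin API, and the JSON is built by hand to keep the example dependency-free:

```rust
// Hedged sketch of an htlc_accepted-style decision: resolve only if enough
// blocks remain before the CLTV expiry, otherwise fail the HTLC so the
// sender can retry with a longer final delay. Names are illustrative.

/// Minimum blocks we want left before expiry to safely resolve (assumption).
const MIN_BLOCKS_LEFT: u32 = 18;

/// Returns the hook response as a JSON string: resolve with the preimage if
/// enough time remains, otherwise fail.
fn handle_htlc(block_height: u32, cltv_expiry: u32, preimage_hex: &str) -> String {
    if cltv_expiry.saturating_sub(block_height) > MIN_BLOCKS_LEFT {
        format!("{{\"result\":\"resolve\",\"payment_key\":\"{}\"}}", preimage_hex)
    } else {
        // "2002" stands in for an onion failure code; illustrative only.
        String::from("{\"result\":\"fail\",\"failure_message\":\"2002\"}")
    }
}

fn main() {
    // 50 blocks left: safe to resolve.
    println!("{}", handle_htlc(780_000, 780_050, "00ff"));
    // Only 5 blocks left: fail instead of racing the deadline check.
    println!("{}", handle_htlc(780_000, 780_005, "00ff"));
}
```

With a check like this, a 20-block final delay would frequently fall below the threshold, which is consistent with the advice above to use a larger last-hop delay than the node's deadline margin.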