
Tor pre0 #1

Closed

wants to merge 3 commits into from

Conversation

@tadeusr commented Apr 24, 2018

Hi @rustyrussell, these minor changes should make your Tor branch a fully working c-lightning Tor node.
Tested so far: connect / fund channel / pay / close / reopen.
But I guess the Travis CI 👧 will complain about the new ccan functions.

Signed-off-by: tadeusr <tadeus.rodde@protonmail.ch>
Signed-off-by: tadeusr <tadeus.rodde@protonmail.ch>
Signed-off-by: tadeusr <tadeus.rodde@protonmail.ch>
rustyrussell added a commit that referenced this pull request May 7, 2018
Our closingd doesn't handle it:

lightningd(2968): 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1:
 Peer permanent failure in CLOSINGD_SIGEXCHANGE: lightning_closingd: sent ERROR Expected closing_signed:
 0103ff54517293892ec3f214f2343c54cbfbf24aa6ffb8d5585d3bc1b543eae0a272000067000001000146390e0c043c777226927eacd2186a03f064e4bdc30f891cb6e4990af49967d34b338755e99d728987e3d49227815e17f3ab40092434a59e33548e870071176d26d19a4e4d8f7715c13ac2d6bf3238608a1ccf9afd91f774d84d170d9edddebf7460c54d49bd6cd81410bc3eeeba2b7278b1b5f7e748d77d793f31086847d582

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@tadeusr (Author) commented May 7, 2018

Closing this now since ElementsProject#1471 is on track. Thanks! :-)

@tadeusr tadeusr closed this May 7, 2018
rustyrussell added a commit that referenced this pull request May 8, 2018
rustyrussell added a commit that referenced this pull request Jun 26, 2018
==1224== Uninitialised byte(s) found during client check request
==1224==    at 0x152CAD: memcheck_ (mem.h:247)
==1224==    by 0x152D18: towire (towire.c:17)
==1224==    by 0x152DA1: towire_u16 (towire.c:28)
==1224==    by 0x142189: towire_failed_htlc (htlc_wire.c:29)
==1224==    by 0x16343F: towire_channel_init (gen_channel_wire.c:596)
==1224==    by 0x115C2C: peer_start_channeld (channel_control.c:249)
==1224==    by 0x131701: peer_connected (peer_control.c:503)
==1224==    by 0x117820: gossip_msg (gossip_control.c:182)
==1224==    by 0x139D97: sd_msg_read (subd.c:500)
==1224==    by 0x139676: read_fds (subd.c:327)
==1224==    by 0x179D52: next_plan (io.c:59)
==1224==    by 0x17A84F: do_plan (io.c:387)
==1224==  Address 0x1ffefffabe is on thread 1's stack
==1224==  in frame #2, created by towire_u16 (towire.c:26)

Followed by:

2018-06-18T21:53:04.129Z lightningd(1224): 03933884aaf1d6b108397e5efe5c86bcf2d8ca8d2f700eda99db9214fc2712b134 chan #1: Peer permanent failure in CHANNELD_NORMAL: lightning_channeld: received ERROR channel d0101486543e1a8b6871556a4fe1fba4ad4d83ce7f6f92919fd17bd1545d2fd5: UpdateFailMalformedHtlc message doesn't have BADONION bit set

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
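
The `memcheck_ (mem.h:247)` frame in the trace above is the serializer asking valgrind to verify every byte before it goes over the wire. A minimal, self-contained sketch of that trick (names are stand-ins, not the real towire code; the valgrind macro is guarded so it builds without the headers):

```c
#include <stdint.h>
#include <string.h>

#ifdef HAVE_VALGRIND_MEMCHECK_H
#include <valgrind/memcheck.h>
#else
#define VALGRIND_CHECK_MEM_IS_DEFINED(p, len) ((void)0)
#endif

/* Serialize raw bytes, but first ask valgrind (when running under it)
 * to flag any uninitialized bytes, as in the trace above. */
static void towire_sketch(uint8_t **pptr, const void *src, size_t len)
{
        VALGRIND_CHECK_MEM_IS_DEFINED(src, len);
        memcpy(*pptr, src, len);
        *pptr += len;
}

int main(void)
{
        uint16_t failcode;      /* deliberately left uninitialized */
        uint8_t buf[2], *p = buf;

        towire_sketch(&p, &failcode, sizeof(failcode)); /* valgrind flags this */
        return 0;
}
```
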
rustyrussell added a commit that referenced this pull request Jul 9, 2018
rustyrussell added a commit that referenced this pull request Jul 26, 2018
We shouldn't unconditionally free msg in enqueue_peer_msg:

DEBUG: lightning_channeld-0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1: dev_disconnect: @WIRE_REVOKE_AND_ACK
BROKEN: lightning_channeld-0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1: FATAL SIGNAL 6 (version 8aae6a8)
...
BROKEN: lightning_channeld-0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1: backtrace: ccan/ccan/tal/tal.c:98 (call_error) 0x80855d1
BROKEN: lightning_channeld-0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1: backtrace: ccan/ccan/tal/tal.c:170 (check_bounds) 0x8085730
BROKEN: lightning_channeld-0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1: backtrace: ccan/ccan/tal/tal.c:181 (to_tal_hdr) 0x8085791
BROKEN: lightning_channeld-0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1: backtrace: ccan/ccan/tal/tal.c:504 (tal_free) 0x8085fe6
BROKEN: lightning_channeld-0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518 chan #1: backtrace: channeld/channel.c:2651 (main) 0x8050639

For additional safety, handle each msg allocation separately, rather than
freeing at the bottom of a large branch.
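
A stand-alone sketch of the bug class (plain malloc/free rather than ccan/tal, and a hypothetical `enqueue` that takes ownership): one branch hands the message off, so a shared free at the bottom becomes a double free.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for enqueue_peer_msg: takes ownership of msg. */
static void enqueue(char *msg)
{
        free(msg);
}

static void handle(int handled)
{
        char *msg = malloc(64);
        strcpy(msg, "revoke_and_ack");

        if (handled) {
                enqueue(msg);   /* ownership transferred: msg is gone */
                return;         /* the fix: don't fall through... */
        }
        free(msg);              /* ...to a shared free at the bottom */
}

int main(void)
{
        handle(1);
        handle(0);
        return 0;
}
```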

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Jul 26, 2018
rustyrussell added a commit that referenced this pull request Jul 27, 2018
rustyrussell added a commit that referenced this pull request Sep 11, 2018
We free the peers explicitly, but we don't free the unconfirmed channel:
the result is that it gets freed twice.

The workaround is to free the unconfirmed channel explicitly, but really
the peer should be tal_link'ed as it's basically a reference counted
structure.
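
A minimal sketch of that reference-counting idea in plain C (not the actual ccan/tal or tal_link API): the peer is freed exactly once, by whichever holder drops the last reference.

```c
#include <stdlib.h>

struct peer {
        int refcount;
        /* ... channel state ... */
};

static struct peer *peer_ref(struct peer *p)
{
        p->refcount++;
        return p;
}

static void peer_unref(struct peer *p)
{
        if (--p->refcount == 0)
                free(p);        /* freed once, by the last holder */
}

int main(void)
{
        struct peer *p = calloc(1, sizeof(*p));
        p->refcount = 1;                /* held by the peer list */
        struct peer *uc = peer_ref(p);  /* held by the unconfirmed channel */

        peer_unref(uc);                 /* unconfirmed channel torn down */
        peer_unref(p);                  /* peer list drops it: freed here */
        return 0;
}
```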

1.974911451 lightningd(17906):INFO: 03b4bca72572889d4b44cd0f194f73d54972af367e1917579283122ee10fa05f54 chan #1: Owning subdaemon lightning_openingd died (62464)
1.980118094 lightningd(17906):BROKEN: FATAL SIGNAL 6
1.980150447 lightningd(17906):BROKEN: backtrace: common/daemon.c:42 (crashdump) 0x432ba0
1.980161268 lightningd(17906):BROKEN: backtrace: (null):0 ((null)) 0x7faeb18ff4af
1.980167045 lightningd(17906):BROKEN: backtrace: (null):0 ((null)) 0x7faeb18ff428
1.980171271 lightningd(17906):BROKEN: backtrace: (null):0 ((null)) 0x7faeb1901029
1.980175847 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:98 (call_error) 0x47543e
1.980181814 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:170 (check_bounds) 0x4755fb
1.980188065 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:180 (to_tal_hdr) 0x475649
1.980193756 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:504 (tal_free) 0x47600d
1.980199402 lightningd(17906):BROKEN: backtrace: lightningd/peer_control.c:118 (delete_peer) 0x423990
1.980205498 lightningd(17906):BROKEN: backtrace: lightningd/opening_control.c:574 (destroy_uncommitted_channel) 0x419df3
1.980212380 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:240 (notify) 0x4757b0
1.980218052 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:400 (del_tree) 0x475c61
1.980223398 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:511 (tal_free) 0x476093
1.980229174 lightningd(17906):BROKEN: backtrace: lightningd/opening_control.c:549 (opening_channel_errmsg) 0x419d1a
1.980236227 lightningd(17906):BROKEN: backtrace: lightningd/subd.c:590 (destroy_subd) 0x42cf43
1.980242348 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:240 (notify) 0x4757b0
1.980247771 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:400 (del_tree) 0x475c61
1.980252814 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:410 (del_tree) 0x475cb1
1.980258356 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:410 (del_tree) 0x475cb1
1.980263311 lightningd(17906):BROKEN: backtrace: ccan/ccan/tal/tal.c:511 (tal_free) 0x476093
1.980269189 lightningd(17906):BROKEN: backtrace: lightningd/lightningd.c:412 (main) 0x4144ed

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Sep 28, 2018
That's what BOLT #1 calls them; make it easier for people to grep.

Reported-by: @niftynei
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Sep 28, 2018
rustyrussell added a commit that referenced this pull request Sep 28, 2018
Use the BOLT #1 naming.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
cdecker pushed a commit that referenced this pull request Sep 28, 2018
rustyrussell added a commit that referenced this pull request Sep 29, 2018
cdecker pushed a commit that referenced this pull request Oct 9, 2018
rustyrussell added a commit that referenced this pull request Jan 15, 2019
Don't do this:
  (gdb) bt
  #0  0x00007f37ae667c40 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
  #1  0x00007f37ae668b38 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
  #2  0x00007f37ae669907 in deflate () from /lib/x86_64-linux-gnu/libz.so.1
  #3  0x00007f37ae674c65 in compress2 () from /lib/x86_64-linux-gnu/libz.so.1
  #4  0x000000000040cfe3 in zencode_scids (ctx=0xc1f118, scids=0x2599bc49 "\a\325{", len=176320) at gossipd/gossipd.c:218
  #5  0x000000000040d0b3 in encode_short_channel_ids_end (encoded=0x7fff8f98d9f0, max_bytes=65490) at gossipd/gossipd.c:236
  #6  0x000000000040dd28 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=8) at gossipd/gossipd.c:576
  #7  0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=16) at gossipd/gossipd.c:595
  #8  0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=32) at gossipd/gossipd.c:596
  #9  0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=64) at gossipd/gossipd.c:595
  #10 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=128) at gossipd/gossipd.c:596
  #11 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=256) at gossipd/gossipd.c:595
  #12 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=512) at gossipd/gossipd.c:595
  #13 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=1024) at gossipd/gossipd.c:595
  #14 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2047) at gossipd/gossipd.c:596
  #15 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4095) at gossipd/gossipd.c:595
  #16 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8191) at gossipd/gossipd.c:595
  #17 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16382) at gossipd/gossipd.c:595
  #18 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=32764) at gossipd/gossipd.c:595
  #19 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=65528) at gossipd/gossipd.c:595
  #20 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=131056) at gossipd/gossipd.c:595
  #21 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=262112) at gossipd/gossipd.c:595
  #22 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=524225) at gossipd/gossipd.c:595
  #23 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=1048450) at gossipd/gossipd.c:595
  #24 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2096900) at gossipd/gossipd.c:595
  #25 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4193801) at gossipd/gossipd.c:595
  #26 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8387603) at gossipd/gossipd.c:595
  #27 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16775207) at gossipd/gossipd.c:595
  #28 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=33550414) at gossipd/gossipd.c:596
  #29 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=67100829) at gossipd/gossipd.c:595
  #30 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=134201659) at gossipd/gossipd.c:595
  #31 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=268403318) at gossipd/gossipd.c:595
  #32 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=536806636) at gossipd/gossipd.c:595
  #33 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=1073613273) at gossipd/gossipd.c:595
  #34 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=2147226547) at gossipd/gossipd.c:595
  #35 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=4294453094) at gossipd/gossipd.c:595
  #36 0x000000000040df26 in handle_query_channel_range (peer=0x3868fc8, msg=0x37e0678 "\001\ao\342\214\n\266\361\263r\301\246\242F\256c\367O\223\036\203e\341Z\b\234h\326\031") at gossipd/gossipd.c:625

The cause was that converting a block number to an scid truncates it
at 24 bits.  When we look through the index from (truncated number) to
(real end number) we get every channel, which is too large to encode,
so we iterate again.

This fixes both that problem and the issue that we'd end up
dividing into many empty sections until we get to the highest block
number.  Instead, we just tack the empty blocks onto the end of the
final query.
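
For context, a BOLT #7 short_channel_id packs the block height into 24 bits (followed by 24 bits of tx index and 16 bits of output index), so heights above 0xFFFFFF wrap. A hedged sketch of the truncation, not the real scid code:

```c
#include <stdint.h>
#include <stdio.h>

/* Pack block/tx/output the way a short_channel_id does: only the
 * low 24 bits of the block number survive. */
static uint64_t mk_scid(uint32_t blocknum, uint32_t txnum, uint16_t outnum)
{
        return ((uint64_t)(blocknum & 0xFFFFFF) << 40)
                | ((uint64_t)(txnum & 0xFFFFFF) << 16)
                | outnum;
}

int main(void)
{
        /* 17290511 (seen in the backtrace) wraps to 513295, far below
         * the real tip, so a scan "from" it covers every channel. */
        uint64_t scid = mk_scid(17290511, 1, 0);
        printf("stored block = %u\n", (uint32_t)(scid >> 40));
        return 0;
}
```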

Reported-by: George Vaccaro
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Jan 15, 2019
Don't do this:
  (gdb) bt
  #0  0x00007f37ae667c40 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
  #1  0x00007f37ae668b38 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
  #2  0x00007f37ae669907 in deflate () from /lib/x86_64-linux-gnu/libz.so.1
  #3  0x00007f37ae674c65 in compress2 () from /lib/x86_64-linux-gnu/libz.so.1
  #4  0x000000000040cfe3 in zencode_scids (ctx=0xc1f118, scids=0x2599bc49 "\a\325{", len=176320) at gossipd/gossipd.c:218
  #5  0x000000000040d0b3 in encode_short_channel_ids_end (encoded=0x7fff8f98d9f0, max_bytes=65490) at gossipd/gossipd.c:236
  #6  0x000000000040dd28 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=8) at gossipd/gossipd.c:576
  #7  0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=16) at gossipd/gossipd.c:595
  #8  0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=32) at gossipd/gossipd.c:596
  #9  0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=64) at gossipd/gossipd.c:595
  #10 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=128) at gossipd/gossipd.c:596
  #11 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=256) at gossipd/gossipd.c:595
  #12 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=512) at gossipd/gossipd.c:595
  #13 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=1024) at gossipd/gossipd.c:595
  #14 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2047) at gossipd/gossipd.c:596
  #15 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4095) at gossipd/gossipd.c:595
  #16 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8191) at gossipd/gossipd.c:595
  #17 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16382) at gossipd/gossipd.c:595
  #18 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=32764) at gossipd/gossipd.c:595
  #19 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=65528) at gossipd/gossipd.c:595
  #20 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=131056) at gossipd/gossipd.c:595
  #21 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=262112) at gossipd/gossipd.c:595
  #22 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=524225) at gossipd/gossipd.c:595
  #23 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=1048450) at gossipd/gossipd.c:595
  #24 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2096900) at gossipd/gossipd.c:595
  #25 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4193801) at gossipd/gossipd.c:595
  #26 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8387603) at gossipd/gossipd.c:595
  #27 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16775207) at gossipd/gossipd.c:595
  #28 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=33550414) at gossipd/gossipd.c:596
  #29 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=67100829) at gossipd/gossipd.c:595
  #30 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=134201659) at gossipd/gossipd.c:595
  #31 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=268403318) at gossipd/gossipd.c:595
  #32 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=536806636) at gossipd/gossipd.c:595
  #33 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=1073613273) at gossipd/gossipd.c:595
  #34 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=2147226547) at gossipd/gossipd.c:595
  #35 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=4294453094) at gossipd/gossipd.c:595
  #36 0x000000000040df26 in handle_query_channel_range (peer=0x3868fc8, msg=0x37e0678 "\001\ao\342\214\n\266\361\263r\301\246\242F\256c\367O\223\036\203e\341Z\b\234h\326\031") at gossipd/gossipd.c:625

The cause was that converting a block number to an scid truncates it
at 24 bits.  When we look through the index from (truncated number) to
(real end number) we get every channel, which is too large to encode,
so we iterate again.

This fixes both that problem and the issue that we'd end up
dividing into many empty sections until we get to the highest block
number.  Instead, we just tack the empty blocks onto the end of the
final query.

(My initial version requested 0xFFFFFFFE blocks, but the dev code
which records what blocks were returned can't make a bitmap that big
on 32-bit systems).

Reported-by: George Vaccaro
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
cdecker pushed a commit that referenced this pull request Jan 15, 2019
cdecker pushed a commit that referenced this pull request Jan 15, 2019
cdecker pushed a commit that referenced this pull request Jan 17, 2019
rustyrussell added a commit that referenced this pull request May 15, 2019
My raspberry pi node hung up on my other node:
   lightning_openingd-... chan #1: Got bad message from gossipd: 0db1

This is because we didn't handle that message in one path.
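
A generic sketch of the failure mode (hypothetical message types, not the real gossipd wire enum): a dispatcher that is missing one case on one path turns a perfectly valid message into a "bad message" error.

```c
#include <stdbool.h>
#include <stdio.h>

enum msg_type { MSG_PING = 0x0db0, MSG_NEW = 0x0db1 };

static bool handle(enum msg_type t)
{
        switch (t) {
        case MSG_PING:
                return true;
        case MSG_NEW:           /* the case that was missing on one path */
                return true;
        }
        return false;           /* caller reports "Got bad message" */
}

int main(void)
{
        if (!handle(MSG_NEW))
                fprintf(stderr, "Got bad message from gossipd: 0db1\n");
        return 0;
}
```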

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request May 18, 2019
rustyrussell added a commit that referenced this pull request Jun 12, 2019
Direct leak of 1024 byte(s) in 2 object(s) allocated from:
    #0 0x7f4c84ce4448 in malloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x55d11b782c96 in timer_default_alloc ccan/ccan/timer/timer.c:16
    #2 0x55d11b7832b7 in add_level ccan/ccan/timer/timer.c:166
    #3 0x55d11b783864 in timer_fast_forward ccan/ccan/timer/timer.c:334
    #4 0x55d11b78396a in timers_expire ccan/ccan/timer/timer.c:359
    #5 0x55d11b774993 in io_loop ccan/ccan/io/poll.c:395
    #6 0x55d11b72322f in plugins_init lightningd/plugin.c:1013
    #7 0x55d11b7060ea in main lightningd/lightningd.c:664
    #8 0x7f4c84696b6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Jun 12, 2019
Indirect leak of 48 byte(s) in 1 object(s) allocated from:
    #0 0x7f4c84ce4448 in malloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10c448)
    #1 0x55d11b77d270 in strmap_add_ ccan/ccan/strmap/strmap.c:90
    #2 0x55d11b704603 in command_set_usage lightningd/jsonrpc.c:891
    #3 0x55d11b733cb5 in param common/param.c:295
    #4 0x55d11b6f7b37 in json_connect lightningd/connect_control.c:96
    #5 0x55d11b7042ef in setup_command_usage lightningd/jsonrpc.c:841
    #6 0x55d11b70443b in jsonrpc_command_add_perm lightningd/jsonrpc.c:863
    #7 0x55d11b704533 in jsonrpc_setup lightningd/jsonrpc.c:876
    #8 0x55d11b705695 in new_lightningd lightningd/lightningd.c:210
    #9 0x55d11b706062 in main lightningd/lightningd.c:644
    #10 0x7f4c84696b6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Jun 12, 2019
Direct leak of 64 byte(s) in 1 object(s) allocated from:
    #0 0x7f4dc279163e in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10c63e)
    #1 0x564ee8a24bb1 in htable_default_alloc ccan/ccan/htable/htable.c:19
    #2 0x564ee8a2551b in double_table ccan/ccan/htable/htable.c:226
    #3 0x564ee8a259e5 in htable_add_ ccan/ccan/htable/htable.c:331
    #4 0x564ee89a5300 in block_map_add lightningd/chaintopology.h:83
    #5 0x564ee89a6ece in add_tip lightningd/chaintopology.c:626
    #6 0x564ee89a72c3 in have_new_block lightningd/chaintopology.c:694
    #7 0x564ee89a3ab0 in process_rawblock lightningd/bitcoind.c:466
    #8 0x564ee89a2fb4 in bcli_finished lightningd/bitcoind.c:214
    #9 0x564ee8a284d6 in destroy_conn ccan/ccan/io/poll.c:244
    #10 0x564ee8a284f6 in destroy_conn_close_fd ccan/ccan/io/poll.c:250
    #11 0x564ee8a34a0d in notify ccan/ccan/tal/tal.c:235
    #12 0x564ee8a34efc in del_tree ccan/ccan/tal/tal.c:397
    #13 0x564ee8a35288 in tal_free ccan/ccan/tal/tal.c:481
    #14 0x564ee8a26cf5 in io_close ccan/ccan/io/io.c:450
    #15 0x564ee8a28c11 in io_loop ccan/ccan/io/poll.c:449
    #16 0x564ee89b3c3b in io_loop_with_timers lightningd/io_loop_with_timers.c:24
    #17 0x564ee89ba540 in main lightningd/lightningd.c:822
    #18 0x7f4dc2143b6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Jun 12, 2019
Direct leak of 32 byte(s) in 1 object(s) allocated from:
    #0 0x7f7678ee863e in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10c63e)
    #1 0x55f8c7b0fce5 in htable_default_alloc ccan/ccan/htable/htable.c:19
    #2 0x55f8c7b1064f in double_table ccan/ccan/htable/htable.c:226
    #3 0x55f8c7b10b19 in htable_add_ ccan/ccan/htable/htable.c:331
    #4 0x55f8c7afac63 in scriptpubkeyset_add wallet/txfilter.c:30
    #5 0x55f8c7afafce in txfilter_add_scriptpubkey wallet/txfilter.c:77
    #6 0x55f8c7afb05f in txfilter_add_derkey wallet/txfilter.c:91
    #7 0x55f8c7aa4d67 in init_txfilter lightningd/lightningd.c:482
    #8 0x55f8c7aa52d8 in main lightningd/lightningd.c:721
    #9 0x7f767889ab6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)

Direct leak of 16 byte(s) in 1 object(s) allocated from:
    #0 0x7f05f389563e in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10c63e)
    #1 0x55cac1e6bc99 in htable_default_alloc ccan/ccan/htable/htable.c:19
    #2 0x55cac1e6c603 in double_table ccan/ccan/htable/htable.c:226
    #3 0x55cac1e6cacd in htable_add_ ccan/ccan/htable/htable.c:331
    #4 0x55cac1e56e48 in outpointset_add wallet/txfilter.c:61
    #5 0x55cac1e57162 in outpointfilter_add wallet/txfilter.c:116
    #6 0x55cac1e5ea3a in wallet_utxoset_add wallet/wallet.c:2365
    #7 0x55cac1deddc2 in topo_add_utxos lightningd/chaintopology.c:603
    #8 0x55cac1dedeac in add_tip lightningd/chaintopology.c:620
    #9 0x55cac1dee2de in have_new_block lightningd/chaintopology.c:694
    #10 0x55cac1deaab0 in process_rawblock lightningd/bitcoind.c:466
    #11 0x55cac1de9fb4 in bcli_finished lightningd/bitcoind.c:214
    #12 0x55cac1e6f5be in destroy_conn ccan/ccan/io/poll.c:244
    #13 0x55cac1e6f5de in destroy_conn_close_fd ccan/ccan/io/poll.c:250
    #14 0x55cac1e7baf5 in notify ccan/ccan/tal/tal.c:235
    #15 0x55cac1e7bfe4 in del_tree ccan/ccan/tal/tal.c:397
    #16 0x55cac1e7c370 in tal_free ccan/ccan/tal/tal.c:481
    #17 0x55cac1e6dddd in io_close ccan/ccan/io/io.c:450
    #18 0x55cac1e6fcf9 in io_loop ccan/ccan/io/poll.c:449
    #19 0x55cac1dfac66 in io_loop_with_timers lightningd/io_loop_with_timers.c:24
    #20 0x55cac1e0156b in main lightningd/lightningd.c:822
    #21 0x7f05f3247b6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Jul 17, 2020
Also, remove fuzz caused by varint->bigsize change.

For some reason my build machine sorts patches into a different order and
fails to patch:

	patching file wire/gen_onion_wire_csv.104951
	Hunk #1 succeeded at 52 with fuzz 1 (offset -19 lines).
	patching file wire/gen_onion_wire_csv.104951
	Hunk #1 FAILED at 8.
	1 out of 1 hunk FAILED -- saving rejects to file wire/gen_onion_wire_csv.104951.rej
	make: *** [wire/Makefile:60: wire/gen_onion_wire_csv] Error 1

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Jul 17, 2020
rustyrussell added a commit that referenced this pull request Jul 20, 2020
rustyrussell added a commit that referenced this pull request Apr 7, 2021
e.g. in test_closing_id we can get a spend from the first (closed) channel
in the same block as the open of the second.  Half the time, we'll choose
the wrong one as the scid.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Apr 8, 2021
rustyrussell added a commit that referenced this pull request Apr 16, 2021
rustyrussell pushed a commit that referenced this pull request Aug 23, 2021
The variable `block` (an instance of `struct block`) is
allocated on the stack without being initialized, i.e. its
member `prev` points to nowhere. This causes a segmentation
fault on my machine when binding "prev_hash" while running
`wallet_block_add`, as the following core-dump analysis
shows:

    $ egdb ./wallet/test/run-wallet ./run-wallet.core
    [...]
    Core was generated by `run-wallet'.
    Program terminated with signal SIGSEGV, Segmentation fault.
    ---Type <return> to continue, or q <return> to quit---
    #0  0x000008f67a04b660 in memcpy (dst0=<optimized out>, src0=0x100007f8c, length=32) at /usr/src/lib/libc/string/memcpy.c:97
    97                      TLOOP1(*dst++ = *src++);
    (gdb) bt
    #0  0x000008f67a04b660 in memcpy (dst0=<optimized out>, src0=0x100007f8c, length=32) at /usr/src/lib/libc/string/memcpy.c:97
    #1  0x000008f73e838f60 in sqlite3VdbeMemSetStr () from /usr/local/lib/libsqlite3.so.37.12
    #2  0x000008f73e83cb11 in bindText () from /usr/local/lib/libsqlite3.so.37.12
    #3  0x000008f44bc91345 in db_sqlite3_query (stmt=0x8f6845bf028) at wallet/db_sqlite3.c:77
    #4  0x000008f44bc91122 in db_sqlite3_exec (stmt=0x8f6845bf028) at wallet/db_sqlite3.c:110
    #5  0x000008f44bcbb3b2 in db_exec_prepared_v2 (stmt=0x8f6845bf028) at ./wallet/db.c:2055
    #6  0x000008f44bcc6890 in wallet_block_add (w=0x8f688b5bba8, b=0x7f7ffffca788) at ./wallet/wallet.c:3556
    #7  0x000008f44bce2607 in test_wallet_outputs (ld=0x8f6a35a7828, ctx=0x8f6a35c0268) at wallet/test/run-wallet.c:1104
    #8  0x000008f44bcddec0 in main (argc=1, argv=0x7f7ffffcaaf8) at wallet/test/run-wallet.c:1930

Fix by explicitly setting the whole structure to zero.
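
A tiny sketch of that fix (field names assumed): a stack struct starts out with indeterminate contents, so zero it before use.

```c
#include <string.h>

struct block_sketch {
        struct block_sketch *prev;
        unsigned char hash[32];
};

int main(void)
{
        struct block_sketch bad;        /* bad.prev is garbage here */
        struct block_sketch good = {0}; /* every member zeroed */

        memset(&bad, 0, sizeof(bad));   /* equivalent fix after the fact */
        (void)good;
        return 0;
}
```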

[ Rebuilt generated files, too --RR ]
rustyrussell pushed a commit that referenced this pull request Feb 14, 2023
This will fix a crash that I caused on armv7
and by looking inside the coredump with gdb
(by adding an assert on n that must be
different from null) I get the following stacktrace

```
(gdb) bt
#0  0x00000000 in ?? ()
#1  0x0043a038 in send_backtrace (why=0xbe9e3600 "FATAL SIGNAL 11") at common/daemon.c:36
#2  0x0043a0ec in crashdump (sig=11) at common/daemon.c:46
#3  <signal handler called>
#4  0x00406d04 in node_announcement (map=0x938ecc, nann_off=495146) at common/gossmap.c:586
#5  0x00406fec in map_catchup (map=0x938ecc, num_rejected=0xbe9e3a40) at common/gossmap.c:643
#6  0x004073a4 in load_gossip_store (map=0x938ecc, num_rejected=0xbe9e3a40) at common/gossmap.c:697
#7  0x00408244 in gossmap_load (ctx=0x0, filename=0x4e16b8 "gossip_store", num_channel_updates_rejected=0xbe9e3a40) at common/gossmap.c:976
#8  0x0041a548 in init (p=0x93831c, buf=0x9399d4 "\n\n{\"jsonrpc\":\"2.0\",\"id\":\"cln:init#25\",\"method\":\"init\",\"params\":{\"options\":{},\"configuration\":{\"lightning-dir\":\"/home/vincent/.lightning/testnet\",\"rpc-file\":\"lightning-rpc\",\"startup\":true,\"network\":\"te"..., config=0x939cdc) at plugins/topology.c:622
#9  0x0041e5d0 in handle_init (cmd=0x938934, buf=0x9399d4 "\n\n{\"jsonrpc\":\"2.0\",\"id\":\"cln:init#25\",\"method\":\"init\",\"params\":{\"options\":{},\"configuration\":{\"lightning-dir\":\"/home/vincent/.lightning/testnet\",\"rpc-file\":\"lightning-rpc\",\"startup\":true,\"network\":\"te"..., params=0x939c8c)
    at plugins/libplugin.c:1208
#10 0x0041fc04 in ld_command_handle (plugin=0x93831c, toks=0x939bec) at plugins/libplugin.c:1572
#11 0x00420050 in ld_read_json_one (plugin=0x93831c) at plugins/libplugin.c:1667
#12 0x004201bc in ld_read_json (conn=0x9391c4, plugin=0x93831c) at plugins/libplugin.c:1687
#13 0x004cb82c in next_plan (conn=0x9391c4, plan=0x9391d8) at ccan/ccan/io/io.c:59
#14 0x004cc67c in do_plan (conn=0x9391c4, plan=0x9391d8, idle_on_epipe=false) at ccan/ccan/io/io.c:407
#15 0x004cc6dc in io_ready (conn=0x9391c4, pollflags=1) at ccan/ccan/io/io.c:417
#16 0x004cf8cc in io_loop (timers=0x9383c4, expired=0xbe9e3ce4) at ccan/ccan/io/poll.c:453
#17 0x00420af4 in plugin_main (argv=0xbe9e3eb4, init=0x41a46c <init>, restartability=PLUGIN_STATIC, init_rpc=true, features=0x0, commands=0x6167e8 <commands>, num_commands=4, notif_subs=0x0, num_notif_subs=0, hook_subs=0x0, num_hook_subs=0, notif_topics=0x0, num_notif_topics=0) at plugins/libplugin.c:1891
#18 0x0041a6f8 in main (argc=1, argv=0xbe9e3eb4) at plugins/topology.c:679
```

I do not know if this is the right solution, because I do not know
when we might parse a node announcement for a node that
is no longer in the gossip map.

So, I hope this is at least useful for @rustyrussell.

Changelog-Fixed: fixes `FATAL SIGNAL 11` on gossmap node announcement parsing.

Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
rustyrussell pushed a commit that referenced this pull request Mar 23, 2023
The issue is that common_setup() wasn't called by the fuzz target,
leaving secp256k1_ctx as NULL. A sketch of the fix follows the
sanitizer output below.

UBSan error:

$ UBSAN_OPTIONS="print_stacktrace=1:halt_on_error=1" \
    ./fuzz-channel_id crash-1575b41ef09e62e4c09c165e6dc037a110b113f2

INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 1153355603
INFO: Loaded 1 modules   (25915 inline 8-bit counters): 25915 [0x563bae7ac3a8, 0x563bae7b28e3),
INFO: Loaded 1 PC tables (25915 PCs): 25915 [0x563bae7b28e8,0x563bae817c98),
./fuzz-channel_id: Running 1 inputs 1 time(s) each.
Running: crash-1575b41ef09e62e4c09c165e6dc037a110b113f2
bitcoin/pubkey.c:22:33: runtime error: null pointer passed as argument 1, which is declared to never be null
external/libwally-core/src/secp256k1/include/secp256k1.h:373:3: note: nonnull attribute specified here
    #0 0x563bae41e3db in pubkey_from_der bitcoin/pubkey.c:19:7
    #1 0x563bae4205e0 in fromwire_pubkey bitcoin/pubkey.c:111:7
    #2 0x563bae46437c in run tests/fuzz/fuzz-channel_id.c:42:3
    #3 0x563bae2f6016 in LLVMFuzzerTestOneInput tests/fuzz/libfuzz.c:23:2
    #4 0x563bae20a450 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long)
    #5 0x563bae1f4c3f in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long)
    #6 0x563bae1fa6e6 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long))
    #7 0x563bae223052 in main (tests/fuzz/fuzz-channel_id+0x181052) (BuildId: f7f56e14ffc06df54ab732d79ea922e773de1f25)
    #8 0x7fa7fa113082 in __libc_start_main
    #9 0x563bae1efbdd in _start

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior bitcoin/pubkey.c:22:33 in
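
A sketch of the implied fix, assuming libFuzzer's optional initialization hook; the common_setup() stub here only stands in for the real one-time setup, whose exact signature is an assumption:

```c
#include <stdio.h>

/* Stand-in for common/setup.h's one-time setup; the real function
 * creates secp256k1_ctx (exact signature assumed here). */
static void common_setup(const char *argv0)
{
        printf("setup for %s\n", argv0);
}

/* libFuzzer calls this hook once, before any inputs are run, so
 * process-wide state exists before the fuzz target executes. */
int LLVMFuzzerInitialize(int *argc, char ***argv)
{
        (void)argc;
        (void)argv;
        common_setup("fuzzer");
        return 0;
}
```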
rustyrussell pushed a commit that referenced this pull request Jun 6, 2023
Detected by UBSan:

$ UBSAN_OPTIONS=print_stacktrace=1 ./wallet/test/run-psbt_fixup

bitcoin/psbt.c:733:2: runtime error: applying zero offset to null pointer
    #0 0x53c829 in psbt_from_bytes lightning/bitcoin/psbt.c:733:2
    #1 0x5adcb0 in main lightning/wallet/test/run-psbt_fixup.c:174:10

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior bitcoin/psbt.c:733:2
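
The underlying pattern, as a runnable sketch: UBSan flags `p + 0` when `p` is NULL, so the empty case must be handled before any pointer arithmetic (names here are illustrative, not the psbt code):

```c
#include <stdio.h>
#include <stddef.h>

/* Guard before offsetting: arithmetic on a null pointer is undefined
 * behaviour even with an offset of zero. */
static const unsigned char *advance(const unsigned char *p, size_t off)
{
        if (!p)
                return NULL;   /* empty buffer: nothing to offset into */
        return p + off;
}

int main(void)
{
        printf("%p\n", (const void *)advance(NULL, 0));  /* safe: NULL */
        return 0;
}
```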
rustyrussell pushed a commit that referenced this pull request Jun 6, 2023
The function is tiny and was only used in one location, and that one
location was leaking memory; a sketch of the fix follows the report
below.

Detected by ASan:

==2637667==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 7 byte(s) in 1 object(s) allocated from:
    #0 0x4cd758 in __interceptor_strdup
    #1 0x64c70c in json_stream_log_suppress_for_cmd lightning/lightningd/jsonrpc.c:597:31
    #2 0x68a630 in json_getlog lightning/lightningd/log.c:974:2
    ...

SUMMARY: AddressSanitizer: 7 byte(s) leaked in 1 allocation(s).
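
The leak pattern in miniature, as a hedged sketch (the identifier is made up; the real fix inlined the helper at its single call site):

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
        /* strdup() allocates; without an owner or a matching free(),
         * LeakSanitizer reports exactly this kind of direct leak. */
        char *id = strdup("cln:getlog#1");   /* hypothetical id string */
        if (!id)
                return 1;
        /* ... use the copy ... */
        free(id);                            /* the missing step */
        return 0;
}
```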
rustyrussell pushed a commit that referenced this pull request Jun 6, 2023
It is possible for db_column_bytes() to return 0 and for
db_column_blob() to return NULL even when db_column_is_null() returns
false. We need to short-circuit in this case; a sketch follows the
backtrace below.

Detected by UBSan:

  db/bindings.c:479:12: runtime error: null pointer passed as argument 2, which is declared to never be null
  /usr/include/string.h:44:28: note: nonnull attribute specified here

  #0 0x95f117 in db_col_arr_ db/bindings.c:479:2
  #1 0x95ef85 in db_col_channel_type db/bindings.c:459:32
  #2 0x852c03 in wallet_stmt2channel wallet/wallet.c:1483:9
  #3 0x81f396 in wallet_channels_load_active wallet/wallet.c:1749:23
  #4 0x81f03d in wallet_init_channels wallet/wallet.c:1765:9
  #5 0x72f1f9 in load_channels_from_wallet lightningd/peer_control.c:2257:7
  #6 0x672856 in main lightningd/lightningd.c:1121:25
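
A minimal sketch of that short-circuit (illustrative names, not the actual db helpers): treat a (NULL, 0) blob as empty and return before copying, since memcpy from NULL is undefined behaviour:

```c
#include <stddef.h>
#include <string.h>

static void copy_column(void *dst, const void *blob, size_t len)
{
        /* short-circuit: a zero-length column may come back as
         * (NULL, 0) even when it is not SQL NULL */
        if (len == 0 || blob == NULL)
                return;
        memcpy(dst, blob, len);
}

int main(void)
{
        unsigned char buf[4];
        copy_column(buf, NULL, 0);   /* safe: copies nothing */
        return 0;
}
```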
rustyrussell pushed a commit that referenced this pull request Jun 6, 2023
Fixes nullability errors detected by UBSan:

wire/fromwire.c:173:46: runtime error: null pointer passed as argument 1, which is declared to never be null
external/libwally-core/src/secp256k1/include/secp256k1.h:432:3: note: nonnull attribute specified here
    #0 0x65214a in fromwire_secp256k1_ecdsa_signature wire/fromwire.c:173:6
    #1 0x659500 in printwire_secp256k1_ecdsa_signature devtools/print_wire.c:331:1
    #2 0x646ba2 in printwire_channel_update wire/peer_printgen.c:1900:7
    #3 0x637182 in printpeer_wire_message wire/peer_printgen.c:128:11
    #4 0x65a097 in main devtools/decodemsg.c:85:10
rustyrussell pushed a commit that referenced this pull request Jun 6, 2023
Memory leak detected by ASan:

==880002==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 32816 byte(s) in 1 object(s) allocated from:
    #0 0x5039e7 in malloc (lightningd/lightningd+0x5039e7)
    #1 0x7f2e8c203884 in __alloc_dir (/lib64/libc.so.6+0xd2884)
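
__alloc_dir is the glibc allocation behind opendir(), so the usual fix for this kind of report is a matching closedir(); a sketch of the pattern:

```c
#include <dirent.h>
#include <stdio.h>

int main(void)
{
        DIR *d = opendir(".");
        if (!d)
                return 1;
        struct dirent *e;
        while ((e = readdir(d)) != NULL)
                printf("%s\n", e->d_name);
        closedir(d);   /* without this, LeakSanitizer reports the DIR leak */
        return 0;
}
```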
rustyrussell pushed a commit that referenced this pull request Jun 8, 2023
Detected by ASan in test_hsmtool_generatehsm:

==58698==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 120 byte(s) in 1 object(s) allocated from:
    #0 0x4e6247 in malloc
    #1 0x7f078452d672 in getdelim

SUMMARY: AddressSanitizer: 120 byte(s) leaked in 1 allocation(s).
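
getdelim() (the allocator behind getline()) hands ownership of the line buffer to the caller; a sketch of the pattern such a fix needs, with the buffer freed even after EOF:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        char *line = NULL;
        size_t cap = 0;
        /* getline()/getdelim() realloc the buffer as needed; the caller
         * owns it and must free it, even after EOF or an error. */
        while (getline(&line, &cap, stdin) != -1)
                fputs(line, stdout);
        free(line);   /* the missing free this leak points at */
        return 0;
}
```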
rustyrussell added a commit that referenced this pull request Aug 9, 2023
These show that we should clean up our notes.  Here's the result from test_hardmpp:

# we have computed a set of 1 flows with probability 0.328, fees 0msat and delay 23
# No MPP, so added 0msat shadow fee
# Shadow route on flow 0/1 added 0 block delay. now 5
# sendpay flow groupid=1, partid=1, delivering=1800000000msat, probability=0.328
# Update chan knowledge scid=103x2x0, dir=0: [0msat,1799999999msat]
# onion error WIRE_TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0: failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
# we have computed a set of 2 flows with probability 0.115, fees 0msat and delay 23
# Shadow route on flow 0/2 added 0 block delay. now 5
# Shadow route on flow 1/2 added 0 block delay. now 5
# sendpay flow groupid=1, partid=3, delivering=500000000msat, probability=0.475
# sendpay flow groupid=1, partid=2, delivering=1300000000msat, probability=0.242
# Update chan knowledge scid=103x2x0, dir=0: [0msat,1299999999msat]
# onion error WIRE_TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0: failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
# we have computed a set of 2 flows with probability 0.084, fees 0msat and delay 23
# Shadow route on flow 0/2 added 0 block delay. now 5
# Shadow route on flow 1/2 added 0 block delay. now 5
# sendpay flow groupid=1, partid=5, delivering=260000000msat, probability=0.467
# sendpay flow groupid=1, partid=4, delivering=1040000000msat, probability=0.179
# Update chan knowledge scid=103x2x0, dir=0: [0msat,1039999999msat]
# onion error WIRE_TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0: failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
# we have computed a set of 2 flows with probability 0.052, fees 0msat and delay 23
# Shadow route on flow 0/2 added 0 block delay. now 5
# Shadow route on flow 1/2 added 0 block delay. now 5
# sendpay flow groupid=1, partid=7, delivering=120000000msat, probability=0.494
# sendpay flow groupid=1, partid=6, delivering=920000000msat, probability=0.105

Ideally it would look something like:

# Computed 1 flows, probability=0.328:
#  Flow 1: 103x2x0 1800000000msat fee=0msat probability=0.328 shadow=+0msat/0blocks
#  Flow 1: FAIL: TEMPORARY_CHANNEL_FAILURE for 103x2x0.
# Computed 2 flows, probability=0.115:
#  Flow 2: XXX->XXX 1300000000msat fee=XXX, probability=0.475 shadow=+0msat/0blocks
#  Flow 3: XXX->XXX 500000000msat fee=XXX, probability=0.475 shadow=+0msat/0blocks
#  Flow 2: FAIL: TEMPORARY_CHANNEL_FAILURE from node #1 103x2x0
# Computed 2 flows (3 total), probability=0.084, fee=0msat, delay=23
...
#  Flow 4: SUCCESS, 2 in progress should succeed soon.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Aug 21, 2023
It now looks like (for test_hardmpp):

```
# we have computed a set of 1 flows with probability 0.328, fees 0msat and delay 23
#   Flow 1: amount=1800000000msat prob=0.328 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0->-103x3x0/1->
#   Flow 1: Failed at node #1 (WIRE_TEMPORARY_CHANNEL_FAILURE): failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
#   Flow 1: Failure of 1800000000msat for 103x5x0/0 capacity [0msat,3000000000msat] -> [0msat,1799999999msat]
# we have computed a set of 2 flows with probability 0.115, fees 0msat and delay 23
#   Flow 2: amount=500000000msat prob=0.475 fees=0msat delay=12 path=-103x6x0/0(min=max=4294967295msat)->-103x1x0/1->-103x4x0/1->
#   Flow 3: amount=1300000000msat prob=0.242 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0(max=1799999999msat)->-103x3x0/1->
#   Flow 3: Failed at node #1 (WIRE_TEMPORARY_CHANNEL_FAILURE): failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
#   Flow 3: Failure of 1300000000msat for 103x5x0/0 capacity [0msat,1799999999msat] -> [0msat,1299999999msat]
# we have computed a set of 2 flows with probability 0.084, fees 0msat and delay 23
#   Flow 4: amount=260000000msat prob=0.467 fees=0msat delay=12 path=-103x6x0/0(500000000msat in 1 htlcs,min=max=4294967295msat)->-103x1x0/1(500000000msat in 1 htlcs)->-103x4x0/1(500000000msat in 1 htlcs)->
#   Flow 5: amount=1040000000msat prob=0.179 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0(max=1299999999msat)->-103x3x0/1->
#   Flow 5: Failed at node #1 (WIRE_TEMPORARY_CHANNEL_FAILURE): failed: WIRE_TEMPORARY_CHANNEL_FAILURE (reply from remote)
#   Flow 5: Failure of 1040000000msat for 103x5x0/0 capacity [0msat,1299999999msat] -> [0msat,1039999999msat]
# we have computed a set of 2 flows with probability 0.052, fees 0msat and delay 23
#   Flow 6: amount=120000000msat prob=0.494 fees=0msat delay=12 path=-103x6x0/0(760000000msat in 2 htlcs,min=max=4294967295msat)->-103x1x0/1(760000000msat in 2 htlcs)->-103x4x0/1(760000000msat in 2 htlcs)->
#   Flow 7: amount=920000000msat prob=0.105 fees=0msat delay=12 path=-103x2x0/1(min=max=4294967295msat)->-103x5x0/0(max=1039999999msat)->-103x3x0/1->
#   Flow 7: Success
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
rustyrussell added a commit that referenced this pull request Oct 10, 2023
Adding an index means:

1. Add the new subsystem, and new updated_index field to the db, and
   create xxx_index_deleted/created/updated APIs.
2. Hook up these functions to the points they need to be called.
3. Add index, start and limit fields to the list command (see the usage sketch below).
4. Add created_index and updated_index into the list command.

This does #1.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
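
Once steps 2-4 land, the list commands gain pagination by index; a hypothetical usage sketch (parameter names taken from the plan above, exact syntax may differ):

```
lightning-cli listinvoices index=created start=100 limit=50
```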