
Move the historical bucket tracker to 32 unequal sized buckets #2176

Merged

Conversation

TheBlueMatt
Collaborator

Currently we store our historical estimates of channel liquidity in
eight evenly-sized buckets, each representing a full octile of the
channel's total capacity. This lacks precision, especially at the
edges of channels where liquidity is expected to lie.

To mitigate this, we'd originally checked if a payment lies within
a bucket by comparing it to a sliding scale of 64ths of the
channel's capacity. This allowed us to assign penalties to payments
that fall anywhere above the bottom 64th and below the top 64th
of a channel.

However, this still lacks material precision - on a 1 BTC channel
we could only consider failures for HTLCs above 1.5 million sats.
With today's lightning usage often including 1-100 sat payments in
tips, this is a rather significant lack of precision.

Here we rip out the existing buckets and replace them with 32
unequal sized buckets. This allows us to focus our precision at
the edges of a channel (where the liquidity is likely to lie, and
where precision helps the most).

We set the size of the edge buckets to 1/16,384th of the channel,
with the size increasing exponentially until it approaches the
inner buckets. For backwards compatibility, the buckets divide
evenly into octiles, allowing us to convert the existing buckets
into the new ones cleanly.
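
To make this concrete, here is a minimal sketch of what such a boundary table can look like. The values are illustrative, chosen to satisfy the constraints above (one-tick edge buckets, symmetry, clean octile alignment), and not necessarily the exact table this PR ships:

```rust
// Sketch: 33 boundaries define 32 unequal-sized buckets, measured in
// 1/16,384ths of the channel's capacity ("ticks").
const BUCKET_START_POS: [u16; 33] = [
    0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 3072, 4096, 6144,
    8192, 10240, 12288, 13312, 14336, 15360, 15872, 16128, 16256, 16320,
    16352, 16368, 16376, 16380, 16382, 16383, 16384,
];

fn main() {
    // Edge buckets are a single tick (1/16,384th of capacity) wide.
    assert_eq!(BUCKET_START_POS[1] - BUCKET_START_POS[0], 1);
    // Boundaries are symmetric around the channel midpoint, so the top of
    // the channel gets the same precision as the bottom.
    for i in 0..33 {
        assert_eq!(BUCKET_START_POS[i], 16_384 - BUCKET_START_POS[32 - i]);
    }
    // Every old octile boundary (a multiple of 16_384 / 8 = 2_048 ticks)
    // lands exactly on a new bucket boundary, so legacy 8-bucket data
    // converts cleanly.
    for oct in 0u16..=8 {
        assert!(BUCKET_START_POS.contains(&(oct * 2_048)));
    }
}
```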

This allows us to consider HTLCs down to 6,000 sats for 1 BTC
channels. In order to avoid failing to penalize channels which have
always failed, we drop the sliding scale for comparisons and simply
check if the payment is above the minimum bucket we're analyzing and
below or in the maximum one. This generates somewhat more
pessimistic scores, but fixes the lower bound where we suddenly
assign a 0% failure probability.
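
As an illustration, the simplified comparison reduces to something like the following sketch (a hypothetical helper, not the upstream code; positions are in 1/16,384ths, i.e. "ticks", of the channel's capacity):

```rust
/// Sketch: estimated success probability for a payment at `payment_pos`
/// ticks, given a historical liquidity range of [min_pos, max_pos) ticks.
fn success_probability(payment_pos: u16, min_pos: u16, max_pos: u16) -> f64 {
    if payment_pos < min_pos {
        1.0 // below the learned minimum: assume the payment fits
    } else if payment_pos >= max_pos {
        0.0 // at or above the learned maximum: assume it cannot fit
    } else {
        // In between, interpolate linearly across the learned range.
        f64::from(max_pos - payment_pos) / f64::from(max_pos - min_pos)
    }
}

fn main() {
    // With no information (bounds [0, 16_384)), a mid-channel payment
    // scores roughly 50%.
    assert!((success_probability(8_192, 0, 16_384) - 0.5).abs() < 1e-9);
    // A payment above a range that has only ever failed scores 0%, so
    // always-failing channels keep being penalized.
    assert_eq!(success_probability(100, 0, 64), 0.0);
}
```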

While this does represent a regression in routing performance, it
is not a substantial one. Across the three routing benchmarks which
use the ProbabilisticScorer, we see a 5-8% performance reduction,
with lots of noise.

@codecov-commenter

codecov-commenter commented Apr 12, 2023

Codecov Report

Patch coverage: 92.30% and project coverage change: +0.02% 🎉

Comparison is base (3dffe54) 90.56% compared to head (06333e1) 90.59%.
Report is 130 commits behind head on main.

❗ Current head 06333e1 differs from pull request most recent head 9437642. Consider uploading reports for the commit 9437642 to get more accurate results


Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2176      +/-   ##
==========================================
+ Coverage   90.56%   90.59%   +0.02%     
==========================================
  Files         109      112       +3     
  Lines       57304    58984    +1680     
  Branches    57304    58984    +1680     
==========================================
+ Hits        51899    53434    +1535     
- Misses       5405     5550     +145     
Files Changed Coverage Δ
lightning/src/routing/scoring.rs 93.20% <91.45%> (-1.08%) ⬇️
lightning/src/util/ser.rs 85.69% <100.00%> (+0.25%) ⬆️

... and 49 files with indirect coverage changes


@TheBlueMatt TheBlueMatt force-pushed the 2023-04-expose-success-prob branch 3 times, most recently from c526c15 to afdddc9 on April 16, 2023 at 21:58
@TheBlueMatt TheBlueMatt force-pushed the 2023-04-expose-success-prob branch 7 times, most recently from 5924744 to 3251b10 on April 24, 2023 at 05:26
@tnull tnull self-requested a review April 24, 2023 19:38
@TheBlueMatt TheBlueMatt force-pushed the 2023-04-expose-success-prob branch 2 times, most recently from e8959ba to adece2c on April 24, 2023 at 22:42
Contributor

@tnull tnull left a comment


Did a brief initial scroll-through and commented what stood out.

Given the PR is quite big, would it be possible to break out some of the preparatory changes to their own sub-PR(s)? As commented, the switch to criterion in particular may merit its own PR, while some of the other prep work could be another PR?

use crate::ln::features::InvoiceFeatures;
use crate::routing::gossip::NetworkGraph;
use crate::util::config::UserConfig;
use crate::util::ser::ReadableArgs;
Contributor


nit: Unused import.

Collaborator Author


My rustc doesn't think so? And it's how NetworkGraph is read.

let mut routes = Vec::new();

'load_endpoints: for _ in 0..route_count * 3 /2 {
for _ in 0..route_count * 3 / 2 {
Contributor


nit: While we're here, can we use some variables for these numbers to make the arithmetic a bit more readable?

Collaborator Author


Added a comment instead, as I'm not sure variables help explain why.

/// The parameters used when scoring channels for routing. These can be tweaked at runtime
/// between routing calls as necessary. Note that changing decay parameters may or may not
/// impact channels which were last scored prior to the new half-life.
pub params: ProbabilisticScoringParameters,
Contributor


Given that we already have other set_* methods on ProbabilisticScorerUsingTime, should this rather be exposed via a setter?

Collaborator Author


We should just remove the setters; it makes little sense to have them when we have no invalid states to check.

@TheBlueMatt
Collaborator Author

Pulled out the first step in #2235

@TheBlueMatt
Collaborator Author

Rebased after #2235 landed.

@tnull
Contributor

tnull commented May 24, 2023

> Rebased after #2235 landed.

Ah, excuse the delay, I was fooled by the "blocked on dependent PR" label. Will see to getting back to this in the next few days!

Contributor

@alecchendev alecchendev left a comment


Nice. I probably have to do another pass to think more on some of the actual calculations in calculate_success_probability_times_billion; otherwise things made sense, I just had a couple of questions.

#[inline]
fn amount_to_pos(amount_msat: u64, capacity_msat: u64) -> u16 {
    if amount_msat < u64::max_value() / (MIN_SIZE_BUCKETS as u64) {
        (amount_msat * (MIN_SIZE_BUCKETS as u64) / capacity_msat.saturating_add(1))
Contributor


Are all these .saturating_add(1)s in case of divide-by-zero?

Collaborator Author


Indeed!
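
For the record, a tiny standalone illustration of the guard (assumed intent, per the exchange above):

```rust
fn main() {
    // A zero capacity would make the divisor zero; saturating_add(1)
    // keeps the division well-defined without an extra branch.
    let capacity_msat: u64 = 0;
    let amount_msat: u64 = 1_000;
    let pos = amount_msat * 16_384 / capacity_msat.saturating_add(1);
    assert_eq!(pos, 16_384_000); // a finite result instead of a panic
}
```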

Comment on lines +1102 to +1103
/// Updates the history buckets for this channel. Because the history buckets track what we now
/// know about the channel's state *prior to our payment* (i.e. what we assume is "steady
/// state"), we allow the caller to set an offset applied to our liquidity bounds which
Contributor


It's not super clear to me why we can assume channels have some "steady state" liquidity?

Collaborator Author


It mostly stems from the fact that most nodes on the network are primarily "sources" or "sinks" - payment flows generally go in one direction over any given channel because they usually sit between one (set of) source(s) and sink(s). In practice node operators observe that channels are usually (mostly) saturated one way or another. Our goal with the historical tracker is to identify such channels and use that memory when scoring.

Contributor


Huh interesting, I think this makes sense then 👍

Contributor


> It mostly stems from the fact that most nodes on the network are primarily "sources" or "sinks" - payment flows generally go in one direction over any given channel because they usually sit between one (set of) source(s) and sink(s). In practice node operators observe that channels are usually (mostly) saturated one way or another. Our goal with the historical tracker is to identify such channels and use that memory when scoring.

Note that this is still counterintuitive though: when we assume channel edges have a 'typical' liquidity behavior it would mean they have a 'steady state' w.r.t. channel drain, i.e., we might assume the flow in one direction over time to be 'steady' (even though that already might be a stretch, especially on low-traffic edges). However, here we try to extrapolate future liquidity bounds from historically observed liquidity bounds, which, assuming there is some 'typical' traffic, is exactly not the case. Except of course for channels that are always depleted in one direction, i.e., where the channel drain can't induce further changes in liquidity.

Collaborator Author


> Except of course for channels that are always depleted in one direction, i.e., where the channel drain can't induce further changes in liquidity.

It's mostly this, tbh :)

But, in general, I think I see it still makes some sense - if we assume the flow on a channel is usually in one direction, with occasional payments in the other direction, the anti-flow-direction balance of the channel is usually 0, with occasional jumps up to some value that's in the lower quarter or tenth or so of the channel's liquidity. For our purposes, what we'd then like to know is how often one of these occasional payments came through (and hasn't yet been depleted), allowing us to pay in the "anti-flow" direction. That would really translate into just keeping a history of how often we succeeded/failed at payments at a given value, which is kinda-sorta what we're doing here, if you squint enough. We do let the liquidity bounds we learn over time get included, though, which I'm not sure is exactly right.

@TheBlueMatt TheBlueMatt added this to the 0.0.117 milestone Jul 8, 2023
@TheBlueMatt
Collaborator Author

Tagging this 117 because we really need to improve our scoring success rate.

@valentinewallace
Contributor

Can we split the first 5 commits into their own PR?

@TheBlueMatt TheBlueMatt force-pushed the 2023-04-expose-success-prob branch 2 times, most recently from 93ae026 to 1e40b25 on September 8, 2023 at 17:54
@tnull
Contributor

tnull commented Sep 11, 2023

FWIW, second round of benchmarks (n=10000, Intel(R) Xeon(R) CPU E3-1226 v3 @ 3.30GHz, Linux):

     Running benches/bench.rs (target/release/deps/bench-4490af036b0d07bf)
Benchmarking generate_routes_with_zero_penalty_scorer
Benchmarking generate_routes_with_zero_penalty_scorer: Warming up for 3.0000 s

Warning: Unable to complete 10000 samples in 5.0s. You may wish to increase target time to 1152.5s, or reduce sample count to 40.
Benchmarking generate_routes_with_zero_penalty_scorer: Collecting 10000 samples in estimated 1152.5 s (10k iterations)
Benchmarking generate_routes_with_zero_penalty_scorer: Analyzing
generate_routes_with_zero_penalty_scorer
                        time:   [125.87 ms 126.77 ms 127.68 ms]
                        change: [-6.5533% -5.6046% -4.6198%] (p = 0.00 < 0.05)
                        Performance has improved.

Benchmarking generate_mpp_routes_with_zero_penalty_scorer
Benchmarking generate_mpp_routes_with_zero_penalty_scorer: Warming up for 3.0000 s

Warning: Unable to complete 10000 samples in 5.0s. You may wish to increase target time to 1288.6s, or reduce sample count to 30.
Benchmarking generate_mpp_routes_with_zero_penalty_scorer: Collecting 10000 samples in estimated 1288.6 s (10k iterations)
Benchmarking generate_mpp_routes_with_zero_penalty_scorer: Analyzing
generate_mpp_routes_with_zero_penalty_scorer
                        time:   [123.99 ms 125.18 ms 126.37 ms]
                        change: [-1.7004% -0.3152% +1.0714%] (p = 0.65 > 0.05)
                        No change in performance detected.
Found 584 outliers among 10000 measurements (5.84%)
  529 (5.29%) high mild
  55 (0.55%) high severe

Benchmarking generate_routes_with_probabilistic_scorer
Benchmarking generate_routes_with_probabilistic_scorer: Warming up for 3.0000 s

Warning: Unable to complete 10000 samples in 5.0s. You may wish to increase target time to 1987.9s, or reduce sample count to 20.
Benchmarking generate_routes_with_probabilistic_scorer: Collecting 10000 samples in estimated 1987.9 s (10k iterations)
Benchmarking generate_routes_with_probabilistic_scorer: Analyzing
generate_routes_with_probabilistic_scorer
                        time:   [200.45 ms 201.17 ms 201.90 ms]
                        change: [+13.883% +14.382% +14.834%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 400 outliers among 10000 measurements (4.00%)
  400 (4.00%) low mild

Benchmarking generate_mpp_routes_with_probabilistic_scorer
Benchmarking generate_mpp_routes_with_probabilistic_scorer: Warming up for 3.0000 s

Warning: Unable to complete 10000 samples in 5.0s. You may wish to increase target time to 2269.2s, or reduce sample count to 20.
Benchmarking generate_mpp_routes_with_probabilistic_scorer: Collecting 10000 samples in estimated 2269.2 s (10k iterations)
Benchmarking generate_mpp_routes_with_probabilistic_scorer: Analyzing
generate_mpp_routes_with_probabilistic_scorer
                        time:   [220.79 ms 221.19 ms 221.59 ms]
                        change: [+25.683% +26.076% +26.484%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 801 outliers among 10000 measurements (8.01%)
  200 (2.00%) low severe
  400 (4.00%) low mild
  1 (0.01%) high mild
  200 (2.00%) high severe

Benchmarking generate_large_mpp_routes_with_probabilistic_scorer
Benchmarking generate_large_mpp_routes_with_probabilistic_scorer: Warming up for 3.0000 s

Warning: Unable to complete 10000 samples in 5.0s. You may wish to increase target time to 6397.4s, or reduce sample count to 10.
Benchmarking generate_large_mpp_routes_with_probabilistic_scorer: Collecting 10000 samples in estimated 6397.4 s (10k iterations)
Benchmarking generate_large_mpp_routes_with_probabilistic_scorer: Analyzing
generate_large_mpp_routes_with_probabilistic_scorer
                        time:   [561.58 ms 569.23 ms 576.95 ms]
                        change: [+35.897% +38.434% +41.113%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 400 outliers among 10000 measurements (4.00%)
  400 (4.00%) high mild

@TheBlueMatt
Collaborator Author

> FWIW, second round of benchmarks (n=10000, Intel(R) Xeon(R) CPU E3-1226 v3 @ 3.30GHz, Linux):

Thanks. Somehow I can't actually run benchmarks locally because fetching time is now hitting the slowpath and syscalling on the machine I was gonna use. We really need to stop fetching time in the middle of our hot path 😭

@TheBlueMatt
Collaborator Author

Squashed without further changes.

debug_assert!(payment_amt_64th_bucket <= 64);
if payment_amt_64th_bucket >= 64 { return None; }
let payment_pos = amount_to_pos(amount_msat, capacity_msat);
if payment_pos >= POSITION_TICKS { return None; }

Contributor


Explaining why we're walking the min/max combinations in a paragraph or two would be a game changer, as would accounting for confusion w.r.t. the unequal bucket sizes.

Collaborator Author


Hmm, I think the above comment kinda does describe that. I went ahead and reworded it, though; let me know if it's clearer.

Contributor


I think the most confusing aspect that requires a more approachable explanation is why in the combination determinations, each min bucket is used once, but max buckets are used a decrementing number of times (depending on the loop's min bucket id).

Contributor


I did like the long text though, I was merely advocating to make it even longer :)

Collaborator Author


> I think the most confusing aspect that requires a more approachable explanation is why in the combination determinations, each min bucket is used once, but max buckets are used a decrementing number of times (depending on the loop's min bucket id).

That is explained, though (a max below a min is an invalid state). Not sure what else to say there.

> I did like the long text though, I was merely advocating to make it even longer :)

Well, the long text was also explaining something that isn't true anymore, and just having more text definitely doesn't always make it easier to explain :p
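
For readers following along, a standalone sketch of the walk under discussion. It assumes, as described, that min buckets index up from the bottom of the channel, max buckets index down from the top, and any pair whose implied max bound falls below the min bound is skipped:

```rust
const BUCKETS: usize = 32;

fn main() {
    // Each successive min bucket pairs with one fewer max bucket: a max
    // bound below the min bound is an invalid state and is never visited.
    let mut pairs = 0;
    for min_idx in 0..BUCKETS {
        // Only the first (BUCKETS - min_idx) max buckets keep the implied
        // max bound at or above the min bound.
        for _max_idx in 0..(BUCKETS - min_idx) {
            pairs += 1;
        }
    }
    assert_eq!(pairs, BUCKETS * (BUCKETS + 1) / 2); // 528 valid pairs
}
```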

@arik-so
Contributor

arik-so commented Sep 14, 2023

I think I'm fine with the comments given the extra context we discussed over chat. Feel free to squash.

@TheBlueMatt
Collaborator Author

Squashed without further changes.

@arik-so
Contributor

arik-so commented Sep 14, 2023

hm, looks like CI's failing

@TheBlueMatt
Collaborator Author

Unrelated, should be fixed by #2577

arik-so
arik-so previously approved these changes Sep 15, 2023
tnull
tnull previously approved these changes Sep 15, 2023
Contributor

@tnull tnull left a comment


LGTM, I think, mod some non-blocking nits.

Happy to have the cleanups addressed in a follow-up.

In the next commit we'll need serialization for `[u16; 32]`, which
we add here, unifying it with the `[u8; *]` serialization macro.
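
As a rough sketch of what such a unification can look like (hypothetical Writeable/Readable stand-ins; the real traits and macro in lightning/src/util/ser.rs differ in detail):

```rust
use std::io::{self, Read, Write};

// Hypothetical stand-ins for the crate's serialization traits.
trait Writeable { fn write<W: Write>(&self, w: &mut W) -> io::Result<()>; }
trait Readable: Sized { fn read<R: Read>(r: &mut R) -> io::Result<Self>; }

// One macro covers fixed-size arrays of any integer width by writing each
// element big-endian; `[u8; N]` is then just the degenerate one-byte case.
// Requires `$ty: Copy + Default` for the array initialization below.
macro_rules! impl_array {
    ($size:expr, $ty:ty) => {
        impl Writeable for [$ty; $size] {
            fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
                for v in self.iter() { w.write_all(&v.to_be_bytes())?; }
                Ok(())
            }
        }
        impl Readable for [$ty; $size] {
            fn read<R: Read>(r: &mut R) -> io::Result<Self> {
                let mut res = [<$ty>::default(); $size];
                for v in res.iter_mut() {
                    let mut buf = [0u8; core::mem::size_of::<$ty>()];
                    r.read_exact(&mut buf)?;
                    *v = <$ty>::from_be_bytes(buf);
                }
                Ok(res)
            }
        }
    };
}

impl_array!(32, u16); // the shape the next commit needs

fn main() -> io::Result<()> {
    let arr = [0xABCDu16; 32];
    let mut buf = Vec::new();
    arr.write(&mut buf)?;
    assert_eq!(buf.len(), 64); // 32 elements * 2 bytes each
    let back = <[u16; 32]>::read(&mut buf.as_slice())?;
    assert_eq!(back, arr);
    Ok(())
}
```
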
Currently we store our historical estimates of channel liquidity in
eight evenly-sized buckets, each representing a full octile of the
channel's total capacity. This lacks precision, especially at the
edges of channels where liquidity is expected to lie.

To mitigate this, we'd originally checked if a payment lies within
a bucket by comparing it to a sliding scale of 64ths of the
channel's capacity. This allowed us to assign penalties to payments
that fall anywhere above the bottom 64th and below the top 64th
of a channel.

However, this still lacks material precision - on a 1 BTC channel
we could only consider failures for HTLCs above 1.5 million sats.
With today's lightning usage often including 1-100 sat payments in
tips, this is a rather significant lack of precision.

Here we rip out the existing buckets and replace them with 32
*unequal* sized buckets. This allows us to focus our precision at
the edges of a channel (where the liquidity is likely to lie, and
where precision helps the most).

We set the size of the edge buckets to 1/16,384th of the channel,
with the size increasing exponentially until it approaches the
inner buckets. For backwards compatibility, the buckets divide
evenly into octiles, allowing us to convert the existing buckets
into the new ones cleanly.

This allows us to consider HTLCs down to 6,000 sats for 1 BTC
channels. In order to avoid failing to penalize channels which have
always failed, we drop the sliding scale for comparisons and simply
check if the payment is above the minimum bucket we're analyzing and
below *or in* the maximum one. This generates somewhat more
pessimistic scores, but fixes the lower bound where we suddenly
assign a 0% failure probability.

While this does represent a regression in routing performance, in
some cases the impact of not having to examine as many nodes
dominates, leading to a performance increase.

On a Xeon E3-1220 v5, the `large_mpp_routes` benchmark shows a 15%
performance increase, while the more stable benchmarks show an 8%
and 15% performance regression.
The lower-bound of the scoring history buckets generally never gets
used - if we try to send a payment and it fails, we don't learn
a new lower-bound for the liquidity of a channel, and if we
successfully send a payment we only learn a lower-bound that
applied *before* we sent the payment, not after it completed.

If we assume channels have some "steady-state" liquidity, then
tracking our liquidity estimates *after* a payment doesn't really
make sense - we're not super likely to make a second payment across
the same channel immediately (or, if we are, we can use our
un-decayed liquidity estimates for that). By the time we do go to
use the same channel again, we'd assume that it's back at its
"steady-state" and the impacts of our payment have been lost.

To combat both of these effects, here we "subtract" the impact of
any just-successful payments from our liquidity estimates prior to
updating the historical buckets.
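
Sketched concretely (hypothetical names; a minimal illustration of the offset described above, not the upstream implementation):

```rust
struct LiquidityBounds { min_msat: u64, max_msat: u64 }

// After a successful payment our live estimates have dropped by the amount
// sent; adding it back approximates the pre-payment "steady state" that
// the historical buckets are meant to track.
fn bounds_for_history(live: &LiquidityBounds, just_sent_msat: u64) -> LiquidityBounds {
    LiquidityBounds {
        min_msat: live.min_msat.saturating_add(just_sent_msat),
        max_msat: live.max_msat.saturating_add(just_sent_msat),
    }
}

fn main() {
    // We just sent 100k msat successfully; live bounds are now [0, 500k].
    let live = LiquidityBounds { min_msat: 0, max_msat: 500_000 };
    let hist = bounds_for_history(&live, 100_000);
    // History records the state *prior* to our payment: [100k, 600k].
    assert_eq!((hist.min_msat, hist.max_msat), (100_000, 600_000));
}
```
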
Points in the 0th minimum bucket either indicate we sent a payment
which is < 1/16,384th of the channel's capacity or, more likely,
we failed to send a payment. In either case, averaging the success
probability across the full range of upper-bounds doesn't make a
whole lot of sense - if we've never managed to send a "real"
payment over a channel, we should be considering it quite poor.

To address this, we special-case the 0th minimum bucket and only
look at the largest-offset max bucket when calculating the success
probability.
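
A sketch of that special case (hypothetical shapes; it assumes max buckets are indexed by increasing offset from the channel's top, so a larger index means a lower, more pessimistic upper bound):

```rust
const BUCKETS: usize = 32;

// For points in the 0th min bucket, attribute all max-bucket points to the
// single largest-offset max bucket holding any data, rather than averaging
// the success probability across every max bucket.
fn zeroth_min_bucket_target(max_buckets: &[u16; BUCKETS]) -> Option<(usize, u32)> {
    let mut highest_idx = None;
    let mut total_points: u32 = 0;
    for (idx, pts) in max_buckets.iter().enumerate() {
        if *pts > 0 {
            highest_idx = Some(idx);
            total_points += u32::from(*pts);
        }
    }
    highest_idx.map(|idx| (idx, total_points))
}

fn main() {
    let mut max_buckets = [0u16; BUCKETS];
    max_buckets[3] = 5;  // some data near the channel's top
    max_buckets[10] = 2; // and some at a lower upper bound
    // All 7 points get scored against bucket 10, keeping a channel we've
    // never pushed a "real" payment through scored as quite poor.
    assert_eq!(zeroth_min_bucket_target(&max_buckets), Some((10, 7)));
}
```
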
`historical_estimated_channel_liquidity_probabilities` previously
decayed to `Some(([0; 8], [0; 8]))`. This was thought to be useful
in that it allowed identification of cases where data was previously
available but is now decayed away vs cases where data was never
available. However, with the introduction of
`historical_estimated_payment_success_probability` (which uses the
existing scoring routines so will decay to `None`) this is
unnecessarily confusing.

Given that data which has decayed to zero will not be used anyway,
there's little reason to keep the old behavior, and we now decay to
`None`.

We also take this opportunity to split the overloaded
`get_decayed_buckets`, removing unnecessary code during scoring.
Scoring buckets are stored as fixed-point ints, with a 5-bit
fractional part (i.e. a value of 1.0 is stored as "32"). Now that
we also have 32 buckets, this leads to the codebase having many
references to 32 which could reasonably be confused for each other.

Thus, we add a constant here for the value 1.0 in our fixed-point
scheme.
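
A short sketch of the convention (the constant's name here is illustrative):

```rust
// 1.0 in the buckets' fixed-point scheme: a 5-bit fractional part means
// the stored value 32 represents 1.0. Naming it keeps this "32" from being
// confused with the 32-bucket count.
const BUCKET_FIXED_POINT_ONE: u16 = 32;

fn main() {
    // 0.5 is stored as 16.
    let half = BUCKET_FIXED_POINT_ONE / 2;
    assert_eq!(half, 16);
    assert_eq!(half as f32 / BUCKET_FIXED_POINT_ONE as f32, 0.5);
    // One decay halving is a single right-shift in this representation.
    assert_eq!(BUCKET_FIXED_POINT_ONE >> 1, half);
}
```
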
@TheBlueMatt
Collaborator Author

Went ahead and addressed @tnull's comments because there's a lot there and a few should be addressed here -

$ git diff-tree -U2 c48dc85bd 94376424c
diff --git a/lightning/src/routing/scoring.rs b/lightning/src/routing/scoring.rs
index 95affe080..5fdbf9ae3 100644
--- a/lightning/src/routing/scoring.rs
+++ b/lightning/src/routing/scoring.rs
@@ -707,14 +707,8 @@ impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> ProbabilisticScorerU
 						let dir_liq = liq.as_directed(source, target, 0, amt, self.decay_params);

-						let decayed_buckets = dir_liq.liquidity_history
+						let (min_buckets, max_buckets) = dir_liq.liquidity_history
 							.get_decayed_buckets(now, *dir_liq.last_updated,
-								self.decay_params.historical_no_updates_half_life);
-
-						let (min_buckets, max_buckets, _, _) =
-							if let Some(buckets) = decayed_buckets { buckets } else {
-								// If the buckets, once decayed, end up being zero, print them out
-								// as zeros.
-								([0; 32], [0; 32], 0, 0)
-							};
+								self.decay_params.historical_no_updates_half_life)
+							.unwrap_or(([0; 32], [0; 32]));

 						log_debug!(self.logger, core::concat!(
@@ -809,5 +803,5 @@ impl<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> ProbabilisticScorerU
 					let dir_liq = liq.as_directed(source, target, 0, amt, self.decay_params);

-					let (min_buckets, mut max_buckets, valid_points, required_decays) =
+					let (min_buckets, mut max_buckets) =
 						dir_liq.liquidity_history.get_decayed_buckets(
 							dir_liq.now, *dir_liq.last_updated,
@@ -1753,7 +1747,17 @@ mod bucketed_history {

 	impl<D: Deref<Target = HistoricalBucketRangeTracker>> HistoricalMinMaxBuckets<D> {
-		#[inline]
 		pub(super) fn get_decayed_buckets<T: Time>(&self, now: T, last_updated: T, half_life: Duration)
-		-> Option<([u16; 32], [u16; 32], u64, u32)> {
+		-> Option<([u16; 32], [u16; 32])> {
+			let (_, required_decays) = self.get_total_valid_points(now, last_updated, half_life)?;
+
+			let mut min_buckets = *self.min_liquidity_offset_history;
+			min_buckets.time_decay_data(required_decays);
+			let mut max_buckets = *self.max_liquidity_offset_history;
+			max_buckets.time_decay_data(required_decays);
+			Some((min_buckets.buckets, max_buckets.buckets))
+		}
+		#[inline]
+		pub(super) fn get_total_valid_points<T: Time>(&self, now: T, last_updated: T, half_life: Duration)
+		-> Option<(u64, u32)> {
 			let required_decays = now.duration_since(last_updated).as_secs()
 				.checked_div(half_life.as_secs())
@@ -1774,9 +1778,5 @@ mod bucketed_history {
 			}

-			let mut min_buckets = *self.min_liquidity_offset_history;
-			min_buckets.time_decay_data(required_decays);
-			let mut max_buckets = *self.max_liquidity_offset_history;
-			max_buckets.time_decay_data(required_decays);
-			Some((min_buckets.buckets, max_buckets.buckets, total_valid_points_tracked, required_decays))
+			Some((total_valid_points_tracked, required_decays))
 		}

@@ -1797,6 +1797,6 @@ mod bucketed_history {
 			// Check if all our buckets are zero, once decayed and treat it as if we had no data. We
 			// don't actually use the decayed buckets, though, as that would lose precision.
-			let (decayed_min_buckets, decayed_max_buckets, total_valid_points_tracked, required_decays)
-				= self.get_decayed_buckets(now, last_updated, half_life)?;
+			let (total_valid_points_tracked, _)
+				= self.get_total_valid_points(now, last_updated, half_life)?;

 			let mut cumulative_success_prob_times_billion = 0;
@@ -3276,5 +3276,5 @@ mod tests {

 		let mut amount_msat = 10_000_000;
-		let mut usage = ChannelUsage {
+		let usage = ChannelUsage {
 			amount_msat,
 			inflight_htlc_msat: 0,
diff --git a/lightning/src/util/ser.rs b/lightning/src/util/ser.rs
index 677daeca5..af4de88a1 100644
--- a/lightning/src/util/ser.rs
+++ b/lightning/src/util/ser.rs
@@ -554,5 +554,4 @@ impl Readable for bool {
 }

-// u8 arrays
 macro_rules! impl_array {
 	($size:expr, $ty: ty) => (
$ 

@TheBlueMatt TheBlueMatt merged commit 6d5c5ba into lightningdevkit:main Sep 15, 2023
10 of 14 checks passed