feat(node+wallet): estimate transaction priority (aka fees suggestion) #2261
Conversation
CLI

Default pretty-print mode:

$ witnet node priority
╔══════════════════════════════════════════════════════════╗
║ TRANSACTION PRIORITY ESTIMATION REPORT ║
╠══════════════════════════════════════════════════════════╣
║ Data request transactions ║
╟──────────┬──────────────────┬────────────────────────────║
║ Tier │ Priority │ Time-to-block ║
╟──────────┼──────────────────┼────────────────────────────║
║ Stinky │ 0 │ up to 496 epochs ║
║ Low │ 766 │ around 4 epochs ║
║ Medium │ 1005 │ around 4 epochs ║
║ High │ 1275 │ around 4 epochs ║
║ Opulent │ 6922 │ less than 2 epochs ║
╠══════════════════════════════════════════════════════════╣
║ Value transfer transactions ║
╟──────────┬──────────────────┬────────────────────────────║
║ Tier │ Priority │ Time-to-block ║
╟──────────┼──────────────────┼────────────────────────────║
║ Stinky │ 0 │ up to 496 epochs ║
║ Low │ 6765 │ around 2 epochs ║
║ Medium │ 7959 │ around 2 epochs ║
║ High │ 9153 │ around 2 epochs ║
║ Opulent │ 11000 │ less than 2 epochs ║
╚══════════════════════════════════════════════════════════╝

JSON mode (using $ witnet node priority --json):

{"jsonrpc":"2.0","result":{"drt_high":{"priority":1275,"time_to_block":{"Around":4}},"drt_low":{"priority":766,"time_to_block":{"Around":4}},"drt_medium":{"priority":1005,"time_to_block":{"Around":4}},"drt_opulent":{"priority":6922,"time_to_block":{"LessThan":2}},"drt_stinky":{"priority":0,"time_to_block":{"UpTo":496}},"vtt_high":{"priority":9153,"time_to_block":{"Around":2}},"vtt_low":{"priority":6765,"time_to_block":{"Around":2}},"vtt_medium":{"priority":7959,"time_to_block":{"Around":2}},"vtt_opulent":{"priority":11000,"time_to_block":{"LessThan":2}},"vtt_stinky":{"priority":0,"time_to_block":{"UpTo":496}}},"id":"1"}
For future reference, this is the JSON response (probably helps when implementing this in a client such as Sheikah, the upcoming light wallet, or the block explorer): {
"jsonrpc":"2.0",
"result":{
"drt_high":{
"priority":1275,
"time_to_block":{
"Around":4
}
},
"drt_low":{
"priority":766,
"time_to_block":{
"Around":4
}
},
"drt_medium":{
"priority":1005,
"time_to_block":{
"Around":4
}
},
"drt_opulent":{
"priority":6922,
"time_to_block":{
"LessThan":2
}
},
"drt_stinky":{
"priority":0,
"time_to_block":{
"UpTo":496
}
},
"vtt_high":{
"priority":9153,
"time_to_block":{
"Around":2
}
},
"vtt_low":{
"priority":6765,
"time_to_block":{
"Around":2
}
},
"vtt_medium":{
"priority":7959,
"time_to_block":{
"Around":2
}
},
"vtt_opulent":{
"priority":11000,
"time_to_block":{
"LessThan":2
}
},
"vtt_stinky":{
"priority":0,
"time_to_block":{
"UpTo":496
}
}
},
"id":"1"
}
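A client consuming this response (Sheikah, the upcoming light wallet, or the block explorer) has to handle the externally tagged time_to_block variants (Around, LessThan, UpTo). A minimal sketch, assuming only the variant names visible in the JSON above (the describe helper and this mirror enum are hypothetical), that maps them back to the strings the CLI table prints:

```rust
/// Hypothetical client-side mirror of the time_to_block variants
/// seen in the JSON response above.
#[derive(Debug, Clone, Copy)]
enum TimeToBlock {
    Around(u64),
    LessThan(u64),
    UpTo(u64),
}

/// Render a variant as the human-readable string used by the CLI table.
fn describe(ttb: TimeToBlock) -> String {
    match ttb {
        TimeToBlock::Around(e) => format!("around {} epochs", e),
        TimeToBlock::LessThan(e) => format!("less than {} epochs", e),
        TimeToBlock::UpTo(e) => format!("up to {} epochs", e),
    }
}

fn main() {
    assert_eq!(describe(TimeToBlock::Around(4)), "around 4 epochs");
    assert_eq!(describe(TimeToBlock::UpTo(496)), "up to 496 epochs");
    println!("{}", describe(TimeToBlock::LessThan(2)));
}
```

In a real client this enum would typically be deserialized directly from the externally tagged JSON objects like {"Around":4}.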
A couple of remarks / questions:
Please read the big warning above. That explains why they look that high 😉
That's not correct. DRTs can stay in the mempool for as long as they want before they enter commitment stage. This is all about time-to-block. Nothing to do with the data request resolution workflow itself.
In my experience, transactions with 0 fees are mined after a while because, as you suggested, there are still some miners who actively set the minimum to 0. In any case, the minimum priority suggested for the
Maybe this is a common misconception because we have only seldom hit the DR block space limit? 🤔
Oops.
Interesting, that's news to me. I actually thought that if they were not picked up and resolved within four blocks, they were invalidated and you'd get an
Think about this: before the request enters a block, how would you track the origin of that 4-epoch limit? Generally speaking, the protocol can only enforce limitations, penalties, etc. on stuff that it can deterministically derive from the chain state. That's not the case for transactions in the mempool, which doesn't exist from a protocol standpoint 😬

As I recently explained to other devs in the community, this is the same reason why there is no such thing as a "tally fee". The need for a tally (and its content) can be deterministically derived from the chain state. Hence, there's a rule that says that if a block doesn't contain a tally that was supposed to be there, it's invalid. With that rule in place, there's simply no need to use a fee to convince the miner to include the tally 🤷♂️
Nodes could remove "stale data requests" from their local mempool 4 epochs after they entered if they were not picked up, but you are right that this would not be a deterministic system.
/// Transaction priority engine
priority_engine: PriorityEngine,
Since this field is outside of ChainState, it will not be persisted. This means that upon starting the node it will be empty, and also after a chain rollback it won't roll back the priority data.
In the beginning I put it in ChainState, only to realize that it wouldn't be a great idea because we would most likely need migrations. Then, I moved it to ChainManager and ran away from persisting the engine because:
- PriorityEngine is populated during synchronization, so fresh nodes and nodes that come back from being stopped for a while are in business immediately.
- There's a minimum number of blocks that need to be tracked before the estimator gives a positive result. Before that, the priority method provides insightful feedback telling you to wait X minutes.
- Rollbacks are rare and don't have a significant impact on priority (unless there's a mechanism for re-pushing rolled back transactions into the mempool, I guess?)
// Believe it or not, these `to_string` are needed for proper formatting, hence the
// clippy allow directive above.
Maybe it would be worth investigating why.
So basically, this is the Display impl of the Priority type:
impl fmt::Display for Priority {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:.3}", self.0)
}
}
The problem is that it ignores all the flags passed to the formatter, so {:<8} is handled the same as {}. This is because write! calls f.write_str, and f.write_str writes raw strings, ignoring all the extra flags. So the write macro ignores the flags even if we manually convert the type to a string:
// This is the same as write!(f, "{:.3}", self.0)
let three_decimals = format!("{:.3}", self.0);
write!(f, "{}", three_decimals)
There are ways to manually check the padding and alignment flags, for example using f.align(), but the easiest approach is to forward to the Display implementation of a related type, such as String:
let three_decimals = format!("{:.3}", self.0);
three_decimals.fmt(f)
But I haven't seen any uses of that code pattern; we always use write! or f.write_str to implement Display, and that ignores padding and alignment. So I would just leave it as it is and keep the clippy allow, because it makes sense to convert a type to a string and then change the padding of that string, so clippy is wrong in this case.
@@ -1757,6 +1760,17 @@ pub async fn signaling_info(params: Result<(), jsonrpc_core::Error>) -> JsonRpcR
    }
}
/// Get priority and time-to-block estimations for different priority tiers.
pub async fn priority() -> JsonRpcResult {
This is the way we indicate that a function does not expect any params:
-pub async fn priority() -> JsonRpcResult {
+pub async fn priority(params: Result<(), jsonrpc_core::Error>) -> JsonRpcResult {
+    match params {
+        Ok(()) => (),
+        Err(e) => return Err(e),
+    };
Because without this logic, it is not an error to accidentally create a request with some unexpected params, and then if we update this method to add some parameters it may break code that should never have worked in the first place.
Wow that now explains the otherwise useless matches in the other methods. Thanks!
The priority command should always output the estimated time in seconds/minutes instead of epochs; we can't expect users to know whether 10 epochs is fast or slow.
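As a rough illustration of that conversion, assuming the mainnet checkpoint period of 45 seconds per epoch (in the real code this should come from the consensus constants, and the helper name is hypothetical):

```rust
// Assumed mainnet checkpoint period; the real value should be read from
// the consensus constants rather than hardcoded.
const SECONDS_PER_EPOCH: u64 = 45;

/// Convert an epoch count into a short human-readable duration.
fn epochs_to_human(epochs: u64) -> String {
    let secs = epochs * SECONDS_PER_EPOCH;
    if secs < 60 {
        format!("{}s", secs)
    } else if secs < 3600 {
        format!("{}m {}s", secs / 60, secs % 60)
    } else {
        format!("{}h {}m", secs / 3600, (secs % 3600) / 60)
    }
}

fn main() {
    assert_eq!(epochs_to_human(2), "1m 30s");   // "less than 2 epochs" tier
    assert_eq!(epochs_to_human(4), "3m 0s");    // "around 4 epochs" tier
    assert_eq!(epochs_to_human(496), "6h 12m"); // "up to 496 epochs" tier
    println!("{}", epochs_to_human(496));
}
```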
I tried to write some unit tests and the estimates are a bit strange. I would expect the fees to always be sorted such that stinky < low < medium, etc., but in this case the result is [1000, 849, 500398, 1149767, 1100000]:

#[test]
fn can_estimate_correctly2() {
// 100 blocks where highest and lowest priorities are 1000000 and 1000
let priorities = vec![
Priorities {
drt_highest: Priority::from_fee_weight(1000000, 1),
drt_lowest: Some(Priority::from_fee_weight(1000, 1)),
vtt_highest: Priority::from_fee_weight(1000000, 1),
vtt_lowest: Some(Priority::from_fee_weight(1000, 1)),
}; 100
];
let engine = PriorityEngine::from_vec(priorities);
let estimate = engine.estimate_priority().unwrap();
let mut vtt_estimates = vec![];
vtt_estimates.push(estimate.vtt_stinky);
vtt_estimates.push(estimate.vtt_low);
vtt_estimates.push(estimate.vtt_medium);
vtt_estimates.push(estimate.vtt_high);
vtt_estimates.push(estimate.vtt_opulent);
let vtt_fees: Vec<_> = vtt_estimates.iter().map(|estimate| estimate.priority.nano_wit).collect();
panic!("{:?}", vtt_fees);
}
This is the output I get on mainnet with commit 4fde7cb
Also, I don't see any way to encode the information that the block still has room for more transactions. When blocks are almost empty (1 or 2 transactions per block, but never 0) I would expect the "low" fee to be near 0, and when blocks are full I would expect "low" to be near the minimum. But the actual estimate is near the minimum in both cases.
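One conceivable way to encode that, purely as an illustration of the remark above (nothing like this exists in the PR; the helper, field names and weights are all made up): scale the observed "low" priority by recent block fullness, so that near-empty blocks drive the suggestion toward zero while full blocks keep it at the observed minimum.

```rust
/// Hypothetical adjustment: weight the observed "low" priority by how
/// full recent blocks were (used weight over maximum block weight).
fn adjusted_low(observed_low: f64, used_weight: u64, max_weight: u64) -> f64 {
    let fullness = used_weight as f64 / max_weight as f64; // in 0.0..=1.0
    observed_low * fullness
}

fn main() {
    // Almost empty blocks: suggestion collapses toward 0.
    assert_eq!(adjusted_low(766.0, 0, 100_000), 0.0);
    // Full blocks: suggestion stays at the observed minimum.
    assert_eq!(adjusted_low(766.0, 100_000, 100_000), 766.0);
    println!("{}", adjusted_low(766.0, 50_000, 100_000));
}
```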
I'm wondering if it would be better UX to fix the estimated time to the most common values (for example, 1 minute, 1 hour, 1 day), and then calculate the required fee to get that. So instead of
How about something like:
Because with the current approach we get some useless estimates, like in the example above, because they have the same expected time. Ensuring that the expected time is different does not guarantee that the fees will be different, but maybe it is easier to understand. In the first example my question would be: "so what is the lowest priority that gives me around 1 minute 30 seconds?"
I like that approach too. Instead of speaking about network conditions, it focuses on the user's expectations, akin to an SLA. Let me prototype this a bit; I think it won't be much of a change.
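The inversion being discussed could be sketched like this, under the assumption that the engine keeps (priority, epochs-until-inclusion) samples: for each fixed target, pick the lowest priority that historically made it into a block within that target. All names and data below are hypothetical.

```rust
/// Given samples of (priority, observed epochs until inclusion), return the
/// lowest priority that was confirmed within the target, if any.
fn priority_for_target(samples: &[(u64, u64)], max_epochs: u64) -> Option<u64> {
    samples
        .iter()
        .filter(|(_, epochs)| *epochs <= max_epochs)
        .map(|(priority, _)| *priority)
        .min()
}

fn main() {
    // (priority, observed epochs until inclusion) — made-up sample data.
    let samples = [(0, 496), (766, 4), (1005, 4), (6922, 1)];
    // Targets expressed in epochs (e.g. ~45 s and ~3 min at 45 s/epoch).
    assert_eq!(priority_for_target(&samples, 1), Some(6922));
    assert_eq!(priority_for_target(&samples, 4), Some(766));
    // No observation ever confirmed instantly: no suggestion for that target.
    assert_eq!(priority_for_target(&samples, 0), None);
    println!("{:?}", priority_for_target(&samples, 4));
}
```

With this shape, the UI could present fixed targets (1 minute, 1 hour, 1 day) and look up one fee per target, directly answering "what is the lowest priority that gives me around X?".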
…t with named fields
The lowest priorities are now used to calculate the range of the buckets, and this is done in a first pass to avoid "misbucketing". Fixes witnet#2273
…nsaction creation
also: add tests for Fee decoding
@tmpolaczyk something to add, sir?
data_structures/Cargo.toml
ordered-float = "3.0"
partial_struct = { path = "../partial_struct" }
pdatastructs = "0.7.0"
This is not used:

-pdatastructs = "0.7.0"
data_structures/Cargo.toml
protobuf = { version = "2.23.0", features = ["with-serde"] }
protobuf-convert = "0.1.1"
-rand = "0.7.3"
+rand = "0.8.5"
rand_distr = "0.4.3"
rand_distr is only used in tests, so it can be moved to dev-dependencies.
It works; only a few comments about dependencies, and then it can be merged.
961747f removes redundant dependencies as suggested by @tmpolaczyk
Tasks
- … debug level
- send, split and join CLI methods for interactive fee choosing with --suggest-fee flag
- sendRequest
- Move PriorityEngine from ChainState to ChainManager
ALL PRIORITY VALUES USED IN THIS FEATURE ARE MULTIPLIED BY 1000 FOR SUB-NANOWIT PRECISION, PROVIDED THAT THE SMALLEST POSSIBLE FEE IS 1 NANOWIT BUT THE LIGHTEST TRANSACTION IS 169 WEIGHT UNITS.

FOR THE SAKE OF UX, THESE VALUES STILL NEED TO BE PRESENTED TO WALLET USERS IN NANOWIT, IDEALLY WITH 1 DECIMAL DIGIT IF THAT DIGIT IS DIFFERENT FROM 0.

Fix #2273