This repository has been archived by the owner on Mar 14, 2023. It is now read-only.

Add example with compact block filters #153

Open
wants to merge 7 commits into master

Conversation

danielabrozzoni
Collaborator

@danielabrozzoni commented Jan 19, 2023

No description provided.

@evanlinjin
Collaborator

I just had a brief look through; nothing obvious jumps out as to why the balance would disappear after a restart. I'll debug it properly later.

@danielabrozzoni
Collaborator Author

Rebased on top of #157

(Still doesn't work though)

@evanlinjin
Collaborator

> Rebased on top of #157
>
> (Still doesn't work though)

Did you also add the fix mentioned in Discord?

@evanlinjin
Collaborator

evanlinjin commented Jan 31, 2023

danielabrozzoni#2

I opened a PR into yours fixing the balance disappearing after restart.

Looking good so far. In my opinion, the next steps for this PR:

Don't hold a mutable reference to the tracker when syncing. We need a structure (imo, like an iterator) where we only grab initial data from the tracker to create it. Then it spits out data which the tracker can consume.

In my head, this thing that the CBF Iterator spits out can look something like this:

pub enum CbfData {
    RelevantBlock(bitcoin::Block),
    DisconnectedBlock(u32), // block height
}
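
And the thing that produces it could be shaped roughly like this (just a sketch with placeholder fields, not a concrete design):

use std::collections::VecDeque;

// Built from a snapshot of the tracker's data, so no mutable borrow of the
// tracker is held while the node runs.
pub struct CbfSync {
    scripts: Vec<bitcoin::Script>,
    last_checkpoint: Option<(u32, bitcoin::BlockHash)>,
    pending: VecDeque<CbfData>, // events translated from the CBF client
}

impl Iterator for CbfSync {
    type Item = CbfData;

    fn next(&mut self) -> Option<CbfData> {
        // The real implementation would poll the CBF client here; this sketch
        // just drains events that have already been queued.
        self.pending.pop_front()
    }
}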

Additionally, we need to somehow figure out how to properly do the stop gap. We need logic that goes "uh oh, we need more scripts for a rescan"!
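
The check itself could be something as simple as this (all names made up):

// Hypothetical helper: `last_active_index` is the highest derivation index
// that matched a filter, `derived_up_to` is the highest index we handed to
// the CBF client. If the unused tail is smaller than the stop gap, we need
// to derive more scripts and rescan.
fn needs_rescan(last_active_index: Option<u32>, derived_up_to: u32, stop_gap: u32) -> bool {
    match last_active_index {
        Some(active) => derived_up_to.saturating_sub(active) < stop_gap,
        None => false,
    }
}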

@danielabrozzoni
Collaborator Author

> Don't hold a mutable reference to the tracker when syncing. We need a structure (imo, like an iterator) where we only grab initial data from the tracker to create it. Then it spits out data which the tracker can consume.

Right, done! :)

> Additionally, we need to somehow figure out how to properly do the stop gap. We need logic that goes "uh oh, we need more scripts for a rescan"!

I think this is pretty trivial to implement, but I'd prefer to do it after receiving some feedback on the current code, if possible.

pub fn sync_setup(
    &mut self,
    scripts: impl Iterator<Item = Script>,
    processed_height: Option<BlockId>,
Collaborator

Instead of taking processed_height as an input, what do you think about using local_chain: BTreeMap<u32, BlockHash> and iterating in reverse until we find the last point of agreement?
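
Something like this (illustrative only, with a stand-in predicate in place of a real chain query):

use std::collections::BTreeMap;

use bitcoin::BlockHash;

// Walk our local chain from the tip downwards and return the highest block
// that the other chain still agrees on.
fn last_point_of_agreement(
    local_chain: &BTreeMap<u32, BlockHash>,
    is_in_best_chain: impl Fn(u32, &BlockHash) -> bool, // e.g. backed by get_block_by_height
) -> Option<(u32, BlockHash)> {
    local_chain
        .iter()
        .rev()
        .find(|(height, hash)| is_in_best_chain(**height, *hash))
        .map(|(height, hash)| (*height, *hash))
}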

Owner

Nakamoto has handle::Handle::get_block_by_height so this should be doable.

// TODO: improve this!
// It might be that both me and this peer
// are lagging behind
return Ok(CbfEvent::ChainSynced(setup.processed_height));
Owner

What about using nakamoto's Event::Synced to emit this? This seems very error prone and even an attack vector. I think you shouldn't need to keep track of setup.peer_height at all.

So when you get Event::Synced with height == Handle::get_tip() you can emit this.
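
Roughly this (placeholder event/field names, not nakamoto's exact API):

// Stand-ins for the node event and our own output event.
enum NodeEvent {
    Synced { height: u32 },
    Other,
}

enum SyncEvent {
    ChainSynced(u32),
}

fn on_event(event: NodeEvent, tip_height: u32) -> Option<SyncEvent> {
    match event {
        // Only report "synced" when the node itself says we reached the tip
        // it currently knows about; no peer-height bookkeeping on our side.
        NodeEvent::Synced { height } if height == tip_height => Some(SyncEvent::ChainSynced(height)),
        _ => None,
    }
}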

let mut update = ChainGraph::<TxHeight>::default();

if let Some(last_cp) = last_checkpoint.clone() {
    // Otherwise the chains don't connect :)
Owner

what if they don't connect though? :)


for (tx, height) in txs {
    //keychain_tracker.lock().unwrap().txout_index.pad_all_with_unused(stop_gap);
    if keychain_tracker.txout_index.is_relevant(&tx) {
Owner

@LLFourn Feb 20, 2023

I think rather than doing this here, we should simply have a way of filtering irrelevant things out of an update. For now we can just include all transactions in the block. I made an issue to build the tools to do this properly: #193

There really isn't much harm in just acquiring a .lock() on the db and the keychain tracker, doing an apply_update to the KeychainTracker, and pushing the changeset into the database right away. This is good because it saves your progress for each block as you go.
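
i.e. roughly this shape per block (a generic sketch; the stand-in closures are where apply_update and the db write would go):

use std::sync::Mutex;

// One block at a time: take the lock, apply the update, persist the
// changeset, release. Progress is on disk before the next block is touched.
fn apply_and_persist<Tracker, Db, Update, Changeset, E>(
    tracker: &Mutex<Tracker>,
    db: &Mutex<Db>,
    update: Update,
    apply_update: impl FnOnce(&mut Tracker, Update) -> Result<Changeset, E>,
    persist_changeset: impl FnOnce(&mut Db, &Changeset) -> Result<(), E>,
) -> Result<(), E> {
    let changeset = {
        let mut tracker = tracker.lock().unwrap();
        apply_update(&mut *tracker, update)?
        // tracker lock released at the end of this block
    };
    {
        let mut db = db.lock().unwrap();
        persist_changeset(&mut *db, &changeset)?;
        // db lock released here
    }
    Ok(())
}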

}
CbfEvent::BlockDisconnected(id) => {
    // TODO: what happens if a block gets disconnected before I process its
    // filter?
Owner

I really hope that BlockMatched doesn't get emitted for things that are not in the chain! Hopefully nakamoto guarantees this.

}
CbfCommands::Scan => {
    // indexing logic!
    let mut keychain_tracker = keychain_tracker.lock().unwrap();
Owner

Try and do every lock in a scoped block {...} so the reader can see where locks are taken and how long they need to be held. Here you should only need the lock for the time it takes to get the scripts you're interested in and the checkpoints. You can return them from the block and release the lock.
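
e.g. something like this (stand-in types only, to show the shape):

use std::sync::Mutex;

// Stand-ins for the tracker data the scan actually needs.
struct Tracker {
    scripts: Vec<String>,
    checkpoints: Vec<(u32, String)>,
}

fn start_scan(tracker: &Mutex<Tracker>) {
    // Scoped lock: copy out just what the scan needs, then the guard drops
    // at the closing brace, so the lock isn't held while the scan runs.
    let (scripts, checkpoints) = {
        let tracker = tracker.lock().unwrap();
        (tracker.scripts.clone(), tracker.checkpoints.clone())
    };

    // ... run the (potentially long) scan here without holding the lock ...
    let _ = (scripts, checkpoints);
}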

chain_graph: keychain_tracker
    .chain_graph()
    .determine_changeset(&update)?,
};
Owner

We should aim to not do this manually after #193 is done.

@@ -19,14 +19,15 @@ use bdk_chain::{
use bdk_coin_select::{coin_select_bnb, CoinSelector, CoinSelectorOpt, WeightedValue};
pub use clap;
use clap::{Parser, Subcommand};
pub use log;
Owner

@LLFourn Feb 20, 2023

Not sure if we need this. I think the CLI should avoid using logging unless we're going into some kind of daemon mode. I think env_logger is a good idea, though, but I don't see it being initialized anywhere yet.

(This might be because this is on top of @evanlinjin's PR and has nothing to do with this PR.)

struct CbfArgs {
    #[arg(value_enum, default_value_t = Domains::IPv4)]
    domains: Domains,
}
Owner

I'd prefer it if domains were just arguments to the actual commands that use them, rather than having CbfArgs. FWIW, I've complained about adding global arguments to bdk_cli::init to @evanlinjin, but it looks like he hasn't removed them from that PR yet.

Owner

@LLFourn left a comment

Thanks @danielabrozzoni. I'm excited about nakamoto CBF. It's looking like a great option. I think we should do #193 so we can simplify the implementation here.

@evanlinjin
Collaborator

@LLFourn I've tried removing the global CLI args but I couldn't find a way to do it. The "general commands" need to take in a client, and so the client needs to be created before our chain-specific commands.

@LLFourn
Owner

LLFourn commented Feb 23, 2023

@evanlinjin I see, so you actually need to know "domains" for broadcasting too. So let's add the ability to add params to Send (rather than globally), then pass those params through to the closure I made in #198 for broadcasting. Then you can create the client inside the closure.
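
e.g. something like this (sketch only; the enum/variant names are made up, but it mirrors the existing Domains value_enum):

use clap::{Subcommand, ValueEnum};

#[derive(Clone, Copy, ValueEnum)]
enum Domains {
    IPv4,
    IPv6,
}

#[derive(Subcommand)]
enum Commands {
    /// Broadcast a transaction; the CBF client gets created inside the
    /// broadcast closure from these params instead of a global CbfArgs.
    Send {
        #[arg(value_enum, default_value_t = Domains::IPv4)]
        domains: Domains,
        // ... the rest of the send args ...
    },
    // ... other commands ...
}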
