Sync with Upstream #1

Open · wants to merge 919 commits into master
Conversation

2a5A1Ghu1 (Member)

No description provided.

Roasbeef and others added 30 commits October 31, 2023 13:02
blockchain: export CheckSerializedHeight
…up a test websocket server to run the tests. Also, ensure these are run within a timeout, since they rely on concurrency
Now tests that GetBestBlockHashAsync sends the getbestblockhash command over the websocket connection, and that the returned channel can be used to deliver the response when it is received.
DoubleHashRaw provides a simple function for computing double hashes.  Since
it writes into a digest instead of making the caller allocate a byte slice, it
can be more memory efficient than other double-hash functions.
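
As an illustration of that idea (not the real chainhash API), here is a minimal sketch where the caller streams its serialization into the digest and the 32-byte first-pass result is hashed again; all names are hypothetical.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
)

// doubleHashRawSketch is an illustrative stand-in for a digest-based double
// hash: the caller streams its serialization into the digest via writeFn, so
// no intermediate byte slice is allocated before the two SHA-256 passes.
func doubleHashRawSketch(writeFn func(w io.Writer) error) ([32]byte, error) {
	h := sha256.New()
	if err := writeFn(h); err != nil {
		return [32]byte{}, err
	}
	first := h.Sum(nil)              // first SHA-256 pass over the streamed bytes
	return sha256.Sum256(first), nil // second pass over the 32-byte digest
}

func main() {
	sum, err := doubleHashRawSketch(func(w io.Writer) error {
		_, err := w.Write([]byte("serialized tx bytes would go here"))
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", sum)
}
```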
isSyncCandidate is changed so that a pruned peer is now a sync candidate
if and only if our chain tip is within 288 blocks of the peer's tip.

Rationale:
Pruned nodes that signal NODE_NETWORK_LIMITED MUST serve 288 blocks from
their chain tip.  If our chain tip is within that range, the peer can be
a sync candidate even if it isn't an archival node.
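
A rough sketch of that rule, with hypothetical names and a hard-coded constant standing in for the actual netsync/wire code:

```go
package main

import "fmt"

// nodeNetworkLimitedWindow is the number of recent blocks a peer signalling
// NODE_NETWORK_LIMITED must be able to serve (per BIP 159).
const nodeNetworkLimitedWindow = 288

// isPrunedSyncCandidateSketch mirrors the behavior described above: a pruned
// peer only qualifies when our tip is close enough that the peer can still
// serve every block we are missing.
func isPrunedSyncCandidateSketch(ourBestHeight, peerBestHeight int32) bool {
	return ourBestHeight >= peerBestHeight-nodeNetworkLimitedWindow
}

func main() {
	fmt.Println(isPrunedSyncCandidateSketch(799_900, 800_000)) // true
	fmt.Println(isPrunedSyncCandidateSketch(700_000, 800_000)) // false
}
```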
…aintips

blockchain, btcjson: Implement getchaintips rpc call
On startup, the Ancestor call was taking a lot of time while the node was
loading the block index into memory. This change speeds up the Ancestor
function significantly and thus speeds up the node during startup.

On testnet3 at block height ~2,500,000, startup took around 30 seconds
on current main and 5 seconds with this change. Below is a benchstat
result showing the significant speedup.

goos: darwin
goarch: arm64
pkg: github.com/utreexo/utreexod/blockchain
           │     old.txt      │               new.txt                │
           │      sec/op      │    sec/op     vs base                │
Ancestor-8   120819.301µ ± 5%   7.013µ ± 19%  -99.99% (p=0.000 n=10)

           │  old.txt   │            new.txt             │
           │    B/op    │    B/op     vs base            │
Ancestor-8   0.000 ± 0%   0.000 ± 0%  ~ (p=1.000 n=10) ¹
¹ all samples are equal

           │  old.txt   │            new.txt             │
           │ allocs/op  │ allocs/op   vs base            │
Ancestor-8   0.000 ± 0%   0.000 ± 0%  ~ (p=1.000 n=10) ¹
¹ all samples are equal
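
To see why jumping matters, here is a toy skip-pointer version of an ancestor lookup in the same spirit as this change; the node layout, the fixed 256-block skip distance, and all names are illustrative only, not the commit's actual scheme.

```go
package main

import "fmt"

// blockNodeSketch is a simplified stand-in for a block index node.  Besides
// the parent link, each node keeps a far-back "skip" pointer so Ancestor can
// jump many heights per iteration instead of walking one parent at a time.
type blockNodeSketch struct {
	height int32
	parent *blockNodeSketch
	skip   *blockNodeSketch
}

func (n *blockNodeSketch) Ancestor(height int32) *blockNodeSketch {
	if height < 0 || height > n.height {
		return nil
	}
	node := n
	for node.height != height {
		// Take the long jump whenever it does not overshoot the target.
		if node.skip != nil && node.skip.height >= height {
			node = node.skip
		} else {
			node = node.parent
		}
	}
	return node
}

func main() {
	// Build a toy chain of 1<<16 nodes with skip pointers every 256 blocks.
	var tip *blockNodeSketch
	index := make([]*blockNodeSketch, 1<<16)
	for h := int32(0); h < 1<<16; h++ {
		n := &blockNodeSketch{height: h, parent: tip}
		if h >= 256 {
			n.skip = index[h-256]
		}
		index[h] = n
		tip = n
	}
	fmt.Println(tip.Ancestor(12345).height) // 12345
}
```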
We reuse the Bytes() function rather than duplicating its logic.
blockchain: Add ancestor optimization to finding Ancestor
Also, add mainnet seed.
…candidate-behavior

wire, netsync: change isSyncCandidate behavior
The use of the GO111MODULE environment variable doesn't have any effect
anymore and hasn't for a couple of versions. The default was set to "on"
a while back, so we can remove that variable everywhere.
To simplify building the release-grade (stripped and
reproducible) binaries from source, we add the install and
release-install make goals. Running either of the commands will create
binaries in the $GOPATH/bin directory.
The main difference between the two goals is that binaries built with
release-install contain no local paths and no debug information.
This change is part of the effort to add utxocache support to btcd.

sizehelper introduces code for 2 main things:
    1: Calculating how many entries to allocate for a map given a size
       in bytes.
    2: Calculating how much memory a map takes up given the number of
       entries allocated for it.

This functionality is useful for allocating maps so that they stay
below a certain number of bytes.  Since Go maps will always
allocate in powers of B (where B is the bucket size for the given map),
they may allocate too much memory.  For example, for a map sized to store
8GB of entries, the map will grow to be 16GB once the map is full and
the caller puts an extra entry into the map.

If we want to give a memory guarantee to the user, we can either:
    1: Limit the cache size to fixed sizes (4GB, 8GB, ...).
    2: Allocate a slice of maps.

The sizehelper code helps with (2).
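
A hedged sketch of option (2): the per-entry constant and helper names below are hypothetical and only illustrate splitting a byte budget across several power-of-two-sized maps, not the actual sizehelper API.

```go
package main

import "fmt"

// entryBytes is a placeholder per-entry cost (map bucket overhead included);
// the real sizehelper package derives such numbers from Go's map internals.
const entryBytes = 128

// entriesForBytes returns the largest power-of-two entry count whose total
// cost fits in the byte budget, so a map allocated for that many entries
// will not round up past the budget as it fills.
func entriesForBytes(budget uint64) uint64 {
	maxEntries := budget / entryBytes
	pow := uint64(1)
	for pow*2 <= maxEntries {
		pow *= 2
	}
	return pow
}

// mapsForBudget splits a large budget across several maps so the remainder
// left after each power-of-two step is not wasted, which is option (2) above.
func mapsForBudget(budget uint64) []uint64 {
	var sizes []uint64
	for budget >= entryBytes {
		n := entriesForBytes(budget)
		sizes = append(sizes, n)
		budget -= n * entryBytes
	}
	return sizes
}

func main() {
	// A 3 GiB budget becomes a few power-of-two sized maps instead of one
	// map that would round up toward 4 GiB worth of buckets.
	fmt.Println(mapsForBudget(3 << 30)) // [16777216 8388608]
}
```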
chainhash, wire, btcutil, main: Memory efficient txhash
We also remove the replace directives in place.
oftenoccur and others added 30 commits April 26, 2024 08:08
Signed-off-by: oftenoccur <ezc5@sina.com>
Signed-off-by: MarkDaveny <peicuiping@aliyun.com>
InvalidateBlock() invalidates a given block and marks all of its
descendants as invalid as well. The active chain tip changes if the
invalidated block is part of the best chain.
For debugging purposes down the road, log that the node is pruned if
pruning is enabled.
This exposes publicly the ability to decode arbitrary-length bech32
strings and return the bech32 version that was used in the encoding. It
provides the underlying functionality for both DecodeNoLimit and
DecodeGeneric.
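
A possible usage sketch: the function name comes from the commit below, but its exact return values (human-readable part, 5-bit data, and the bech32 checksum version) and the upstream btcd import path are assumptions based on this description.

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcd/btcutil/bech32"
)

func main() {
	// The canonical BIP-173 example address (bech32, witness v0).
	addr := "bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4"

	// DecodeNoLimitGeneric is described as decoding without the usual
	// 90-character limit while also reporting which checksum variant
	// (bech32 or bech32m) was used.  The signature shown here is an
	// assumption based on that description.
	hrp, data, version, err := bech32.DecodeNoLimitGeneric(addr)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(hrp, len(data), version == bech32.Version0)
}
```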
This is to mitigate CVE-2017-12842. Along the way, also error when
deserializing transactions that have the witness marker flag set
but have no witnesses. This matches Bitcoin Core's behaviour, initially
introduced in bitcoin/bitcoin#14039. Allowing
such transactions is benign, but this makes sure that our parsing code
matches Core's exactly.
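
As a rough illustration of the two checks (not this repository's actual code), assuming the size threshold matches Bitcoin Core's standardness rule and that the updated deserializer already rejects a witness flag with no witnesses:

```go
package main

import (
	"bytes"
	"errors"
	"fmt"

	"github.com/utreexo/utreexod/wire"
)

// checkTxSketch illustrates the checks described above.  The threshold of
// rejecting serializations of 64 bytes or fewer is taken from Bitcoin Core's
// rule and is an assumption here, not a quote of this change.
func checkTxSketch(serialized []byte) error {
	if len(serialized) <= 64 {
		// A 64-byte tx could be confused with an inner node of the
		// merkle tree (CVE-2017-12842), so it is rejected outright.
		return errors.New("transaction is too small")
	}

	var tx wire.MsgTx
	if err := tx.Deserialize(bytes.NewReader(serialized)); err != nil {
		// With this change, deserialization also errors out when the
		// segwit marker/flag is present but no witness data follows.
		return err
	}
	return nil
}

func main() {
	fmt.Println(checkTxSketch(make([]byte, 64)))
}
```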
Update standardness rules congruent to Bitcoin Core
blockchain, fullblocktests, workmath, testhelper: add InvalidateBlock() method to BlockChain
Added DecodeNoLimitGeneric to bech32.go
refactor: set strconv.ParseFloat bitsize to 64
reorganizeChain() used to handle the following:
1: Verifying that the block nodes being disconnected/connected can indeed
   be disconnected/connected properly without errors.
2: Performing the actual disconnect/connect of the block nodes.

The functionality of (1), the validation that the disconnects/connects can
happen without errors, is now refactored out into
verifyReorganizationValidity.

This is an effort made so that ReconsiderBlock() can call
verifyReorganizationValidity, set the block status of the reconsidered
chain, and return nil even when an error is returned, as it's OK to get an
error when reconsidering an invalid branch.
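
The split can be pictured with the following hedged sketch; apart from the verifyReorganizationValidity and ReconsiderBlock names taken from the description, everything is a placeholder and the bodies only indicate where the real work happens.

```go
package main

import "fmt"

// reorgSketch stands in for the detach/attach lists that the real methods
// derive from the block index.
type reorgSketch struct {
	detach, attach []string
}

// verifyReorganizationValiditySketch corresponds to the new helper: it only
// checks that every disconnect/connect would succeed (marking invalid nodes
// along the way) without touching the active chain.
func verifyReorganizationValiditySketch(r reorgSketch) error {
	// ... validate each block that would be connected ...
	return nil
}

// reorganizeChainSketch now verifies first and then performs the actual
// disconnects and connects.
func reorganizeChainSketch(r reorgSketch) error {
	if err := verifyReorganizationValiditySketch(r); err != nil {
		return err
	}
	// ... disconnect r.detach, then connect r.attach ...
	return nil
}

// reconsiderBlockSketch shows why the split matters: it can run the
// verification step, update block statuses, and still return nil when the
// reconsidered branch turns out to be invalid.
func reconsiderBlockSketch(r reorgSketch) error {
	if err := verifyReorganizationValiditySketch(r); err != nil {
		// Re-mark the branch invalid; not an error for the caller.
		return nil
	}
	return reorganizeChainSketch(r)
}

func main() {
	fmt.Println(reconsiderBlockSketch(reorgSketch{}))
}
```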
ReconsiderBlock reconsiders the validity of the block for the passed-in
block hash. The behavior of the function mimics that of Bitcoin Core.

The invalid status of the block nodes is reset, and if the chain tip that
is being reconsidered has more cumulative work, then we'll validate the
blocks and reorganize to it. If the cumulative work is less than the
current active chain tip's, then nothing else will be done.
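
In other words, after the invalid statuses are cleared the decision is just a cumulative-work comparison; a minimal sketch, using big.Int as a stand-in for whatever work type the block index uses:

```go
package main

import (
	"fmt"
	"math/big"
)

// shouldReorganizeSketch: a reconsidered tip only triggers a reorg when its
// cumulative work strictly exceeds the current active tip's work.
func shouldReorganizeSketch(reconsideredWork, activeTipWork *big.Int) bool {
	return reconsideredWork.Cmp(activeTipWork) > 0
}

func main() {
	active := big.NewInt(1_000_000)
	reconsidered := big.NewInt(900_000)

	// The invalid statuses are cleared either way, but with less work than
	// the active tip nothing else happens.
	fmt.Println(shouldReorganizeSketch(reconsidered, active)) // false
}
```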
Signed-off-by: coderwander <770732124@qq.com>
blockchain: Add ReconsiderBlock()
invalidateblock and reconsiderblock are added to the rpcclient package
and an integration test is added to test the added functions.
The rpc calls and the rpchelp are added for the invalidateblock
and reconsiderblock methods on BlockChain.
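
A possible client-side usage, assuming the new rpcclient methods are named InvalidateBlock and ReconsiderBlock after the RPCs; the method names, the connection details, and the sample hash are assumptions for illustration.

```go
package main

import (
	"log"

	"github.com/utreexo/utreexod/chaincfg/chainhash"
	"github.com/utreexo/utreexod/rpcclient"
)

func main() {
	// Placeholder connection details for a local node.
	client, err := rpcclient.New(&rpcclient.ConnConfig{
		Host:         "127.0.0.1:8334",
		User:         "user",
		Pass:         "pass",
		HTTPPostMode: true,
		DisableTLS:   true,
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown()

	// Placeholder block hash to invalidate and then reconsider.
	hash, err := chainhash.NewHashFromStr(
		"000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f")
	if err != nil {
		log.Fatal(err)
	}

	// Method names assumed to mirror the invalidateblock/reconsiderblock
	// RPCs added by this change.
	if err := client.InvalidateBlock(hash); err != nil {
		log.Fatal(err)
	}
	if err := client.ReconsiderBlock(hash); err != nil {
		log.Fatal(err)
	}
}
```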
main, rpcclient, integration: add rpccalls for invalidate and reconsiderblock
build: bump version to v0.24.2-beta
Fixes #2199.

Prior to this fix, the key type was interpreted as only a single byte,
even though BIP-0174 states it is to be parsed as a CompactSize/VarInt.
This commit updates the error string to match the same error returned from
`btcd` for both pre-0.24.2 and post-0.24.2.
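
For intuition, this is roughly what reading the key type as a CompactSize looks like; parseKeySketch is a hypothetical helper, not the psbt package's API, and wire.ReadVarInt is used only because it reads Bitcoin's CompactSize encoding.

```go
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/utreexo/utreexod/wire"
)

// parseKeySketch splits a raw PSBT key into its type and data, reading the
// key type as a CompactSize (varint) rather than a single byte, as BIP-0174
// specifies.
func parseKeySketch(rawKey []byte) (uint64, []byte, error) {
	r := bytes.NewReader(rawKey)
	keyType, err := wire.ReadVarInt(r, 0)
	if err != nil {
		return 0, nil, err
	}
	keyData := make([]byte, r.Len())
	if _, err := io.ReadFull(r, keyData); err != nil {
		return 0, nil, err
	}
	return keyType, keyData, nil
}

func main() {
	// 0xfd 0x00 0x01 is the CompactSize encoding of key type 256; the
	// remaining bytes are the key data.  A single-byte parser would have
	// misread this key.
	keyType, keyData, err := parseKeySketch([]byte{0xfd, 0x00, 0x01, 0xaa, 0xbb})
	fmt.Println(keyType, keyData, err) // 256 [170 187] <nil>
}
```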
psbt: decode keytype as compact size
Sending RPC requests through unix sockets
commit 0b2998b
Author: cec489 <173723251+cec489@users.noreply.github.com>
Date:   Mon Jun 24 20:01:13 2024 +0000

    A cleaner fix is to set the startTime in the server Start() function
    which is where the server is actually started.

commit ae6c125
Author: cec489 <173723251+cec489@users.noreply.github.com>
Date:   Mon Jun 24 19:15:23 2024 +0000

    Fix the btcctl uptime command by moving the setting of startupTime