
Release Candidate v2023.3.0/v2023.3.1 - HIP30 Hard Fork #4500

Merged · 66 commits · Oct 18, 2023
9f1576f
Fixed debug run for mac. (#4484)
Frozen Aug 22, 2023
8c20652
Next validator in view change. (#4492)
Frozen Aug 30, 2023
083eef4
HIP-30 Boilerplate (#4495)
MaxMustermann2 Aug 30, 2023
62feec9
HIP-30: minimum validator commission of 7% (#4496)
MaxMustermann2 Sep 1, 2023
0c981ff
HIP-30: Emission split (#4497)
MaxMustermann2 Sep 2, 2023
14eba6e
HIP-30: Set up pre-image generation, recording, export and import (#4…
MaxMustermann2 Sep 11, 2023
115e434
HIP-30: Shard reduction (#4498)
MaxMustermann2 Sep 11, 2023
b798df0
Fix for index. (#4504)
Frozen Sep 18, 2023
6d65d11
Small improvements. (#4477)
Frozen Sep 18, 2023
688b933
HIP-30: Balance migration (#4499)
MaxMustermann2 Sep 18, 2023
c2bf8de
Hip30 balance migration with fix. (#4502)
Frozen Sep 18, 2023
343058a
remove double import
diego1q2w Sep 19, 2023
445d8e3
rename variable
diego1q2w Sep 19, 2023
2890d84
remove unused fmt
diego1q2w Sep 19, 2023
29d46b9
Fixed imports. (#4507)
Frozen Sep 19, 2023
74ede04
Block gas 30m. (#4501)
Frozen Sep 19, 2023
ed08ff3
Hip30 : localnet account migration fix (#4508)
sophoah Sep 19, 2023
9a5900d
create the sate as the insertchain does
diego1q2w Sep 21, 2023
0130436
create the sate as the insertchain does
diego1q2w Sep 21, 2023
310b4cd
roll back changes
diego1q2w Sep 21, 2023
924d133
use the updated state in case there is one
diego1q2w Sep 21, 2023
6428430
use the updated state in case there is one
diego1q2w Sep 21, 2023
aa0ad97
add testing fmt
diego1q2w Sep 21, 2023
018c336
fix getReceipts rpc issue (#4511)
GheisMohammadi Sep 22, 2023
51ed6b6
Merge branch 'dev' into hip30/testing
diego1q2w Sep 22, 2023
68cb005
merge fixes
diego1q2w Sep 22, 2023
ff86c57
pass the correct config
diego1q2w Sep 22, 2023
1eb1cc4
pass the correct config
diego1q2w Sep 22, 2023
ccc9252
Fixes.
Frozen Sep 22, 2023
cdcae9a
reduce the block number count
diego1q2w Sep 23, 2023
9586553
add verify preimages rpc method
diego1q2w Sep 25, 2023
2706451
write preimages on process
diego1q2w Sep 25, 2023
9c84d1c
commit preimages
diego1q2w Sep 25, 2023
fcbe6c9
commit preimages
diego1q2w Sep 25, 2023
e205242
Merge pull request #4510 from Frozen/hip30/testing-minor-fixes
Frozen Sep 26, 2023
8b6df6b
verify root hashes after commit
diego1q2w Sep 26, 2023
8824e42
send metrics on node start
diego1q2w Sep 27, 2023
0fadfe3
send the verified preimages
diego1q2w Sep 27, 2023
6b476a1
correct the starting block
diego1q2w Sep 27, 2023
8335523
register the verified address
diego1q2w Sep 27, 2023
83d5104
flush the db every export and verify
diego1q2w Sep 27, 2023
5e1f482
add shard label
diego1q2w Sep 27, 2023
215bb10
minnor fixes
diego1q2w Sep 27, 2023
a9077c9
Merge pull request #4513 from harmony-one/hip30/add-prometheus-metrics
diego1q2w Sep 28, 2023
7111b92
aggregate the recovery multisig reward (#4514)
sophoah Sep 28, 2023
d8f1225
1) Removed unused worker (#4512)
Frozen Sep 28, 2023
fa99cd1
Improvements of streamsync to deploy on mainnet (#4493)
GheisMohammadi Sep 29, 2023
0252bd7
fix duplicate function def
GheisMohammadi Oct 3, 2023
532e28f
Merge pull request #4518 from harmony-one/fix/duplicate_def
adsorptionenthalpy Oct 3, 2023
171e612
reset devnet and set 30M epoch for all network except mainnet/testnet…
sophoah Oct 5, 2023
abf9dba
reduce the epoch time for devnet to 30 min (#4522)
diego1q2w Oct 6, 2023
ce0f483
add GetNodeData tests for stream client, increase nodes and receipts …
GheisMohammadi Oct 6, 2023
2378b2d
Merge pull request #4503 from harmony-one/hip30/testing
sophoah Oct 9, 2023
3f6a0db
use new(big.Int) so we don't modify the epoch value (#4523)
diego1q2w Oct 9, 2023
e133806
add hip30 testing for devnet/partner network (#4525)
sophoah Oct 10, 2023
d2743d9
enable hip30 epoch for testnet (#4526)
diego1q2w Oct 11, 2023
88e033a
enable hip30 and gas30m epoch for mainnet (#4528)
diego1q2w Oct 11, 2023
b143085
fix preimage import bugs (#4529)
diego1q2w Oct 12, 2023
1adea06
Fixed lru cache size. (#4535)
Frozen Oct 15, 2023
ce2c057
fix decryptRaw issue for nil/empty data (#4532)
GheisMohammadi Oct 15, 2023
ae578ba
update deprecated ioutil, improve local accounts (#4527)
GheisMohammadi Oct 15, 2023
370d122
make peer connected/disconnected debug log level (#4537)
diego1q2w Oct 16, 2023
cf5dd8b
Revert improvements. (#4520)
Frozen Oct 17, 2023
0d402e4
Updated go lib p2p deps. (#4538)
Frozen Oct 17, 2023
9f5768a
Flush data. (#4536)
Frozen Oct 17, 2023
1633656
Rotation fix and update. (#4516)
Frozen Oct 17, 2023
13 changes: 13 additions & 0 deletions Makefile
@@ -56,6 +56,7 @@ trace-pointer:
bash ./scripts/go_executable_build.sh -t

debug:
rm -rf .dht-127.0.0.1*
bash ./test/debug.sh

debug-kill:
@@ -167,3 +168,15 @@ docker:

travis_go_checker:
bash ./scripts/travis_go_checker.sh

travis_rpc_checker:
bash ./scripts/travis_rpc_checker.sh

travis_rosetta_checker:
bash ./scripts/travis_rosetta_checker.sh

debug_external: clean
bash test/debug-external.sh

build_localnet_validator:
bash test/build-localnet-validator.sh
5 changes: 2 additions & 3 deletions accounts/abi/bind/auth.go
@@ -20,7 +20,6 @@ import (
"crypto/ecdsa"
"errors"
"io"
"io/ioutil"
"math/big"

"github.com/ethereum/go-ethereum/common"
@@ -44,7 +43,7 @@ var ErrNotAuthorized = errors.New("not authorized to sign this account")
// Deprecated: Use NewTransactorWithChainID instead.
func NewTransactor(keyin io.Reader, passphrase string) (*TransactOpts, error) {
log.Warn("WARNING: NewTransactor has been deprecated in favour of NewTransactorWithChainID")
json, err := ioutil.ReadAll(keyin)
json, err := io.ReadAll(keyin)
if err != nil {
return nil, err
}
@@ -103,7 +102,7 @@ func NewKeyedTransactor(key *ecdsa.PrivateKey) *TransactOpts {
// NewTransactorWithChainID is a utility method to easily create a transaction signer from
// an encrypted json key stream and the associated passphrase.
func NewTransactorWithChainID(keyin io.Reader, passphrase string, chainID *big.Int) (*TransactOpts, error) {
json, err := ioutil.ReadAll(keyin)
json, err := io.ReadAll(keyin)
if err != nil {
return nil, err
}
9 changes: 4 additions & 5 deletions accounts/keystore/account_cache_test.go
@@ -18,7 +18,6 @@ package keystore

import (
"fmt"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
@@ -133,11 +132,11 @@ func TestUpdatedKeyfileContents(t *testing.T) {
return
}

// needed so that modTime of `file` is different to its current value after ioutil.WriteFile
// needed so that modTime of `file` is different to its current value after io.WriteFile
time.Sleep(1000 * time.Millisecond)

// Now replace file contents with crap
if err := ioutil.WriteFile(file, []byte("foo"), 0644); err != nil {
if err := os.WriteFile(file, []byte("foo"), 0644); err != nil {
t.Fatal(err)
return
}
@@ -150,9 +149,9 @@ func TestUpdatedKeyfileContents(t *testing.T) {

// forceCopyFile is like cp.CopyFile, but doesn't complain if the destination exists.
func forceCopyFile(dst, src string) error {
data, err := ioutil.ReadFile(src)
data, err := os.ReadFile(src)
if err != nil {
return err
}
return ioutil.WriteFile(dst, data, 0644)
return os.WriteFile(dst, data, 0644)
}
30 changes: 19 additions & 11 deletions accounts/keystore/file_cache.go
@@ -17,7 +17,7 @@
package keystore

import (
"io/ioutil"
"io/fs"
"os"
"path/filepath"
"strings"
@@ -42,7 +42,7 @@ func (fc *fileCache) scan(keyDir string) (mapset.Set, mapset.Set, mapset.Set, er
t0 := time.Now()

// List all the failes from the keystore folder
files, err := ioutil.ReadDir(keyDir)
files, err := os.ReadDir(keyDir)
if err != nil {
return nil, nil, nil, err
}
@@ -63,15 +63,19 @@ func (fc *fileCache) scan(keyDir string) (mapset.Set, mapset.Set, mapset.Set, er
utils.Logger().Debug().Str("path", path).Msg("Ignoring file on account scan")
continue
}
// Gather the set of all and fresly modified files
// Gather the set of all and freshly modified files
all.Add(path)

modified := fi.ModTime()
if modified.After(fc.lastMod) {
mods.Add(path)
}
if modified.After(newLastMod) {
newLastMod = modified
if info, err := fi.Info(); err != nil {
continue
} else {
modified := info.ModTime()
if modified.After(fc.lastMod) {
mods.Add(path)
}
if modified.After(newLastMod) {
newLastMod = modified
}
}
}
t2 := time.Now()
@@ -94,14 +98,18 @@ func (fc *fileCache) scan(keyDir string) (mapset.Set, mapset.Set, mapset.Set, er
}

// nonKeyFile ignores editor backups, hidden files and folders/symlinks.
func nonKeyFile(fi os.FileInfo) bool {
func nonKeyFile(fi fs.DirEntry) bool {
// Skip editor backups and UNIX-style hidden files.
if strings.HasSuffix(fi.Name(), "~") || strings.HasPrefix(fi.Name(), ".") {
return true
}
// Skip misc special files, directories (yes, symlinks too).
if fi.IsDir() || fi.Mode()&os.ModeType != 0 {
if info, err := fi.Info(); err != nil {
return true
} else {
if fi.IsDir() || info.Mode()&os.ModeType != 0 {
return true
}
}
return false
}
3 changes: 1 addition & 2 deletions accounts/keystore/key.go
@@ -23,7 +23,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -195,7 +194,7 @@ func writeTemporaryKeyFile(file string, content []byte) (string, error) {
}
// Atomic write: create a temporary hidden file first
// then move it into place. TempFile assigns mode 0600.
f, err := ioutil.TempFile(filepath.Dir(file), "."+filepath.Base(file)+".tmp")
f, err := os.CreateTemp(filepath.Dir(file), "."+filepath.Base(file)+".tmp")
if err != nil {
return "", err
}
3 changes: 1 addition & 2 deletions accounts/keystore/keystore_test.go
@@ -17,7 +17,6 @@
package keystore

import (
"io/ioutil"
"os"
"runtime"
"strings"
@@ -213,7 +212,7 @@ func TestSignRace(t *testing.T) {
}

func tmpKeyStore(t *testing.T, encrypted bool) (string, *KeyStore) {
d, err := ioutil.TempDir("", "eth-keystore-test")
d, err := os.MkdirTemp("", "eth-keystore-test")
if err != nil {
t.Fatal(err)
}
3 changes: 1 addition & 2 deletions accounts/keystore/passphrase.go
@@ -34,7 +34,6 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"

@@ -82,7 +81,7 @@ type keyStorePassphrase struct {

func (ks keyStorePassphrase) GetKey(addr common.Address, filename, auth string) (*Key, error) {
// Load the key from the keystore and decrypt its contents
keyjson, err := ioutil.ReadFile(filename)
keyjson, err := os.ReadFile(filename)
if err != nil {
return nil, err
}
4 changes: 2 additions & 2 deletions accounts/keystore/passphrase_test.go
@@ -17,7 +17,7 @@
package keystore

import (
"io/ioutil"
"os"
"testing"

"github.com/ethereum/go-ethereum/common"
@@ -30,7 +30,7 @@ const (

// Tests that a json key file can be decrypted and encrypted in multiple rounds.
func TestKeyEncryptDecrypt(t *testing.T) {
keyjson, err := ioutil.ReadFile("testdata/very-light-scrypt.json")
keyjson, err := os.ReadFile("testdata/very-light-scrypt.json")
if err != nil {
t.Fatal(err)
}
3 changes: 1 addition & 2 deletions accounts/keystore/plain_test.go
@@ -20,7 +20,6 @@ import (
"crypto/rand"
"encoding/hex"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"reflect"
@@ -32,7 +31,7 @@ import (
)

func tmpKeyStoreIface(t *testing.T, encrypted bool) (dir string, ks keyStore) {
d, err := ioutil.TempDir("", "geth-keystore-test")
d, err := os.MkdirTemp("", "geth-keystore-test")
if err != nil {
t.Fatal(err)
}
11 changes: 5 additions & 6 deletions api/service/legacysync/syncing.go
@@ -25,7 +25,6 @@ import (
"github.com/harmony-one/harmony/internal/chain"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/harmony-one/harmony/internal/utils"
"github.com/harmony-one/harmony/node/worker"
"github.com/harmony-one/harmony/p2p"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
@@ -932,7 +931,7 @@ func (ss *StateSync) UpdateBlockAndStatus(block *types.Block, bc core.BlockChain
}

// generateNewState will construct most recent state from downloaded blocks
func (ss *StateSync) generateNewState(bc core.BlockChain, worker *worker.Worker) error {
func (ss *StateSync) generateNewState(bc core.BlockChain) error {
// update blocks created before node start sync
parentHash := bc.CurrentBlock().Hash()

@@ -995,7 +994,7 @@ func (ss *StateSync) generateNewState(bc core.BlockChain, worker *worker.Worker)
}

// ProcessStateSync processes state sync from the blocks received but not yet processed so far
func (ss *StateSync) ProcessStateSync(startHash []byte, size uint32, bc core.BlockChain, worker *worker.Worker) error {
func (ss *StateSync) ProcessStateSync(startHash []byte, size uint32, bc core.BlockChain) error {
// Gets consensus hashes.
if err := ss.getConsensusHashes(startHash, size); err != nil {
return errors.Wrap(err, "getConsensusHashes")
@@ -1005,7 +1004,7 @@ func (ss *StateSync) ProcessStateSync(startHash []byte, size uint32, bc core.Blo
if ss.stateSyncTaskQueue.Len() > 0 {
ss.downloadBlocks(bc)
}
return ss.generateNewState(bc, worker)
return ss.generateNewState(bc)
}

func (peerConfig *SyncPeerConfig) registerToBroadcast(peerHash []byte, ip, port string) error {
@@ -1076,7 +1075,7 @@ func (ss *StateSync) GetMaxPeerHeight() (uint64, error) {
}

// SyncLoop will keep syncing with peers until catches up
func (ss *StateSync) SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeacon bool, consensus *consensus.Consensus, loopMinTime time.Duration) {
func (ss *StateSync) SyncLoop(bc core.BlockChain, isBeacon bool, consensus *consensus.Consensus, loopMinTime time.Duration) {
utils.Logger().Info().Msgf("legacy sync is executing ...")
if !isBeacon {
ss.RegisterNodeInfo()
@@ -1110,7 +1109,7 @@ func (ss *StateSync) SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeaco
if size > SyncLoopBatchSize {
size = SyncLoopBatchSize
}
err := ss.ProcessStateSync(startHash[:], size, bc, worker)
err := ss.ProcessStateSync(startHash[:], size, bc)
if err != nil {
utils.Logger().Error().Err(err).
Msgf("[SYNC] ProcessStateSync failed (isBeacon: %t, ShardID: %d, otherHeight: %d, currentHeight: %d)",
7 changes: 2 additions & 5 deletions api/service/stagedstreamsync/const.go
@@ -23,9 +23,6 @@ const (
// no more request will be assigned to workers to wait for InsertChain to finish.
SoftQueueCap int = 100

// DefaultConcurrency is the default settings for concurrency
DefaultConcurrency int = 4

// ShortRangeTimeout is the timeout for each short range sync, which allow short range sync
// to restart automatically when stuck in `getBlockHashes`
ShortRangeTimeout time.Duration = 1 * time.Minute
@@ -74,10 +71,10 @@ type (

func (c *Config) fixValues() {
if c.Concurrency == 0 {
c.Concurrency = DefaultConcurrency
c.Concurrency = c.MinStreams
}
if c.Concurrency > c.MinStreams {
c.MinStreams = c.Concurrency
c.Concurrency = c.MinStreams
}
if c.MinStreams > c.InitStreams {
c.InitStreams = c.MinStreams
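The `fixValues` change inverts the old clamping direction: previously a large `Concurrency` raised `MinStreams`, whereas now `Concurrency` is capped at `MinStreams`, and a zero value defaults to `MinStreams` instead of the removed `DefaultConcurrency` constant. A standalone sketch (`fixConcurrency` is extracted here purely for illustration):

```go
package main

import "fmt"

// fixConcurrency mirrors the updated fixValues logic: zero means
// "use MinStreams", and Concurrency can never exceed MinStreams.
func fixConcurrency(concurrency, minStreams int) int {
	if concurrency == 0 {
		concurrency = minStreams
	}
	if concurrency > minStreams {
		concurrency = minStreams
	}
	return concurrency
}

func main() {
	fmt.Println(fixConcurrency(0, 16))  // defaults to MinStreams: 16
	fmt.Println(fixConcurrency(32, 16)) // clamped down: 16
	fmt.Println(fixConcurrency(4, 16))  // left as-is: 4
}
```

The practical effect is that sync never schedules more concurrent workers than there are guaranteed streams to serve them.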
21 changes: 17 additions & 4 deletions api/service/stagedstreamsync/downloader.go
@@ -153,6 +153,17 @@ func (d *Downloader) SubscribeDownloadFinished(ch chan struct{}) event.Subscript
// waitForBootFinish waits for stream manager to finish the initial discovery and have
// enough peers to start downloader
func (d *Downloader) waitForBootFinish() {
bootCompleted, numStreams := d.waitForEnoughStreams(d.config.InitStreams)
if bootCompleted {
fmt.Printf("boot completed for shard %d ( %d streams are connected )\n",
d.bc.ShardID(), numStreams)
}
}

func (d *Downloader) waitForEnoughStreams(requiredStreams int) (bool, int) {
d.logger.Info().Int("requiredStreams", requiredStreams).
Msg("waiting for enough stream connections to continue syncing")

evtCh := make(chan streammanager.EvtStreamAdded, 1)
sub := d.syncProtocol.SubscribeAddStreamEvent(evtCh)
defer sub.Unsubscribe()
@@ -177,12 +188,11 @@ func (d *Downloader) waitForBootFinish() {
trigger()

case <-checkCh:
if d.syncProtocol.NumStreams() >= d.config.InitStreams {
fmt.Printf("boot completed for shard %d ( %d streams are connected )\n", d.bc.ShardID(), d.syncProtocol.NumStreams())
return
if d.syncProtocol.NumStreams() >= requiredStreams {
return true, d.syncProtocol.NumStreams()
}
case <-d.closeC:
return
return false, d.syncProtocol.NumStreams()
}
}
}
@@ -212,6 +222,9 @@ func (d *Downloader) loop() {
case <-d.downloadC:
bnBeforeSync := d.bc.CurrentBlock().NumberU64()
estimatedHeight, addedBN, err := d.stagedSyncInstance.doSync(d.ctx, initSync)
if err == ErrNotEnoughStreams {
d.waitForEnoughStreams(d.config.MinStreams)
}
if err != nil {
//TODO: if there is a bad block which can't be resolved
if d.stagedSyncInstance.invalidBlock.Active {
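The refactor above extracts the boot-wait loop into `waitForEnoughStreams` so the same logic can be reused when `doSync` fails with `ErrNotEnoughStreams`. A self-contained sketch of that wait-until-threshold-or-close pattern (the `numStreams`/`closeC` names stand in for the downloader's sync protocol and lifecycle channel; timings are illustrative, and the real code reacts to stream-added events rather than only polling):

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// waitForStreams polls a stream count until it reaches the required
// threshold or closeC fires, returning whether the threshold was met
// and the last observed count.
func waitForStreams(numStreams func() int, required int, closeC <-chan struct{}) (bool, int) {
	check := time.NewTicker(10 * time.Millisecond)
	defer check.Stop()
	for {
		select {
		case <-check.C:
			if n := numStreams(); n >= required {
				return true, n
			}
		case <-closeC:
			return false, numStreams()
		}
	}
}

func main() {
	var streams int64
	go func() { // simulate streams connecting over time
		for i := 0; i < 5; i++ {
			time.Sleep(20 * time.Millisecond)
			atomic.AddInt64(&streams, 1)
		}
	}()
	ok, n := waitForStreams(func() int { return int(atomic.LoadInt64(&streams)) }, 3, make(chan struct{}))
	fmt.Println(ok, n >= 3) // true true
}
```

Returning the observed count alongside the success flag is what lets the caller log "boot completed for shard X (N streams are connected)" without querying the protocol again.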
2 changes: 1 addition & 1 deletion api/service/stagedstreamsync/errors.go
@@ -14,7 +14,7 @@ var (
ErrUnexpectedNumberOfBlockHashes = WrapStagedSyncError("unexpected number of getBlocksByHashes result")
ErrUnexpectedBlockHashes = WrapStagedSyncError("unexpected get block hashes result delivered")
ErrNilBlock = WrapStagedSyncError("nil block found")
ErrNotEnoughStreams = WrapStagedSyncError("not enough streams")
ErrNotEnoughStreams = WrapStagedSyncError("number of streams smaller than minimum required")
ErrParseCommitSigAndBitmapFail = WrapStagedSyncError("parse commitSigAndBitmap failed")
ErrVerifyHeaderFail = WrapStagedSyncError("verify header failed")
ErrInsertChainFail = WrapStagedSyncError("insert to chain failed")
5 changes: 5 additions & 0 deletions api/service/stagedstreamsync/short_range_helper.go
@@ -7,6 +7,7 @@ import (

"github.com/ethereum/go-ethereum/common"
"github.com/harmony-one/harmony/core/types"
"github.com/harmony-one/harmony/internal/utils"
syncProto "github.com/harmony-one/harmony/p2p/stream/protocols/sync"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
"github.com/pkg/errors"
@@ -132,6 +133,10 @@ func (sh *srHelper) getBlocksByHashes(ctx context.Context, hashes []common.Hash,

func (sh *srHelper) checkPrerequisites() error {
if sh.syncProtocol.NumStreams() < sh.config.Concurrency {
utils.Logger().Info().
Int("available streams", sh.syncProtocol.NumStreams()).
Interface("concurrency", sh.config.Concurrency).
Msg("not enough streams to do concurrent processes")
return ErrNotEnoughStreams
}
return nil