Release Candidate: dev -> main (#4319)

* Rebase dev branch to current main branch (#4318)

* add openssl compatibility on m2 chips using darwin (#4302)

Adds support for OpenSSL on macOS Ventura with M2 chips.

* [dumpdb] ensure each cross link is dumped (#4311)

* bump libp2p to version 0.24.0 and update its dependencies and relevant tests (#4315)

* Removed legacy syncing peer provider. (#4260)

* Removed legacy syncing peer provider.

* Fix localnet.

* Fix migrate version.

* Rebased on main.

* Fix formatting.

* Remove blockchain dependency from engine. (#4310)

* Consensus no longer requires `Node` as a circular dependency.

* Rebased upon main.

* Removed engine beacon chain dependency.

* Fixed nil error.

* Fixed error.

* bump libp2p to version 0.24.0 and update its dependencies and relevant tests

* fix format, remove wrongly added configs

* add back wrongly deleted comment

* fix travis go checker

Co-authored-by: Konstantin <355847+Frozen@users.noreply.github.com>
Co-authored-by: GheisMohammadi <Gheis.Mohammadi@gmail.com>

* Fix for consensus stuck. (#4307)

* Added check for block validity.

* Starts new view change if block invalid.

* Revert "Starts new view change if block invalid."

This reverts commit e889fa5da2e0780f087ab7dae5106b96287706db.

* staged dns sync v1.0 (#4316)

* staged dns sync v1.0

* enabled stream downloader for localnet

* fix code review issues

* remove extra lock

Co-authored-by: GheisMohammadi <Gheis.Mohammadi@gmail.com>

* add description for closing client and change randomize process to make sure only online nodes are added to sync config (#4276)

* add description for closing client and change randomize process to make sure only online nodes are added to sync config

* fix sync test

* fix legacy limitNumPeers test

* add WaitForEachPeerToConnect to node configs to make parallel peer connection optional

Co-authored-by: GheisMohammadi <Gheis.Mohammadi@gmail.com>
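A hedged sketch of the WaitForEachPeerToConnect option described above (the wrapper function is illustrative; only the CreateSyncConfig signature comes from this change):

```go
package main

import (
	"github.com/harmony-one/harmony/api/service/legacysync"
	"github.com/harmony-one/harmony/p2p"
)

// setupSync is a hypothetical caller. With waitForEachPeerToConnect=false,
// peers are dialed in parallel; with true, they are dialed one by one and
// only peers whose connections reach the Ready state join the sync config.
func setupSync(ss *legacysync.StateSync, peers []p2p.Peer, shardID uint32, waitForEachPeerToConnect bool) error {
	return ss.CreateSyncConfig(peers, shardID, waitForEachPeerToConnect)
}
```

Per the commit above, the flag makes the parallel connection path optional: enabling the wait trades startup speed for the guarantee that only online peers are added.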

* Small fixes and code cleanup for network stack. (#4320)

* staged dns sync v1.0

* enabled stream downloader for localnet

* fix code review issues

* remove extra lock

* staged dns sync v1.0

* Fixed, code clean up and other.

* Fixed, code clean up and other.

* Fixed, code clean up and other.

* Fix config.

Co-authored-by: GheisMohammadi <Gheis.Mohammadi@gmail.com>

* Fix not disable cache in archival mode (#4322)

* Feature registry (#4324)

* Registry for services.

* Test.

* Reverted comment.

* Fix.
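
The registry's source is not excerpted below (internal/registry/registry.go appears only in the file list), so here is a loudly hypothetical sketch of what a minimal service registry along these lines could look like; names and API are illustrative, not the actual internal/registry code:

```go
package registry

import "sync"

// Registry is an illustrative thread-safe service registry; the real
// internal/registry API in this PR may differ.
type Registry struct {
	mu       sync.Mutex
	services map[string]interface{}
}

// New creates an empty registry.
func New() *Registry {
	return &Registry{services: make(map[string]interface{})}
}

// Set registers a service under a name and returns the registry for chaining.
func (r *Registry) Set(name string, service interface{}) *Registry {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.services[name] = service
	return r
}

// Get returns the service registered under name, or nil if absent.
func (r *Registry) Get(name string) interface{} {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.services[name]
}
```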

* Slash fix (#4284)

* Implementation of new slashing rate calculation

* Write tests for the new slashing rate calculation

* Add engine.applySlashing tests

* fix #4059

Co-authored-by: Alex Brezas <abresas@gmail.com>
Co-authored-by: Dimitris Lamprinos <pkakelas@gmail.com>

* Bump github.com/aws/aws-sdk-go from 1.30.1 to 1.33.0 (#4325) (#4328)

Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.30.1 to 1.33.0.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/v1.33.0/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.30.1...v1.33.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump github.com/btcsuite/btcd from 0.21.0-beta to 0.23.2 (#4327) (#4329)

Bumps [github.com/btcsuite/btcd](https://github.com/btcsuite/btcd) from 0.21.0-beta to 0.23.2.
- [Release notes](https://github.com/btcsuite/btcd/releases)
- [Changelog](https://github.com/btcsuite/btcd/blob/master/CHANGES)
- [Commits](https://github.com/btcsuite/btcd/compare/v0.21.0-beta...v0.23.2)

---
updated-dependencies:
- dependency-name: github.com/btcsuite/btcd
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Max <82761650+MaxMustermann2@users.noreply.github.com>
Co-authored-by: Gheis <36589218+GheisMohammadi@users.noreply.github.com>
Co-authored-by: Konstantin <355847+Frozen@users.noreply.github.com>
Co-authored-by: GheisMohammadi <Gheis.Mohammadi@gmail.com>
Co-authored-by: Danny Willis <102543677+dannyposi@users.noreply.github.com>
Co-authored-by: PeekPI <894646171@QQ.COM>
Co-authored-by: Alex Brezas <abresas@gmail.com>
Co-authored-by: Dimitris Lamprinos <pkakelas@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
branch: pull/4348/head
author: Casey Gardiner, committed via GitHub
parent 9d466b8f3b
commit 7ab8be3377
93 files changed (changed-line counts in parentheses):
Makefile (6)
README.md (6)
api/service/legacysync/downloader/client.go (52)
api/service/legacysync/epoch_syncing.go (51)
api/service/legacysync/helpers.go (40)
api/service/legacysync/syncing.go (113)
api/service/legacysync/syncing_test.go (47)
api/service/stagedsync/default_stages.go (86)
api/service/stagedsync/errors.go (51)
api/service/stagedsync/stage.go (106)
api/service/stagedsync/stage_blockhashes.go (698)
api/service/stagedsync/stage_bodies.go (784)
api/service/stagedsync/stage_finish.go (114)
api/service/stagedsync/stage_heads.go (146)
api/service/stagedsync/stage_lastmile.go (121)
api/service/stagedsync/stage_state.go (330)
api/service/stagedsync/stagedsync.go (1316)
api/service/stagedsync/stages.go (94)
api/service/stagedsync/sync_config.go (401)
api/service/stagedsync/sync_status.go (90)
api/service/stagedsync/syncing.go (292)
api/service/stagedsync/task_queue.go (38)
cmd/bootnode/main.go (2)
cmd/harmony/config_migrations.go (21)
cmd/harmony/default.go (24)
cmd/harmony/dumpdb.go (16)
cmd/harmony/flags.go (20)
cmd/harmony/flags_test.go (7)
cmd/harmony/main.go (28)
consensus/consensus.go (18)
consensus/consensus_service.go (18)
consensus/consensus_test.go (3)
consensus/consensus_v2.go (28)
consensus/double_sign.go (4)
consensus/downloader.go (4)
consensus/leader.go (2)
consensus/threshold.go (2)
consensus/validator.go (10)
consensus/view_change.go (13)
consensus/view_change_construct.go (4)
core/offchain.go (2)
go.mod (240)
go.sum (1162)
hmy/hmy.go (2)
hmy/net.go (2)
internal/chain/engine.go (31)
internal/chain/engine_test.go (387)
internal/configs/harmony/harmony.go (16)
internal/configs/node/config.go (15)
internal/configs/node/network.go (2)
internal/registry/registry.go (35)
internal/registry/registry_test.go (16)
internal/utils/utils.go (2)
internal/utils/utils_test.go (6)
node/api.go (2)
node/node.go (81)
node/node_handler.go (4)
node/node_handler_test.go (24)
node/node_newblock_test.go (3)
node/node_syncing.go (107)
node/node_test.go (19)
p2p/discovery/discovery.go (8)
p2p/discovery/discovery_test.go (3)
p2p/gater.go (8)
p2p/host.go (15)
p2p/metrics.go (4)
p2p/security/security.go (2)
p2p/security/security_test.go (39)
p2p/stream/common/streammanager/cooldown.go (2)
p2p/stream/common/streammanager/interface.go (6)
p2p/stream/common/streammanager/interface_test.go (6)
p2p/stream/common/streammanager/streammanager.go (6)
p2p/stream/common/streammanager/streammanager_test.go (2)
p2p/stream/protocols/sync/protocol.go (4)
p2p/stream/protocols/sync/protocol_test.go (4)
p2p/stream/protocols/sync/stream.go (2)
p2p/stream/protocols/sync/stream_test.go (25)
p2p/stream/types/interface.go (2)
p2p/stream/types/stream.go (2)
p2p/stream/types/utils.go (2)
p2p/types/peerAddr.go (2)
p2p/types/types.go (2)
p2p/utils_test.go (2)
rosetta/infra/harmony-mainnet.conf (3)
rosetta/infra/harmony-pstn.conf (3)
rosetta/services/network.go (2)
rosetta/services/network_test.go (2)
rpc/common/types.go (2)
scripts/go_executable_build.sh (6)
scripts/setup_bls_build_flags.sh (2)
staking/slash/double-sign.go (308)
staking/slash/double-sign_test.go (467)
test/helpers/p2p.go (2)

@@ -1,7 +1,7 @@
TOP:=$(realpath ..)
export CGO_CFLAGS:=-I$(TOP)/bls/include -I$(TOP)/mcl/include -I/usr/local/opt/openssl/include
export CGO_LDFLAGS:=-L$(TOP)/bls/lib -L/usr/local/opt/openssl/lib
export LD_LIBRARY_PATH:=$(TOP)/bls/lib:$(TOP)/mcl/lib:/usr/local/opt/openssl/lib:/opt/homebrew/opt/gmp/lib/:/opt/homebrew/opt/openssl/lib
export CGO_CFLAGS:=-I$(TOP)/bls/include -I$(TOP)/mcl/include -I/opt/homebrew/opt/openssl@1.1/include
export CGO_LDFLAGS:=-L$(TOP)/bls/lib -L/opt/homebrew/opt/openssl@1.1/lib
export LD_LIBRARY_PATH:=$(TOP)/bls/lib:$(TOP)/mcl/lib:/opt/homebrew/opt/openssl@1.1/lib:/opt/homebrew/opt/gmp/lib/:/opt/homebrew/opt/openssl@1.1/lib
export LIBRARY_PATH:=$(LD_LIBRARY_PATH)
export DYLD_FALLBACK_LIBRARY_PATH:=$(LD_LIBRARY_PATH)
export GO111MODULE:=on

@@ -114,9 +114,9 @@ The `make` command should automatically build the Harmony binary & all dependent
However, if you wish to bypass the Makefile, first export the build flags:
```bash
export CGO_CFLAGS="-I$GOPATH/src/github.com/harmony-one/bls/include -I$GOPATH/src/github.com/harmony-one/mcl/include -I/usr/local/opt/openssl/include"
export CGO_LDFLAGS="-L$GOPATH/src/github.com/harmony-one/bls/lib -L/usr/local/opt/openssl/lib"
export LD_LIBRARY_PATH=$GOPATH/src/github.com/harmony-one/bls/lib:$GOPATH/src/github.com/harmony-one/mcl/lib:/usr/local/opt/openssl/lib
export CGO_CFLAGS="-I$GOPATH/src/github.com/harmony-one/bls/include -I$GOPATH/src/github.com/harmony-one/mcl/include -I/opt/homebrew/opt/openssl@1.1/include"
export CGO_LDFLAGS="-L$GOPATH/src/github.com/harmony-one/bls/lib -L/opt/homebrew/opt/openssl@1.1/lib"
export LD_LIBRARY_PATH=$GOPATH/src/github.com/harmony-one/bls/lib:$GOPATH/src/github.com/harmony-one/mcl/lib:/opt/homebrew/opt/openssl@1.1/lib
export LIBRARY_PATH=$LD_LIBRARY_PATH
export DYLD_FALLBACK_LIBRARY_PATH=$LD_LIBRARY_PATH
export GO111MODULE=on

@@ -8,6 +8,7 @@ import (
pb "github.com/harmony-one/harmony/api/service/legacysync/downloader/proto"
"github.com/harmony-one/harmony/internal/utils"
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
)
// Client is the client model for downloader package.
@@ -15,17 +16,22 @@ type Client struct {
dlClient pb.DownloaderClient
opts []grpc.DialOption
conn *grpc.ClientConn
addr string
}
// ClientSetup sets up a Client given ip and port.
func ClientSetup(ip, port string) *Client {
func ClientSetup(ip, port string, withBlock bool) *Client {
client := Client{}
client.opts = append(client.opts, grpc.WithInsecure())
if withBlock {
client.opts = append(client.opts, grpc.WithBlock())
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
client.addr = fmt.Sprintf("%s:%s", ip, port)
var err error
client.conn, err = grpc.DialContext(ctx, fmt.Sprintf(ip+":"+port), client.opts...)
client.conn, err = grpc.DialContext(ctx, client.addr, client.opts...)
if err != nil {
utils.Logger().Error().Err(err).Str("ip", ip).Msg("[SYNC] client.go:ClientSetup fail to dial")
return nil
@@ -35,12 +41,50 @@ func ClientSetup(ip, port string) *Client {
return &client
}
// IsReady returns true if client is ready
func (client *Client) IsReady() bool {
return client.conn.GetState() == connectivity.Ready
}
// IsConnecting returns true if client is connecting
func (client *Client) IsConnecting() bool {
return client.conn.GetState() == connectivity.Connecting
}
// State returns the current connectivity state
func (client *Client) State() connectivity.State {
return client.conn.GetState()
}
// WaitForConnection waits for client to connect
func (client *Client) WaitForConnection(t time.Duration) bool {
ctx, cancel := context.WithTimeout(context.Background(), t)
defer cancel()
if client.conn.GetState() == connectivity.Ready {
return true
}
if ready := client.conn.WaitForStateChange(ctx, client.conn.GetState()); !ready {
return false
} else {
return client.conn.GetState() == connectivity.Ready
}
}
// Close closes the Client.
func (client *Client) Close() {
func (client *Client) Close(reason string) {
err := client.conn.Close()
if err != nil {
utils.Logger().Info().Msg("[SYNC] unable to close connection")
utils.Logger().Info().
Str("peerAddress", client.addr).
Msg("[SYNC] unable to close peer connection")
return
}
utils.Logger().Info().
Str("peerAddress", client.addr).
Str("reason", reason).
Msg("[SYNC] peer connection closed")
}
// GetBlockHashes gets block hashes from all the peers by calling grpc request.
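
A hedged usage sketch of the updated client API in this hunk (ClientSetup's new withBlock flag, WaitForConnection, and Close with a reason); the surrounding function is illustrative, not code from the PR:

```go
package main

import (
	"time"

	"github.com/harmony-one/harmony/api/service/legacysync/downloader"
)

// dialPeer is a hypothetical caller exercising the new API surface.
func dialPeer(ip, port string) *downloader.Client {
	// withBlock=true asks gRPC to block on dial until connected or timed out.
	client := downloader.ClientSetup(ip, port, true)
	if client == nil {
		return nil
	}
	// Explicitly wait up to 10s for the connection to reach the Ready state.
	if !client.WaitForConnection(10 * time.Second) {
		client.Close("peer never became ready")
		return nil
	}
	return client
}
```

Callers would later release the peer with client.Close("done syncing") so the reason shows up in the [SYNC] logs, per the Close change above.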

@@ -37,13 +37,13 @@ type EpochSync struct {
// If the last result is expired, ask the remote DNS nodes for latest height and return the result.
func (ss *EpochSync) GetSyncStatus() SyncCheckResult {
return ss.syncStatus.Get(func() SyncCheckResult {
return ss.isInSync(false)
return ss.isSynchronized(false)
})
}
// isInSync query the remote DNS node for the latest height to check what is the current
// isSynchronized query the remote DNS node for the latest height to check what is the current
// sync status
func (ss *EpochSync) isInSync(_ bool) SyncCheckResult {
func (ss *EpochSync) isSynchronized(_ bool) SyncCheckResult {
if ss.syncConfig == nil {
return SyncCheckResult{} // If syncConfig is not instantiated, return not in sync
}
@@ -70,7 +70,7 @@ func (ss *EpochSync) isInSync(_ bool) SyncCheckResult {
Uint64("CurrentEpoch", curEpoch).
Msg("[EPOCHSYNC] Checking sync status")
return SyncCheckResult{
IsInSync: inSync,
IsSynchronized: inSync,
OtherHeight: otherHeight1,
HeightDiff: epochDiff,
}
@@ -85,17 +85,18 @@ func (ss *EpochSync) GetActivePeerNumber() int {
}
// SyncLoop will keep syncing with peers until it catches up
func (ss *EpochSync) SyncLoop(bc core.BlockChain, isBeacon bool, consensus *consensus.Consensus) time.Duration {
return time.Duration(ss.syncLoop(bc, isBeacon, consensus)) * time.Second
func (ss *EpochSync) SyncLoop(bc core.BlockChain, consensus *consensus.Consensus) time.Duration {
return time.Duration(syncLoop(bc, ss.syncConfig)) * time.Second
}
func (ss *EpochSync) syncLoop(bc core.BlockChain, isBeacon bool, _ *consensus.Consensus) (timeout int) {
maxHeight := getMaxPeerHeight(ss.syncConfig)
func syncLoop(bc core.BlockChain, syncConfig *SyncConfig) (timeout int) {
isBeacon := bc.ShardID() == 0
maxHeight := getMaxPeerHeight(syncConfig)
for {
if maxHeight == 0 || maxHeight == math.MaxUint64 {
utils.Logger().Info().
Msgf("[EPOCHSYNC] No peers to sync (isBeacon: %t, ShardID: %d, peerscount: %d)",
isBeacon, bc.ShardID(), ss.syncConfig.PeersCount())
Msgf("[EPOCHSYNC] No peers to sync (isBeacon: %t, ShardID: %d, peersCount: %d)",
isBeacon, bc.ShardID(), syncConfig.PeersCount())
return 10
}
@@ -104,19 +105,19 @@ func (ss *EpochSync) syncLoop(bc core.BlockChain, isBeacon bool, _ *consensus.Co
if otherEpoch == curEpoch+1 {
utils.Logger().Info().
Msgf("[EPOCHSYNC] Node is now IN SYNC! (isBeacon: %t, ShardID: %d, otherEpoch: %d, currentEpoch: %d, peersCount: %d)",
isBeacon, bc.ShardID(), otherEpoch, curEpoch, ss.syncConfig.PeersCount())
isBeacon, bc.ShardID(), otherEpoch, curEpoch, syncConfig.PeersCount())
return 60
}
if otherEpoch < curEpoch {
for _, peerCfg := range ss.syncConfig.GetPeers() {
ss.syncConfig.RemovePeer(peerCfg, fmt.Sprintf("[EPOCHSYNC]: current height is higher that others, removve peers: %s", peerCfg.String()))
for _, peerCfg := range syncConfig.GetPeers() {
syncConfig.RemovePeer(peerCfg, fmt.Sprintf("[EPOCHSYNC]: current height is higher that others, remove peers: %s", peerCfg.String()))
}
return 2
}
utils.Logger().Info().
Msgf("[EPOCHSYNC] Node is OUT OF SYNC (isBeacon: %t, ShardID: %d, otherEpoch: %d, currentEpoch: %d, peers count %d)",
isBeacon, bc.ShardID(), otherEpoch, curEpoch, ss.syncConfig.PeersCount())
isBeacon, bc.ShardID(), otherEpoch, curEpoch, syncConfig.PeersCount())
var heights []uint64
loopEpoch := curEpoch + 1
@@ -133,7 +134,7 @@ func (ss *EpochSync) syncLoop(bc core.BlockChain, isBeacon bool, _ *consensus.Co
return 10
}
err := ss.ProcessStateSync(heights, bc)
err := ProcessStateSync(syncConfig, heights, bc)
if err != nil {
utils.Logger().Error().Err(err).
Msgf("[EPOCHSYNC] ProcessStateSync failed (isBeacon: %t, ShardID: %d, otherEpoch: %d, currentEpoch: %d)",
@@ -144,11 +145,11 @@ func (ss *EpochSync) syncLoop(bc core.BlockChain, isBeacon bool, _ *consensus.Co
}
// ProcessStateSync processes state sync from the blocks received but not yet processed so far
func (ss *EpochSync) ProcessStateSync(heights []uint64, bc core.BlockChain) error {
func ProcessStateSync(syncConfig *SyncConfig, heights []uint64, bc core.BlockChain) error {
var payload [][]byte
var peerCfg *SyncPeerConfig
peers := ss.syncConfig.GetPeers()
peers := syncConfig.GetPeers()
if len(peers) == 0 {
return errors.New("no peers to sync")
}
@@ -156,11 +157,11 @@ func (ss *EpochSync) ProcessStateSync(heights []uint64, bc core.BlockChain) erro
for index, peerConfig := range peers {
resp := peerConfig.GetClient().GetBlocksByHeights(heights)
if resp == nil {
ss.syncConfig.RemovePeer(peerConfig, fmt.Sprintf("[EPOCHSYNC]: no response from peer: #%d %s, count %d", index, peerConfig.String(), len(peers)))
syncConfig.RemovePeer(peerConfig, fmt.Sprintf("[EPOCHSYNC]: no response from peer: #%d %s, count %d", index, peerConfig.String(), len(peers)))
continue
}
if len(resp.Payload) == 0 {
ss.syncConfig.RemovePeer(peerConfig, fmt.Sprintf("[EPOCHSYNC]: empty payload response from peer: #%d %s, count %d", index, peerConfig.String(), len(peers)))
syncConfig.RemovePeer(peerConfig, fmt.Sprintf("[EPOCHSYNC]: empty payload response from peer: #%d %s, count %d", index, peerConfig.String(), len(peers)))
continue
}
payload = resp.Payload
@@ -168,12 +169,12 @@ func (ss *EpochSync) ProcessStateSync(heights []uint64, bc core.BlockChain) erro
break
}
if len(payload) == 0 {
return errors.Errorf("empty payload: no blocks were returned by GetBlocksByHeights for all peers, currentPeersCount %d", ss.syncConfig.PeersCount())
return errors.Errorf("empty payload: no blocks were returned by GetBlocksByHeights for all peers, currentPeersCount %d", syncConfig.PeersCount())
}
err := ss.processWithPayload(payload, bc)
err := processWithPayload(payload, bc)
if err != nil {
// Assume that node sent us invalid data.
ss.syncConfig.RemovePeer(peerCfg, fmt.Sprintf("[EPOCHSYNC]: failed to process with payload from peer: %s", err.Error()))
syncConfig.RemovePeer(peerCfg, fmt.Sprintf("[EPOCHSYNC]: failed to process with payload from peer: %s", err.Error()))
utils.Logger().Error().Err(err).
Msgf("[EPOCHSYNC] Removing peer %s for invalid data", peerCfg.String())
return err
@@ -181,7 +182,7 @@ func (ss *EpochSync) ProcessStateSync(heights []uint64, bc core.BlockChain) erro
return nil
}
func (ss *EpochSync) processWithPayload(payload [][]byte, bc core.BlockChain) error {
func processWithPayload(payload [][]byte, bc core.BlockChain) error {
decoded := make([]*types.Block, 0, len(payload))
for idx, blockBytes := range payload {
block, err := RlpDecodeBlockOrBlockWithSig(blockBytes)
@@ -201,8 +202,8 @@ func (ss *EpochSync) processWithPayload(payload [][]byte, bc core.BlockChain) er
}
// CreateSyncConfig creates SyncConfig for StateSync object.
func (ss *EpochSync) CreateSyncConfig(peers []p2p.Peer, shardID uint32) error {
func (ss *EpochSync) CreateSyncConfig(peers []p2p.Peer, shardID uint32, waitForEachPeerToConnect bool) error {
var err error
ss.syncConfig, err = createSyncConfig(ss.syncConfig, peers, shardID)
ss.syncConfig, err = createSyncConfig(ss.syncConfig, peers, shardID, waitForEachPeerToConnect)
return err
}

@@ -28,11 +28,11 @@ func getMaxPeerHeight(syncConfig *SyncConfig) uint64 {
// utils.Logger().Debug().Bool("isBeacon", isBeacon).Str("peerIP", peerConfig.ip).Str("peerPort", peerConfig.port).Msg("[Sync]getMaxPeerHeight")
response, err := peerConfig.client.GetBlockChainHeight()
if err != nil {
utils.Logger().Warn().Err(err).Str("peerIP", peerConfig.ip).Str("peerPort", peerConfig.port).Msg("[Sync]GetBlockChainHeight failed")
utils.Logger().Warn().Err(err).Str("peerIP", peerConfig.peer.IP).Str("peerPort", peerConfig.peer.Port).Msg("[Sync]GetBlockChainHeight failed")
syncConfig.RemovePeer(peerConfig, fmt.Sprintf("failed getMaxPeerHeight for shard %d with message: %s", syncConfig.ShardID(), err.Error()))
return
}
utils.Logger().Info().Str("peerIP", peerConfig.ip).Uint64("blockHeight", response.BlockHeight).
utils.Logger().Info().Str("peerIP", peerConfig.peer.IP).Uint64("blockHeight", response.BlockHeight).
Msg("[SYNC] getMaxPeerHeight")
lock.Lock()
@@ -51,21 +51,22 @@ func getMaxPeerHeight(syncConfig *SyncConfig) uint64 {
return maxHeight
}
func createSyncConfig(syncConfig *SyncConfig, peers []p2p.Peer, shardID uint32) (*SyncConfig, error) {
func createSyncConfig(syncConfig *SyncConfig, peers []p2p.Peer, shardID uint32, waitForEachPeerToConnect bool) (*SyncConfig, error) {
// sanity check to ensure no duplicate peers
if err := checkPeersDuplicity(peers); err != nil {
return syncConfig, err
}
// limit the number of dns peers to connect
randSeed := time.Now().UnixNano()
peers = limitNumPeers(peers, randSeed)
targetSize, peers := limitNumPeers(peers, randSeed)
utils.Logger().Debug().
Int("len", len(peers)).
Int("peers count", len(peers)).
Int("target size", targetSize).
Uint32("shardID", shardID).
Msg("[SYNC] CreateSyncConfig: len of peers")
if len(peers) == 0 {
if targetSize == 0 {
return syncConfig, errors.New("[SYNC] no peers to connect to")
}
if syncConfig != nil {
@@ -73,24 +74,43 @@ func createSyncConfig(syncConfig *SyncConfig, peers []p2p.Peer, shardID uint32)
}
syncConfig = NewSyncConfig(shardID, nil)
if !waitForEachPeerToConnect {
var wg sync.WaitGroup
for _, peer := range peers {
ps := peers[:targetSize]
for _, peer := range ps {
wg.Add(1)
go func(peer p2p.Peer) {
defer wg.Done()
client := downloader.ClientSetup(peer.IP, peer.Port)
client := downloader.ClientSetup(peer.IP, peer.Port, false)
if client == nil {
return
}
peerConfig := &SyncPeerConfig{
ip: peer.IP,
port: peer.Port,
peer: peer,
client: client,
}
syncConfig.AddPeer(peerConfig)
}(peer)
}
wg.Wait()
} else {
var connectedPeers int
for _, peer := range peers {
client := downloader.ClientSetup(peer.IP, peer.Port, true)
if client == nil || !client.IsReady() {
continue
}
peerConfig := &SyncPeerConfig{
peer: peer,
client: client,
}
syncConfig.AddPeer(peerConfig)
connectedPeers++
if connectedPeers >= targetSize {
break
}
}
}
utils.Logger().Info().
Int("len", len(syncConfig.peers)).
Uint32("shardID", shardID).

@@ -44,12 +44,14 @@ const (
numPeersHighBound = 5
downloadTaskBatch = 5
// LoopMinTime is the minimum time one sync loop iteration must take; if the loop finishes sooner, it waits for the remainder
LoopMinTime = 0
)
// SyncPeerConfig is peer config to sync.
type SyncPeerConfig struct {
ip string
port string
peer p2p.Peer
peerHash []byte
client *downloader.Client
blockHashes [][]byte // block hashes before node doing sync
@@ -64,7 +66,7 @@ func (peerConfig *SyncPeerConfig) GetClient() *downloader.Client {
// IsEqual checks the equality between two sync peers
func (peerConfig *SyncPeerConfig) IsEqual(pc2 *SyncPeerConfig) bool {
return peerConfig.ip == pc2.ip && peerConfig.port == pc2.port
return peerConfig.peer.IP == pc2.peer.IP && peerConfig.peer.Port == pc2.peer.Port
}
// SyncBlockTask is the task struct to sync a specific block.
@@ -161,6 +163,9 @@ func (sc *SyncConfig) ForEachPeer(f func(peer *SyncPeerConfig) (brk bool)) {
}
func (sc *SyncConfig) PeersCount() int {
if sc == nil {
return 0
}
sc.mtx.RLock()
defer sc.mtx.RUnlock()
return len(sc.peers)
@@ -171,7 +176,8 @@ func (sc *SyncConfig) RemovePeer(peer *SyncPeerConfig, reason string) {
sc.mtx.Lock()
defer sc.mtx.Unlock()
peer.client.Close()
closeReason := fmt.Sprintf("remove peer (reason: %s)", reason)
peer.client.Close(closeReason)
for i, p := range sc.peers {
if p == peer {
sc.peers = append(sc.peers[:i], sc.peers[i+1:]...)
@@ -179,8 +185,8 @@ func (sc *SyncConfig) RemovePeer(peer *SyncPeerConfig, reason string) {
}
}
utils.Logger().Info().
Str("peerIP", peer.ip).
Str("peerPortMsg", peer.port).
Str("peerIP", peer.peer.IP).
Str("peerPortMsg", peer.peer.Port).
Str("reason", reason).
Msg("[SYNC] remove GRPC peer")
}
@@ -285,7 +291,7 @@ func (sc *SyncConfig) CloseConnections() {
sc.mtx.RLock()
defer sc.mtx.RUnlock()
for _, pc := range sc.peers {
pc.client.Close()
pc.client.Close("close all connections")
}
}
@@ -360,9 +366,9 @@ func (peerConfig *SyncPeerConfig) GetBlocks(hashes [][]byte) ([][]byte, error) {
}
// CreateSyncConfig creates SyncConfig for StateSync object.
func (ss *StateSync) CreateSyncConfig(peers []p2p.Peer, shardID uint32) error {
func (ss *StateSync) CreateSyncConfig(peers []p2p.Peer, shardID uint32, waitForEachPeerToConnect bool) error {
var err error
ss.syncConfig, err = createSyncConfig(ss.syncConfig, peers, shardID)
ss.syncConfig, err = createSyncConfig(ss.syncConfig, peers, shardID, waitForEachPeerToConnect)
return err
}
@@ -384,16 +390,16 @@ func checkPeersDuplicity(ps []p2p.Peer) error {
}
// limitNumPeers limits the number of peers to release some server-end resources.
func limitNumPeers(ps []p2p.Peer, randSeed int64) []p2p.Peer {
func limitNumPeers(ps []p2p.Peer, randSeed int64) (int, []p2p.Peer) {
targetSize := calcNumPeersWithBound(len(ps), NumPeersLowBound, numPeersHighBound)
if len(ps) <= targetSize {
return ps
return len(ps), ps
}
r := rand.New(rand.NewSource(randSeed))
r.Shuffle(len(ps), func(i, j int) { ps[i], ps[j] = ps[j], ps[i] })
return ps[:targetSize]
return targetSize, ps
}
// Peers are expected to be limited to half of the size, capped between lowBound and highBound.
@@ -459,19 +465,20 @@ func (sc *SyncConfig) InitForTesting(client *downloader.Client, blockHashes [][]
func (sc *SyncConfig) cleanUpPeers(maxFirstID int) {
fixedPeer := sc.peers[maxFirstID]
utils.Logger().Info().Int("peers", len(sc.peers)).Msg("[SYNC] before cleanUpPeers")
var removedPeers int
for i := 0; i < len(sc.peers); i++ {
if CompareSyncPeerConfigByblockHashes(fixedPeer, sc.peers[i]) != 0 {
// TODO: move it into a util delete func.
// See tip https://github.com/golang/go/wiki/SliceTricks
// Close the client and remove the peer out of the
sc.peers[i].client.Close()
sc.peers[i].client.Close("cleanup peers")
copy(sc.peers[i:], sc.peers[i+1:])
sc.peers[len(sc.peers)-1] = nil
sc.peers = sc.peers[:len(sc.peers)-1]
removedPeers++
}
}
utils.Logger().Info().Int("peers", len(sc.peers)).Msg("[SYNC] post cleanUpPeers")
utils.Logger().Info().Int("removed peers", removedPeers).Msg("[SYNC] post cleanUpPeers")
}
// GetBlockHashesConsensusAndCleanUp selects the most common peer config based on their block hashes to download/sync.
@@ -492,7 +499,7 @@ func (sc *SyncConfig) GetBlockHashesConsensusAndCleanUp() error {
}
utils.Logger().Info().
Int("maxFirstID", maxFirstID).
Str("targetPeerIP", sc.peers[maxFirstID].ip).
Str("targetPeerIP", sc.peers[maxFirstID].peer.IP).
Int("maxCount", maxCount).
Int("hashSize", len(sc.peers[maxFirstID].blockHashes)).
Msg("[SYNC] block consensus hashes")
@@ -512,15 +519,15 @@ func (ss *StateSync) getConsensusHashes(startHash []byte, size uint32) error {
response := peerConfig.client.GetBlockHashes(startHash, size, ss.selfip, ss.selfport)
if response == nil {
utils.Logger().Warn().
Str("peerIP", peerConfig.ip).
Str("peerPort", peerConfig.port).
Str("peerIP", peerConfig.peer.IP).
Str("peerPort", peerConfig.peer.Port).
Msg("[SYNC] getConsensusHashes Nil Response")
ss.syncConfig.RemovePeer(peerConfig, fmt.Sprintf("StateSync %d: nil response for GetBlockHashes", ss.blockChain.ShardID()))
return
}
utils.Logger().Info().Uint32("queried blockHash size", size).
Int("got blockHashSize", len(response.Payload)).
Str("PeerIP", peerConfig.ip).
Str("PeerIP", peerConfig.peer.IP).
Msg("[SYNC] GetBlockHashes")
if len(response.Payload) > int(size+1) {
utils.Logger().Warn().
@@ -582,13 +589,13 @@ func (ss *StateSync) downloadBlocks(bc core.BlockChain) {
payload, err := peerConfig.GetBlocks(tasks.blockHashes())
if err != nil {
utils.Logger().Warn().Err(err).
Str("peerID", peerConfig.ip).
Str("port", peerConfig.port).
Str("peerID", peerConfig.peer.IP).
Str("port", peerConfig.peer.Port).
Msg("[SYNC] downloadBlocks: GetBlocks failed")
ss.syncConfig.RemovePeer(peerConfig, fmt.Sprintf("StateSync %d: error returned for GetBlocks: %s", ss.blockChain.ShardID(), err.Error()))
return
}
if err != nil || len(payload) == 0 {
if len(payload) == 0 {
count++
utils.Logger().Error().Int("failNumber", count).
Msg("[SYNC] downloadBlocks: no more retrievable blocks")
@@ -855,7 +862,7 @@ func (ss *StateSync) UpdateBlockAndStatus(block *types.Block, bc core.BlockChain
haveCurrentSig := len(block.GetCurrentCommitSig()) != 0
// Verify block signatures
if block.NumberU64() > 1 {
// Verify signature every 100 blocks
// Verify signature every N blocks (where N is verifyHeaderBatchSize, adjustable in configs)
verifySeal := block.NumberU64()%verifyHeaderBatchSize == 0 || verifyAllSig
verifyCurrentSig := verifyAllSig && haveCurrentSig
if verifyCurrentSig {
@@ -894,7 +901,7 @@ func (ss *StateSync) UpdateBlockAndStatus(block *types.Block, bc core.BlockChain
utils.Logger().Error().
Err(err).
Msgf(
"[SYNC] UpdateBlockAndStatus: Error adding newck to blockchain %d %d",
"[SYNC] UpdateBlockAndStatus: Error adding new block to blockchain %d %d",
block.NumberU64(),
block.ShardID(),
)
@@ -1004,7 +1011,7 @@ func (peerConfig *SyncPeerConfig) registerToBroadcast(peerHash []byte, ip, port
}
func (peerConfig *SyncPeerConfig) String() interface{} {
return fmt.Sprintf("peer: %s:%s ", peerConfig.ip, peerConfig.port)
return fmt.Sprintf("peer: %s:%s ", peerConfig.peer.IP, peerConfig.peer.Port)
}
// RegisterNodeInfo will register node to peers to accept future new block broadcasting
@@ -1018,12 +1025,12 @@ func (ss *StateSync) RegisterNodeInfo() int {
count := 0
ss.syncConfig.ForEachPeer(func(peerConfig *SyncPeerConfig) (brk bool) {
logger := utils.Logger().With().Str("peerPort", peerConfig.port).Str("peerIP", peerConfig.ip).Logger()
logger := utils.Logger().With().Str("peerPort", peerConfig.peer.Port).Str("peerIP", peerConfig.peer.IP).Logger()
if count >= registrationNumber {
brk = true
return
}
if peerConfig.ip == ss.selfip && peerConfig.port == GetSyncingPort(ss.selfport) {
if peerConfig.peer.IP == ss.selfip && peerConfig.peer.Port == GetSyncingPort(ss.selfport) {
logger.Debug().
Str("selfport", ss.selfport).
Str("selfsyncport", GetSyncingPort(ss.selfport)).
@@ -1058,11 +1065,14 @@ func (ss *StateSync) GetMaxPeerHeight() uint64 {
}
// SyncLoop will keep syncing with peers until it catches up
func (ss *StateSync) SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeacon bool, consensus *consensus.Consensus) {
func (ss *StateSync) SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeacon bool, consensus *consensus.Consensus, loopMinTime time.Duration) {
utils.Logger().Info().Msgf("legacy sync is executing ...")
if !isBeacon {
ss.RegisterNodeInfo()
}
for {
start := time.Now()
otherHeight := getMaxPeerHeight(ss.syncConfig)
currentHeight := bc.CurrentBlock().NumberU64()
if currentHeight >= otherHeight {
@@ -1089,6 +1099,14 @@ func (ss *StateSync) SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeaco
break
}
ss.purgeOldBlocksFromCache()
if loopMinTime != 0 {
waitTime := loopMinTime - time.Since(start)
c := time.After(waitTime)
select {
case <-c:
}
}
}
if consensus != nil {
if err := ss.addConsensusLastMile(bc, consensus); err != nil {
@@ -1099,6 +1117,7 @@ func (ss *StateSync) SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeaco
consensus.UpdateConsensusInformation()
}
}
utils.Logger().Info().Msgf("legacy sync is executed")
ss.purgeAllBlocksFromCache()
}
@@ -1149,12 +1168,19 @@ type (
}
SyncCheckResult struct {
IsInSync bool
IsSynchronized bool
OtherHeight uint64
HeightDiff uint64
}
)
func ParseResult(res SyncCheckResult) (IsSynchronized bool, OtherHeight uint64, HeightDiff uint64) {
IsSynchronized = res.IsSynchronized
OtherHeight = res.OtherHeight
HeightDiff = res.HeightDiff
return IsSynchronized, OtherHeight, HeightDiff
}
func newSyncStatus(role nodeconfig.Role) syncStatus {
expiration := getSyncStatusExpiration(role)
return syncStatus{
@@ -1200,6 +1226,11 @@ func (status *syncStatus) Clone() syncStatus {
}
}
func (ss *StateSync) IsSynchronized() bool {
result := ss.GetSyncStatus()
return result.IsSynchronized
}
func (status *syncStatus) expired() bool {
return time.Since(status.lastUpdateTime) > status.expiration
}
@@ -1214,20 +1245,32 @@ func (status *syncStatus) update(result SyncCheckResult) {
// If the last result is expired, ask the remote DNS nodes for latest height and return the result.
func (ss *StateSync) GetSyncStatus() SyncCheckResult {
return ss.syncStatus.Get(func() SyncCheckResult {
return ss.isInSync(false)
return ss.isSynchronized(false)
})
}
func (ss *StateSync) GetParsedSyncStatus() (IsSynchronized bool, OtherHeight uint64, HeightDiff uint64) {
res := ss.syncStatus.Get(func() SyncCheckResult {
return ss.isSynchronized(false)
})
return ParseResult(res)
}
// GetSyncStatusDoubleChecked returns the sync status when enforcing an immediate query on DNS nodes
// with a double check to avoid false alarm.
func (ss *StateSync) GetSyncStatusDoubleChecked() SyncCheckResult {
result := ss.isInSync(true)
result := ss.isSynchronized(true)
return result
}
// isInSync query the remote DNS node for the latest height to check what is the current
func (ss *StateSync) GetParsedSyncStatusDoubleChecked() (IsSynchronized bool, OtherHeight uint64, HeightDiff uint64) {
result := ss.isSynchronized(true)
return ParseResult(result)
}
// isSynchronized query the remote DNS node for the latest height to check what is the current
// sync status
func (ss *StateSync) isInSync(doubleCheck bool) SyncCheckResult {
func (ss *StateSync) isSynchronized(doubleCheck bool) SyncCheckResult {
if ss.syncConfig == nil {
return SyncCheckResult{} // If syncConfig is not instantiated, return not in sync
}
@@ -1245,7 +1288,7 @@ func (ss *StateSync) isInSync(doubleCheck bool) SyncCheckResult {
Uint64("lastHeight", lastHeight).
Msg("[SYNC] Checking sync status")
return SyncCheckResult{
IsInSync: !wasOutOfSync,
IsSynchronized: !wasOutOfSync,
OtherHeight: otherHeight1,
HeightDiff: heightDiff,
}
@@ -1269,7 +1312,7 @@ func (ss *StateSync) isInSync(doubleCheck bool) SyncCheckResult {
heightDiff = 0 // overflow
}
return SyncCheckResult{
IsInSync: !(wasOutOfSync && isOutOfSync && lastHeight == currentHeight),
IsSynchronized: !(wasOutOfSync && isOutOfSync && lastHeight == currentHeight),
OtherHeight: otherHeight2,
HeightDiff: heightDiff,
}

@@ -29,34 +29,46 @@ func TestSyncPeerConfig_IsEqual(t *testing.T) {
}{
{
p1: &SyncPeerConfig{
ip: "0.0.0.1",
port: "1",
peer: p2p.Peer{
IP: "0.0.0.1",
Port: "1",
},
},
p2: &SyncPeerConfig{
ip: "0.0.0.1",
port: "2",
peer: p2p.Peer{
IP: "0.0.0.1",
Port: "2",
},
},
exp: false,
},
{
p1: &SyncPeerConfig{
ip: "0.0.0.1",
port: "1",
peer: p2p.Peer{
IP: "0.0.0.1",
Port: "1",
},
},
p2: &SyncPeerConfig{
ip: "0.0.0.2",
port: "1",
peer: p2p.Peer{
IP: "0.0.0.2",
Port: "1",
},
},
exp: false,
},
{
p1: &SyncPeerConfig{
ip: "0.0.0.1",
port: "1",
peer: p2p.Peer{
IP: "0.0.0.1",
Port: "1",
},
},
p2: &SyncPeerConfig{
ip: "0.0.0.1",
port: "1",
peer: p2p.Peer{
IP: "0.0.0.1",
Port: "1",
},
},
exp: true,
},
@@ -167,7 +179,8 @@ func TestLimitPeersWithBound(t *testing.T) {
for _, test := range tests {
ps := makePeersForTest(test.size)
res := limitNumPeers(ps, 1)
sz, res := limitNumPeers(ps, 1)
res = res[:sz]
if len(res) != test.expSize {
t.Errorf("result size unexpected: %v / %v", len(res), test.expSize)
@@ -183,8 +196,10 @@ func TestLimitPeersWithBound_random(t *testing.T) {
ps2 := makePeersForTest(100)
s1, s2 := int64(1), int64(2)
res1 := limitNumPeers(ps1, s1)
res2 := limitNumPeers(ps2, s2)
sz1, res1 := limitNumPeers(ps1, s1)
res1 = res1[:sz1]
sz2, res2 := limitNumPeers(ps2, s2)
res2 = res2[:sz2]
if reflect.DeepEqual(res1, res2) {
t.Fatal("not randomized limit peer")
}
@@ -280,7 +295,7 @@ func TestSyncStatus_Get_Concurrency(t *testing.T) {
fb := func() SyncCheckResult {
time.Sleep(1 * time.Second)
atomic.AddInt32(&updated, 1)
return SyncCheckResult{IsInSync: true}
return SyncCheckResult{IsSynchronized: true}
}
for i := 0; i != 20; i++ {
wg.Add(1)

@@ -0,0 +1,86 @@
package stagedsync
import (
"context"
)
type ForwardOrder []SyncStageID
type RevertOrder []SyncStageID
type CleanUpOrder []SyncStageID
var DefaultForwardOrder = ForwardOrder{
Heads,
BlockHashes,
BlockBodies,
// Stages below don't use Internet
States,
LastMile,
Finish,
}
var DefaultRevertOrder = RevertOrder{
Finish,
LastMile,
States,
BlockBodies,
BlockHashes,
Heads,
}
var DefaultCleanUpOrder = CleanUpOrder{
Finish,
LastMile,
States,
BlockBodies,
BlockHashes,
Heads,
}
func DefaultStages(ctx context.Context,
headsCfg StageHeadsCfg,
blockHashesCfg StageBlockHashesCfg,
bodiesCfg StageBodiesCfg,
statesCfg StageStatesCfg,
lastMileCfg StageLastMileCfg,
finishCfg StageFinishCfg) []*Stage {
handlerStageHeads := NewStageHeads(headsCfg)
handlerStageBlockHashes := NewStageBlockHashes(blockHashesCfg)
handlerStageBodies := NewStageBodies(bodiesCfg)
handleStageStates := NewStageStates(statesCfg)
handlerStageLastMile := NewStageLastMile(lastMileCfg)
handlerStageFinish := NewStageFinish(finishCfg)
return []*Stage{
{
ID: Heads,
Description: "Retrieve Chain Heads",
Handler: handlerStageHeads,
},
{
ID: BlockHashes,
Description: "Download block hashes",
Handler: handlerStageBlockHashes,
},
{
ID: BlockBodies,
Description: "Download block bodies",
Handler: handlerStageBodies,
},
{
ID: States,
Description: "Insert new blocks and update blockchain states",
Handler: handleStageStates,
},
{
ID: LastMile,
Description: "update status for blocks after sync and update last mile blocks as well",
Handler: handlerStageLastMile,
},
{
ID: Finish,
Description: "Final stage to update current block for the RPC API",
Handler: handlerStageFinish,
},
}
}
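
For orientation, a hedged sketch of wiring these stages together. Only NewStageBlockHashesCfg's constructor appears in this excerpt, so the other cfg values are zero-valued placeholders rather than the node's real construction:

```go
package stagedsync

import (
	"context"

	"github.com/harmony-one/harmony/core"
	"github.com/ledgerwatch/erigon-lib/kv"
)

// buildStages is illustrative only: it assembles the six stage configs into
// the default pipeline. Every cfg except blockHashesCfg is a zero-valued
// placeholder here, not how cmd/harmony actually builds them.
func buildStages(ctx context.Context, bc core.BlockChain, db kv.RwDB, isBeacon bool) []*Stage {
	blockHashesCfg := NewStageBlockHashesCfg(ctx, bc, db, isBeacon, true /* turbo */, false /* logProgress */)
	var (
		headsCfg    StageHeadsCfg
		bodiesCfg   StageBodiesCfg
		statesCfg   StageStatesCfg
		lastMileCfg StageLastMileCfg
		finishCfg   StageFinishCfg
	)
	return DefaultStages(ctx, headsCfg, blockHashesCfg, bodiesCfg, statesCfg, lastMileCfg, finishCfg)
}
```

The returned stages would then be executed in DefaultForwardOrder, with DefaultRevertOrder and DefaultCleanUpOrder applied in reverse as defined above.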

@@ -0,0 +1,51 @@
package stagedsync
import (
"fmt"
)
// Errors ...
var (
ErrRegistrationFail = WrapStagedSyncError("registration failed")
ErrGetBlock = WrapStagedSyncError("get block failed")
ErrGetBlockHash = WrapStagedSyncError("get block hash failed")
ErrGetConsensusHashes = WrapStagedSyncError("get consensus hashes failed")
ErrGenStateSyncTaskQueue = WrapStagedSyncError("generate state sync task queue failed")
ErrDownloadBlocks = WrapStagedSyncError("get download blocks failed")
ErrUpdateBlockAndStatus = WrapStagedSyncError("update block and status failed")
ErrGenerateNewState = WrapStagedSyncError("get generate new state failed")
ErrFetchBlockHashProgressFail = WrapStagedSyncError("fetch cache progress for block hashes stage failed")
ErrFetchCachedBlockHashFail = WrapStagedSyncError("fetch cached block hashes failed")
ErrNotEnoughBlockHashes = WrapStagedSyncError("peers haven't sent all requested block hashes")
ErrRetrieveCachedProgressFail = WrapStagedSyncError("retrieving cache progress for block hashes stage failed")
ErrRetrieveCachedHashProgressFail = WrapStagedSyncError("retrieving cache progress for block hashes stage failed")
ErrSaveBlockHashesProgressFail = WrapStagedSyncError("saving progress for block hashes stage failed")
ErrSaveCachedBlockHashesProgressFail = WrapStagedSyncError("saving cache progress for block hashes stage failed")
ErrSavingCacheLastBlockHashFail = WrapStagedSyncError("saving cache last block hash for block hashes stage failed")
ErrCachingBlockHashFail = WrapStagedSyncError("caching downloaded block hashes failed")
ErrCommitTransactionFail = WrapStagedSyncError("failed to write db commit")
ErrUnexpectedNumberOfBlocks = WrapStagedSyncError("unexpected number of block delivered")
ErrSavingBodiesProgressFail = WrapStagedSyncError("saving progress for block bodies stage failed")
ErrAddTasksToQueueFail = WrapStagedSyncError("cannot add task to queue")
ErrSavingCachedBodiesProgressFail = WrapStagedSyncError("saving cache progress for blocks stage failed")
ErrRetrievingCachedBodiesProgressFail = WrapStagedSyncError("retrieving cache progress for blocks stage failed")
ErrNoConnectedPeers = WrapStagedSyncError("haven't connected to any peer yet!")
ErrNotEnoughConnectedPeers = WrapStagedSyncError("not enough connected peers")
ErrSaveStateProgressFail = WrapStagedSyncError("saving progress for block States stage failed")
ErrPruningCursorCreationFail = WrapStagedSyncError("failed to create cursor for pruning")
ErrInvalidBlockNumber = WrapStagedSyncError("invalid block number")
ErrInvalidBlockBytes = WrapStagedSyncError("invalid block bytes to insert into chain")
ErrAddTaskFailed = WrapStagedSyncError("cannot add task to queue")
ErrNodeNotEnoughBlockHashes = WrapStagedSyncError("some of the nodes didn't provide all block hashes")
ErrCachingBlocksFail = WrapStagedSyncError("caching downloaded block bodies failed")
ErrSaveBlocksFail = WrapStagedSyncError("save downloaded block bodies failed")
ErrStageNotFound = WrapStagedSyncError("stage not found")
ErrSomeNodesNotReady = WrapStagedSyncError("some nodes are not ready")
ErrSomeNodesBlockHashFail = WrapStagedSyncError("some nodes failed to download block hashes")
ErrMaxPeerHeightFail = WrapStagedSyncError("get max peer height failed")
)
// WrapStagedSyncError wraps errors for staged sync and returns error object
func WrapStagedSyncError(context string) error {
return fmt.Errorf("[STAGED_SYNC]: %s", context)
}

@@ -0,0 +1,106 @@
package stagedsync
import (
"github.com/ethereum/go-ethereum/common"
"github.com/ledgerwatch/erigon-lib/kv"
)
type ExecFunc func(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) error
type StageHandler interface {
// Exec is the execution function for the stage to move forward.
// * firstCycle - is it the first cycle of syncing.
// * invalidBlockRevert - whether the execution is to solve the invalid block
// * s - is the current state of the stage and contains stage data.
// * reverter - if the stage needs to cause reverting, `reverter` methods can be used.
Exec(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) error
// Revert is the reverting logic of the stage.
// * firstCycle - is it the first cycle of syncing.
// * u - contains information about the revert itself.
// * s - represents the state of this stage at the beginning of revert.
Revert(firstCycle bool, u *RevertState, s *StageState, tx kv.RwTx) error
// CleanUp is the execution function for the stage to prune old data.
// * firstCycle - is it the first cycle of syncing.
// * p - is the current state of the stage and contains stage data.
CleanUp(firstCycle bool, p *CleanUpState, tx kv.RwTx) error
}
// Stage is a single sync stage in staged sync.
type Stage struct {
// ID of the sync stage. Should not be empty and should be unique. It is recommended to prefix it with reverse domain to avoid clashes (`com.example.my-stage`).
ID SyncStageID
// Handler handles the logic for the stage
Handler StageHandler
// Description is a string that is shown in the logs.
Description string
// DisabledDescription is shown in the log with a message if the stage is disabled. Here, you can show which command line flags should be provided to enable the stage.
DisabledDescription string
// Disabled defines if the stage is disabled. It is set when the stage is built by its `StageBuilder`.
Disabled bool
}
// StageState is the state of the stage.
type StageState struct {
state *StagedSync
ID SyncStageID
BlockNumber uint64 // BlockNumber is the current block number of the stage at the beginning of the state execution.
}
func (s *StageState) LogPrefix() string { return s.state.LogPrefix() }
func (s *StageState) CurrentStageProgress(db kv.Getter) (uint64, error) {
return GetStageProgress(db, s.ID, s.state.isBeacon)
}
func (s *StageState) StageProgress(db kv.Getter, id SyncStageID) (uint64, error) {
return GetStageProgress(db, id, s.state.isBeacon)
}
// Update updates the stage state (current block number) in the database. Can be called multiple times during stage execution.
func (s *StageState) Update(db kv.Putter, newBlockNum uint64) error {
return SaveStageProgress(db, s.ID, s.state.isBeacon, newBlockNum)
}
func (s *StageState) UpdateCleanUp(db kv.Putter, blockNum uint64) error {
return SaveStageCleanUpProgress(db, s.ID, s.state.isBeacon, blockNum)
}
// Reverter allows the stage to cause a revert.
type Reverter interface {
// RevertTo begins staged sync revert to the specified block.
RevertTo(revertPoint uint64, invalidBlock common.Hash)
}
// RevertState contains the information about revert.
type RevertState struct {
ID SyncStageID
// RevertPoint is the block to revert to.
RevertPoint uint64
CurrentBlockNumber uint64
// If revert is caused by a bad block, this hash is not empty
InvalidBlock common.Hash
state *StagedSync
}
func (u *RevertState) LogPrefix() string { return u.state.LogPrefix() }
// Done updates the DB state of the stage.
func (u *RevertState) Done(db kv.Putter) error {
return SaveStageProgress(db, u.ID, u.state.isBeacon, u.RevertPoint)
}
type CleanUpState struct {
ID SyncStageID
ForwardProgress uint64 // progress of stage forward move
CleanUpProgress uint64 // progress of the stage's prune move; after a sync cycle it becomes equal to ForwardProgress via the Done() method
state *StagedSync
}
func (s *CleanUpState) LogPrefix() string { return s.state.LogPrefix() + " CleanUp" }
func (s *CleanUpState) Done(db kv.Putter) error {
return SaveStageCleanUpProgress(db, s.ID, s.state.isBeacon, s.ForwardProgress)
}
func (s *CleanUpState) DoneAt(db kv.Putter, blockNum uint64) error {
return SaveStageCleanUpProgress(db, s.ID, s.state.isBeacon, blockNum)
}

@@ -0,0 +1,698 @@
package stagedsync
import (
"context"
"encoding/hex"
"fmt"
"strconv"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/internal/utils"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon-lib/kv/mdbx"
"github.com/ledgerwatch/log/v3"
)
type StageBlockHashes struct {
configs StageBlockHashesCfg
}
type StageBlockHashesCfg struct {
ctx context.Context
bc core.BlockChain
db kv.RwDB
turbo bool
turboModeCh chan struct{}
bgProcRunning bool
isBeacon bool
cachedb kv.RwDB
logProgress bool
}
func NewStageBlockHashes(cfg StageBlockHashesCfg) *StageBlockHashes {
return &StageBlockHashes{
configs: cfg,
}
}
func NewStageBlockHashesCfg(ctx context.Context, bc core.BlockChain, db kv.RwDB, isBeacon bool, turbo bool, logProgress bool) StageBlockHashesCfg {
cachedb, err := initHashesCacheDB(ctx, isBeacon)
if err != nil {
panic("can't initialize sync caches")
}
return StageBlockHashesCfg{
ctx: ctx,
bc: bc,
db: db,
turbo: turbo,
isBeacon: isBeacon,
cachedb: cachedb,
logProgress: logProgress,
}
}
func initHashesCacheDB(ctx context.Context, isBeacon bool) (db kv.RwDB, err error) {
// create caches db
cachedbName := BlockHashesCacheDB
if isBeacon {
cachedbName = "beacon_" + cachedbName
}
cachedb := mdbx.NewMDBX(log.New()).Path(cachedbName).MustOpen()
// create transaction on cachedb
tx, errRW := cachedb.BeginRw(ctx)
if errRW != nil {
utils.Logger().Error().
Err(errRW).
Msg("[STAGED_SYNC] initializing sync caches failed")
return nil, errRW
}
defer tx.Rollback()
if err := tx.CreateBucket(BlockHashesBucket); err != nil {
utils.Logger().Error().
Err(err).
Msg("[STAGED_SYNC] creating cache bucket failed")
return nil, err
}
if err := tx.CreateBucket(StageProgressBucket); err != nil {
utils.Logger().Error().
Err(err).
Msg("[STAGED_SYNC] creating progress bucket failed")
return nil, err
}
if err := tx.Commit(); err != nil {
return nil, err
}
return cachedb, nil
}
func (bh *StageBlockHashes) Exec(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) (err error) {
if len(s.state.syncConfig.peers) < NumPeersLowBound {
return ErrNotEnoughConnectedPeers
}
maxPeersHeight := s.state.syncStatus.MaxPeersHeight
currentHead := bh.configs.bc.CurrentBlock().NumberU64()
if currentHead >= maxPeersHeight {
return nil
}
currProgress := uint64(0)
targetHeight := s.state.syncStatus.currentCycle.TargetHeight
isBeacon := s.state.isBeacon
startHash := bh.configs.bc.CurrentBlock().Hash()
isLastCycle := targetHeight >= maxPeersHeight
canRunInTurboMode := bh.configs.turbo && !isLastCycle
// retrieve the progress
if errV := CreateView(bh.configs.ctx, bh.configs.db, tx, func(etx kv.Tx) error {
if currProgress, err = s.CurrentStageProgress(etx); err != nil { //GetStageProgress(etx, BlockHashes, isBeacon); err != nil {
return err
}
if currProgress > 0 {
key := strconv.FormatUint(currProgress, 10)
bucketName := GetBucketName(BlockHashesBucket, isBeacon)
currHash := []byte{}
if currHash, err = etx.GetOne(bucketName, []byte(key)); err != nil || len(currHash[:]) == 0 {
//TODO: currProgress and DB don't match. Either re-download all or verify db and set currProgress to last
return err
}
startHash.SetBytes(currHash[:])
}
return nil
}); errV != nil {
return errV
}
if currProgress == 0 {
if err := bh.clearBlockHashesBucket(tx, s.state.isBeacon); err != nil {
return err
}
startHash = bh.configs.bc.CurrentBlock().Hash()
currProgress = currentHead
}
if currProgress >= targetHeight {
if canRunInTurboMode && currProgress < maxPeersHeight {
bh.configs.turboModeCh = make(chan struct{})
go bh.runBackgroundProcess(nil, s, isBeacon, currProgress, maxPeersHeight, startHash)
}
return nil
}
// check whether any block hashes after curr height is cached
if bh.configs.turbo && !firstCycle {
var cacheHash []byte
if cacheHash, err = bh.getHashFromCache(currProgress + 1); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] fetch cache progress for block hashes stage failed")
} else {
if len(cacheHash[:]) > 0 {
// get blocks from cached db rather than calling peers, and update current progress
newProgress, newStartHash, err := bh.loadBlockHashesFromCache(s, cacheHash, currProgress, targetHeight, tx)
if err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] fetch cached block hashes failed")
bh.clearCache()
bh.clearBlockHashesBucket(tx, isBeacon)
} else {
currProgress = newProgress
startHash.SetBytes(newStartHash[:])
}
}
}
}
if currProgress >= targetHeight {
if canRunInTurboMode && currProgress < maxPeersHeight {
bh.configs.turboModeCh = make(chan struct{})
go bh.runBackgroundProcess(nil, s, isBeacon, currProgress, maxPeersHeight, startHash)
}
return nil
}
size := uint32(0)
startTime := time.Now()
startBlock := currProgress
if bh.configs.logProgress {
fmt.Print("\033[s") // save the cursor position
}
for ok := true; ok; ok = currProgress < targetHeight {
size = uint32(targetHeight - currProgress)
if size > SyncLoopBatchSize {
size = SyncLoopBatchSize
}
// Gets consensus hashes.
if err := s.state.getConsensusHashes(startHash[:], size, false); err != nil {
return err
}
// selects the most common peer config based on their block hashes and doing the clean up
if err := s.state.syncConfig.GetBlockHashesConsensusAndCleanUp(false); err != nil {
return err
}
// double check block hashes
if s.state.DoubleCheckBlockHashes {
invalidPeersMap, validBlockHashes, err := s.state.getInvalidPeersByBlockHashes(tx)
if err != nil {
return err
}
if validBlockHashes < int(size) {
return ErrNotEnoughBlockHashes
}
s.state.syncConfig.cleanUpInvalidPeers(invalidPeersMap)
}
// save the downloaded files to db
if currProgress, startHash, err = bh.saveDownloadedBlockHashes(s, currProgress, startHash, tx); err != nil {
return err
}
// log the stage progress in console
if bh.configs.logProgress {
//calculating block speed
dt := time.Now().Sub(startTime).Seconds()
speed := float64(0)
if dt > 0 {
speed = float64(currProgress-startBlock) / dt
}
blockSpeed := fmt.Sprintf("%.2f", speed)
fmt.Print("\033[u\033[K") // restore the cursor position and clear the line
fmt.Println("downloading block hash progress:", currProgress, "/", targetHeight, "(", blockSpeed, "blocks/s", ")")
}
}
// continue downloading in background
if canRunInTurboMode && currProgress < maxPeersHeight {
bh.configs.turboModeCh = make(chan struct{})
go bh.runBackgroundProcess(nil, s, isBeacon, currProgress, maxPeersHeight, startHash)
}
return nil
}
// runBackgroundProcess continues downloading block hashes in the background and caching them on disk while next stages are running.
// In the next sync cycle, this stage will use cached block hashes rather than download them from peers.
// This helps performance and reduces stage duration. It also helps to use the resources more efficiently.
func (bh *StageBlockHashes) runBackgroundProcess(tx kv.RwTx, s *StageState, isBeacon bool, startHeight uint64, targetHeight uint64, startHash common.Hash) error {
size := uint32(0)
currProgress := startHeight
currHash := startHash
bh.configs.bgProcRunning = true
defer func() {
if bh.configs.bgProcRunning {
close(bh.configs.turboModeCh)
bh.configs.bgProcRunning = false
}
}()
// retrieve bg progress and last hash
errV := bh.configs.cachedb.View(context.Background(), func(rtx kv.Tx) error {
if progressBytes, err := rtx.GetOne(StageProgressBucket, []byte(LastBlockHeight)); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] retrieving cache progress for block hashes stage failed")
return ErrRetrieveCachedProgressFail
} else {
if len(progressBytes[:]) > 0 {
savedProgress, _ := unmarshalData(progressBytes)
if savedProgress > startHeight {
currProgress = savedProgress
// retrieve start hash
if lastBlockHash, err := rtx.GetOne(StageProgressBucket, []byte(LastBlockHash)); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] retrieving cache progress for block hashes stage failed")
return ErrRetrieveCachedHashProgressFail
} else {
currHash.SetBytes(lastBlockHash[:])
}
}
}
}
return nil
})
if errV != nil {
return errV
}
for {
select {
case <-bh.configs.turboModeCh:
return nil
default:
if currProgress >= targetHeight {
return nil
}
size = uint32(targetHeight - currProgress)
if size > SyncLoopBatchSize {
size = SyncLoopBatchSize
}
// Gets consensus hashes.
if err := s.state.getConsensusHashes(currHash[:], size, true); err != nil {
return err
}
// selects the most common peer config based on their block hashes and doing the clean up
if err := s.state.syncConfig.GetBlockHashesConsensusAndCleanUp(true); err != nil {
return err
}
// save the downloaded files to db
var err error
if currProgress, currHash, err = bh.saveBlockHashesInCacheDB(s, currProgress, currHash); err != nil {
return err
}
}
//TODO: do we need sleep a few milliseconds? ex: time.Sleep(1 * time.Millisecond)
}
}
func (bh *StageBlockHashes) clearBlockHashesBucket(tx kv.RwTx, isBeacon bool) error {
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = bh.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
bucketName := GetBucketName(BlockHashesBucket, isBeacon)
if err := tx.ClearBucket(bucketName); err != nil {
return err
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
// saveDownloadedBlockHashes saves block hashes to db (map from block height to block hash)
func (bh *StageBlockHashes) saveDownloadedBlockHashes(s *StageState, progress uint64, startHash common.Hash, tx kv.RwTx) (p uint64, h common.Hash, err error) {
p = progress
h.SetBytes(startHash.Bytes())
lastAddedID := int(0) // the first id won't be added
saved := false
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = bh.configs.db.BeginRw(context.Background())
if err != nil {
return p, h, err
}
defer tx.Rollback()
}
s.state.syncConfig.ForEachPeer(func(configPeer *SyncPeerConfig) (brk bool) {
if len(configPeer.blockHashes) == 0 {
return //fetch the rest from other peer
}
for id := 0; id < len(configPeer.blockHashes); id++ {
if id <= lastAddedID {
continue
}
blockHash := configPeer.blockHashes[id]
if len(blockHash) == 0 {
return //fetch the rest from other peer
}
key := strconv.FormatUint(p+1, 10)
bucketName := GetBucketName(BlockHashesBucket, s.state.isBeacon)
if err := tx.Put(bucketName, []byte(key), blockHash); err != nil {
utils.Logger().Error().
Err(err).
Int("block hash index", id).
Str("block hash", hex.EncodeToString(blockHash)).
Msg("[STAGED_SYNC] adding block hash to db failed")
return
}
p++
h.SetBytes(blockHash[:])
lastAddedID = id
}
// if all block hashes have been added to db, break the loop
if lastAddedID == len(configPeer.blockHashes)-1 {
saved = true
brk = true
}
return
})
// save progress
if err = s.Update(tx, p); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving progress for block hashes stage failed")
return progress, startHash, ErrSaveBlockHashesProgressFail
}
if len(s.state.syncConfig.peers) > 0 && len(s.state.syncConfig.peers[0].blockHashes) > 0 && !saved {
return progress, startHash, ErrSaveBlockHashesProgressFail
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return progress, startHash, err
}
}
return p, h, nil
}
// saveBlockHashesInCacheDB saves block hashes to cache db (map from block height to block hash)
func (bh *StageBlockHashes) saveBlockHashesInCacheDB(s *StageState, progress uint64, startHash common.Hash) (p uint64, h common.Hash, err error) {
p = progress
h.SetBytes(startHash[:])
lastAddedID := int(0) // the first id won't be added
saved := false
etx, err := bh.configs.cachedb.BeginRw(context.Background())
if err != nil {
return p, h, err
}
defer etx.Rollback()
s.state.syncConfig.ForEachPeer(func(configPeer *SyncPeerConfig) (brk bool) {
for id, blockHash := range configPeer.blockHashes {
if id <= lastAddedID {
continue
}
key := strconv.FormatUint(p+1, 10)
if err := etx.Put(BlockHashesBucket, []byte(key), blockHash); err != nil {
utils.Logger().Error().
Err(err).
Int("block hash index", id).
Str("block hash", hex.EncodeToString(blockHash)).
Msg("[STAGED_SYNC] adding block hash to db failed")
return
}
p++
h.SetBytes(blockHash[:])
lastAddedID = id
}
// if all block hashes have been added to db, break the loop
if lastAddedID == len(configPeer.blockHashes)-1 {
saved = true
brk = true
}
return
})
// save cache progress (last block height)
if err = etx.Put(StageProgressBucket, []byte(LastBlockHeight), marshalData(p)); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving cache progress for block hashes stage failed")
return p, h, ErrSaveCachedBlockHashesProgressFail
}
// save cache last block hash
if err = etx.Put(StageProgressBucket, []byte(LastBlockHash), h.Bytes()); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving cache last block hash for block hashes stage failed")
return p, h, ErrSavingCacheLastBlockHashFail
}
// if the node was connected to other peers and had hashes to store, but failed to save them, return an error
if len(s.state.syncConfig.peers) > 0 && len(s.state.syncConfig.peers[0].blockHashes) > 0 && !saved {
return p, h, ErrCachingBlockHashFail
}
// commit transaction to db to cache all downloaded blocks
if err := etx.Commit(); err != nil {
return p, h, err
}
// block hashes were cached successfully, so return the cache progress and the last cached block hash
return p, h, nil
}
// clearCache removes block hashes from cache db
func (bh *StageBlockHashes) clearCache() error {
tx, err := bh.configs.cachedb.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
if err := tx.ClearBucket(BlockHashesBucket); err != nil {
return err
}
if err := tx.Commit(); err != nil {
return err
}
return nil
}
// getHashFromCache fetches a block hash from cache db
func (bh *StageBlockHashes) getHashFromCache(height uint64) (h []byte, err error) {
tx, err := bh.configs.cachedb.BeginRw(context.Background())
if err != nil {
return nil, err
}
defer tx.Rollback()
var cacheHash []byte
key := strconv.FormatUint(height, 10)
if exist, err := tx.Has(BlockHashesBucket, []byte(key)); !exist || err != nil {
return nil, ErrFetchBlockHashProgressFail
}
if cacheHash, err = tx.GetOne(BlockHashesBucket, []byte(key)); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] fetch cache progress for block hashes stage failed")
return nil, ErrFetchBlockHashProgressFail
}
hv, _ := unmarshalData(cacheHash)
if len(cacheHash) <= 1 || hv == 0 {
return nil, ErrFetchBlockHashProgressFail
}
if err := tx.Commit(); err != nil {
return nil, err
}
return cacheHash[:], nil
}
// loadBlockHashesFromCache loads block hashes from cache db into the main sync db and updates the progress
func (bh *StageBlockHashes) loadBlockHashesFromCache(s *StageState, startHash []byte, startHeight uint64, targetHeight uint64, tx kv.RwTx) (p uint64, h common.Hash, err error) {
p = startHeight
h.SetBytes(startHash[:])
useInternalTx := tx == nil
if useInternalTx {
tx, err = bh.configs.db.BeginRw(bh.configs.ctx)
if err != nil {
return p, h, err
}
defer tx.Rollback()
}
if errV := bh.configs.cachedb.View(context.Background(), func(rtx kv.Tx) error {
// load block hashes from cache db and copy them to main sync db
for ok := true; ok; ok = p < targetHeight {
key := strconv.FormatUint(p+1, 10)
lastHash, err := rtx.GetOne(BlockHashesBucket, []byte(key))
if err != nil {
utils.Logger().Error().
Err(err).
Str("block height", key).
Msg("[STAGED_SYNC] retrieve block hash from cache failed")
return err
}
if len(lastHash[:]) == 0 {
return nil
}
bucketName := GetBucketName(BlockHashesBucket, s.state.isBeacon)
if err = tx.Put(bucketName, []byte(key), lastHash); err != nil {
return err
}
h.SetBytes(lastHash[:])
p++
}
// load extra block hashes from cache db into the current cycle's ExtraHashes map, to be downloaded in background by the block stage
s.state.syncStatus.currentCycle.lock.Lock()
defer s.state.syncStatus.currentCycle.lock.Unlock()
pExtraHashes := p
s.state.syncStatus.currentCycle.ExtraHashes = make(map[uint64][]byte)
for ok := true; ok; ok = pExtraHashes < p+s.state.MaxBackgroundBlocks {
key := strconv.FormatUint(pExtraHashes+1, 10)
newHash, err := rtx.GetOne(BlockHashesBucket, []byte(key))
if err != nil {
utils.Logger().Error().
Err(err).
Str("block height", key).
Msg("[STAGED_SYNC] retrieve extra block hashes for background process failed")
break
}
if len(newHash[:]) == 0 {
return nil
}
s.state.syncStatus.currentCycle.ExtraHashes[pExtraHashes+1] = newHash
pExtraHashes++
}
return nil
}); errV != nil {
return startHeight, h, errV
}
// save progress
if err = s.Update(tx, p); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving retrieved cached progress for block hashes stage failed")
h.SetBytes(startHash[:])
return startHeight, h, err
}
// update the progress
if useInternalTx {
if err := tx.Commit(); err != nil {
h.SetBytes(startHash[:])
return startHeight, h, err
}
}
return p, h, nil
}
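// Note: the `for ok := true; ok; ok = cond` construction used above (and in the
// other stages) is Go's do-while idiom: the body always runs at least once before
// cond is re-evaluated. In isolation:
//
//	package main
//
//	import "fmt"
//
//	func main() {
//		p, target := uint64(5), uint64(5)
//		// the body executes once even though p already equals target
//		for ok := true; ok; ok = p < target {
//			fmt.Println("processing height", p+1)
//			p++
//		}
//	}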
func (bh *StageBlockHashes) Revert(firstCycle bool, u *RevertState, s *StageState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = bh.configs.db.BeginRw(bh.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
// terminate background process in turbo mode
if bh.configs.bgProcRunning {
bh.configs.bgProcRunning = false
bh.configs.turboModeCh <- struct{}{}
close(bh.configs.turboModeCh)
}
// clean block hashes db
hashesBucketName := GetBucketName(BlockHashesBucket, bh.configs.isBeacon)
if err = tx.ClearBucket(hashesBucketName); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] clear block hashes bucket after revert failed")
return err
}
// clean cache db as well
if err := bh.clearCache(); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] clear block hashes cache failed")
return err
}
// clear extra block hashes
s.state.syncStatus.currentCycle.ExtraHashes = make(map[uint64][]byte)
// save progress
currentHead := bh.configs.bc.CurrentBlock().NumberU64()
if err = s.Update(tx, currentHead); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving progress for block hashes stage after revert failed")
return err
}
if err = u.Done(tx); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] reset after revert failed")
return err
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return ErrCommitTransactionFail
}
}
return nil
}
func (bh *StageBlockHashes) CleanUp(firstCycle bool, p *CleanUpState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = bh.configs.db.BeginRw(bh.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
// terminate background process in turbo mode
if bh.configs.bgProcRunning {
bh.configs.bgProcRunning = false
bh.configs.turboModeCh <- struct{}{}
close(bh.configs.turboModeCh)
}
hashesBucketName := GetBucketName(BlockHashesBucket, bh.configs.isBeacon)
if err = tx.ClearBucket(hashesBucketName); err != nil {
return err
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}

@@ -0,0 +1,784 @@
package stagedsync
import (
"context"
"encoding/hex"
"fmt"
"strconv"
"sync"
"time"
"github.com/Workiva/go-datastructures/queue"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/internal/utils"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon-lib/kv/mdbx"
"github.com/ledgerwatch/log/v3"
)
type StageBodies struct {
configs StageBodiesCfg
}
type StageBodiesCfg struct {
ctx context.Context
bc core.BlockChain
db kv.RwDB
turbo bool
turboModeCh chan struct{}
bgProcRunning bool
isBeacon bool
cachedb kv.RwDB
logProgress bool
}
func NewStageBodies(cfg StageBodiesCfg) *StageBodies {
return &StageBodies{
configs: cfg,
}
}
func NewStageBodiesCfg(ctx context.Context, bc core.BlockChain, db kv.RwDB, isBeacon bool, turbo bool, logProgress bool) StageBodiesCfg {
cachedb, err := initBlocksCacheDB(ctx, isBeacon)
if err != nil {
panic("can't initialize sync caches")
}
return StageBodiesCfg{
ctx: ctx,
bc: bc,
db: db,
turbo: turbo,
isBeacon: isBeacon,
cachedb: cachedb,
logProgress: logProgress,
}
}
func initBlocksCacheDB(ctx context.Context, isBeacon bool) (db kv.RwDB, err error) {
// create caches db
cachedbName := BlockCacheDB
if isBeacon {
cachedbName = "beacon_" + cachedbName
}
cachedb := mdbx.NewMDBX(log.New()).Path(cachedbName).MustOpen()
tx, errRW := cachedb.BeginRw(ctx)
if errRW != nil {
utils.Logger().Error().
Err(errRW).
Msg("[STAGED_SYNC] initializing sync caches failed")
return nil, errRW
}
defer tx.Rollback()
if err := tx.CreateBucket(DownloadedBlocksBucket); err != nil {
utils.Logger().Error().
Err(err).
Msg("[STAGED_SYNC] creating cache bucket failed")
return nil, err
}
if err := tx.CreateBucket(StageProgressBucket); err != nil {
utils.Logger().Error().
Err(err).
Msg("[STAGED_SYNC] creating progress bucket failed")
return nil, err
}
if err := tx.Commit(); err != nil {
return nil, err
}
return cachedb, nil
}
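// A minimal sketch of exercising such an mdbx-backed cache end to end, using the
// same erigon-lib APIs as above (the path, bucket and key names here are
// illustrative only):
//
//	package main
//
//	import (
//		"context"
//
//		"github.com/ledgerwatch/erigon-lib/kv"
//		"github.com/ledgerwatch/erigon-lib/kv/mdbx"
//		"github.com/ledgerwatch/log/v3"
//	)
//
//	func main() {
//		db := mdbx.NewMDBX(log.New()).Path("cache_demo").MustOpen()
//		defer db.Close()
//		// write inside a read-write transaction
//		if err := db.Update(context.Background(), func(tx kv.RwTx) error {
//			if err := tx.CreateBucket("DemoBucket"); err != nil {
//				return err
//			}
//			return tx.Put("DemoBucket", []byte("k"), []byte("v"))
//		}); err != nil {
//			panic(err)
//		}
//		// read back inside a read-only transaction
//		if err := db.View(context.Background(), func(tx kv.Tx) error {
//			_, err := tx.GetOne("DemoBucket", []byte("k"))
//			return err
//		}); err != nil {
//			panic(err)
//		}
//	}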
// Exec progresses Bodies stage in the forward direction
func (b *StageBodies) Exec(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) (err error) {
maxPeersHeight := s.state.syncStatus.MaxPeersHeight
currentHead := b.configs.bc.CurrentBlock().NumberU64()
if currentHead >= maxPeersHeight {
return nil
}
currProgress := uint64(0)
targetHeight := s.state.syncStatus.currentCycle.TargetHeight
isBeacon := s.state.isBeacon
isLastCycle := targetHeight >= maxPeersHeight
canRunInTurboMode := b.configs.turbo && !isLastCycle
if errV := CreateView(b.configs.ctx, b.configs.db, tx, func(etx kv.Tx) error {
if currProgress, err = s.CurrentStageProgress(etx); err != nil {
return err
}
return nil
}); errV != nil {
return errV
}
if currProgress == 0 {
if err := b.clearBlocksBucket(tx, s.state.isBeacon); err != nil {
return err
}
currProgress = currentHead
}
if currProgress >= targetHeight {
return nil
}
// load cached blocks to main sync db
if b.configs.turbo && !firstCycle {
if currProgress, err = b.loadBlocksFromCache(s, currProgress, tx); err != nil {
return err
}
}
if currProgress >= targetHeight {
return nil
}
size := uint64(0)
startTime := time.Now()
startBlock := currProgress
if b.configs.logProgress {
fmt.Print("\033[s") // save the cursor position
}
for ok := true; ok; ok = currProgress < targetHeight {
maxSize := targetHeight - currProgress
size = uint64(downloadTaskBatch * len(s.state.syncConfig.peers))
if size > maxSize {
size = maxSize
}
if err = b.loadBlockHashesToTaskQueue(s, currProgress+1, size, tx); err != nil {
s.state.RevertTo(b.configs.bc.CurrentBlock().NumberU64(), b.configs.bc.CurrentBlock().Hash())
return err
}
// Download blocks.
verifyAllSig := true //TODO: move it to configs
if err = b.downloadBlocks(s, verifyAllSig, tx); err != nil {
return err
}
// save blocks and update current progress
if currProgress, err = b.saveDownloadedBlocks(s, currProgress, tx); err != nil {
return err
}
// log the stage progress in console
if b.configs.logProgress {
//calculating block speed
dt := time.Now().Sub(startTime).Seconds()
speed := float64(0)
if dt > 0 {
speed = float64(currProgress-startBlock) / dt
}
blockSpeed := fmt.Sprintf("%.2f", speed)
fmt.Print("\033[u\033[K") // restore the cursor position and clear the line
fmt.Println("downloading blocks progress:", currProgress, "/", targetHeight, "(", blockSpeed, "blocks/s", ")")
}
}
// Run background process in turbo mode
if canRunInTurboMode && currProgress < maxPeersHeight {
b.configs.turboModeCh = make(chan struct{})
go b.runBackgroundProcess(tx, s, isBeacon, currProgress, currProgress+s.state.MaxBackgroundBlocks)
}
return nil
}
// runBackgroundProcess continues downloading blocks in the background and caching them on disk while the next stages are running.
// In the next sync cycle, this stage will use the cached blocks rather than download them from peers.
// This improves performance and shortens the stage duration by using resources more efficiently.
func (b *StageBodies) runBackgroundProcess(tx kv.RwTx, s *StageState, isBeacon bool, startHeight uint64, targetHeight uint64) error {
s.state.syncStatus.currentCycle.lock.RLock()
defer s.state.syncStatus.currentCycle.lock.RUnlock()
if s.state.syncStatus.currentCycle.Number == 0 || len(s.state.syncStatus.currentCycle.ExtraHashes) == 0 {
return nil
}
currProgress := startHeight
var err error
size := uint64(0)
b.configs.bgProcRunning = true
defer func() {
if b.configs.bgProcRunning {
close(b.configs.turboModeCh)
b.configs.bgProcRunning = false
}
}()
for ok := true; ok; ok = currProgress < targetHeight {
select {
case <-b.configs.turboModeCh:
return nil
default:
if currProgress >= targetHeight {
return nil
}
maxSize := targetHeight - currProgress
size = uint64(downloadTaskBatch * len(s.state.syncConfig.peers))
if size > maxSize {
size = maxSize
}
if err = b.loadExtraBlockHashesToTaskQueue(s, currProgress+1, size); err != nil {
return err
}
// Download blocks.
verifyAllSig := true //TODO: move it to configs
if err = b.downloadBlocks(s, verifyAllSig, nil); err != nil {
return err
}
// save blocks and update current progress
if currProgress, err = b.cacheBlocks(s, currProgress); err != nil {
return err
}
}
}
return nil
}
func (b *StageBodies) clearBlocksBucket(tx kv.RwTx, isBeacon bool) error {
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = b.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
bucketName := GetBucketName(DownloadedBlocksBucket, isBeacon)
if err := tx.ClearBucket(bucketName); err != nil {
return err
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
// downloadBlocks downloads blocks from state sync task queue.
func (b *StageBodies) downloadBlocks(s *StageState, verifyAllSig bool, tx kv.RwTx) (err error) {
ss := s.state
var wg sync.WaitGroup
taskQueue := downloadTaskQueue{ss.stateSyncTaskQueue}
s.state.InitDownloadedBlocksMap()
ss.syncConfig.ForEachPeer(func(peerConfig *SyncPeerConfig) (brk bool) {
wg.Add(1)
go func() {
defer wg.Done()
if !peerConfig.client.IsReady() {
// try to connect
if ready := peerConfig.client.WaitForConnection(1000 * time.Millisecond); !ready {
if !peerConfig.client.IsConnecting() { // if it's idle or closed then remove it
ss.syncConfig.RemovePeer(peerConfig, "not ready to download blocks")
}
return
}
}
for !taskQueue.empty() {
tasks, err := taskQueue.poll(downloadTaskBatch, time.Millisecond)
if err != nil || len(tasks) == 0 {
if err == queue.ErrDisposed {
continue
}
utils.Logger().Error().
Err(err).
Msg("[STAGED_SYNC] downloadBlocks: ss.stateSyncTaskQueue poll timeout")
break
}
payload, err := peerConfig.GetBlocks(tasks.blockHashes())
if err != nil {
isBrokenPeer := peerConfig.AddFailedTime(downloadBlocksRetryLimit)
utils.Logger().Error().
Err(err).
Str("peerID", peerConfig.ip).
Str("port", peerConfig.port).
Msg("[STAGED_SYNC] downloadBlocks: GetBlocks failed")
if err := taskQueue.put(tasks); err != nil {
utils.Logger().Error().
Err(err).
Interface("taskIndexes", tasks.indexes()).
Msg("cannot add task back to queue")
}
if isBrokenPeer {
ss.syncConfig.RemovePeer(peerConfig, "get blocks failed")
}
return
}
if len(payload) == 0 {
isBrokenPeer := peerConfig.AddFailedTime(downloadBlocksRetryLimit)
utils.Logger().Error().
Str("peerID", peerConfig.ip).
Str("port", peerConfig.port).
Msg("[STAGED_SYNC] downloadBlocks: no more retrievable blocks")
if err := taskQueue.put(tasks); err != nil {
utils.Logger().Error().
Err(err).
Interface("taskIndexes", tasks.indexes()).
Interface("taskBlockes", tasks.blockHashesStr()).
Msg("downloadBlocks: cannot add task")
}
if isBrokenPeer {
ss.syncConfig.RemovePeer(peerConfig, "no blocks in payload")
}
return
}
// node received blocks from peer, so it is working now
peerConfig.failedTimes = 0
failedTasks, err := b.handleBlockSyncResult(s, payload, tasks, verifyAllSig, tx)
if err != nil {
isBrokenPeer := peerConfig.AddFailedTime(downloadBlocksRetryLimit)
utils.Logger().Error().
Err(err).
Str("peerID", peerConfig.ip).
Str("port", peerConfig.port).
Msg("[STAGED_SYNC] downloadBlocks: handleBlockSyncResult failed")
if err := taskQueue.put(tasks); err != nil {
utils.Logger().Error().
Err(err).
Interface("taskIndexes", tasks.indexes()).
Interface("taskBlockes", tasks.blockHashesStr()).
Msg("downloadBlocks: cannot add task")
}
if isBrokenPeer {
ss.syncConfig.RemovePeer(peerConfig, "handleBlockSyncResult failed")
}
return
}
if len(failedTasks) != 0 {
isBrokenPeer := peerConfig.AddFailedTime(downloadBlocksRetryLimit)
utils.Logger().Error().
Str("peerID", peerConfig.ip).
Str("port", peerConfig.port).
Msg("[STAGED_SYNC] downloadBlocks: some tasks failed")
if err := taskQueue.put(failedTasks); err != nil {
utils.Logger().Error().
Err(err).
Interface("task Indexes", failedTasks.indexes()).
Interface("task Blocks", tasks.blockHashesStr()).
Msg("cannot add task")
}
if isBrokenPeer {
ss.syncConfig.RemovePeer(peerConfig, "some blocks failed to handle")
}
return
}
}
}()
return
})
wg.Wait()
return nil
}
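// The per-peer fan-out above is the classic WaitGroup worker-pool pattern;
// reduced to its skeleton (illustrative, independent of this package):
//
//	package main
//
//	import (
//		"fmt"
//		"sync"
//	)
//
//	func main() {
//		tasks := make(chan int, 8)
//		for i := 0; i < 8; i++ {
//			tasks <- i
//		}
//		close(tasks)
//		var wg sync.WaitGroup
//		for w := 0; w < 3; w++ { // one goroutine per peer
//			wg.Add(1)
//			go func(worker int) {
//				defer wg.Done()
//				for t := range tasks { // drain the shared task queue
//					fmt.Println("worker", worker, "handled task", t)
//				}
//			}(w)
//		}
//		wg.Wait()
//	}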
func (b *StageBodies) handleBlockSyncResult(s *StageState, payload [][]byte, tasks syncBlockTasks, verifyAllSig bool, tx kv.RwTx) (syncBlockTasks, error) {
if len(payload) > len(tasks) {
utils.Logger().Error().
Err(ErrUnexpectedNumberOfBlocks).
Int("expect", len(tasks)).
Int("got", len(payload))
return tasks, ErrUnexpectedNumberOfBlocks
}
var failedTasks syncBlockTasks
if len(payload) < len(tasks) {
utils.Logger().Warn().
Err(ErrUnexpectedNumberOfBlocks).
Int("expect", len(tasks)).
Int("got", len(payload))
failedTasks = append(failedTasks, tasks[len(payload):]...)
}
s.state.lockBlocks.Lock()
defer s.state.lockBlocks.Unlock()
for i, blockBytes := range payload {
if len(blockBytes[:]) <= 1 {
failedTasks = append(failedTasks, tasks[i])
continue
}
k := uint64(tasks[i].index) // fmt.Sprintf("%d", tasks[i].index) //fmt.Sprintf("%020d", tasks[i].index)
s.state.downloadedBlocks[k] = make([]byte, len(blockBytes))
copy(s.state.downloadedBlocks[k], blockBytes[:])
}
return failedTasks, nil
}
func (b *StageBodies) saveProgress(s *StageState, progress uint64, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = b.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
// save progress
if err = s.Update(tx, progress); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving progress for block bodies stage failed")
return ErrSavingBodiesProgressFail
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
func (b *StageBodies) loadBlockHashesToTaskQueue(s *StageState, startIndex uint64, size uint64, tx kv.RwTx) error {
s.state.stateSyncTaskQueue = queue.New(0)
if errV := CreateView(b.configs.ctx, b.configs.db, tx, func(etx kv.Tx) error {
for i := startIndex; i < startIndex+size; i++ {
key := strconv.FormatUint(i, 10)
id := int(i - startIndex)
bucketName := GetBucketName(BlockHashesBucket, s.state.isBeacon)
blockHash, err := etx.GetOne(bucketName, []byte(key))
if err != nil {
return err
}
if len(blockHash) == 0 {
break
}
if err := s.state.stateSyncTaskQueue.Put(SyncBlockTask{index: id, blockHash: blockHash}); err != nil {
s.state.stateSyncTaskQueue = queue.New(0)
utils.Logger().Error().
Err(err).
Int("taskIndex", id).
Str("taskBlock", hex.EncodeToString(blockHash)).
Msg("[STAGED_SYNC] loadBlockHashesToTaskQueue: cannot add task")
break
}
}
return nil
}); errV != nil {
return errV
}
if s.state.stateSyncTaskQueue.Len() != int64(size) {
return ErrAddTaskFailed
}
return nil
}
func (b *StageBodies) loadExtraBlockHashesToTaskQueue(s *StageState, startIndex uint64, size uint64) error {
s.state.stateSyncTaskQueue = queue.New(0)
for i := startIndex; i < startIndex+size; i++ {
id := int(i - startIndex)
blockHash := s.state.syncStatus.currentCycle.ExtraHashes[i]
if len(blockHash[:]) == 0 {
break
}
if err := s.state.stateSyncTaskQueue.Put(SyncBlockTask{index: id, blockHash: blockHash}); err != nil {
s.state.stateSyncTaskQueue = queue.New(0)
utils.Logger().Warn().
Err(err).
Int("taskIndex", id).
Str("taskBlock", hex.EncodeToString(blockHash)).
Msg("[STAGED_SYNC] loadBlockHashesToTaskQueue: cannot add task")
break
}
}
if s.state.stateSyncTaskQueue.Len() != int64(size) {
return ErrAddTasksToQueueFail
}
return nil
}
func (b *StageBodies) saveDownloadedBlocks(s *StageState, progress uint64, tx kv.RwTx) (p uint64, err error) {
p = progress
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = b.configs.db.BeginRw(context.Background())
if err != nil {
return p, err
}
defer tx.Rollback()
}
downloadedBlocks := s.state.GetDownloadedBlocks()
for i := uint64(0); i < uint64(len(downloadedBlocks)); i++ {
blockBytes := downloadedBlocks[i]
n := progress + i + 1
blkNumber := marshalData(n)
bucketName := GetBucketName(DownloadedBlocksBucket, s.state.isBeacon)
if err := tx.Put(bucketName, blkNumber, blockBytes); err != nil {
utils.Logger().Error().
Err(err).
Uint64("block height", n).
Msg("[STAGED_SYNC] adding block to db failed")
return p, err
}
p++
}
// verify that all downloaded blocks were saved to db
if p-progress != uint64(len(downloadedBlocks)) {
return progress, ErrSaveBlocksFail
}
// save progress
if err = s.Update(tx, p); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving progress for block bodies stage failed")
return progress, ErrSavingBodiesProgressFail
}
// if it's using its own transaction, commit it to persist all downloaded blocks
if useInternalTx {
if err := tx.Commit(); err != nil {
return progress, err
}
}
// blocks were saved successfully, so return the new progress
return p, nil
}
func (b *StageBodies) cacheBlocks(s *StageState, progress uint64) (p uint64, err error) {
p = progress
tx, err := b.configs.cachedb.BeginRw(context.Background())
if err != nil {
return p, err
}
defer tx.Rollback()
downloadedBlocks := s.state.GetDownloadedBlocks()
for i := uint64(0); i < uint64(len(downloadedBlocks)); i++ {
blockBytes := downloadedBlocks[i]
n := progress + i + 1
blkNumber := marshalData(n) // fmt.Sprintf("%020d", p+1)
if err := tx.Put(DownloadedBlocksBucket, blkNumber, blockBytes); err != nil {
utils.Logger().Error().
Err(err).
Uint64("block height", p).
Msg("[STAGED_SYNC] caching block failed")
return p, err
}
p++
}
// verify that all downloaded blocks were cached
if p-progress != uint64(len(downloadedBlocks)) {
return p, ErrCachingBlocksFail
}
// save progress
if err = tx.Put(StageProgressBucket, []byte(LastBlockHeight), marshalData(p)); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving cache progress for blocks stage failed")
return p, ErrSavingCachedBodiesProgressFail
}
if err := tx.Commit(); err != nil {
return p, err
}
return p, nil
}
// clearCache removes downloaded blocks from cache db
func (b *StageBodies) clearCache() error {
tx, err := b.configs.cachedb.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
if err := tx.ClearBucket(DownloadedBlocksBucket); err != nil {
return err
}
if err := tx.Commit(); err != nil {
return err
}
return nil
}
// loadBlocksFromCache loads blocks from cache db into the main sync db and updates the progress
func (b *StageBodies) loadBlocksFromCache(s *StageState, startHeight uint64, tx kv.RwTx) (p uint64, err error) {
p = startHeight
useInternalTx := tx == nil
if useInternalTx {
tx, err = b.configs.db.BeginRw(b.configs.ctx)
if err != nil {
return p, err
}
defer tx.Rollback()
}
defer func() {
// Clear cache db
b.configs.cachedb.Update(context.Background(), func(etx kv.RwTx) error {
if err := etx.ClearBucket(DownloadedBlocksBucket); err != nil {
return err
}
return nil
})
}()
errV := b.configs.cachedb.View(context.Background(), func(rtx kv.Tx) error {
lastCachedHeightBytes, err := rtx.GetOne(StageProgressBucket, []byte(LastBlockHeight))
if err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] retrieving cache progress for blocks stage failed")
return ErrRetrievingCachedBodiesProgressFail
}
lastHeight, err := unmarshalData(lastCachedHeightBytes)
if err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] retrieving cache progress for blocks stage failed")
return ErrRetrievingCachedBodiesProgressFail
}
if startHeight >= lastHeight {
return nil
}
// load blocks from cache db and copy them to main sync db
for ok := true; ok; ok = p < lastHeight {
key := marshalData(p + 1)
blkBytes, err := rtx.GetOne(DownloadedBlocksBucket, []byte(key))
if err != nil {
utils.Logger().Error().
Err(err).
Uint64("block height", p+1).
Msg("[STAGED_SYNC] retrieve block from cache failed")
return err
}
if len(blkBytes[:]) <= 1 {
break
}
bucketName := GetBucketName(DownloadedBlocksBucket, s.state.isBeacon)
if err = tx.Put(bucketName, []byte(key), blkBytes); err != nil {
return err
}
p++
}
return nil
})
if errV != nil {
return startHeight, errV
}
// save progress
if err = s.Update(tx, p); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving retrieved cached progress for blocks stage failed")
return startHeight, ErrSavingCachedBodiesProgressFail
}
// update the progress
if useInternalTx {
if err := tx.Commit(); err != nil {
return startHeight, err
}
}
return p, nil
}
func (b *StageBodies) Revert(firstCycle bool, u *RevertState, s *StageState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = b.configs.db.BeginRw(b.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
// terminate background process in turbo mode
if b.configs.bgProcRunning {
b.configs.bgProcRunning = false
b.configs.turboModeCh <- struct{}{}
close(b.configs.turboModeCh)
}
// clean block hashes db
blocksBucketName := GetBucketName(DownloadedBlocksBucket, b.configs.isBeacon)
if err = tx.ClearBucket(blocksBucketName); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] clear blocks bucket after revert failed")
return err
}
// clean cache db as well
if err := b.clearCache(); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] clear blocks cache failed")
return err
}
// save progress
currentHead := b.configs.bc.CurrentBlock().NumberU64()
if err = s.Update(tx, currentHead); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving progress for block bodies stage after revert failed")
return err
}
if err = u.Done(tx); err != nil {
return err
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}
func (b *StageBodies) CleanUp(firstCycle bool, p *CleanUpState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = b.configs.db.BeginRw(b.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
// terminate background process in turbo mode
if b.configs.bgProcRunning {
b.configs.bgProcRunning = false
b.configs.turboModeCh <- struct{}{}
close(b.configs.turboModeCh)
}
blocksBucketName := GetBucketName(DownloadedBlocksBucket, b.configs.isBeacon)
if err = tx.ClearBucket(blocksBucketName); err != nil {
return err
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}

@@ -0,0 +1,114 @@
package stagedsync
import (
"context"
"github.com/ledgerwatch/erigon-lib/kv"
)
type StageFinish struct {
configs StageFinishCfg
}
type StageFinishCfg struct {
ctx context.Context
db kv.RwDB
}
func NewStageFinish(cfg StageFinishCfg) *StageFinish {
return &StageFinish{
configs: cfg,
}
}
func NewStageFinishCfg(ctx context.Context, db kv.RwDB) StageFinishCfg {
return StageFinishCfg{
ctx: ctx,
db: db,
}
}
func (finish *StageFinish) Exec(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) error {
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = finish.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
// TODO: prepare indices (useful for RPC) and finalize
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
func (bh *StageFinish) clearBucket(tx kv.RwTx, isBeacon bool) error {
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = bh.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
bucketName := GetBucketName(BlockHashesBucket, isBeacon)
if err := tx.ClearBucket(bucketName); err != nil {
return err
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
func (finish *StageFinish) Revert(firstCycle bool, u *RevertState, s *StageState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = finish.configs.db.BeginRw(finish.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
if err = u.Done(tx); err != nil {
return err
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}
func (finish *StageFinish) CleanUp(firstCycle bool, p *CleanUpState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = finish.configs.db.BeginRw(finish.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}

@@ -0,0 +1,146 @@
package stagedsync
import (
"context"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/internal/utils"
"github.com/ledgerwatch/erigon-lib/kv"
)
type StageHeads struct {
configs StageHeadsCfg
}
type StageHeadsCfg struct {
ctx context.Context
bc core.BlockChain
db kv.RwDB
}
func NewStageHeads(cfg StageHeadsCfg) *StageHeads {
return &StageHeads{
configs: cfg,
}
}
func NewStageHeadersCfg(ctx context.Context, bc core.BlockChain, db kv.RwDB) StageHeadsCfg {
return StageHeadsCfg{
ctx: ctx,
bc: bc,
db: db,
}
}
func (heads *StageHeads) Exec(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) error {
if len(s.state.syncConfig.peers) < NumPeersLowBound {
return ErrNotEnoughConnectedPeers
}
// no need to update the target if we are redoing the stages because of a bad block
if invalidBlockRevert {
return nil
}
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = heads.configs.db.BeginRw(heads.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
maxPeersHeight := s.state.syncStatus.MaxPeersHeight
maxBlocksPerSyncCycle := s.state.MaxBlocksPerSyncCycle
currentHeight := heads.configs.bc.CurrentBlock().NumberU64()
s.state.syncStatus.currentCycle.TargetHeight = maxPeersHeight
targetHeight := uint64(0)
if errV := CreateView(heads.configs.ctx, heads.configs.db, tx, func(etx kv.Tx) (err error) {
if targetHeight, err = s.CurrentStageProgress(etx); err != nil {
return err
}
return nil
}); errV != nil {
return errV
}
// if current height is ahead of target height, we need to recalculate the target height
if targetHeight <= currentHeight {
if maxPeersHeight <= currentHeight {
return nil
}
utils.Logger().Info().
Uint64("max blocks per sync cycle", maxBlocksPerSyncCycle).
Uint64("maxPeersHeight", maxPeersHeight).
Msgf("[STAGED_SYNC] current height is ahead of target height, target height is readjusted to max peers height")
targetHeight = maxPeersHeight
}
if targetHeight > maxPeersHeight {
targetHeight = maxPeersHeight
}
if maxBlocksPerSyncCycle > 0 && targetHeight-currentHeight > maxBlocksPerSyncCycle {
targetHeight = currentHeight + maxBlocksPerSyncCycle
}
s.state.syncStatus.currentCycle.TargetHeight = targetHeight
if err := s.Update(tx, targetHeight); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving progress for headers stage failed")
return err
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
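// The target-height rule above distils to a small pure function: reuse the saved
// target unless the head has caught up with it, clamp to maxPeersHeight, then cap
// the cycle length. A hypothetical helper (it ignores the early return taken when
// the chain has already reached maxPeersHeight):
//
//	func computeTargetHeight(saved, current, maxPeers, maxPerCycle uint64) uint64 {
//		target := saved
//		if target <= current { // head caught up with the old target
//			target = maxPeers
//		}
//		if target > maxPeers {
//			target = maxPeers
//		}
//		if maxPerCycle > 0 && target-current > maxPerCycle {
//			target = current + maxPerCycle
//		}
//		return target
//	}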
func (heads *StageHeads) Revert(firstCycle bool, u *RevertState, s *StageState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = heads.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
if err = u.Done(tx); err != nil {
return err
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
func (heads *StageHeads) CleanUp(firstCycle bool, p *CleanUpState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = heads.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}

@@ -0,0 +1,121 @@
package stagedsync
import (
"context"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/core/types"
"github.com/ledgerwatch/erigon-lib/kv"
)
type StageLastMile struct {
configs StageLastMileCfg
}
type StageLastMileCfg struct {
ctx context.Context
bc core.BlockChain
db kv.RwDB
}
func NewStageLastMile(cfg StageLastMileCfg) *StageLastMile {
return &StageLastMile{
configs: cfg,
}
}
func NewStageLastMileCfg(ctx context.Context, bc core.BlockChain, db kv.RwDB) StageLastMileCfg {
return StageLastMileCfg{
ctx: ctx,
bc: bc,
db: db,
}
}
func (lm *StageLastMile) Exec(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) (err error) {
maxPeersHeight := s.state.syncStatus.MaxPeersHeight
targetHeight := s.state.syncStatus.currentCycle.TargetHeight
isLastCycle := targetHeight >= maxPeersHeight
if !isLastCycle {
return nil
}
bc := lm.configs.bc
// update blocks received after the node started syncing
parentHash := bc.CurrentBlock().Hash()
for {
block := s.state.getMaxConsensusBlockFromParentHash(parentHash)
if block == nil {
break
}
err = s.state.UpdateBlockAndStatus(block, bc, true)
if err != nil {
break
}
parentHash = block.Hash()
}
// TODO ek – Do we need to hold syncMux now that syncConfig has its own mutex?
s.state.syncMux.Lock()
s.state.syncConfig.ForEachPeer(func(peer *SyncPeerConfig) (brk bool) {
peer.newBlocks = []*types.Block{}
return
})
s.state.syncMux.Unlock()
// update last mile blocks if any
parentHash = bc.CurrentBlock().Hash()
for {
block := s.state.getBlockFromLastMileBlocksByParentHash(parentHash)
if block == nil {
break
}
err = s.state.UpdateBlockAndStatus(block, bc, false)
if err != nil {
break
}
parentHash = block.Hash()
}
return nil
}
func (lm *StageLastMile) Revert(firstCycle bool, u *RevertState, s *StageState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = lm.configs.db.BeginRw(lm.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
if err = u.Done(tx); err != nil {
return err
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}
func (lm *StageLastMile) CleanUp(firstCycle bool, p *CleanUpState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = lm.configs.db.BeginRw(lm.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}

@@ -0,0 +1,330 @@
package stagedsync
import (
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/core/types"
"github.com/harmony-one/harmony/internal/chain"
"github.com/harmony-one/harmony/internal/utils"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/pkg/errors"
)
type StageStates struct {
configs StageStatesCfg
}
type StageStatesCfg struct {
ctx context.Context
bc core.BlockChain
db kv.RwDB
logProgress bool
}
func NewStageStates(cfg StageStatesCfg) *StageStates {
return &StageStates{
configs: cfg,
}
}
func NewStageStatesCfg(ctx context.Context, bc core.BlockChain, db kv.RwDB, logProgress bool) StageStatesCfg {
return StageStatesCfg{
ctx: ctx,
bc: bc,
db: db,
logProgress: logProgress,
}
}
func getBlockHashByHeight(h uint64, isBeacon bool, tx kv.RwTx) common.Hash {
var invalidBlockHash common.Hash
hashesBucketName := GetBucketName(BlockHashesBucket, isBeacon)
// NOTE: block hashes are stored under decimal-string keys (see saveDownloadedBlockHashes), so the lookup key must be encoded the same way
blockHeight := []byte(strconv.FormatUint(h, 10))
if invalidBlockHashBytes, err := tx.GetOne(hashesBucketName, blockHeight); err == nil {
invalidBlockHash.SetBytes(invalidBlockHashBytes)
}
return invalidBlockHash
}
// Exec progresses States stage in the forward direction
func (stg *StageStates) Exec(firstCycle bool, invalidBlockRevert bool, s *StageState, reverter Reverter, tx kv.RwTx) (err error) {
maxPeersHeight := s.state.syncStatus.MaxPeersHeight
currentHead := stg.configs.bc.CurrentBlock().NumberU64()
if currentHead >= maxPeersHeight {
return nil
}
currProgress := stg.configs.bc.CurrentBlock().NumberU64()
targetHeight := s.state.syncStatus.currentCycle.TargetHeight
if currProgress >= targetHeight {
return nil
}
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = stg.configs.db.BeginRw(stg.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
blocksBucketName := GetBucketName(DownloadedBlocksBucket, s.state.isBeacon)
isLastCycle := targetHeight >= maxPeersHeight
verifyAllSig := s.state.VerifyAllSig || isLastCycle // if it's the last cycle, we have to check all signatures
startTime := time.Now()
startBlock := currProgress
var newBlocks types.Blocks
nBlock := int(0)
if stg.configs.logProgress {
fmt.Print("\033[s") // save the cursor position
}
for i := currProgress + 1; i <= targetHeight; i++ {
key := marshalData(i)
blockBytes, err := tx.GetOne(blocksBucketName, key)
if err != nil {
return err
}
// if the block size is invalid, we have to break out of the state-update loop;
// no rollback is needed because the latest batch hasn't been added to the chain yet
sz := len(blockBytes)
if sz <= 1 {
utils.Logger().Error().
Uint64("block number", i).
Msg("block size invalid")
invalidBlockHash := getBlockHashByHeight(i, s.state.isBeacon, tx)
s.state.RevertTo(stg.configs.bc.CurrentBlock().NumberU64(), invalidBlockHash)
return ErrInvalidBlockBytes
}
block, err := RlpDecodeBlockOrBlockWithSig(blockBytes)
if err != nil {
utils.Logger().Error().
Err(err).
Uint64("block number", i).
Msg("block RLP decode failed")
invalidBlockHash := getBlockHashByHeight(i, s.state.isBeacon, tx)
s.state.RevertTo(stg.configs.bc.CurrentBlock().NumberU64(), invalidBlockHash)
return err
}
/*
// TODO: use hash as key and here check key (which is hash) against block.header.hash
gotHash := block.Hash()
if !bytes.Equal(gotHash[:], tasks[i].blockHash) {
utils.Logger().Warn().
Err(errors.New("wrong block delivery")).
Str("expectHash", hex.EncodeToString(tasks[i].blockHash)).
Str("gotHash", hex.EncodeToString(gotHash[:]))
continue
}
*/
if block.NumberU64() != i {
invalidBlockHash := getBlockHashByHeight(i, s.state.isBeacon, tx)
s.state.RevertTo(stg.configs.bc.CurrentBlock().NumberU64(), invalidBlockHash)
return ErrInvalidBlockNumber
}
if block.NumberU64() <= currProgress {
continue
}
// Verify block signatures
if block.NumberU64() > 1 {
// Verify the signature every N blocks (where N is VerifyHeaderBatchSize, adjustable in configs)
haveCurrentSig := len(block.GetCurrentCommitSig()) != 0
verifySeal := block.NumberU64()%s.state.VerifyHeaderBatchSize == 0 || verifyAllSig
verifyCurrentSig := verifyAllSig && haveCurrentSig
bc := stg.configs.bc
if err = stg.verifyBlockSignatures(bc, block, verifyCurrentSig, verifySeal, verifyAllSig); err != nil {
invalidBlockHash := getBlockHashByHeight(i, s.state.isBeacon, tx)
s.state.RevertTo(stg.configs.bc.CurrentBlock().NumberU64(), invalidBlockHash)
return err
}
/*
//TODO: we are handling the bad blocks and already blocks are verified, so do we need verify header?
err := stg.configs.bc.Engine().VerifyHeader(stg.configs.bc, block.Header(), verifySeal)
if err == engine.ErrUnknownAncestor {
return err
} else if err != nil {
utils.Logger().Error().Err(err).Msgf("[STAGED_SYNC] failed verifying signatures for new block %d", block.NumberU64())
if !verifyAllSig {
utils.Logger().Info().Interface("block", stg.configs.bc.CurrentBlock()).Msg("[STAGED_SYNC] Rolling back last 99 blocks!")
for i := uint64(0); i < s.state.VerifyHeaderBatchSize-1; i++ {
if rbErr := stg.configs.bc.Rollback([]common.Hash{stg.configs.bc.CurrentBlock().Hash()}); rbErr != nil {
utils.Logger().Err(rbErr).Msg("[STAGED_SYNC] UpdateBlockAndStatus: failed to rollback")
return err
}
}
currProgress = stg.configs.bc.CurrentBlock().NumberU64()
}
return err
}
*/
}
newBlocks = append(newBlocks, block)
if nBlock < s.state.InsertChainBatchSize-1 && block.NumberU64() < targetHeight {
nBlock++
continue
}
// insert downloaded block into chain
headBeforeNewBlocks := stg.configs.bc.CurrentBlock().NumberU64()
headHashBeforeNewBlocks := stg.configs.bc.CurrentBlock().Hash()
_, err = stg.configs.bc.InsertChain(newBlocks, false) //TODO: verifyHeaders can be done here
if err != nil {
// TODO: handle chain rollback because of bad block
utils.Logger().Error().
Err(err).
Uint64("block number", block.NumberU64()).
Uint32("shard", block.ShardID()).
Msgf("[STAGED_SYNC] UpdateBlockAndStatus: Error adding new block to blockchain")
// rollback bc
utils.Logger().Info().
Interface("block", stg.configs.bc.CurrentBlock()).
Msg("[STAGED_SYNC] Rolling back last added blocks!")
if rbErr := stg.configs.bc.Rollback([]common.Hash{headHashBeforeNewBlocks}); rbErr != nil {
utils.Logger().Error().
Err(rbErr).
Msg("[STAGED_SYNC] UpdateBlockAndStatus: failed to rollback")
return err
}
s.state.RevertTo(headBeforeNewBlocks, headHashBeforeNewBlocks)
return err
}
utils.Logger().Info().
Uint64("blockHeight", block.NumberU64()).
Uint64("blockEpoch", block.Epoch().Uint64()).
Str("blockHex", block.Hash().Hex()).
Uint32("ShardID", block.ShardID()).
Msg("[STAGED_SYNC] UpdateBlockAndStatus: New Block Added to Blockchain")
// update cur progress
currProgress = stg.configs.bc.CurrentBlock().NumberU64()
for i, tx := range block.StakingTransactions() {
utils.Logger().Info().
Msgf(
"StakingTxn %d: %s, %v", i, tx.StakingType().String(), tx.StakingMessage(),
)
}
nBlock = 0
newBlocks = newBlocks[:0]
// log the stage progress in console
if stg.configs.logProgress {
//calculating block speed
dt := time.Now().Sub(startTime).Seconds()
speed := float64(0)
if dt > 0 {
speed = float64(currProgress-startBlock) / dt
}
blockSpeed := fmt.Sprintf("%.2f", speed)
fmt.Print("\033[u\033[K") // restore the cursor position and clear the line
fmt.Println("insert blocks progress:", currProgress, "/", targetHeight, "(", blockSpeed, "blocks/s", ")")
}
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
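// The insert loop above accumulates up to InsertChainBatchSize blocks before each
// InsertChain call; the batching pattern in isolation (illustrative stand-in):
//
//	package main
//
//	import "fmt"
//
//	func main() {
//		const batchSize = 3
//		var batch []int
//		flush := func() {
//			if len(batch) == 0 {
//				return
//			}
//			fmt.Println("insert", batch) // stands in for bc.InsertChain(newBlocks, false)
//			batch = batch[:0]
//		}
//		for n := 1; n <= 7; n++ {
//			batch = append(batch, n)
//			if len(batch) == batchSize {
//				flush()
//			}
//		}
//		flush() // final partial batch
//	}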
//verifyBlockSignatures verifies block signatures
func (stg *StageStates) verifyBlockSignatures(bc core.BlockChain, block *types.Block, verifyCurrentSig bool, verifySeal bool, verifyAllSig bool) (err error) {
if verifyCurrentSig {
sig, bitmap, err := chain.ParseCommitSigAndBitmap(block.GetCurrentCommitSig())
if err != nil {
return errors.Wrap(err, "parse commitSigAndBitmap")
}
startTime := time.Now()
if err := bc.Engine().VerifyHeaderSignature(bc, block.Header(), sig, bitmap); err != nil {
return errors.Wrapf(err, "verify header signature %v", block.Hash().String())
}
utils.Logger().Debug().
Int64("elapsed time", time.Now().Sub(startTime).Milliseconds()).
Msg("[STAGED_SYNC] VerifyHeaderSignature")
}
return nil
}
// saveProgress saves the stage progress
func (stg *StageStates) saveProgress(s *StageState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
var err error
tx, err = stg.configs.db.BeginRw(context.Background())
if err != nil {
return err
}
defer tx.Rollback()
}
// save progress
if err = s.Update(tx, stg.configs.bc.CurrentBlock().NumberU64()); err != nil {
utils.Logger().Error().
Err(err).
Msgf("[STAGED_SYNC] saving progress for block States stage failed")
return ErrSaveStateProgressFail
}
if useInternalTx {
if err := tx.Commit(); err != nil {
return err
}
}
return nil
}
func (stg *StageStates) Revert(firstCycle bool, u *RevertState, s *StageState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = stg.configs.db.BeginRw(stg.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
if err = u.Done(tx); err != nil {
return err
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}
func (stg *StageStates) CleanUp(firstCycle bool, p *CleanUpState, tx kv.RwTx) (err error) {
useInternalTx := tx == nil
if useInternalTx {
tx, err = stg.configs.db.BeginRw(stg.configs.ctx)
if err != nil {
return err
}
defer tx.Rollback()
}
if useInternalTx {
if err = tx.Commit(); err != nil {
return err
}
}
return nil
}

File diff suppressed because it is too large

@@ -0,0 +1,94 @@
package stagedsync
import (
"encoding/binary"
"fmt"
"github.com/ledgerwatch/erigon-lib/kv"
)
// SyncStageID represents the stages in the Mode.StagedSync mode
type SyncStageID string
const (
Heads SyncStageID = "Heads" // Heads are downloaded
BlockHashes SyncStageID = "BlockHashes" // block hashes are downloaded from peers
BlockBodies SyncStageID = "BlockBodies" // Block bodies are downloaded, TxHash and UncleHash are getting verified
States SyncStageID = "States" // will construct most recent state from downloaded blocks
LastMile SyncStageID = "LastMile" // update blocks after sync and update last mile blocks as well
Finish SyncStageID = "Finish" // Nominal stage after all other stages
)
func GetStageName(stage string, isBeacon bool, prune bool) string {
name := stage
if isBeacon {
name = "beacon_" + name
}
if prune {
name = "prune_" + name
}
return name
}
func GetStageID(stage SyncStageID, isBeacon bool, prune bool) []byte {
return []byte(GetStageName(string(stage), isBeacon, prune))
}
func GetBucketName(bucketName string, isBeacon bool) string {
name := bucketName
if isBeacon {
name = "Beacon" + name
}
return name
}
// GetStageProgress retrieves saved progress of given sync stage from the database
func GetStageProgress(db kv.Getter, stage SyncStageID, isBeacon bool) (uint64, error) {
stgID := GetStageID(stage, isBeacon, false)
v, err := db.GetOne(kv.SyncStageProgress, stgID)
if err != nil {
return 0, err
}
return unmarshalData(v)
}
// SaveStageProgress saves progress of given sync stage
func SaveStageProgress(db kv.Putter, stage SyncStageID, isBeacon bool, progress uint64) error {
stgID := GetStageID(stage, isBeacon, false)
return db.Put(kv.SyncStageProgress, stgID, marshalData(progress))
}
// GetStageCleanUpProgress retrieves saved progress of given sync stage from the database
func GetStageCleanUpProgress(db kv.Getter, stage SyncStageID, isBeacon bool) (uint64, error) {
stgID := GetStageID(stage, isBeacon, true)
v, err := db.GetOne(kv.SyncStageProgress, stgID)
if err != nil {
return 0, err
}
return unmarshalData(v)
}
func SaveStageCleanUpProgress(db kv.Putter, stage SyncStageID, isBeacon bool, progress uint64) error {
stgID := GetStageID(stage, isBeacon, true)
return db.Put(kv.SyncStageProgress, stgID, marshalData(progress))
}
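// Typical round trip through these helpers inside a single transaction; kv.RwTx
// satisfies both kv.Putter and kv.Getter (sketch with a hypothetical helper name):
//
//	func saveAndReadProgress(tx kv.RwTx) (uint64, error) {
//		if err := SaveStageProgress(tx, BlockBodies, false, 1234); err != nil {
//			return 0, err
//		}
//		return GetStageProgress(tx, BlockBodies, false)
//	}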
func marshalData(blockNumber uint64) []byte {
return encodeBigEndian(blockNumber)
}
func unmarshalData(data []byte) (uint64, error) {
if len(data) == 0 {
return 0, nil
}
if len(data) < 8 {
return 0, fmt.Errorf("value must be at least 8 bytes, got %d", len(data))
}
return binary.BigEndian.Uint64(data[:8]), nil
}
func encodeBigEndian(n uint64) []byte {
var v [8]byte
binary.BigEndian.PutUint64(v[:], n)
return v[:]
}
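// A quick round-trip check of the big-endian helpers above (test sketch, assuming
// it lives in this package's test file; the test name is illustrative):
//
//	func TestMarshalDataRoundTrip(t *testing.T) {
//		for _, n := range []uint64{0, 1, 1 << 32, ^uint64(0)} {
//			got, err := unmarshalData(marshalData(n))
//			if err != nil || got != n {
//				t.Fatalf("round trip failed for %d: got %d, err %v", n, got, err)
//			}
//		}
//		// empty input decodes to zero by design
//		if v, err := unmarshalData(nil); err != nil || v != 0 {
//			t.Fatalf("nil input: got %d, err %v", v, err)
//		}
//	}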

@@ -0,0 +1,401 @@
package stagedsync
import (
"bytes"
"encoding/hex"
"errors"
"math/rand"
"reflect"
"sort"
"sync"
"github.com/harmony-one/harmony/api/service/legacysync/downloader"
pb "github.com/harmony-one/harmony/api/service/legacysync/downloader/proto"
"github.com/harmony-one/harmony/core/types"
"github.com/harmony-one/harmony/internal/utils"
"github.com/harmony-one/harmony/p2p"
)
// Constants for syncing.
const (
downloadBlocksRetryLimit = 3 // downloadBlocks service retry limit
RegistrationNumber = 3
SyncingPortDifference = 3000
inSyncThreshold = 0 // when peerBlockHeight - myBlockHeight <= inSyncThreshold, it's ready to join consensus
SyncLoopBatchSize uint32 = 30 // maximum size for one query of block hashes
verifyHeaderBatchSize uint64 = 100 // block chain header verification batch size (not used for now)
LastMileBlocksSize = 50
// after cutting off a number of connected peers, the resulting number of peers
// shall be between NumPeersLowBound and numPeersHighBound
NumPeersLowBound = 3
numPeersHighBound = 5
// NumPeersReserved is the number of reserved peers which will replace any broken peer
NumPeersReserved = 2
// downloadTaskBatch is the number of tasks per downloader request
downloadTaskBatch = 5
)
// SyncPeerConfig is peer config to sync.
type SyncPeerConfig struct {
ip string
port string
peerHash []byte
client *downloader.Client
blockHashes [][]byte // block hashes fetched before the node starts syncing
newBlocks []*types.Block // blocks received after the node started syncing
mux sync.RWMutex
failedTimes uint64
}
// CreateTestSyncPeerConfig used for testing.
func CreateTestSyncPeerConfig(client *downloader.Client, blockHashes [][]byte) *SyncPeerConfig {
return &SyncPeerConfig{
client: client,
blockHashes: blockHashes,
}
}
// GetClient returns client pointer of downloader.Client
func (peerConfig *SyncPeerConfig) GetClient() *downloader.Client {
return peerConfig.client
}
// AddFailedTime considers one more peer failure and checks against max allowed failed times
func (peerConfig *SyncPeerConfig) AddFailedTime(maxFailures uint64) (mustStop bool) {
peerConfig.mux.Lock()
defer peerConfig.mux.Unlock()
peerConfig.failedTimes++
if peerConfig.failedTimes > maxFailures {
return true
}
return false
}
// IsEqual checks the equality between two sync peers
func (peerConfig *SyncPeerConfig) IsEqual(pc2 *SyncPeerConfig) bool {
return peerConfig.ip == pc2.ip && peerConfig.port == pc2.port
}
// GetBlocks gets blocks by calling grpc request to the corresponding peer.
func (peerConfig *SyncPeerConfig) GetBlocks(hashes [][]byte) ([][]byte, error) {
response := peerConfig.client.GetBlocksAndSigs(hashes)
if response == nil {
return nil, ErrGetBlock
}
return response.Payload, nil
}
func (peerConfig *SyncPeerConfig) registerToBroadcast(peerHash []byte, ip, port string) error {
response := peerConfig.client.Register(peerHash, ip, port)
if response == nil || response.Type == pb.DownloaderResponse_FAIL {
return ErrRegistrationFail
} else if response.Type == pb.DownloaderResponse_SUCCESS {
return nil
}
return ErrRegistrationFail
}
// CompareSyncPeerConfigByblockHashes compares two SyncPeerConfig by blockHashes.
func CompareSyncPeerConfigByblockHashes(a *SyncPeerConfig, b *SyncPeerConfig) int {
if len(a.blockHashes) != len(b.blockHashes) {
if len(a.blockHashes) < len(b.blockHashes) {
return -1
}
return 1
}
for id := range a.blockHashes {
if !reflect.DeepEqual(a.blockHashes[id], b.blockHashes[id]) {
return bytes.Compare(a.blockHashes[id], b.blockHashes[id])
}
}
return 0
}
// SyncBlockTask is the task struct to sync a specific block.
type SyncBlockTask struct {
index int
blockHash []byte
}
type syncBlockTasks []SyncBlockTask
func (tasks syncBlockTasks) blockHashes() [][]byte {
hashes := make([][]byte, 0, len(tasks))
for _, task := range tasks {
hash := make([]byte, len(task.blockHash))
copy(hash, task.blockHash)
hashes = append(hashes, hash)
}
return hashes
}
func (tasks syncBlockTasks) blockHashesStr() []string {
hashes := make([]string, 0, len(tasks))
for _, task := range tasks {
hash := hex.EncodeToString(task.blockHash)
hashes = append(hashes, hash)
}
return hashes
}
func (tasks syncBlockTasks) indexes() []int {
indexes := make([]int, 0, len(tasks))
for _, task := range tasks {
indexes = append(indexes, task.index)
}
return indexes
}
// SyncConfig contains an array of SyncPeerConfig.
type SyncConfig struct {
// mtx locks peers, and *SyncPeerConfig pointers in peers.
// SyncPeerConfig itself is guarded by its own mutex.
mtx sync.RWMutex
reservedPeers []*SyncPeerConfig
peers []*SyncPeerConfig
}
// AddPeer adds the given sync peer.
func (sc *SyncConfig) AddPeer(peer *SyncPeerConfig) {
sc.mtx.Lock()
defer sc.mtx.Unlock()
// Ensure no duplicate peers
for _, p2 := range sc.peers {
if peer.IsEqual(p2) {
return
}
}
sc.peers = append(sc.peers, peer)
}
// SelectRandomPeers limits the number of peers in order to release some server-side resources.
func (sc *SyncConfig) SelectRandomPeers(peers []p2p.Peer, randSeed int64) int {
numPeers := len(peers)
targetSize := calcNumPeersWithBound(numPeers, NumPeersLowBound, numPeersHighBound)
// if the number of peers is below the target size, keep them all
if numPeers <= targetSize {
utils.Logger().Warn().
Int("num connected peers", numPeers).
Msg("[STAGED_SYNC] not enough connected peers to sync, still sync will on going")
return numPeers
}
//shuffle peers list
r := rand.New(rand.NewSource(randSeed))
r.Shuffle(numPeers, func(i, j int) { peers[i], peers[j] = peers[j], peers[i] })
return targetSize
}
// calcNumPeersWithBound calculates the number of connected peers to keep:
// half of the given size, capped between lowBound and highBound.
func calcNumPeersWithBound(size int, lowBound, highBound int) int {
if size < lowBound {
return size
}
expLen := size / 2
if expLen < lowBound {
expLen = lowBound
}
if expLen > highBound {
expLen = highBound
}
return expLen
}
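// Worked cases for the bounding rule above, size/2 clamped to [lowBound, highBound]
// with sizes below lowBound kept as-is (test sketch, name illustrative):
//
//	func TestCalcNumPeersWithBound(t *testing.T) {
//		cases := []struct{ size, want int }{
//			{2, 2},  // below low bound: keep all peers
//			{3, 3},  // 3/2 = 1 < lowBound, clamped up to 3
//			{8, 4},  // 8/2 = 4 falls inside [3, 5]
//			{40, 5}, // 40/2 = 20 > highBound, clamped down to 5
//		}
//		for _, c := range cases {
//			if got := calcNumPeersWithBound(c.size, NumPeersLowBound, numPeersHighBound); got != c.want {
//				t.Errorf("size %d: got %d, want %d", c.size, got, c.want)
//			}
//		}
//	}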
// ForEachPeer calls the given function with each peer.
// It breaks the iteration iff the function returns true.
func (sc *SyncConfig) ForEachPeer(f func(peer *SyncPeerConfig) (brk bool)) {
sc.mtx.RLock()
peers := make([]*SyncPeerConfig, len(sc.peers))
copy(peers, sc.peers)
sc.mtx.RUnlock()
for _, peer := range peers {
if f(peer) {
break
}
}
}
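// Example use of the iterator above: returning true from the callback stops the
// walk early (hypothetical helper in this package):
//
//	func countReadyPeers(sc *SyncConfig) int {
//		count := 0
//		sc.ForEachPeer(func(p *SyncPeerConfig) (brk bool) {
//			if p.client == nil || !p.client.IsReady() {
//				return true // stop at the first peer that is not ready
//			}
//			count++
//			return false
//		})
//		return count
//	}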
// RemovePeer removes a peer from SyncConfig
func (sc *SyncConfig) RemovePeer(peer *SyncPeerConfig, reason string) {
sc.mtx.Lock()
defer sc.mtx.Unlock()
peer.client.Close(reason)
for i, p := range sc.peers {
if p == peer {
sc.peers = append(sc.peers[:i], sc.peers[i+1:]...)
break
}
}
utils.Logger().Info().
Str("peerIP", peer.ip).
Str("peerPortMsg", peer.port).
Str("reason", reason).
Msg("[STAGED_SYNC] remove GRPC peer")
}
// ReplacePeerWithReserved tries to replace a peer from reserved peer list
func (sc *SyncConfig) ReplacePeerWithReserved(peer *SyncPeerConfig, reason string) {
sc.mtx.Lock()
defer sc.mtx.Unlock()
peer.client.Close(reason)
for i, p := range sc.peers {
if p == peer {
if len(sc.reservedPeers) > 0 {
sc.peers = append(sc.peers[:i], sc.peers[i+1:]...)
sc.peers = append(sc.peers, sc.reservedPeers[0])
utils.Logger().Info().
Str("peerIP", peer.ip).
Str("peerPort", peer.port).
Str("reservedPeerIP", sc.reservedPeers[0].ip).
Str("reservedPeerPort", sc.reservedPeers[0].port).
Str("reason", reason).
Msg("[STAGED_SYNC] replaced GRPC peer by reserved")
sc.reservedPeers = sc.reservedPeers[1:]
} else {
sc.peers = append(sc.peers[:i], sc.peers[i+1:]...)
utils.Logger().Info().
Str("peerIP", peer.ip).
Str("peerPortMsg", peer.port).
Str("reason", reason).
Msg("[STAGED_SYNC] remove GRPC peer without replacement")
}
break
}
}
}
// CloseConnections close grpc connections for state sync clients
func (sc *SyncConfig) CloseConnections() {
sc.mtx.RLock()
defer sc.mtx.RUnlock()
for _, pc := range sc.peers {
pc.client.Close("close all connections")
}
}
// FindPeerByHash returns the peer with the given hash, or nil if not found.
func (sc *SyncConfig) FindPeerByHash(peerHash []byte) *SyncPeerConfig {
sc.mtx.RLock()
defer sc.mtx.RUnlock()
for _, pc := range sc.peers {
if bytes.Equal(pc.peerHash, peerHash) {
return pc
}
}
return nil
}
// getHowManyMaxConsensus returns the first index of the largest group of peers sharing identical block hashes, and the size of that group.
// Assumption: peers are already sorted by CompareSyncPeerConfigByblockHashes.
// Caller shall ensure mtx is locked for reading.
func (sc *SyncConfig) getHowManyMaxConsensus() (int, int) {
// As all peers are sorted by their blockHashes, all equal blockHashes should come together and consecutively.
if len(sc.peers) == 0 {
return -1, 0
} else if len(sc.peers) == 1 {
return 0, 1
}
maxFirstID := len(sc.peers) - 1
for i := maxFirstID - 1; i >= 0; i-- {
if CompareSyncPeerConfigByblockHashes(sc.peers[maxFirstID], sc.peers[i]) != 0 {
break
}
maxFirstID = i
}
maxCount := len(sc.peers) - maxFirstID
return maxFirstID, maxCount
}
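A worked example with assumed data:

// Suppose five peers are sorted by their block hashes as
//   index:  0  1  2  3  4
//   hashes: A  A  B  B  B
// The scan starts at maxFirstID = 4 and moves left while the hashes compare
// equal, stopping at index 2, so the function returns (maxFirstID=2, maxCount=3):
// the largest agreeing group is peers[2:5].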
// InitForTesting sets the given client and block hashes on every peer. Used in tests only.
func (sc *SyncConfig) InitForTesting(client *downloader.Client, blockHashes [][]byte) {
sc.mtx.Lock()
defer sc.mtx.Unlock()
for i := range sc.peers {
sc.peers[i].blockHashes = blockHashes
sc.peers[i].client = client
}
}
// cleanUpPeers cleans up all peers whose blockHashes are not equal to
// consensus block hashes. Caller shall ensure mtx is locked for RW.
func (sc *SyncConfig) cleanUpPeers(maxFirstID int) {
fixedPeer := sc.peers[maxFirstID]
countBeforeCleanUp := len(sc.peers)
for i := 0; i < len(sc.peers); i++ {
if CompareSyncPeerConfigByblockHashes(fixedPeer, sc.peers[i]) != 0 {
// TODO: move this into a shared util delete func.
// See tip https://github.com/golang/go/wiki/SliceTricks
// Close the client and remove the peer from the list.
sc.peers[i].client.Close("close by cleanup function, because blockHashes is not equal to consensus block hashes")
copy(sc.peers[i:], sc.peers[i+1:])
sc.peers[len(sc.peers)-1] = nil
sc.peers = sc.peers[:len(sc.peers)-1]
i-- // the element at i was replaced by the next one; re-check this index
}
}
if len(sc.peers) < countBeforeCleanUp {
utils.Logger().Debug().
Int("removed peers", len(sc.peers)-countBeforeCleanUp).
Msg("[STAGED_SYNC] cleanUpPeers: a few peers removed")
}
}
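The TODO above asks for a shared delete helper; one possible sketch, assuming it would live in this package (the name removePeerAt is hypothetical):

// removePeerAt deletes the element at index i in place, preserving order and
// nil-ing the vacated tail slot so the removed *SyncPeerConfig can be collected.
func removePeerAt(peers []*SyncPeerConfig, i int) []*SyncPeerConfig {
    copy(peers[i:], peers[i+1:])
    peers[len(peers)-1] = nil
    return peers[:len(peers)-1]
}

A forward loop must re-check index i after such a removal, which is why cleanUpPeers above and cleanUpInvalidPeers below decrement i.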
// cleanUpInvalidPeers removes all peers that missed required block hashes or sent an invalid block hash.
// It takes the write lock itself, so the caller must not hold mtx.
func (sc *SyncConfig) cleanUpInvalidPeers(ipm map[string]bool) {
sc.mtx.Lock()
defer sc.mtx.Unlock()
countBeforeCleanUp := len(sc.peers)
for i := 0; i < len(sc.peers); i++ {
if ipm[string(sc.peers[i].peerHash)] {
sc.peers[i].client.Close("cleanup invalid peers; it may have missed required block hashes or sent an invalid block hash")
copy(sc.peers[i:], sc.peers[i+1:])
sc.peers[len(sc.peers)-1] = nil
sc.peers = sc.peers[:len(sc.peers)-1]
i-- // re-check this index after the in-place removal
}
}
if len(sc.peers) < countBeforeCleanUp {
utils.Logger().Debug().
Int("removed peers", len(sc.peers)-countBeforeCleanUp).
Msg("[STAGED_SYNC] cleanUpPeers: a few peers removed")
}
}
// GetBlockHashesConsensusAndCleanUp selects the most common peer config based on their block hashes to download/sync.
// Note that choosing the most common peer config does not guarantee that the blocks to be downloaded are the correct ones.
// The subsequent node syncing steps of verifying the block header chain will give such confirmation later.
// If later block header verification fails with the sync peer config chosen here, the entire sync loop gets retried with a new peer set.
func (sc *SyncConfig) GetBlockHashesConsensusAndCleanUp(bgMode bool) error {
sc.mtx.Lock()
defer sc.mtx.Unlock()
// Sort all peers by the blockHashes.
sort.Slice(sc.peers, func(i, j int) bool {
return CompareSyncPeerConfigByblockHashes(sc.peers[i], sc.peers[j]) == -1
})
maxFirstID, maxCount := sc.getHowManyMaxConsensus()
if maxFirstID == -1 {
return errors.New("invalid peer index -1 for block hashes query")
}
utils.Logger().Info().
Int("maxFirstID", maxFirstID).
Str("targetPeerIP", sc.peers[maxFirstID].ip).
Int("maxCount", maxCount).
Int("hashSize", len(sc.peers[maxFirstID].blockHashes)).
Msg("[STAGED_SYNC] block consensus hashes")
if bgMode {
if maxCount != len(sc.peers) {
return ErrNodeNotEnoughBlockHashes
}
} else {
sc.cleanUpPeers(maxFirstID)
}
return nil
}

@ -0,0 +1,90 @@
package stagedsync
import (
"sync"
"time"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
)
const (
// syncStatusExpiration is the time after which a cached sync status expires.
// If the last sync result in memory is older than the expiration, the sync status
// will be recomputed.
syncStatusExpiration = 6 * time.Second
// syncStatusExpirationNonValidator is the sync cache expiration for non-validators.
// Non-validator nodes do not need sync checks as strict as validators, so a longer expiration is used.
// TODO: add this field to harmony config
syncStatusExpirationNonValidator = 12 * time.Second
)
type (
syncStatus struct {
lastResult SyncCheckResult
MaxPeersHeight uint64
currentCycle SyncCycle
lastUpdateTime time.Time
lock sync.RWMutex
expiration time.Duration
}
SyncCheckResult struct {
IsSynchronized bool
OtherHeight uint64
HeightDiff uint64
}
SyncCycle struct {
Number uint64
StartHash []byte
TargetHeight uint64
ExtraHashes map[uint64][]byte
lock sync.RWMutex
}
)
func NewSyncStatus(role nodeconfig.Role) syncStatus {
expiration := getSyncStatusExpiration(role)
return syncStatus{
expiration: expiration,
}
}
func getSyncStatusExpiration(role nodeconfig.Role) time.Duration {
switch role {
case nodeconfig.Validator:
return syncStatusExpiration
case nodeconfig.ExplorerNode:
return syncStatusExpirationNonValidator
default:
return syncStatusExpirationNonValidator
}
}
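// Get returns the cached result while it is still fresh; once expired, it re-runs
// fallback under the write lock (double-checked, so concurrent callers do not each
// recompute) and caches the new result.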
func (status *syncStatus) Get(fallback func() SyncCheckResult) SyncCheckResult {
status.lock.RLock()
if !status.expired() {
result := status.lastResult
status.lock.RUnlock()
return result
}
status.lock.RUnlock()
status.lock.Lock()
defer status.lock.Unlock()
if status.expired() {
result := fallback()
status.update(result)
}
return status.lastResult
}
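A usage sketch; the fallback body and its values are assumptions:

// Hypothetical caller: serve the cached sync status, recomputing only on expiry.
func exampleGet(status *syncStatus) SyncCheckResult {
    return status.Get(func() SyncCheckResult {
        // Expensive check, e.g. comparing the local height against peers; it runs
        // at most once per expiration window even with concurrent callers.
        return SyncCheckResult{IsSynchronized: true, OtherHeight: 1000, HeightDiff: 0}
    })
}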
func (status *syncStatus) expired() bool {
return time.Since(status.lastUpdateTime) > status.expiration
}
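// update records a fresh result and its timestamp; callers must hold the write lock.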
func (status *syncStatus) update(result SyncCheckResult) {
status.lastUpdateTime = time.Now()
status.lastResult = result
}

@ -0,0 +1,292 @@
package stagedsync
import (
"context"
"fmt"
"time"
"github.com/c2h5oh/datasize"
"github.com/harmony-one/harmony/consensus"
"github.com/harmony-one/harmony/core"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/harmony-one/harmony/internal/utils"
"github.com/harmony-one/harmony/node/worker"
"github.com/harmony-one/harmony/shard"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon-lib/kv/mdbx"
"github.com/ledgerwatch/log/v3"
)
const (
BlockHashesBucket = "BlockHashes"
BeaconBlockHashesBucket = "BeaconBlockHashes"
DownloadedBlocksBucket = "BlockBodies"
BeaconDownloadedBlocksBucket = "BeaconBlockBodies" // Beacon Block bodies are downloaded, TxHash and UncleHash are getting verified
LastMileBlocksBucket = "LastMileBlocks" // last mile blocks to catch up with the consensus
StageProgressBucket = "StageProgress"
// cache db keys
LastBlockHeight = "LastBlockHeight"
LastBlockHash = "LastBlockHash"
// cache db names
BlockHashesCacheDB = "cache_block_hashes"
BlockCacheDB = "cache_blocks"
)
var Buckets = []string{
BlockHashesBucket,
BeaconBlockHashesBucket,
DownloadedBlocksBucket,
BeaconDownloadedBlocksBucket,
LastMileBlocksBucket,
StageProgressBucket,
}
// CreateStagedSync creates an instance of staged sync
func CreateStagedSync(
ip string,
port string,
peerHash [20]byte,
bc core.BlockChain,
role nodeconfig.Role,
isExplorer bool,
TurboMode bool,
UseMemDB bool,
doubleCheckBlockHashes bool,
maxBlocksPerCycle uint64,
maxBackgroundBlocks uint64,
maxMemSyncCycleSize uint64,
verifyAllSig bool,
verifyHeaderBatchSize uint64,
insertChainBatchSize int,
logProgress bool,
) (*StagedSync, error) {
ctx := context.Background()
isBeacon := bc.ShardID() == shard.BeaconChainShardID
var db kv.RwDB
if UseMemDB {
// maximum Blocks in memory is maxMemSyncCycleSize + maxBackgroundBlocks
var dbMapSize datasize.ByteSize
if isBeacon {
// for memdb, 512 KB per beacon chain block (on average) should be enough
dbMapSize = datasize.ByteSize(maxMemSyncCycleSize+maxBackgroundBlocks) * 512 * datasize.KB
} else {
// for memdb, 256 KB per shard chain block (on average) should be enough
dbMapSize = datasize.ByteSize(maxMemSyncCycleSize+maxBackgroundBlocks) * 256 * datasize.KB
}
// we manually create memory db because "db = memdb.New()" sets the default map size (64 MB) which is not enough for some cases
db = mdbx.NewMDBX(log.New()).MapSize(dbMapSize).InMem("cache_db").MustOpen()
} else {
if isBeacon {
db = mdbx.NewMDBX(log.New()).Path("cache_beacon_db").MustOpen()
} else {
db = mdbx.NewMDBX(log.New()).Path("cache_shard_db").MustOpen()
}
}
if errInitDB := initDB(ctx, db); errInitDB != nil {
return nil, errInitDB
}
headsCfg := NewStageHeadersCfg(ctx, bc, db)
blockHashesCfg := NewStageBlockHashesCfg(ctx, bc, db, isBeacon, TurboMode, logProgress)
bodiesCfg := NewStageBodiesCfg(ctx, bc, db, isBeacon, TurboMode, logProgress)
statesCfg := NewStageStatesCfg(ctx, bc, db, logProgress)
lastMileCfg := NewStageLastMileCfg(ctx, bc, db)
finishCfg := NewStageFinishCfg(ctx, db)
stages := DefaultStages(ctx,
headsCfg,
blockHashesCfg,
bodiesCfg,
statesCfg,
lastMileCfg,
finishCfg,
)
return New(ctx,
ip,
port,
peerHash,
bc,
role,
isBeacon,
isExplorer,
db,
stages,
DefaultRevertOrder,
DefaultCleanUpOrder,
TurboMode,
UseMemDB,
doubleCheckBlockHashes,
maxBlocksPerCycle,
maxBackgroundBlocks,
maxMemSyncCycleSize,
verifyAllSig,
verifyHeaderBatchSize,
insertChainBatchSize,
logProgress,
), nil
}
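A hedged construction sketch: every literal below is an assumption (the numbers mirror the staged sync defaults introduced later in this diff), and peerHash/bc stand for values the node already holds:

// Hypothetical wiring of CreateStagedSync from config-like values.
func exampleCreateStagedSync(peerHash [20]byte, bc core.BlockChain) (*StagedSync, error) {
    return CreateStagedSync(
        "127.0.0.1", "9000", // ip, port (assumed)
        peerHash, // peer identity
        bc,       // the shard's blockchain
        nodeconfig.Validator, false, // role, isExplorer
        true,  // TurboMode
        true,  // UseMemDB
        false, // doubleCheckBlockHashes
        512,   // maxBlocksPerCycle (default)
        512,   // maxBackgroundBlocks (default)
        1024,  // maxMemSyncCycleSize (default)
        false, // verifyAllSig (default)
        100,   // verifyHeaderBatchSize (default)
        128,   // insertChainBatchSize (default)
        false, // logProgress (default)
    )
}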
// initDB initializes the sync loop's main database and creates the buckets
func initDB(ctx context.Context, db kv.RwDB) error {
tx, errRW := db.BeginRw(ctx)
if errRW != nil {
return errRW
}
defer tx.Rollback()
for _, name := range Buckets {
// create bucket
if err := tx.CreateBucket(GetStageName(name, false, false)); err != nil {
return err
}
// create bucket for beacon
if err := tx.CreateBucket(GetStageName(name, true, false)); err != nil {
return err
}
}
if err := tx.Commit(); err != nil {
return err
}
return nil
}
// SyncLoop keeps syncing with peers until the node catches up
func (s *StagedSync) SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeacon bool, consensus *consensus.Consensus, loopMinTime time.Duration) {
utils.Logger().Info().
Uint64("current height", bc.CurrentBlock().NumberU64()).
Msgf("staged sync is executing ... ")
if !s.IsBeacon() {
s.RegisterNodeInfo()
}
// get max peers height
maxPeersHeight, err := s.getMaxPeerHeight()
if err != nil {
utils.Logger().Error().Err(err).Msg("[STAGED_SYNC] failed to get max peer height")
return
}
utils.Logger().Info().
Uint64("maxPeersHeight", maxPeersHeight).
Msgf("[STAGED_SYNC] max peers height")
s.syncStatus.MaxPeersHeight = maxPeersHeight
for {
if len(s.syncConfig.peers) < NumPeersLowBound {
// TODO: try to use reserved nodes
utils.Logger().Warn().
Int("num peers", len(s.syncConfig.peers)).
Msgf("[STAGED_SYNC] Not enough connected peers")
break
}
startHead := bc.CurrentBlock().NumberU64()
if startHead >= maxPeersHeight {
utils.Logger().Info().
Bool("isBeacon", isBeacon).
Uint32("shard", bc.ShardID()).
Uint64("maxPeersHeight", maxPeersHeight).
Uint64("currentHeight", startHead).
Msgf("[STAGED_SYNC] Node is now IN SYNC!")
break
}
startTime := time.Now()
if err := s.runSyncCycle(bc, worker, isBeacon, consensus, maxPeersHeight); err != nil {
utils.Logger().Error().
Err(err).
Bool("isBeacon", isBeacon).
Uint32("shard", bc.ShardID()).
Uint64("currentHeight", startHead).
Msgf("[STAGED_SYNC] sync cycle failed")
break
}
if loopMinTime != 0 {
waitTime := loopMinTime - time.Since(startTime)
utils.Logger().Debug().
Bool("isBeacon", isBeacon).
Uint32("shard", bc.ShardID()).
Interface("duration", waitTime).
Msgf("[STAGED SYNC] Node is syncing ..., it's waiting a few seconds until next loop")
c := time.After(waitTime)
select {
case <-s.Context().Done():
return
case <-c:
}
}
// calculating sync speed (blocks/second)
currHead := bc.CurrentBlock().NumberU64()
if s.LogProgress && currHead > startHead {
dt := time.Since(startTime).Seconds()
speed := float64(0)
if dt > 0 {
speed = float64(currHead-startHead) / dt
}
syncSpeed := fmt.Sprintf("%.2f", speed)
fmt.Println("sync speed:", syncSpeed, "blocks/s (", currHead, "/", maxPeersHeight, ")")
}
s.syncStatus.currentCycle.lock.Lock()
s.syncStatus.currentCycle.Number++
s.syncStatus.currentCycle.lock.Unlock()
}
if consensus != nil {
if err := s.addConsensusLastMile(s.Blockchain(), consensus); err != nil {
utils.Logger().Error().
Err(err).
Msg("[STAGED_SYNC] Add consensus last mile")
}
// TODO: move this to explorer handler code.
if s.isExplorer {
consensus.UpdateConsensusInformation()
}
}
s.purgeAllBlocksFromCache()
utils.Logger().Info().
Uint64("new height", bc.CurrentBlock().NumberU64()).
Msgf("staged sync is executed")
return
}
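The pacing above (sleep out the remainder of loopMinTime, but abort promptly on shutdown) is a small reusable pattern; a sketch with a hypothetical helper name:

// sleepRemainder waits until at least minTime has elapsed since start,
// returning early when ctx is cancelled.
func sleepRemainder(ctx context.Context, start time.Time, minTime time.Duration) {
    wait := minTime - time.Since(start)
    if wait <= 0 {
        return
    }
    select {
    case <-ctx.Done():
    case <-time.After(wait):
    }
}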
// runSyncCycle will run one cycle of staged syncing
func (s *StagedSync) runSyncCycle(bc core.BlockChain, worker *worker.Worker, isBeacon bool, consensus *consensus.Consensus, maxPeersHeight uint64) error {
canRunCycleInOneTransaction := s.MaxBlocksPerSyncCycle > 0 && s.MaxBlocksPerSyncCycle <= s.MaxMemSyncCycleSize
var tx kv.RwTx
if canRunCycleInOneTransaction {
var err error
if tx, err = s.DB().BeginRw(context.Background()); err != nil {
return err
}
defer tx.Rollback()
}
// Do one cycle of staged sync
initialCycle := s.syncStatus.currentCycle.Number == 0
syncErr := s.Run(s.DB(), tx, initialCycle)
if syncErr != nil {
utils.Logger().Error().
Err(syncErr).
Bool("isBeacon", s.IsBeacon()).
Uint32("shard", s.Blockchain().ShardID()).
Msgf("[STAGED_SYNC] Sync loop failed")
s.purgeOldBlocksFromCache()
return syncErr
}
if tx != nil {
errTx := tx.Commit()
if errTx != nil {
return errTx
}
}
return nil
}
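The defer tx.Rollback() plus explicit tx.Commit() pairing above is the usual erigon-lib kv idiom: the deferred Rollback discards partial writes on error paths and is harmless once Commit has succeeded. The same shape in isolation (runInOneTx is a hypothetical name):

// Generic single-transaction cycle: all writes commit together or not at all.
func runInOneTx(ctx context.Context, db kv.RwDB, work func(tx kv.RwTx) error) error {
    tx, err := db.BeginRw(ctx)
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op after a successful Commit
    if err := work(tx); err != nil {
        return err // deferred Rollback discards partial writes
    }
    return tx.Commit()
}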

@ -0,0 +1,38 @@
package stagedsync
import (
"time"
"github.com/Workiva/go-datastructures/queue"
)
// downloadTaskQueue is a wrapper around queue.Queue whose items are SyncBlockTask values.
type downloadTaskQueue struct {
q *queue.Queue
}
func (dtq downloadTaskQueue) poll(num int64, timeOut time.Duration) (syncBlockTasks, error) {
items, err := dtq.q.Poll(num, timeOut)
if err != nil {
return nil, err
}
tasks := make(syncBlockTasks, 0, len(items))
for _, item := range items {
task := item.(SyncBlockTask)
tasks = append(tasks, task)
}
return tasks, nil
}
func (dtq downloadTaskQueue) put(tasks syncBlockTasks) error {
for _, task := range tasks {
if err := dtq.q.Put(task); err != nil {
return err
}
}
return nil
}
func (dtq downloadTaskQueue) empty() bool {
return dtq.q.Empty()
}
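A round-trip usage sketch for the wrapper; the capacity hint and poll limits are assumptions:

// Hypothetical example: enqueue tasks, then poll a batch back out.
func exampleQueueUse(tasks syncBlockTasks) (syncBlockTasks, error) {
    dtq := downloadTaskQueue{q: queue.New(64)} // 64 is an assumed capacity hint
    if err := dtq.put(tasks); err != nil {
        return nil, err
    }
    // Block for up to 100ms waiting for as many as 16 tasks.
    return dtq.poll(16, 100*time.Millisecond)
}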

@ -9,7 +9,7 @@ import (
"path"
"github.com/ethereum/go-ethereum/log"
net "github.com/libp2p/go-libp2p-core/network"
net "github.com/libp2p/go-libp2p/core/network"
ma "github.com/multiformats/go-multiaddr"
"github.com/harmony-one/harmony/internal/utils"

@ -260,6 +260,7 @@ func init() {
confTree.Set("Version", "2.5.3")
return confTree
}
migrations["2.5.3"] = func(confTree *toml.Tree) *toml.Tree {
if confTree.Get("TxPool.AllowedTxsFile") == nil {
confTree.Set("TxPool.AllowedTxsFile", defaultConfig.TxPool.AllowedTxsFile)
@ -267,6 +268,7 @@ func init() {
confTree.Set("Version", "2.5.4")
return confTree
}
migrations["2.5.4"] = func(confTree *toml.Tree) *toml.Tree {
if confTree.Get("TxPool.GlobalSlots") == nil {
confTree.Set("TxPool.GlobalSlots", defaultConfig.TxPool.GlobalSlots)
@ -274,6 +276,7 @@ func init() {
confTree.Set("Version", "2.5.5")
return confTree
}
migrations["2.5.5"] = func(confTree *toml.Tree) *toml.Tree {
if confTree.Get("Log.Console") == nil {
confTree.Set("Log.Console", defaultConfig.Log.Console)
@ -281,6 +284,7 @@ func init() {
confTree.Set("Version", "2.5.6")
return confTree
}
migrations["2.5.6"] = func(confTree *toml.Tree) *toml.Tree {
if confTree.Get("P2P.MaxPeers") == nil {
confTree.Set("P2P.MaxPeers", defaultConfig.P2P.MaxPeers)
@ -295,6 +299,23 @@ func init() {
return confTree
}
migrations["2.5.8"] = func(confTree *toml.Tree) *toml.Tree {
if confTree.Get("Sync.StagedSync") == nil {
confTree.Set("Sync.StagedSync", defaultConfig.Sync.StagedSync)
confTree.Set("Sync.StagedSyncCfg", defaultConfig.Sync.StagedSyncCfg)
}
confTree.Set("Version", "2.5.9")
return confTree
}
migrations["2.5.9"] = func(confTree *toml.Tree) *toml.Tree {
if confTree.Get("P2P.WaitForEachPeerToConnect") == nil {
confTree.Set("P2P.WaitForEachPeerToConnect", defaultConfig.P2P.WaitForEachPeerToConnect)
}
confTree.Set("Version", "2.5.10")
return confTree
}
// check that the latest version here is the same as in default.go
largestKey := getNextVersion(migrations)
if largestKey != tomlConfigVersion {

@ -5,7 +5,7 @@ import (
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
)
const tomlConfigVersion = "2.5.8"
const tomlConfigVersion = "2.5.10"
const (
defNetworkType = nodeconfig.Mainnet
@ -32,6 +32,7 @@ var defaultConfig = harmonyconfig.HarmonyConfig{
MaxConnsPerIP: nodeconfig.DefaultMaxConnPerIP,
DisablePrivateIPScan: false,
MaxPeers: nodeconfig.DefaultMaxPeers,
WaitForEachPeerToConnect: nodeconfig.DefaultWaitForEachPeerToConnect,
},
HTTP: harmonyconfig.HttpConfig{
Enabled: true,
@ -143,10 +144,25 @@ var defaultPrometheusConfig = harmonyconfig.PrometheusConfig{
Gateway: "https://gateway.harmony.one",
}
var defaultStagedSyncConfig = harmonyconfig.StagedSyncConfig{
TurboMode: true,
DoubleCheckBlockHashes: false,
MaxBlocksPerSyncCycle: 512, // number of new blocks to sync in each cycle; zero means all blocks in one full cycle
MaxBackgroundBlocks: 512, // max blocks to download in the background process in turbo mode
InsertChainBatchSize: 128, // number of blocks to batch together and insert into the chain in staged sync
VerifyAllSig: false, // whether to verify signatures for all blocks
VerifyHeaderBatchSize: 100, // batch size for verifying block headers before inserting into the chain
MaxMemSyncCycleSize: 1024, // max number of blocks for which staged sync uses a single transaction
UseMemDB: true, // uses an in-memory DB by default; set to false to use disk
LogProgress: false, // log the full sync progress in console
}
var (
defaultMainnetSyncConfig = harmonyconfig.SyncConfig{
Enabled: false,
Downloader: false,
StagedSync: false,
StagedSyncCfg: defaultStagedSyncConfig,
Concurrency: 6,
MinPeers: 6,
InitStreams: 8,
@ -159,6 +175,8 @@ var (
defaultTestNetSyncConfig = harmonyconfig.SyncConfig{
Enabled: true,
Downloader: false,
StagedSync: false,
StagedSyncCfg: defaultStagedSyncConfig,
Concurrency: 2,
MinPeers: 2,
InitStreams: 2,
@ -171,6 +189,8 @@ var (
defaultLocalNetSyncConfig = harmonyconfig.SyncConfig{
Enabled: true,
Downloader: true,
StagedSync: false,
StagedSyncCfg: defaultStagedSyncConfig,
Concurrency: 4,
MinPeers: 5,
InitStreams: 5,
@ -183,6 +203,8 @@ var (
defaultElseSyncConfig = harmonyconfig.SyncConfig{
Enabled: true,
Downloader: true,
StagedSync: false,
StagedSyncCfg: defaultStagedSyncConfig,
Concurrency: 4,
MinPeers: 4,
InitStreams: 4,

@ -15,12 +15,14 @@ import (
ethRawDB "github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/params"
"github.com/ethereum/go-ethereum/rlp"
"github.com/harmony-one/harmony/block"
"github.com/harmony-one/harmony/core/rawdb"
"github.com/harmony-one/harmony/core/state"
"github.com/harmony-one/harmony/core/types"
"github.com/harmony-one/harmony/hmy"
"github.com/harmony-one/harmony/internal/cli"
"github.com/harmony-one/harmony/shard"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
shardingconfig "github.com/harmony-one/harmony/internal/configs/sharding"
@ -276,14 +278,18 @@ func (db *KakashiDB) offchainDataDump(block *types.Block) {
latestNumber := block.NumberU64() - uint64(i)
latestBlock := db.GetBlockByNumber(latestNumber)
db.GetBlockByHash(latestBlock.Hash())
db.GetHeaderByHash(latestBlock.Hash())
header := db.GetHeaderByHash(latestBlock.Hash())
db.GetBlockByHash(latestBlock.Hash())
rawdb.ReadBlockRewardAccumulator(db, latestNumber)
rawdb.ReadBlockCommitSig(db, latestNumber)
epoch := block.Epoch()
epochInstance := shardSchedule.InstanceForEpoch(epoch)
for shard := 0; shard < int(epochInstance.NumShards()); shard++ {
rawdb.ReadCrossLinkShardBlock(db, uint32(shard), latestNumber)
// for each header, read (and write) the cross links in it
if block.ShardID() == shard.BeaconChainShardID {
crossLinks := &types.CrossLinks{}
if err := rlp.DecodeBytes(header.CrossLinks(), crossLinks); err == nil {
for _, cl := range *crossLinks {
rawdb.ReadCrossLinkShardBlock(db, cl.ShardID(), cl.BlockNum())
}
}
}
}
headEpoch := block.Epoch()

@ -218,6 +218,7 @@ var (
syncFlags = []cli.Flag{
syncStreamEnabledFlag,
syncDownloaderFlag,
syncStagedSyncFlag,
syncConcurrencyFlag,
syncMinPeersFlag,
syncInitStreamsFlag,
@ -578,6 +579,11 @@ var (
Usage: "maximum number of peers allowed, 0 means no limit",
DefValue: defaultConfig.P2P.MaxConnsPerIP,
}
waitForEachPeerToConnectFlag = cli.BoolFlag{
Name: "p2p.wait-for-connections",
Usage: "node waits for each single peer to connect and it doesn't add them to peers list after timeout",
DefValue: defaultConfig.P2P.WaitForEachPeerToConnect,
}
)
func applyP2PFlags(cmd *cobra.Command, config *harmonyconfig.HarmonyConfig) {
@ -614,6 +620,10 @@ func applyP2PFlags(cmd *cobra.Command, config *harmonyconfig.HarmonyConfig) {
config.P2P.MaxPeers = int64(cli.GetIntFlagValue(cmd, maxPeersFlag))
}
if cli.IsFlagChanged(cmd, waitForEachPeerToConnectFlag) {
config.P2P.WaitForEachPeerToConnect = cli.GetBoolFlagValue(cmd, waitForEachPeerToConnectFlag)
}
if cli.IsFlagChanged(cmd, p2pDisablePrivateIPScanFlag) {
config.P2P.DisablePrivateIPScan = cli.GetBoolFlagValue(cmd, p2pDisablePrivateIPScanFlag)
}
@ -1661,6 +1671,12 @@ var (
Hidden: true,
DefValue: false,
}
syncStagedSyncFlag = cli.BoolFlag{
Name: "sync.stagedsync",
Usage: "Enable the staged sync",
Hidden: false,
DefValue: false,
}
syncConcurrencyFlag = cli.IntFlag{
Name: "sync.concurrency",
Usage: "Concurrency when doing p2p sync requests",
@ -1708,6 +1724,10 @@ func applySyncFlags(cmd *cobra.Command, config *harmonyconfig.HarmonyConfig) {
config.Sync.Downloader = cli.GetBoolFlagValue(cmd, syncDownloaderFlag)
}
if cli.IsFlagChanged(cmd, syncStagedSyncFlag) {
config.Sync.StagedSync = cli.GetBoolFlagValue(cmd, syncStagedSyncFlag)
}
if cli.IsFlagChanged(cmd, syncConcurrencyFlag) {
config.Sync.Concurrency = cli.GetIntFlagValue(cmd, syncConcurrencyFlag)
}

@ -65,6 +65,7 @@ func TestHarmonyFlags(t *testing.T) {
MaxConnsPerIP: 5,
DisablePrivateIPScan: false,
MaxPeers: defaultConfig.P2P.MaxPeers,
WaitForEachPeerToConnect: false,
},
HTTP: harmonyconfig.HttpConfig{
Enabled: true,
@ -373,6 +374,7 @@ func TestP2PFlags(t *testing.T) {
MaxConnsPerIP: 10,
DisablePrivateIPScan: false,
MaxPeers: defaultConfig.P2P.MaxPeers,
WaitForEachPeerToConnect: false,
},
},
{
@ -384,6 +386,7 @@ func TestP2PFlags(t *testing.T) {
MaxConnsPerIP: 10,
DisablePrivateIPScan: false,
MaxPeers: defaultConfig.P2P.MaxPeers,
WaitForEachPeerToConnect: false,
},
},
{
@ -396,6 +399,7 @@ func TestP2PFlags(t *testing.T) {
MaxConnsPerIP: 5,
DisablePrivateIPScan: false,
MaxPeers: defaultConfig.P2P.MaxPeers,
WaitForEachPeerToConnect: false,
},
},
{
@ -408,6 +412,7 @@ func TestP2PFlags(t *testing.T) {
MaxConnsPerIP: nodeconfig.DefaultMaxConnPerIP,
DisablePrivateIPScan: true,
MaxPeers: defaultConfig.P2P.MaxPeers,
WaitForEachPeerToConnect: false,
},
},
{
@ -420,6 +425,7 @@ func TestP2PFlags(t *testing.T) {
MaxConnsPerIP: nodeconfig.DefaultMaxConnPerIP,
DisablePrivateIPScan: defaultConfig.P2P.DisablePrivateIPScan,
MaxPeers: 100,
WaitForEachPeerToConnect: false,
},
},
}
@ -1345,6 +1351,7 @@ func TestSyncFlags(t *testing.T) {
cfgSync := defaultMainnetSyncConfig
cfgSync.Enabled = true
cfgSync.Downloader = true
cfgSync.StagedSync = false
cfgSync.Concurrency = 10
cfgSync.MinPeers = 10
cfgSync.InitStreams = 10

@ -17,6 +17,7 @@ import (
"github.com/harmony-one/harmony/consensus/quorum"
"github.com/harmony-one/harmony/internal/chain"
"github.com/harmony-one/harmony/internal/registry"
"github.com/harmony-one/harmony/internal/shardchain/tikv_manage"
"github.com/harmony-one/harmony/internal/tikv/redis_helper"
"github.com/harmony-one/harmony/internal/tikv/statedb_cache"
@ -310,10 +311,11 @@ func setupNodeAndRun(hc harmonyconfig.HarmonyConfig) {
// Update ethereum compatible chain ids
params.UpdateEthChainIDByShard(nodeConfig.ShardID)
currentNode := setupConsensusAndNode(hc, nodeConfig)
currentNode := setupConsensusAndNode(hc, nodeConfig, registry.New())
nodeconfig.GetDefaultConfig().ShardID = nodeConfig.ShardID
nodeconfig.GetDefaultConfig().IsOffline = nodeConfig.IsOffline
nodeconfig.GetDefaultConfig().Downloader = nodeConfig.Downloader
nodeconfig.GetDefaultConfig().StagedSync = nodeConfig.StagedSync
// Check NTP configuration
accurate, err := ntp.CheckLocalTimeAccurate(nodeConfig.NtpServer)
@ -599,7 +601,17 @@ func createGlobalConfig(hc harmonyconfig.HarmonyConfig) (*nodeconfig.ConfigType,
nodeConfig.SetArchival(hc.General.IsBeaconArchival, hc.General.IsArchival)
nodeConfig.IsOffline = hc.General.IsOffline
nodeConfig.Downloader = hc.Sync.Downloader
nodeConfig.StagedSync = hc.Sync.StagedSync
nodeConfig.StagedSyncTurboMode = hc.Sync.StagedSyncCfg.TurboMode
nodeConfig.UseMemDB = hc.Sync.StagedSyncCfg.UseMemDB
nodeConfig.DoubleCheckBlockHashes = hc.Sync.StagedSyncCfg.DoubleCheckBlockHashes
nodeConfig.MaxBlocksPerSyncCycle = hc.Sync.StagedSyncCfg.MaxBlocksPerSyncCycle
nodeConfig.MaxBackgroundBlocks = hc.Sync.StagedSyncCfg.MaxBackgroundBlocks
nodeConfig.MaxMemSyncCycleSize = hc.Sync.StagedSyncCfg.MaxMemSyncCycleSize
nodeConfig.VerifyAllSig = hc.Sync.StagedSyncCfg.VerifyAllSig
nodeConfig.VerifyHeaderBatchSize = hc.Sync.StagedSyncCfg.VerifyHeaderBatchSize
nodeConfig.InsertChainBatchSize = hc.Sync.StagedSyncCfg.InsertChainBatchSize
nodeConfig.LogProgress = hc.Sync.StagedSyncCfg.LogProgress
// P2P private key is used for secure message transfer between p2p nodes.
nodeConfig.P2PPriKey, _, err = utils.LoadKeyFromFile(hc.P2P.KeyFile)
if err != nil {
@ -621,6 +633,7 @@ func createGlobalConfig(hc harmonyconfig.HarmonyConfig) (*nodeconfig.ConfigType,
MaxConnPerIP: hc.P2P.MaxConnsPerIP,
DisablePrivateIPScan: hc.P2P.DisablePrivateIPScan,
MaxPeers: hc.P2P.MaxPeers,
WaitForEachPeerToConnect: hc.P2P.WaitForEachPeerToConnect,
})
if err != nil {
return nil, errors.Wrap(err, "cannot create P2P network host")
@ -647,7 +660,7 @@ func createGlobalConfig(hc harmonyconfig.HarmonyConfig) (*nodeconfig.ConfigType,
return nodeConfig, nil
}
func setupConsensusAndNode(hc harmonyconfig.HarmonyConfig, nodeConfig *nodeconfig.ConfigType) *node.Node {
func setupConsensusAndNode(hc harmonyconfig.HarmonyConfig, nodeConfig *nodeconfig.ConfigType, registry *registry.Registry) *node.Node {
// Parse minPeers from harmonyconfig.HarmonyConfig
var minPeers int
var aggregateSig bool
@ -695,6 +708,11 @@ func setupConsensusAndNode(hc harmonyconfig.HarmonyConfig, nodeConfig *nodeconfi
collection := shardchain.NewCollection(
&hc, chainDBFactory, &core.GenesisInitializer{NetworkType: nodeConfig.GetNetworkType()}, engine, &chainConfig,
)
for shardID, archival := range nodeConfig.ArchiveModes() {
if archival {
collection.DisableCache(shardID)
}
}
var blockchain core.BlockChain
@ -716,14 +734,14 @@ func setupConsensusAndNode(hc harmonyconfig.HarmonyConfig, nodeConfig *nodeconfi
// Consensus object.
decider := quorum.NewDecider(quorum.SuperMajorityVote, nodeConfig.ShardID)
currentConsensus, err := consensus.New(
myHost, nodeConfig.ShardID, nodeConfig.ConsensusPriKey, blockchain, decider, minPeers, aggregateSig)
myHost, nodeConfig.ShardID, nodeConfig.ConsensusPriKey, registry.SetBlockchain(blockchain), decider, minPeers, aggregateSig)
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "Error :%v \n", err)
os.Exit(1)
}
currentNode := node.New(myHost, currentConsensus, engine, collection, blacklist, allowedTxs, localAccounts, nodeConfig.ArchiveModes(), &hc)
currentNode := node.New(myHost, currentConsensus, engine, collection, blacklist, allowedTxs, localAccounts, nodeConfig.ArchiveModes(), &hc, registry)
if hc.Legacy != nil && hc.Legacy.TPBroadcastInvalidTxn != nil {
currentNode.BroadcastInvalidTx = *hc.Legacy.TPBroadcastInvalidTxn

@ -6,12 +6,13 @@ import (
"sync/atomic"
"time"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/crypto/bls"
"github.com/harmony-one/harmony/internal/registry"
"github.com/harmony-one/abool"
bls_core "github.com/harmony-one/bls/ffi/go/bls"
"github.com/harmony-one/harmony/consensus/quorum"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/core/types"
bls_cosi "github.com/harmony-one/harmony/crypto/bls"
"github.com/harmony-one/harmony/internal/utils"
@ -62,8 +63,8 @@ type Consensus struct {
multiSigBitmap *bls_cosi.Mask // Bitmap for parsing multisig bitmap from validators
multiSigMutex sync.RWMutex
// The blockchain this consensus is working on
Blockchain core.BlockChain
// Registry for services.
registry *registry.Registry
// Minimal number of peers in the shard
// If the number of validators is less than minPeers, the consensus won't start
MinPeers int
@ -137,7 +138,12 @@ type Consensus struct {
dHelper *downloadHelper
}
// VerifyBlock is a function used to verify the block and keep trace of verified blocks
// Blockchain returns the blockchain.
func (consensus *Consensus) Blockchain() core.BlockChain {
return consensus.registry.GetBlockchain()
}
// VerifyBlock is a function used to verify the block and keep track of verified blocks.
func (consensus *Consensus) VerifyBlock(block *types.Block) error {
if !consensus.FBFTLog.IsBlockVerified(block.Hash()) {
if err := consensus.BlockVerifier(block); err != nil {
@ -211,12 +217,12 @@ func (consensus *Consensus) BlockNum() uint64 {
// New create a new Consensus record
func New(
host p2p.Host, shard uint32, multiBLSPriKey multibls.PrivateKeys,
blockchain core.BlockChain,
registry *registry.Registry,
Decider quorum.Decider, minPeers int, aggregateSig bool,
) (*Consensus, error) {
consensus := Consensus{}
consensus.Decider = Decider
consensus.Blockchain = blockchain
consensus.registry = registry
consensus.MinPeers = minPeers
consensus.AggregateSig = aggregateSig
consensus.host = host

@ -264,7 +264,7 @@ func (consensus *Consensus) ReadSignatureBitmapPayload(
// (b) node in committed but has any err during processing: Syncing mode
// (c) node in committed and everything looks good: Normal mode
func (consensus *Consensus) UpdateConsensusInformation() Mode {
curHeader := consensus.Blockchain.CurrentHeader()
curHeader := consensus.Blockchain().CurrentHeader()
curEpoch := curHeader.Epoch()
nextEpoch := new(big.Int).Add(curHeader.Epoch(), common.Big1)
@ -286,13 +286,13 @@ func (consensus *Consensus) UpdateConsensusInformation() Mode {
consensus.BlockPeriod = 5 * time.Second
// Enable 2s block time at the twoSecondsEpoch
if consensus.Blockchain.Config().IsTwoSeconds(nextEpoch) {
if consensus.Blockchain().Config().IsTwoSeconds(nextEpoch) {
consensus.BlockPeriod = 2 * time.Second
}
isFirstTimeStaking := consensus.Blockchain.Config().IsStaking(nextEpoch) &&
curHeader.IsLastBlockInEpoch() && !consensus.Blockchain.Config().IsStaking(curEpoch)
haventUpdatedDecider := consensus.Blockchain.Config().IsStaking(curEpoch) &&
isFirstTimeStaking := consensus.Blockchain().Config().IsStaking(nextEpoch) &&
curHeader.IsLastBlockInEpoch() && !consensus.Blockchain().Config().IsStaking(curEpoch)
haventUpdatedDecider := consensus.Blockchain().Config().IsStaking(curEpoch) &&
consensus.Decider.Policy() != quorum.SuperMajorityStake
// Only happens once, the flip-over to a new Decider policy
@ -305,7 +305,7 @@ func (consensus *Consensus) UpdateConsensusInformation() Mode {
epochToSet := curEpoch
hasError := false
curShardState, err := committee.WithStakingEnabled.ReadFromDB(
curEpoch, consensus.Blockchain,
curEpoch, consensus.Blockchain(),
)
if err != nil {
consensus.getLogger().Error().
@ -321,7 +321,7 @@ func (consensus *Consensus) UpdateConsensusInformation() Mode {
if curHeader.IsLastBlockInEpoch() && isNotGenesisBlock {
nextShardState, err := committee.WithStakingEnabled.ReadFromDB(
nextEpoch, consensus.Blockchain,
nextEpoch, consensus.Blockchain(),
)
if err != nil {
consensus.getLogger().Error().
@ -389,7 +389,7 @@ func (consensus *Consensus) UpdateConsensusInformation() Mode {
// a solution to take care of this case because the coinbase of the latest block doesn't really represent the
// the real current leader in case of M1 view change.
if !curHeader.IsLastBlockInEpoch() && curHeader.Number().Uint64() != 0 {
leaderPubKey, err := chain.GetLeaderPubKeyFromCoinbase(consensus.Blockchain, curHeader)
leaderPubKey, err := chain.GetLeaderPubKeyFromCoinbase(consensus.Blockchain(), curHeader)
if err != nil || leaderPubKey == nil {
consensus.getLogger().Error().Err(err).
Msg("[UpdateConsensusInformation] Unable to get leaderPubKey from coinbase")
@ -527,7 +527,7 @@ func (consensus *Consensus) selfCommit(payload []byte) error {
consensus.switchPhase("selfCommit", FBFTCommit)
consensus.aggregatedPrepareSig = aggSig
consensus.prepareBitmap = mask
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain,
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain(),
block.Epoch(), block.Hash(), block.NumberU64(), block.Header().ViewID().Uint64())
for i, key := range consensus.priKey {
if err := consensus.commitBitmap.SetKey(key.Pub.Bytes, true); err != nil {

@ -7,6 +7,7 @@ import (
"github.com/harmony-one/abool"
"github.com/harmony-one/harmony/consensus/quorum"
"github.com/harmony-one/harmony/crypto/bls"
"github.com/harmony-one/harmony/internal/registry"
"github.com/harmony-one/harmony/internal/utils"
"github.com/harmony-one/harmony/multibls"
"github.com/harmony-one/harmony/p2p"
@ -90,7 +91,7 @@ func GenerateConsensusForTesting() (p2p.Host, multibls.PrivateKeys, *Consensus,
decider := quorum.NewDecider(quorum.SuperMajorityVote, shard.BeaconChainShardID)
multiBLSPrivateKey := multibls.GetPrivateKeys(bls.RandPrivateKey())
consensus, err := New(host, shard.BeaconChainShardID, multiBLSPrivateKey, nil, decider, 3, false)
consensus, err := New(host, shard.BeaconChainShardID, multiBLSPrivateKey, registry.New(), decider, 3, false)
if err != nil {
return nil, nil, nil, nil, err
}

@ -169,7 +169,7 @@ func (consensus *Consensus) finalCommit() {
return
}
consensus.getLogger().Info().Hex("new", commitSigAndBitmap).Msg("[finalCommit] Overriding commit signatures!!")
consensus.Blockchain.WriteCommitSig(block.NumberU64(), commitSigAndBitmap)
consensus.Blockchain().WriteCommitSig(block.NumberU64(), commitSigAndBitmap)
// Send committed message before block insertion.
// if leader successfully finalizes the block, send committed message to validators
@ -267,7 +267,7 @@ func (consensus *Consensus) BlockCommitSigs(blockNum uint64) ([]byte, error) {
if consensus.BlockNum() <= 1 {
return nil, nil
}
lastCommits, err := consensus.Blockchain.ReadCommitSig(blockNum)
lastCommits, err := consensus.Blockchain().ReadCommitSig(blockNum)
if err != nil ||
len(lastCommits) < bls.BLSSignatureSizeInBytes {
msgs := consensus.FBFTLog.GetMessagesByTypeSeq(
@ -363,9 +363,9 @@ func (consensus *Consensus) Start(
case <-consensus.syncReadyChan:
consensus.getLogger().Info().Msg("[ConsensusMainLoop] syncReadyChan")
consensus.mutex.Lock()
if consensus.BlockNum() < consensus.Blockchain.CurrentHeader().Number().Uint64()+1 {
consensus.SetBlockNum(consensus.Blockchain.CurrentHeader().Number().Uint64() + 1)
consensus.SetViewIDs(consensus.Blockchain.CurrentHeader().ViewID().Uint64() + 1)
if consensus.BlockNum() < consensus.Blockchain().CurrentHeader().Number().Uint64()+1 {
consensus.SetBlockNum(consensus.Blockchain().CurrentHeader().Number().Uint64() + 1)
consensus.SetViewIDs(consensus.Blockchain().CurrentHeader().ViewID().Uint64() + 1)
mode := consensus.UpdateConsensusInformation()
consensus.current.SetMode(mode)
consensus.getLogger().Info().Msg("[syncReadyChan] Start consensus timer")
@ -386,7 +386,7 @@ func (consensus *Consensus) Start(
// TODO: Refactor this piece of code to consensus/downloader.go after DNS legacy sync is removed
case <-consensus.syncNotReadyChan:
consensus.getLogger().Info().Msg("[ConsensusMainLoop] syncNotReadyChan")
consensus.SetBlockNum(consensus.Blockchain.CurrentHeader().Number().Uint64() + 1)
consensus.SetBlockNum(consensus.Blockchain().CurrentHeader().Number().Uint64() + 1)
consensus.current.SetMode(Syncing)
consensus.getLogger().Info().Msg("[ConsensusMainLoop] Node is OUT OF SYNC")
consensusSyncCounterVec.With(prometheus.Labels{"consensus": "out_of_sync"}).Inc()
@ -574,7 +574,7 @@ func (consensus *Consensus) preCommitAndPropose(blk *types.Block) error {
Msg("[preCommitAndPropose] Sent Committed Message")
}
if _, err := consensus.Blockchain.InsertChain([]*types.Block{blk}, !consensus.FBFTLog.IsBlockVerified(blk.Hash())); err != nil {
if _, err := consensus.Blockchain().InsertChain([]*types.Block{blk}, !consensus.FBFTLog.IsBlockVerified(blk.Hash())); err != nil {
consensus.getLogger().Error().Err(err).Msg("[preCommitAndPropose] Failed to add block to chain")
return
}
@ -606,7 +606,7 @@ func (consensus *Consensus) verifyLastCommitSig(lastCommitSig []byte, blk *types
}
aggPubKey := consensus.commitBitmap.AggregatePublic
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain,
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain(),
blk.Epoch(), blk.Hash(), blk.NumberU64(), blk.Header().ViewID().Uint64())
if !aggSig.VerifyHash(aggPubKey, commitPayload) {
@ -658,8 +658,8 @@ func (consensus *Consensus) tryCatchup() error {
}
func (consensus *Consensus) commitBlock(blk *types.Block, committedMsg *FBFTMessage) error {
if consensus.Blockchain.CurrentBlock().NumberU64() < blk.NumberU64() {
if _, err := consensus.Blockchain.InsertChain([]*types.Block{blk}, !consensus.FBFTLog.IsBlockVerified(blk.Hash())); err != nil {
if consensus.Blockchain().CurrentBlock().NumberU64() < blk.NumberU64() {
if _, err := consensus.Blockchain().InsertChain([]*types.Block{blk}, !consensus.FBFTLog.IsBlockVerified(blk.Hash())); err != nil {
consensus.getLogger().Error().Err(err).Msg("[commitBlock] Failed to add block to chain")
return err
}
@ -716,7 +716,7 @@ func (consensus *Consensus) GenerateVrfAndProof(newHeader *block.Header) error {
return errors.New("[GenerateVrfAndProof] no leader private key provided")
}
sk := vrf_bls.NewVRFSigner(key.Pri)
previousHeader := consensus.Blockchain.GetHeaderByNumber(
previousHeader := consensus.Blockchain().GetHeaderByNumber(
newHeader.Number().Uint64() - 1,
)
if previousHeader == nil {
@ -745,7 +745,7 @@ func (consensus *Consensus) GenerateVdfAndProof(newBlock *types.Block, vrfBlockN
//derive VDF seed from VRFs generated in the current epoch
seed := [32]byte{}
for i := 0; i < consensus.VdfSeedSize(); i++ {
previousVrf := consensus.Blockchain.GetVrfByNumber(vrfBlockNumbers[i])
previousVrf := consensus.Blockchain().GetVrfByNumber(vrfBlockNumbers[i])
for j := 0; j < len(seed); j++ {
seed[j] = seed[j] ^ previousVrf[j]
}
@ -779,7 +779,7 @@ func (consensus *Consensus) GenerateVdfAndProof(newBlock *types.Block, vrfBlockN
// ValidateVdfAndProof validates the VDF/proof in the current epoch
func (consensus *Consensus) ValidateVdfAndProof(headerObj *block.Header) bool {
vrfBlockNumbers, err := consensus.Blockchain.ReadEpochVrfBlockNums(headerObj.Epoch())
vrfBlockNumbers, err := consensus.Blockchain().ReadEpochVrfBlockNums(headerObj.Epoch())
if err != nil {
consensus.getLogger().Error().Err(err).
Str("MsgBlockNum", headerObj.Number().String()).
@ -794,7 +794,7 @@ func (consensus *Consensus) ValidateVdfAndProof(headerObj *block.Header) bool {
seed := [32]byte{}
for i := 0; i < consensus.VdfSeedSize(); i++ {
previousVrf := consensus.Blockchain.GetVrfByNumber(vrfBlockNumbers[i])
previousVrf := consensus.Blockchain().GetVrfByNumber(vrfBlockNumbers[i])
for j := 0; j < len(seed); j++ {
seed[j] = seed[j] ^ previousVrf[j]
}

@ -40,8 +40,8 @@ func (consensus *Consensus) checkDoubleSign(recvMsg *FBFTMessage) bool {
return true
}
curHeader := consensus.Blockchain.CurrentHeader()
committee, err := consensus.Blockchain.ReadShardState(curHeader.Epoch())
curHeader := consensus.Blockchain().CurrentHeader()
committee, err := consensus.Blockchain().ReadShardState(curHeader.Epoch())
if err != nil {
consensus.getLogger().Err(err).
Uint32("shard", consensus.ShardID).

@ -90,7 +90,7 @@ func (dh *downloadHelper) downloadFinishedLoop() {
}
func (consensus *Consensus) addConsensusLastMile() error {
curBN := consensus.Blockchain.CurrentBlock().NumberU64()
curBN := consensus.Blockchain().CurrentBlock().NumberU64()
blockIter, err := consensus.GetLastMileBlockIter(curBN + 1)
if err != nil {
return err
@ -100,7 +100,7 @@ func (consensus *Consensus) addConsensusLastMile() error {
if block == nil {
break
}
if _, err := consensus.Blockchain.InsertChain(types.Blocks{block}, true); err != nil {
if _, err := consensus.Blockchain().InsertChain(types.Blocks{block}, true); err != nil {
return errors.Wrap(err, "failed to InsertChain")
}
}

@ -247,7 +247,7 @@ func (consensus *Consensus) onCommit(recvMsg *FBFTMessage) {
Msg("[OnCommit] Failed finding a matching block for committed message")
return
}
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain,
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain(),
blockObj.Epoch(), blockObj.Hash(), blockObj.NumberU64(), blockObj.Header().ViewID().Uint64())
logger = logger.With().
Uint64("MsgViewID", recvMsg.ViewID).

@ -46,7 +46,7 @@ func (consensus *Consensus) didReachPrepareQuorum() error {
Msg("[didReachPrepareQuorum] Unparseable block data")
return err
}
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain,
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain(),
blockObj.Epoch(), blockObj.Hash(), blockObj.NumberU64(), blockObj.Header().ViewID().Uint64())
// so by this point, everyone has committed to the blockhash of this block

@ -166,7 +166,7 @@ func (consensus *Consensus) sendCommitMessages(blockObj *types.Block) {
priKeys := consensus.getPriKeysInCommittee()
// Sign commit signature on the received block and construct the p2p messages
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain,
commitPayload := signature.ConstructCommitPayload(consensus.Blockchain(),
blockObj.Epoch(), blockObj.Hash(), blockObj.NumberU64(), blockObj.Header().ViewID().Uint64())
p2pMsgs := consensus.constructP2pMessages(msg_pb.MessageType_COMMIT, commitPayload, priKeys)
@ -336,7 +336,7 @@ func (consensus *Consensus) onCommitted(recvMsg *FBFTMessage) {
Msg("[OnCommitted] Failed to parse commit sigBytes and bitmap")
return
}
if err := consensus.Blockchain.Engine().VerifyHeaderSignature(consensus.Blockchain, blockObj.Header(),
if err := consensus.Blockchain().Engine().VerifyHeaderSignature(consensus.Blockchain(), blockObj.Header(),
sigBytes, bitmap); err != nil {
consensus.getLogger().Error().
Uint64("blockNum", recvMsg.BlockNum).
@ -358,17 +358,17 @@ func (consensus *Consensus) onCommitted(recvMsg *FBFTMessage) {
// If we already have a committed signature received before, check whether the new one
// has more signatures and if yes, override the old data.
// Otherwise, simply write the commit signature in db.
commitSigBitmap, err := consensus.Blockchain.ReadCommitSig(blockObj.NumberU64())
commitSigBitmap, err := consensus.Blockchain().ReadCommitSig(blockObj.NumberU64())
// Need to check whether this block actually was committed, because it could be another block
// with the same number that's committed and overriding its commit sigBytes is wrong.
blk := consensus.Blockchain.GetBlockByHash(blockObj.Hash())
blk := consensus.Blockchain().GetBlockByHash(blockObj.Hash())
if err == nil && len(commitSigBitmap) == len(recvMsg.Payload) && blk != nil {
new := mask.CountEnabled()
mask.SetMask(commitSigBitmap[bls.BLSSignatureSizeInBytes:])
cur := mask.CountEnabled()
if new > cur {
consensus.getLogger().Info().Hex("old", commitSigBitmap).Hex("new", recvMsg.Payload).Msg("[OnCommitted] Overriding commit signatures!!")
consensus.Blockchain.WriteCommitSig(blockObj.NumberU64(), recvMsg.Payload)
consensus.Blockchain().WriteCommitSig(blockObj.NumberU64(), recvMsg.Payload)
}
}

@ -129,10 +129,10 @@ func (consensus *Consensus) fallbackNextViewID() (uint64, time.Duration) {
// viewID is only used as the fallback mechanism to determine the nextViewID
func (consensus *Consensus) getNextViewID() (uint64, time.Duration) {
// handle corner case at first
if consensus.Blockchain == nil {
if consensus.Blockchain() == nil {
return consensus.fallbackNextViewID()
}
curHeader := consensus.Blockchain.CurrentHeader()
curHeader := consensus.Blockchain().CurrentHeader()
if curHeader == nil {
return consensus.fallbackNextViewID()
}
@ -172,12 +172,13 @@ func (consensus *Consensus) getNextLeaderKey(viewID uint64) *bls.PublicKeyWrappe
}
var lastLeaderPubKey *bls.PublicKeyWrapper
var err error
blockchain := consensus.Blockchain()
epoch := big.NewInt(0)
if consensus.Blockchain == nil {
if blockchain == nil {
consensus.getLogger().Error().Msg("[getNextLeaderKey] Blockchain is nil. Use consensus.LeaderPubKey")
lastLeaderPubKey = consensus.LeaderPubKey
} else {
curHeader := consensus.Blockchain.CurrentHeader()
curHeader := blockchain.CurrentHeader()
if curHeader == nil {
consensus.getLogger().Error().Msg("[getNextLeaderKey] Failed to get current header from blockchain")
lastLeaderPubKey = consensus.LeaderPubKey
@ -185,7 +186,7 @@ func (consensus *Consensus) getNextLeaderKey(viewID uint64) *bls.PublicKeyWrappe
stuckBlockViewID := curHeader.ViewID().Uint64() + 1
gap = int(viewID - stuckBlockViewID)
// this is the truth of the leader based on blockchain blocks
lastLeaderPubKey, err = chain.GetLeaderPubKeyFromCoinbase(consensus.Blockchain, curHeader)
lastLeaderPubKey, err = chain.GetLeaderPubKeyFromCoinbase(blockchain, curHeader)
if err != nil || lastLeaderPubKey == nil {
consensus.getLogger().Error().Err(err).
Msg("[getNextLeaderKey] Unable to get leaderPubKey from coinbase. Set it to consensus.LeaderPubKey")
@ -215,7 +216,7 @@ func (consensus *Consensus) getNextLeaderKey(viewID uint64) *bls.PublicKeyWrappe
// FIXME: rotate leader on harmony nodes only before fully externalization
var wasFound bool
var next *bls.PublicKeyWrapper
if consensus.Blockchain != nil && consensus.Blockchain.Config().IsAllowlistEpoch(epoch) {
if blockchain != nil && blockchain.Config().IsAllowlistEpoch(epoch) {
wasFound, next = consensus.Decider.NthNextHmyExt(
shard.Schedule.InstanceForEpoch(epoch),
lastLeaderPubKey,

@ -109,6 +109,10 @@ func (vc *viewChange) GetPreparedBlock(fbftlog *FBFTLog) ([]byte, []byte) {
// First 32 bytes of m1 payload is the correct block hash
copy(blockHash[:], vc.GetM1Payload())
if !fbftlog.IsBlockVerified(blockHash) {
return nil, nil
}
if block := fbftlog.GetBlockByHash(blockHash); block != nil {
encodedBlock, err := rlp.EncodeToBytes(block)
if err != nil || len(encodedBlock) == 0 {

@ -167,6 +167,8 @@ func (bc *BlockChainImpl) CommitOffChainData(
cl0, _ := bc.ReadShardLastCrossLink(crossLink.ShardID())
if cl0 == nil {
// make sure it is written at least once, so that it is overwritten below
// under "Roll up latest crosslinks"
rawdb.WriteShardLastCrossLink(batch, crossLink.ShardID(), crossLink.Serialize())
}
}

go.mod

@ -1,9 +1,9 @@
module github.com/harmony-one/harmony
go 1.18
go 1.19
require (
github.com/RoaringBitmap/roaring v1.1.0
github.com/RoaringBitmap/roaring v1.2.1
github.com/VictoriaMetrics/fastcache v1.5.7
github.com/Workiva/go-datastructures v1.0.50
github.com/allegro/bigcache v1.2.1
@ -21,80 +21,90 @@ require (
github.com/golang/protobuf v1.5.2
github.com/golangci/golangci-lint v1.22.2
github.com/gorilla/mux v1.8.0
github.com/gorilla/websocket v1.4.2
github.com/gorilla/websocket v1.5.0
github.com/harmony-one/abool v1.0.1
github.com/harmony-one/bls v0.0.6
github.com/harmony-one/taggedrlp v0.1.4
github.com/harmony-one/vdf v0.0.0-20190924175951-620379da8849
github.com/hashicorp/go-version v1.2.0
github.com/hashicorp/golang-lru v0.5.4
github.com/ipfs/go-ds-badger v0.2.7
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d
github.com/ipfs/go-ds-badger v0.3.0
github.com/json-iterator/go v1.1.12
github.com/libp2p/go-libp2p v0.14.4
github.com/libp2p/go-libp2p-core v0.8.6
github.com/libp2p/go-libp2p-crypto v0.1.0
github.com/libp2p/go-libp2p-discovery v0.5.1
github.com/libp2p/go-libp2p-kad-dht v0.11.1
github.com/libp2p/go-libp2p-pubsub v0.5.6
github.com/multiformats/go-multiaddr v0.3.3
github.com/libp2p/go-libp2p v0.24.0
github.com/libp2p/go-libp2p-kad-dht v0.19.0
github.com/libp2p/go-libp2p-pubsub v0.8.2
github.com/multiformats/go-multiaddr v0.8.0
github.com/multiformats/go-multiaddr-dns v0.3.1
github.com/natefinch/lumberjack v2.0.0+incompatible
github.com/pborman/uuid v1.2.0
github.com/pelletier/go-toml v1.9.3
github.com/pelletier/go-toml v1.9.5
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/client_golang v1.14.0
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0
github.com/rjeczalik/notify v0.9.2
github.com/rs/cors v1.7.0
github.com/rs/zerolog v1.18.0
github.com/spf13/cobra v0.0.5
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.6.1
github.com/stretchr/testify v1.7.0
github.com/spf13/viper v1.14.0
github.com/stretchr/testify v1.8.1
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7
github.com/tikv/client-go/v2 v2.0.1
github.com/whyrusleeping/timecache v0.0.0-20160911033111-cfcb2f1abfee
go.uber.org/ratelimit v0.1.0
go.uber.org/zap v1.20.0
golang.org/x/crypto v0.0.0-20210506145944-38f3c27a63bf
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba
golang.org/x/tools v0.1.7 // indirect
google.golang.org/grpc v1.43.0
google.golang.org/protobuf v1.26.0
go.uber.org/zap v1.24.0
golang.org/x/crypto v0.4.0
golang.org/x/net v0.3.0 // indirect
golang.org/x/sync v0.1.0
golang.org/x/sys v0.3.0 // indirect
golang.org/x/time v0.2.0
golang.org/x/tools v0.3.0 // indirect
google.golang.org/grpc v1.51.0
google.golang.org/protobuf v1.28.1
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce
gopkg.in/olebedev/go-duktape.v3 v3.0.0-20200619000410-60c24ae608a6
gopkg.in/yaml.v2 v2.4.0
)
require (
github.com/c2h5oh/datasize v0.0.0-20220606134207-859f65c6625b
github.com/ledgerwatch/erigon-lib v0.0.0-20221218022306-0f8fdd40c2db
github.com/ledgerwatch/log/v3 v3.6.0
)
require (
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 // indirect
github.com/BurntSushi/toml v0.3.1 // indirect
github.com/OpenPeeDeeP/depguard v1.0.1 // indirect
github.com/StackExchange/wmi v0.0.0-20180116203802-5d049714c4a6 // indirect
github.com/VictoriaMetrics/metrics v1.23.0 // indirect
github.com/aristanetworks/goarista v0.0.0-20190607111240-52c2a7864a08 // indirect
github.com/benbjohnson/clock v1.1.0 // indirect
github.com/benbjohnson/clock v1.3.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bits-and-blooms/bitset v1.2.0 // indirect
github.com/bits-and-blooms/bitset v1.2.2 // indirect
github.com/bombsimon/wsl/v2 v2.0.0 // indirect
github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/containerd/cgroups v1.0.4 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.1.0 // indirect
github.com/dgraph-io/badger v1.6.2 // indirect
github.com/dgraph-io/ristretto v0.0.3 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/edsrzf/mmap-go v1.0.0 // indirect
github.com/elastic/gosigar v0.8.1-0.20180330100440-37f05ff46ffa // indirect
github.com/fatih/color v1.10.0 // indirect
github.com/edsrzf/mmap-go v1.1.0 // indirect
github.com/elastic/gosigar v0.14.2 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/flynn/noise v1.0.0 // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/francoispqt/gojay v1.2.13 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff // indirect
github.com/go-critic/go-critic v0.4.0 // indirect
github.com/go-lintpack/lintpack v0.5.2 // indirect
github.com/go-ole/go-ole v1.2.1 // indirect
github.com/go-stack/stack v1.8.0 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 // indirect
github.com/go-toolsmith/astcast v1.0.0 // indirect
github.com/go-toolsmith/astcopy v1.0.0 // indirect
github.com/go-toolsmith/astequal v1.0.0 // indirect
@ -103,7 +113,8 @@ require (
github.com/go-toolsmith/strparse v1.0.0 // indirect
github.com/go-toolsmith/typep v1.0.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gofrs/flock v0.0.0-20190320160742-5135e617513b // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 // indirect
@ -120,140 +131,135 @@ require (
github.com/golangci/prealloc v0.0.0-20180630174525-215b22d4de21 // indirect
github.com/golangci/revgrep v0.0.0-20180526074752-d9c87f5ffaf0 // indirect
github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 // indirect
github.com/google/btree v1.0.0 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/google/pprof v0.0.0-20221203041831-ce31453925ec // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gostaticanalysis/analysisutil v0.0.0-20190318220348-4088753ea4d3 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.1.0 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/huin/goupnp v1.0.0 // indirect
github.com/huin/goupnp v1.0.3 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/ipfs/go-cid v0.0.7 // indirect
github.com/ipfs/go-datastore v0.4.5 // indirect
github.com/ipfs/go-cid v0.3.2 // indirect
github.com/ipfs/go-datastore v0.6.0 // indirect
github.com/ipfs/go-ipfs-util v0.0.2 // indirect
github.com/ipfs/go-ipns v0.0.2 // indirect
github.com/ipfs/go-ipns v0.2.0 // indirect
github.com/ipfs/go-log v1.0.5 // indirect
github.com/ipfs/go-log/v2 v2.1.3 // indirect
github.com/ipfs/go-log/v2 v2.5.1 // indirect
github.com/ipld/go-ipld-prime v0.9.0 // indirect
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
github.com/jbenet/goprocess v0.1.4 // indirect
github.com/jmespath/go-jmespath v0.3.0 // indirect
github.com/karalabe/usb v0.0.0-20190919080040-51dc0efba356 // indirect
github.com/kisielk/gotool v1.0.0 // indirect
github.com/klauspost/cpuid/v2 v2.0.4 // indirect
github.com/konsorten/go-windows-terminal-sequences v1.0.3 // indirect
github.com/koron/go-ssdp v0.0.0-20191105050749-2e1c40ed0b5d // indirect
github.com/kr/pretty v0.2.1 // indirect
github.com/kr/text v0.1.0 // indirect
github.com/libp2p/go-addr-util v0.1.0 // indirect
github.com/libp2p/go-buffer-pool v0.0.2 // indirect
github.com/klauspost/compress v1.15.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.1 // indirect
github.com/koron/go-ssdp v0.0.3 // indirect
github.com/kr/pretty v0.3.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-cidranger v1.1.0 // indirect
github.com/libp2p/go-conn-security-multistream v0.2.1 // indirect
github.com/libp2p/go-eventbus v0.2.1 // indirect
github.com/libp2p/go-flow-metrics v0.0.3 // indirect
github.com/libp2p/go-libp2p-asn-util v0.0.0-20200825225859-85005c6cf052 // indirect
github.com/libp2p/go-libp2p-autonat v0.4.2 // indirect
github.com/libp2p/go-libp2p-blankhost v0.2.0 // indirect
github.com/libp2p/go-libp2p-circuit v0.4.0 // indirect
github.com/libp2p/go-libp2p-kbucket v0.4.7 // indirect
github.com/libp2p/go-libp2p-mplex v0.4.1 // indirect
github.com/libp2p/go-libp2p-nat v0.0.6 // indirect
github.com/libp2p/go-libp2p-noise v0.2.0 // indirect
github.com/libp2p/go-libp2p-peerstore v0.2.8 // indirect
github.com/libp2p/go-libp2p-pnet v0.2.0 // indirect
github.com/libp2p/go-libp2p-record v0.1.3 // indirect
github.com/libp2p/go-libp2p-swarm v0.5.3 // indirect
github.com/libp2p/go-libp2p-tls v0.1.3 // indirect
github.com/libp2p/go-libp2p-transport-upgrader v0.4.6 // indirect
github.com/libp2p/go-libp2p-yamux v0.5.4 // indirect
github.com/libp2p/go-maddr-filter v0.1.0 // indirect
github.com/libp2p/go-mplex v0.3.0 // indirect
github.com/libp2p/go-msgio v0.0.6 // indirect
github.com/libp2p/go-nat v0.0.5 // indirect
github.com/libp2p/go-netroute v0.1.6 // indirect
github.com/libp2p/go-openssl v0.0.7 // indirect
github.com/libp2p/go-reuseport v0.0.2 // indirect
github.com/libp2p/go-reuseport-transport v0.0.5 // indirect
github.com/libp2p/go-sockaddr v0.1.1 // indirect
github.com/libp2p/go-stream-muxer-multistream v0.3.0 // indirect
github.com/libp2p/go-tcp-transport v0.2.7 // indirect
github.com/libp2p/go-ws-transport v0.4.0 // indirect
github.com/libp2p/go-yamux/v2 v2.2.0 // indirect
github.com/magiconair/properties v1.8.1 // indirect
github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.2.0 // indirect
github.com/libp2p/go-libp2p-kbucket v0.5.0 // indirect
github.com/libp2p/go-libp2p-record v0.2.0 // indirect
github.com/libp2p/go-msgio v0.2.0 // indirect
github.com/libp2p/go-nat v0.1.0 // indirect
github.com/libp2p/go-netroute v0.2.1 // indirect
github.com/libp2p/go-openssl v0.1.0 // indirect
github.com/libp2p/go-reuseport v0.2.0 // indirect
github.com/libp2p/go-yamux/v4 v4.0.0 // indirect
github.com/lucas-clemente/quic-go v0.31.0 // indirect
github.com/magiconair/properties v1.8.6 // indirect
github.com/marten-seemann/qtls-go1-18 v0.1.3 // indirect
github.com/marten-seemann/qtls-go1-19 v0.1.1 // indirect
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
github.com/matoous/godox v0.0.0-20190911065817-5d6d842e92eb // indirect
github.com/mattn/go-colorable v0.1.8 // indirect
github.com/mattn/go-isatty v0.0.12 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.16 // indirect
github.com/mattn/go-pointer v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.4 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/miekg/dns v1.1.41 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/miekg/dns v1.1.50 // indirect
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b // indirect
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc // indirect
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1 // indirect
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.3.3 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/mschoch/smat v0.2.0 // indirect
github.com/multiformats/go-base32 v0.0.3 // indirect
github.com/multiformats/go-base36 v0.1.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
github.com/multiformats/go-multibase v0.0.3 // indirect
github.com/multiformats/go-multihash v0.0.15 // indirect
github.com/multiformats/go-multistream v0.2.2 // indirect
github.com/multiformats/go-varint v0.0.6 // indirect
github.com/multiformats/go-multibase v0.1.1 // indirect
github.com/multiformats/go-multicodec v0.7.0 // indirect
github.com/multiformats/go-multihash v0.2.1 // indirect
github.com/multiformats/go-multistream v0.3.3 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/nbutton23/zxcvbn-go v0.0.0-20180912185939-ae427f1e4c1d // indirect
github.com/olekukonko/tablewriter v0.0.2-0.20190409134802-7e037d187b0c // indirect
github.com/onsi/ginkgo/v2 v2.5.1 // indirect
github.com/opencontainers/runtime-spec v1.0.2 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pelletier/go-toml/v2 v2.0.5 // indirect
github.com/pingcap/errors v0.11.5-0.20211224045212-9687c2b0f87c // indirect
github.com/pingcap/failpoint v0.0.0-20210918120811-547c13e3eb00 // indirect
github.com/pingcap/kvproto v0.0.0-20220106070556-3fa8fa04f898 // indirect
github.com/pingcap/log v0.0.0-20211215031037-e024ba4eb0ee // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.26.0 // indirect
github.com/prometheus/procfs v0.6.0 // indirect
github.com/polydawn/refmt v0.0.0-20190807091052-3d65705ee9f1 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/prometheus/tsdb v0.7.1 // indirect
github.com/raulk/go-watchdog v1.3.0 // indirect
github.com/rogpeppe/go-internal v1.6.1 // indirect
github.com/securego/gosec v0.0.0-20191002120514-e680875ea14d // indirect
github.com/sirupsen/logrus v1.6.0 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/sourcegraph/go-diff v0.5.1 // indirect
github.com/spacemonkeygo/spacelog v0.0.0-20180420211403-2296661a0572 // indirect
github.com/spf13/afero v1.1.2 // indirect
github.com/spf13/cast v1.3.0 // indirect
github.com/spf13/jwalterweatherman v1.0.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/spf13/afero v1.9.2 // indirect
github.com/spf13/cast v1.5.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4 // indirect
github.com/steakknife/bloomfilter v0.0.0-20180922174646-6819c0d2a570 // indirect
github.com/steakknife/hamming v0.0.0-20180906055917-c99c65617cd3 // indirect
github.com/stretchr/objx v0.2.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/subosito/gotenv v1.4.1 // indirect
github.com/tikv/pd/client v0.0.0-20220216070739-26c668271201 // indirect
github.com/timakin/bodyclose v0.0.0-20190930140734-f7f2e9bca95e // indirect
github.com/tommy-muehle/go-mnd v1.1.1 // indirect
github.com/torquem-ch/mdbx-go v0.27.0 // indirect
github.com/tyler-smith/go-bip39 v1.0.2 // indirect
github.com/ultraware/funlen v0.0.2 // indirect
github.com/ultraware/whitespace v0.0.4 // indirect
github.com/uudashr/gocognit v1.0.1 // indirect
github.com/valyala/fastrand v1.1.0 // indirect
github.com/valyala/histogram v1.2.0 // indirect
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 // indirect
github.com/whyrusleeping/multiaddr-filter v0.0.0-20160516205228-e903e4adabd7 // indirect
github.com/wsddn/go-ecdh v0.0.0-20161211032359-48726bab9208 // indirect
go.opencensus.io v0.23.0 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.7.0 // indirect
golang.org/x/mod v0.4.2 // indirect
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d // indirect
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e // indirect
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 // indirect
golang.org/x/text v0.3.6 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c // indirect
gopkg.in/ini.v1 v1.51.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.uber.org/atomic v1.10.0 // indirect
go.uber.org/dig v1.15.0 // indirect
go.uber.org/fx v1.18.2 // indirect
go.uber.org/multierr v1.8.0 // indirect
golang.org/x/exp v0.0.0-20221205204356-47842c84f3db // indirect
golang.org/x/mod v0.7.0 // indirect
golang.org/x/term v0.3.0 // indirect
golang.org/x/text v0.5.0 // indirect
google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
honnef.co/go/tools v0.0.1-2020.1.5 // indirect
lukechampine.com/blake3 v1.1.7 // indirect
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed // indirect
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b // indirect
mvdan.cc/unparam v0.0.0-20190720180237-d51796306d8f // indirect

go.sum

File diff suppressed because it is too large

@ -22,7 +22,7 @@ import (
"github.com/harmony-one/harmony/shard"
staking "github.com/harmony-one/harmony/staking/types"
lru "github.com/hashicorp/golang-lru"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"golang.org/x/sync/singleflight"
)

@ -4,7 +4,7 @@ import (
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
commonRPC "github.com/harmony-one/harmony/rpc/common"
"github.com/harmony-one/harmony/staking/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
)
// GetCurrentUtilityMetrics ..

@ -471,43 +471,24 @@ func applySlashes(
// Do the slashing by groups in the sorted order
for _, key := range sortedKeys {
records := groupedRecords[key]
superCommittee, err := chain.ReadShardState(big.NewInt(int64(key.epoch)))
if err != nil {
return errors.New("could not read shard state")
}
subComm, err := superCommittee.FindCommitteeByID(key.shardID)
if err != nil {
return errors.New("could not find shard committee")
}
// Apply the slashes; invariant: assumed to have been verified as legitimate slashes by this point
var slashApplied *slash.Application
votingPower, err := lookupVotingPower(
big.NewInt(int64(key.epoch)), subComm,
)
if err != nil {
return errors.Wrapf(err, "could not lookup cached voting power in slash application")
}
rate := slash.Rate(votingPower, records)
utils.Logger().Info().
Str("rate", rate.String()).
RawJSON("records", []byte(records.String())).
Msg("now applying slash to state during block finalization")
if slashApplied, err = slash.Apply(
// Apply the slashes; invariant: assumed to have been verified as legitimate slashes by this point
slashApplied, err := slash.Apply(
chain,
state,
records,
rate,
slashRewardBeneficiary,
); err != nil {
)
if err != nil {
return errors.New("[Finalize] could not apply slash")
}
utils.Logger().Info().
Str("rate", rate.String()).
RawJSON("records", []byte(records.String())).
RawJSON("applied", []byte(slashApplied.String())).
Msg("slash applied successfully")

@ -0,0 +1,387 @@
package chain
import (
"fmt"
"math/big"
"testing"
bls_core "github.com/harmony-one/bls/ffi/go/bls"
"github.com/harmony-one/harmony/block"
blockfactory "github.com/harmony-one/harmony/block/factory"
"github.com/harmony-one/harmony/consensus/engine"
consensus_sig "github.com/harmony-one/harmony/consensus/signature"
"github.com/harmony-one/harmony/crypto/bls"
"github.com/harmony-one/harmony/numeric"
"github.com/harmony-one/harmony/shard"
"github.com/harmony-one/harmony/staking/effective"
"github.com/harmony-one/harmony/staking/slash"
staking "github.com/harmony-one/harmony/staking/types"
types2 "github.com/harmony-one/harmony/staking/types"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/harmony-one/harmony/core/state"
"github.com/harmony-one/harmony/core/types"
"github.com/harmony-one/harmony/internal/params"
)
var (
bigOne = big.NewInt(1e18)
tenKOnes = new(big.Int).Mul(big.NewInt(10000), bigOne)
twentyKOnes = new(big.Int).Mul(big.NewInt(20000), bigOne)
fourtyKOnes = new(big.Int).Mul(big.NewInt(40000), bigOne)
thousandKOnes = new(big.Int).Mul(big.NewInt(1000000), bigOne)
)
const (
// validator creation parameters
doubleSignShardID = 0
doubleSignEpoch = 4
doubleSignBlockNumber = 37
doubleSignViewID = 38
creationHeight = 33
lastEpochInComm = 5
currentEpoch = 5
numShard = 4
numNodePerShard = 5
offenderShard = doubleSignShardID
offenderShardIndex = 0
)
var (
doubleSignBlock1 = makeBlockForTest(doubleSignEpoch, 0)
doubleSignBlock2 = makeBlockForTest(doubleSignEpoch, 1)
)
var (
keyPairs = genKeyPairs(25)
offIndex = offenderShard*numNodePerShard + offenderShardIndex
offAddr = makeTestAddress(offIndex)
offKey = keyPairs[offIndex]
offPub = offKey.Pub()
leaderAddr = makeTestAddress("leader")
)
// Tests that slashing works at the engine level. Since all slashing is
// thoroughly unit tested in `double-sign_test.go`, this test just makes sure
// that slashing is applied to the state.
func TestApplySlashing(t *testing.T) {
chain := makeFakeBlockChain()
state := makeTestStateDB()
header := makeFakeHeader()
current := makeDefaultValidatorWrapper()
slashes := slash.Records{makeSlashRecord()}
if err := state.UpdateValidatorWrapper(current.Address, current); err != nil {
t.Error(err)
}
if _, err := state.Commit(true); err != nil {
t.Error(err)
}
// Initial Leader's balance: 0
// Initial Validator's self-delegation: FourtyKOnes
if err := applySlashes(chain, header, state, slashes); err != nil {
t.Error(err)
}
expDelAmountAfterSlash := twentyKOnes
expRewardToBeneficiary := tenKOnes
if current.Delegations[0].Amount.Cmp(expDelAmountAfterSlash) != 0 {
t.Errorf("Slashing was not applied properly to validator: %v/%v", expDelAmountAfterSlash, current.Delegations[0].Amount)
}
beneficiaryBalanceAfterSlash := state.GetBalance(leaderAddr)
if beneficiaryBalanceAfterSlash.Cmp(expRewardToBeneficiary) != 0 {
t.Errorf("Slashing reward was not added properly to beneficiary: %v/%v", expRewardToBeneficiary, beneficiaryBalanceAfterSlash)
}
}
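For reference, the expected values in this test follow from simple arithmetic. Assuming, as the numbers imply, an effective slash rate of 50% with half of the slashed amount paid to the beneficiary (an inference from the test values, not a statement of the slashing spec), a minimal sketch:

package main

import (
	"fmt"
	"math/big"
)

func main() {
	one := big.NewInt(1e18)
	selfDelegation := new(big.Int).Mul(big.NewInt(40000), one) // fourtyKOnes
	// 50% rate implied by the expectations above (assumption, not spec).
	slashed := new(big.Int).Div(selfDelegation, big.NewInt(2))
	remaining := new(big.Int).Sub(selfDelegation, slashed) // 20k: expDelAmountAfterSlash
	reward := new(big.Int).Div(slashed, big.NewInt(2))     // 10k: expRewardToBeneficiary
	fmt.Println(remaining, reward)
}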
//
// Make slash record for testing
//
func makeSlashRecord() slash.Record {
return slash.Record{
Evidence: slash.Evidence{
ConflictingVotes: slash.ConflictingVotes{
FirstVote: makeVoteData(offKey, doubleSignBlock1),
SecondVote: makeVoteData(offKey, doubleSignBlock2),
},
Moment: slash.Moment{
Epoch: big.NewInt(doubleSignEpoch),
ShardID: doubleSignShardID,
Height: doubleSignBlockNumber,
ViewID: doubleSignViewID,
},
Offender: offAddr,
},
Reporter: makeTestAddress("reporter"),
}
}
//
// Make validator for testing
//
func makeDefaultValidatorWrapper() *staking.ValidatorWrapper {
pubKeys := []bls.SerializedPublicKey{offPub}
v := defaultTestValidator(pubKeys)
ds := staking.Delegations{}
ds = append(ds, staking.Delegation{
DelegatorAddress: offAddr,
Amount: new(big.Int).Set(fourtyKOnes),
})
return &staking.ValidatorWrapper{
Validator: v,
Delegations: ds,
}
}
func defaultTestValidator(pubKeys []bls.SerializedPublicKey) staking.Validator {
comm := staking.Commission{
CommissionRates: staking.CommissionRates{
Rate: numeric.MustNewDecFromStr("0.167983520183826780"),
MaxRate: numeric.MustNewDecFromStr("0.179184469782137200"),
MaxChangeRate: numeric.MustNewDecFromStr("0.152212761523253600"),
},
UpdateHeight: big.NewInt(10),
}
desc := staking.Description{
Name: "someoneA",
Identity: "someoneB",
Website: "someoneC",
SecurityContact: "someoneD",
Details: "someoneE",
}
return staking.Validator{
Address: offAddr,
SlotPubKeys: pubKeys,
LastEpochInCommittee: big.NewInt(lastEpochInComm),
MinSelfDelegation: new(big.Int).Set(tenKOnes),
MaxTotalDelegation: new(big.Int).Set(thousandKOnes),
Status: effective.Active,
Commission: comm,
Description: desc,
CreationHeight: big.NewInt(creationHeight),
}
}
//
// Make committee for testing
//
func makeDefaultCommittee() shard.State {
epoch := big.NewInt(doubleSignEpoch)
maker := newShardSlotMaker(keyPairs)
sstate := shard.State{
Epoch: epoch,
Shards: make([]shard.Committee, 0, int(numShard)),
}
for sid := uint32(0); sid != numShard; sid++ { // one committee per shard, matching the slice capacity above
sstate.Shards = append(sstate.Shards, makeShardBySlotMaker(sid, maker))
}
return sstate
}
type shardSlotMaker struct {
kps []blsKeyPair
i int
}
func makeShardBySlotMaker(shardID uint32, maker shardSlotMaker) shard.Committee {
cmt := shard.Committee{
ShardID: shardID,
Slots: make(shard.SlotList, 0, numNodePerShard),
}
for nid := 0; nid != numNodePerShard; nid++ {
cmt.Slots = append(cmt.Slots, maker.makeSlot())
}
return cmt
}
func newShardSlotMaker(kps []blsKeyPair) shardSlotMaker {
return shardSlotMaker{kps, 0}
}
func (maker *shardSlotMaker) makeSlot() shard.Slot {
s := shard.Slot{
EcdsaAddress: makeTestAddress(maker.i),
BLSPublicKey: maker.kps[maker.i].Pub(), // Yes, will panic when not enough kps
}
maker.i++
return s
}
//
// State DB for testing
//
func makeTestStateDB() *state.DB {
db := state.NewDatabase(rawdb.NewMemoryDatabase())
sdb, err := state.New(common.Hash{}, db)
if err != nil {
panic(err)
}
err = sdb.UpdateValidatorWrapper(offAddr, makeDefaultValidatorWrapper())
if err != nil {
panic(err)
}
return sdb
}
//
// BLS keys for testing
//
type blsKeyPair struct {
pri *bls_core.SecretKey
pub *bls_core.PublicKey
}
func genKeyPairs(size int) []blsKeyPair {
kps := make([]blsKeyPair, 0, size)
for i := 0; i != size; i++ {
kps = append(kps, genKeyPair())
}
return kps
}
func genKeyPair() blsKeyPair {
pri := bls.RandPrivateKey()
pub := pri.GetPublicKey()
return blsKeyPair{
pri: pri,
pub: pub,
}
}
func (kp blsKeyPair) Pub() bls.SerializedPublicKey {
var pub bls.SerializedPublicKey
copy(pub[:], kp.pub.Serialize())
return pub
}
func (kp blsKeyPair) Sign(block *types.Block) []byte {
chain := &fakeBlockChain{config: *params.LocalnetChainConfig}
msg := consensus_sig.ConstructCommitPayload(chain, block.Epoch(), block.Hash(),
block.Number().Uint64(), block.Header().ViewID().Uint64())
sig := kp.pri.SignHash(msg)
return sig.Serialize()
}
//
// Mock blockchain for testing
//
type fakeBlockChain struct {
config params.ChainConfig
currentBlock types.Block
superCommittee shard.State
snapshots map[common.Address]staking.ValidatorWrapper
}
func makeFakeBlockChain() *fakeBlockChain {
return &fakeBlockChain{
config: *params.LocalnetChainConfig,
currentBlock: *makeBlockForTest(currentEpoch, 0),
superCommittee: makeDefaultCommittee(),
snapshots: make(map[common.Address]staking.ValidatorWrapper),
}
}
func makeBlockForTest(epoch int64, index int) *types.Block {
h := blockfactory.NewTestHeader()
h.SetEpoch(big.NewInt(epoch))
h.SetNumber(big.NewInt(doubleSignBlockNumber))
h.SetViewID(big.NewInt(doubleSignViewID))
h.SetRoot(common.BigToHash(big.NewInt(int64(index))))
return types.NewBlockWithHeader(h)
}
func (bc *fakeBlockChain) CurrentBlock() *types.Block {
return &bc.currentBlock
}
func (bc *fakeBlockChain) CurrentHeader() *block.Header {
return bc.currentBlock.Header()
}
func (bc *fakeBlockChain) GetBlock(hash common.Hash, number uint64) *types.Block { return nil }
func (bc *fakeBlockChain) GetHeader(hash common.Hash, number uint64) *block.Header { return nil }
func (bc *fakeBlockChain) GetHeaderByHash(hash common.Hash) *block.Header { return nil }
func (bc *fakeBlockChain) ShardID() uint32 { return 0 }
func (bc *fakeBlockChain) ReadShardState(epoch *big.Int) (*shard.State, error) { return nil, nil }
func (bc *fakeBlockChain) WriteCommitSig(blockNum uint64, lastCommits []byte) error { return nil }
func (bc *fakeBlockChain) GetHeaderByNumber(number uint64) *block.Header { return nil }
func (bc *fakeBlockChain) ReadValidatorList() ([]common.Address, error) { return nil, nil }
func (bc *fakeBlockChain) ReadCommitSig(blockNum uint64) ([]byte, error) { return nil, nil }
func (bc *fakeBlockChain) ReadBlockRewardAccumulator(uint64) (*big.Int, error) { return nil, nil }
func (bc *fakeBlockChain) ValidatorCandidates() []common.Address { return nil }
func (bc *fakeBlockChain) ReadValidatorInformationAtState(addr common.Address, state *state.DB) (*staking.ValidatorWrapper, error) {
return nil, nil
}
func (bc *fakeBlockChain) ReadValidatorSnapshotAtEpoch(epoch *big.Int, offender common.Address) (*types2.ValidatorSnapshot, error) {
return &types2.ValidatorSnapshot{
Validator: makeDefaultValidatorWrapper(),
Epoch: epoch,
}, nil
}
func (bc *fakeBlockChain) ReadValidatorInformation(addr common.Address) (*staking.ValidatorWrapper, error) {
return nil, nil
}
func (bc *fakeBlockChain) Config() *params.ChainConfig {
return params.LocalnetChainConfig
}
func (bc *fakeBlockChain) StateAt(root common.Hash) (*state.DB, error) {
return nil, nil
}
func (bc *fakeBlockChain) ReadValidatorSnapshot(addr common.Address) (*staking.ValidatorSnapshot, error) {
return nil, nil
}
func (bc *fakeBlockChain) ReadValidatorStats(addr common.Address) (*staking.ValidatorStats, error) {
return nil, nil
}
func (bc *fakeBlockChain) SuperCommitteeForNextEpoch(beacon engine.ChainReader, header *block.Header, isVerify bool) (*shard.State, error) {
return nil, nil
}
//
// Fake header for testing
//
func makeFakeHeader() *block.Header {
h := blockfactory.NewTestHeader()
h.SetCoinbase(leaderAddr)
return h
}
//
// Utilities for testing
//
func makeTestAddress(item interface{}) common.Address {
s := fmt.Sprintf("harmony.one.%v", item)
return common.BytesToAddress([]byte(s))
}
func makeVoteData(kp blsKeyPair, block *types.Block) slash.Vote {
return slash.Vote{
SignerPubKeys: []bls.SerializedPublicKey{kp.Pub()},
BlockHeaderHash: block.Hash(),
Signature: kp.Sign(block),
}
}

@ -54,6 +54,7 @@ type P2pConfig struct {
MaxConnsPerIP int
DisablePrivateIPScan bool
MaxPeers int64
WaitForEachPeerToConnect bool
}
type GeneralConfig struct {
@ -223,6 +224,8 @@ type SyncConfig struct {
// TODO: Remove this bool after stream sync is fully up.
Enabled bool // enable the stream sync protocol
Downloader bool // start the sync downloader client
StagedSync bool // use staged sync
StagedSyncCfg StagedSyncConfig // staged sync configurations
Concurrency int // concurrency used for stream sync protocol
MinPeers int // minimum streams to start a sync task.
InitStreams int // minimum streams in bootstrap to start sync loop.
@ -231,3 +234,16 @@ type SyncConfig struct {
DiscHighCap int // upper limit of streams in one sync protocol
DiscBatch int // size of each discovery
}
type StagedSyncConfig struct {
TurboMode bool // turn on turbo mode
DoubleCheckBlockHashes bool // double check all block hashes before download blocks
MaxBlocksPerSyncCycle uint64 // max number of blocks per sync cycle; if set to zero, all blocks are synced in one full cycle
MaxBackgroundBlocks uint64 // max number of background blocks in turbo mode
InsertChainBatchSize int // number of blocks to build a batch and insert to chain in staged sync
MaxMemSyncCycleSize uint64 // max number of blocks handled within a single transaction in staged sync
VerifyAllSig bool // verify signatures for all blocks regardless of height and batch size
VerifyHeaderBatchSize uint64 // batch size to verify header before insert to chain
UseMemDB bool // it uses memory by default. set it to false to use disk
LogProgress bool // log the full sync progress in console
}
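For illustration, a hedged sketch of constructing this struct directly; the values below are made up for the example (not defaults from this PR), and the import path is assumed from the harmonyconfig alias used elsewhere in this diff:

package main

import (
	"fmt"

	harmonyconfig "github.com/harmony-one/harmony/internal/configs/harmony"
)

func main() {
	// Illustrative values only; not defaults taken from this PR.
	cfg := harmonyconfig.StagedSyncConfig{
		TurboMode:              true,
		DoubleCheckBlockHashes: false,
		MaxBlocksPerSyncCycle:  512, // 0 would sync all blocks in one full cycle
		MaxBackgroundBlocks:    512,
		InsertChainBatchSize:   128,
		MaxMemSyncCycleSize:    1024,
		VerifyAllSig:           false,
		VerifyHeaderBatchSize:  100,
		UseMemDB:               true,
		LogProgress:            false,
	}
	fmt.Printf("%+v\n", cfg)
}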

@ -16,8 +16,8 @@ import (
"github.com/harmony-one/harmony/multibls"
"github.com/harmony-one/harmony/shard"
"github.com/harmony-one/harmony/webhooks"
p2p_crypto "github.com/libp2p/go-libp2p-core/crypto"
"github.com/libp2p/go-libp2p-core/peer"
p2p_crypto "github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
)
@ -81,6 +81,17 @@ type ConfigType struct {
RosettaServer RosettaServerConfig // rosetta server port and ip
IsOffline bool
Downloader bool // Whether stream downloader is running; TODO: remove this after sync up
StagedSync bool // use staged sync
StagedSyncTurboMode bool // use Turbo mode for staged sync
UseMemDB bool // use mem db for staged sync
DoubleCheckBlockHashes bool
MaxBlocksPerSyncCycle uint64 // maximum number of blocks per cycle; if set to zero, all blocks are downloaded and synced in one full cycle
MaxMemSyncCycleSize uint64 // max number of blocks handled within a single transaction in staged sync
MaxBackgroundBlocks uint64 // max number of background blocks in turbo mode
InsertChainBatchSize int // number of blocks to build a batch and insert to chain in staged sync
VerifyAllSig bool // verify signatures for all blocks regardless of height and batch size
VerifyHeaderBatchSize uint64 // batch size to verify header before insert to chain
LogProgress bool // log the full sync progress in console
NtpServer string
StringRole string
P2PPriKey p2p_crypto.PrivKey `json:"-"`

@ -63,6 +63,8 @@ const (
DefaultMaxConnPerIP = 10
// DefaultMaxPeers is the maximum number of remote peers, with 0 representing no limit
DefaultMaxPeers = 0
// DefaultWaitForEachPeerToConnect, when true, makes sync connect to neighbor peers one by one, waiting for each connection to complete
DefaultWaitForEachPeerToConnect = false
)
const (

@ -0,0 +1,35 @@
package registry
import (
"sync"
"github.com/harmony-one/harmony/core"
)
// Registry consolidates services at one place.
type Registry struct {
mu sync.Mutex
blockchain core.BlockChain
}
// New creates a new registry.
func New() *Registry {
return &Registry{}
}
// SetBlockchain sets the blockchain to registry.
func (r *Registry) SetBlockchain(bc core.BlockChain) *Registry {
r.mu.Lock()
defer r.mu.Unlock()
r.blockchain = bc
return r
}
// GetBlockchain gets the blockchain from registry.
func (r *Registry) GetBlockchain() core.BlockChain {
r.mu.Lock()
defer r.mu.Unlock()
return r.blockchain
}
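The same lock-guarded setter/getter pattern extends naturally to further services; a hypothetical sketch (the Service type and field below are illustrative only and do not appear in this PR):

package registry

import "sync"

// Hypothetical extension of the pattern above.
type Service interface {
	Start() error
}

type serviceRegistry struct {
	mu      sync.Mutex
	service Service
}

func (r *serviceRegistry) SetService(s Service) *serviceRegistry {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.service = s
	return r
}

func (r *serviceRegistry) GetService() Service {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.service
}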

@ -0,0 +1,16 @@
package registry
import (
"testing"
"github.com/harmony-one/harmony/core"
"github.com/stretchr/testify/require"
)
func TestRegistry(t *testing.T) {
registry := New()
require.Nil(t, registry.GetBlockchain())
registry.SetBlockchain(core.Stub{})
require.NotNil(t, registry.GetBlockchain())
}

@ -17,7 +17,7 @@ import (
"github.com/ethereum/go-ethereum/common"
bls_core "github.com/harmony-one/bls/ffi/go/bls"
"github.com/harmony-one/harmony/crypto/bls"
p2p_crypto "github.com/libp2p/go-libp2p-core/crypto"
p2p_crypto "github.com/libp2p/go-libp2p/core/crypto"
"github.com/pkg/errors"
)

@ -5,7 +5,7 @@ import (
"os"
"testing"
crypto "github.com/libp2p/go-libp2p-core/crypto"
crypto "github.com/libp2p/go-libp2p/core/crypto"
)
// Test for GenKeyP2P; note that the length of the private key can vary
@ -52,8 +52,8 @@ func TestSaveLoadPrivateKey(t *testing.T) {
if !crypto.KeyEqual(pk, pk1) {
t.Errorf("loaded key is not right")
b1, _ := pk.Bytes()
b2, _ := pk1.Bytes()
b1, _ := pk.Raw()
b2, _ := pk1.Raw()
t.Errorf("expecting pk: %v\n", b1)
t.Errorf("got pk1: %v\n", b2)
}

@ -10,7 +10,7 @@ import (
hmy_rpc "github.com/harmony-one/harmony/rpc"
rpc_common "github.com/harmony-one/harmony/rpc/common"
"github.com/harmony-one/harmony/rpc/filters"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
)
// IsCurrentlyLeader exposes if node is currently the leader node

@ -13,6 +13,7 @@ import (
"time"
"github.com/harmony-one/harmony/consensus/engine"
"github.com/harmony-one/harmony/internal/registry"
"github.com/harmony-one/harmony/internal/shardchain/tikv_manage"
"github.com/harmony-one/harmony/internal/tikv"
"github.com/harmony-one/harmony/internal/tikv/redis_helper"
@ -26,8 +27,8 @@ import (
"github.com/harmony-one/abool"
bls_core "github.com/harmony-one/bls/ffi/go/bls"
lru "github.com/hashicorp/golang-lru"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
libp2p_pubsub "github.com/libp2p/go-libp2p-pubsub"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/rcrowley/go-metrics"
@ -39,6 +40,7 @@ import (
"github.com/harmony-one/harmony/api/service"
"github.com/harmony-one/harmony/api/service/legacysync"
"github.com/harmony-one/harmony/api/service/legacysync/downloader"
"github.com/harmony-one/harmony/api/service/stagedsync"
"github.com/harmony-one/harmony/consensus"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/core/rawdb"
@ -81,6 +83,20 @@ type syncConfig struct {
withSig bool
}
type ISync interface {
UpdateBlockAndStatus(block *types.Block, bc core.BlockChain, verifyAllSig bool) error
AddLastMileBlock(block *types.Block)
GetActivePeerNumber() int
CreateSyncConfig(peers []p2p.Peer, shardID uint32, waitForEachPeerToConnect bool) error
SyncLoop(bc core.BlockChain, worker *worker.Worker, isBeacon bool, consensus *consensus.Consensus, loopMinTime time.Duration)
IsSynchronized() bool
IsSameBlockchainHeight(bc core.BlockChain) (uint64, bool)
AddNewBlock(peerHash []byte, block *types.Block)
RegisterNodeInfo() int
GetParsedSyncStatus() (IsSynchronized bool, OtherHeight uint64, HeightDiff uint64)
GetParsedSyncStatusDoubleChecked() (IsSynchronized bool, OtherHeight uint64, HeightDiff uint64)
}
// Node represents a protocol-participating node in the network
type Node struct {
Consensus *consensus.Consensus // Consensus object containing all Consensus related data (e.g. committee members, signatures, commits)
@ -102,6 +118,7 @@ type Node struct {
syncID [SyncIDLength]byte // a unique ID for the node during the state syncing process with peers
stateSync *legacysync.StateSync
epochSync *legacysync.EpochSync
stateStagedSync *stagedsync.StagedSync
peerRegistrationRecord map[string]*syncConfig // record the registration time (unixtime) of peers that began syncing
SyncingPeerProvider SyncingPeerProvider
// The p2p host used to send/receive p2p messages
@ -127,7 +144,7 @@ type Node struct {
// BroadcastInvalidTx flag is considered when adding pending tx to tx-pool
BroadcastInvalidTx bool
// InSync flag indicates the node is in-sync or not
IsInSync *abool.AtomicBool
IsSynchronized *abool.AtomicBool
proposedBlock map[uint64]*types.Block
deciderCache *lru.Cache
@ -138,22 +155,40 @@ type Node struct {
// context control for pub-sub handling
psCtx context.Context
psCancel func()
registry *registry.Registry
}
// Blockchain returns the blockchain for the node's current shard.
func (node *Node) Blockchain() core.BlockChain {
shardID := node.NodeConfig.ShardID
bc, err := node.shardChains.ShardChain(shardID)
if err != nil {
utils.Logger().Error().
Uint32("shardID", shardID).
Err(err).
Msg("cannot get shard chain")
return node.registry.GetBlockchain()
}
func (node *Node) SyncInstance() ISync {
return node.GetOrCreateSyncInstance(true)
}
func (node *Node) CurrentSyncInstance() bool {
return node.GetOrCreateSyncInstance(false) != nil
}
// GetOrCreateSyncInstance returns an instance of state sync, either legacy or staged.
// If initiate is true, it creates the instance when one does not yet exist.
func (node *Node) GetOrCreateSyncInstance(initiate bool) ISync {
if node.NodeConfig.StagedSync {
if initiate && node.stateStagedSync == nil {
utils.Logger().Info().Msg("initializing staged state sync")
node.stateStagedSync = node.createStagedSync(node.Blockchain())
}
return bc
return node.stateStagedSync
}
if initiate && node.stateSync == nil {
utils.Logger().Info().Msg("initializing legacy state sync")
node.stateSync = node.createStateSync(node.Beaconchain())
}
return node.stateSync
}
// Beaconchain returns the beaconchain from node.
// Beaconchain returns the beacon chain from node.
func (node *Node) Beaconchain() core.BlockChain {
// tikv mode does not have BeaconChain storage
if node.HarmonyConfig != nil && node.HarmonyConfig.General.RunElasticMode && node.HarmonyConfig.General.ShardID != shard.BeaconChainShardID {
@ -990,11 +1025,15 @@ func New(
localAccounts []common.Address,
isArchival map[uint32]bool,
harmonyconfig *harmonyconfig.HarmonyConfig,
registry *registry.Registry,
) *Node {
node := Node{}
node.unixTimeAtNodeStart = time.Now().Unix()
node.TransactionErrorSink = types.NewTransactionErrorSink()
node.crosslinks = crosslinks.New()
node := Node{
registry: registry,
unixTimeAtNodeStart: time.Now().Unix(),
TransactionErrorSink: types.NewTransactionErrorSink(),
crosslinks: crosslinks.New(),
}
// Get the node config that's created in the harmony.go program.
if consensusObj != nil {
node.NodeConfig = nodeconfig.GetShardConfig(consensusObj.ShardID)
@ -1012,14 +1051,8 @@ func New(
networkType := node.NodeConfig.GetNetworkType()
chainConfig := networkType.ChainConfig()
node.chainConfig = chainConfig
for shardID, archival := range isArchival {
if archival {
collection.DisableCache(shardID)
}
}
node.shardChains = collection
node.IsInSync = abool.NewBool(false)
node.IsSynchronized = abool.NewBool(false)
if host != nil && consensusObj != nil {
// Consensus and associated channel to communicate blocks
@ -1179,7 +1212,7 @@ func (node *Node) InitConsensusWithValidators() (err error) {
Uint64("epoch", epoch.Uint64()).
Msg("[InitConsensusWithValidators] Try To Get PublicKeys")
shardState, err := committee.WithStakingEnabled.Compute(
epoch, node.Consensus.Blockchain,
epoch, node.Consensus.Blockchain(),
)
if err != nil {
utils.Logger().Err(err).
@ -1301,7 +1334,7 @@ func (node *Node) populateSelfAddresses(epoch *big.Int) {
node.keysToAddrsEpoch = epoch
shardID := node.Consensus.ShardID
shardState, err := node.Consensus.Blockchain.ReadShardState(epoch)
shardState, err := node.Consensus.Blockchain().ReadShardState(epoch)
if err != nil {
utils.Logger().Error().Err(err).
Int64("epoch", epoch.Int64()).

@ -125,7 +125,7 @@ func (node *Node) stakingMessageHandler(msgPayload []byte) {
switch txMessageType {
case proto_node.Send:
txs := staking.StakingTransactions{}
err := rlp.Decode(bytes.NewReader(msgPayload[1:]), &txs) // skip the Send messge type
err := rlp.Decode(bytes.NewReader(msgPayload[1:]), &txs) // skip the Send message type
if err != nil {
utils.Logger().Error().
Err(err).
@ -209,7 +209,7 @@ func (node *Node) BroadcastCrossLinkFromShardsToBeacon() { // leader of 1-3 shar
err = node.host.SendMessageToGroups(
[]nodeconfig.GroupID{nodeconfig.NewGroupIDByShardID(shard.BeaconChainShardID)},
p2p.ConstructMessage(
proto_node.ConstructCrossLinkMessage(node.Consensus.Blockchain, headers)),
proto_node.ConstructCrossLinkMessage(node.Consensus.Blockchain(), headers)),
)
if err != nil {
utils.Logger().Error().Err(err).Msgf("[BroadcastCrossLink] failed to broadcast message")

@ -12,6 +12,7 @@ import (
"github.com/harmony-one/harmony/crypto/bls"
"github.com/harmony-one/harmony/internal/chain"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/harmony-one/harmony/internal/registry"
"github.com/harmony-one/harmony/internal/shardchain"
"github.com/harmony-one/harmony/internal/utils"
"github.com/harmony-one/harmony/multibls"
@ -40,14 +41,19 @@ func TestAddNewBlock(t *testing.T) {
decider := quorum.NewDecider(
quorum.SuperMajorityVote, shard.BeaconChainShardID,
)
blockchain, err := collection.ShardChain(shard.BeaconChainShardID)
if err != nil {
t.Fatal("cannot get blockchain")
}
reg := registry.New().SetBlockchain(blockchain)
consensus, err := consensus.New(
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), nil, decider, 3, false,
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), reg, decider, 3, false,
)
if err != nil {
t.Fatalf("Cannot craeate consensus: %v", err)
}
nodeconfig.SetNetworkType(nodeconfig.Devnet)
node := New(host, consensus, engine, collection, nil, nil, nil, nil, nil)
node := New(host, consensus, engine, collection, nil, nil, nil, nil, nil, reg)
txs := make(map[common.Address]types.Transactions)
stks := staking.StakingTransactions{}
@ -92,8 +98,13 @@ func TestVerifyNewBlock(t *testing.T) {
decider := quorum.NewDecider(
quorum.SuperMajorityVote, shard.BeaconChainShardID,
)
blockchain, err := collection.ShardChain(shard.BeaconChainShardID)
if err != nil {
t.Fatal("cannot get blockchain")
}
reg := registry.New().SetBlockchain(blockchain)
consensus, err := consensus.New(
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), nil, decider, 3, false,
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), reg, decider, 3, false,
)
if err != nil {
t.Fatalf("Cannot craeate consensus: %v", err)
@ -101,7 +112,7 @@ func TestVerifyNewBlock(t *testing.T) {
archiveMode := make(map[uint32]bool)
archiveMode[0] = true
archiveMode[1] = false
node := New(host, consensus, engine, collection, nil, nil, nil, archiveMode, nil)
node := New(host, consensus, engine, collection, nil, nil, nil, archiveMode, nil, reg)
txs := make(map[common.Address]types.Transactions)
stks := staking.StakingTransactions{}
@ -147,8 +158,9 @@ func TestVerifyVRF(t *testing.T) {
decider := quorum.NewDecider(
quorum.SuperMajorityVote, shard.BeaconChainShardID,
)
reg := registry.New().SetBlockchain(blockchain)
consensus, err := consensus.New(
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), blockchain, decider, 3, false,
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), reg, decider, 3, false,
)
if err != nil {
t.Fatalf("Cannot craeate consensus: %v", err)
@ -156,7 +168,7 @@ func TestVerifyVRF(t *testing.T) {
archiveMode := make(map[uint32]bool)
archiveMode[0] = true
archiveMode[1] = false
node := New(host, consensus, engine, collection, nil, nil, nil, archiveMode, nil)
node := New(host, consensus, engine, collection, nil, nil, nil, archiveMode, nil, reg)
txs := make(map[common.Address]types.Transactions)
stks := staking.StakingTransactions{}

@ -12,6 +12,7 @@ import (
"github.com/harmony-one/harmony/crypto/bls"
"github.com/harmony-one/harmony/internal/chain"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/harmony-one/harmony/internal/registry"
"github.com/harmony-one/harmony/internal/shardchain"
"github.com/harmony-one/harmony/internal/utils"
"github.com/harmony-one/harmony/multibls"
@ -52,7 +53,7 @@ func TestFinalizeNewBlockAsync(t *testing.T) {
t.Fatalf("Cannot craeate consensus: %v", err)
}
node := New(host, consensus, engine, collection, nil, nil, nil, nil, nil)
node := New(host, consensus, engine, collection, nil, nil, nil, nil, nil, registry.New().SetBlockchain(blockchain))
node.Worker.UpdateCurrent()

@ -22,6 +22,7 @@ import (
"github.com/harmony-one/harmony/api/service/legacysync"
legdownloader "github.com/harmony-one/harmony/api/service/legacysync/downloader"
downloader_pb "github.com/harmony-one/harmony/api/service/legacysync/downloader/proto"
"github.com/harmony-one/harmony/api/service/stagedsync"
"github.com/harmony-one/harmony/api/service/synchronize"
"github.com/harmony-one/harmony/core"
"github.com/harmony-one/harmony/core/types"
@ -84,10 +85,7 @@ func (node *Node) DoSyncWithoutConsensus() {
// IsSameHeight tells whether node is at same bc height as a peer
func (node *Node) IsSameHeight() (uint64, bool) {
if node.stateSync == nil {
node.stateSync = node.createStateSync(node.Blockchain())
}
return node.stateSync.IsSameBlockchainHeight(node.Blockchain())
return node.SyncInstance().IsSameBlockchainHeight(node.Blockchain())
}
func (node *Node) createStateSync(bc core.BlockChain) *legacysync.StateSync {
@ -110,6 +108,42 @@ func (node *Node) createStateSync(bc core.BlockChain) *legacysync.StateSync {
node.GetSyncID(), node.NodeConfig.Role() == nodeconfig.ExplorerNode, role)
}
func (node *Node) createStagedSync(bc core.BlockChain) *stagedsync.StagedSync {
// Temp hack: the actual port used in DNS sync is node.downloaderServer.Port,
// but registration is still done through the old port arithmetic (syncPort + 3000).
// For compatibility, we do the arithmetic here rather than changing the protocol
// itself. This is only a temporary hack and will no longer be a concern after
// state sync.
var mySyncPort int
if node.downloaderServer != nil {
mySyncPort = node.downloaderServer.Port
} else {
// If the local sync server is not started, the port field in the protocol is
// not functional; simply set it to the default value.
mySyncPort = nodeconfig.DefaultDNSPort
}
mutatedPort := strconv.Itoa(mySyncPort + legacysync.SyncingPortDifference)
role := node.NodeConfig.Role()
isExplorer := node.NodeConfig.Role() == nodeconfig.ExplorerNode
if s, err := stagedsync.CreateStagedSync(node.SelfPeer.IP, mutatedPort,
node.GetSyncID(), bc, role, isExplorer,
node.NodeConfig.StagedSyncTurboMode,
node.NodeConfig.UseMemDB,
node.NodeConfig.DoubleCheckBlockHashes,
node.NodeConfig.MaxBlocksPerSyncCycle,
node.NodeConfig.MaxBackgroundBlocks,
node.NodeConfig.MaxMemSyncCycleSize,
node.NodeConfig.VerifyAllSig,
node.NodeConfig.VerifyHeaderBatchSize,
node.NodeConfig.InsertChainBatchSize,
node.NodeConfig.LogProgress); err != nil {
return nil
} else {
return s
}
}
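To make the compatibility arithmetic above concrete, a small sketch; the 3000 offset is the one the comment cites (an assumption, not verified against legacysync here), and the sample port is hypothetical:

package main

import (
	"fmt"
	"strconv"
)

// Mirrors legacysync.SyncingPortDifference as described in the comment above;
// the value 3000 is taken from that comment.
const syncingPortDifference = 3000

func main() {
	downloaderPort := 6000 // hypothetical node.downloaderServer.Port
	mutatedPort := strconv.Itoa(downloaderPort + syncingPortDifference)
	fmt.Println(mutatedPort) // "9000"
}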
// SyncingPeerProvider is an interface for getting the peers in the given shard.
type SyncingPeerProvider interface {
SyncingPeers(shardID uint32) (peers []p2p.Peer, err error)
@ -219,13 +253,13 @@ func (node *Node) doBeaconSyncing() {
continue
}
if err := node.epochSync.CreateSyncConfig(peers, shard.BeaconChainShardID); err != nil {
if err := node.epochSync.CreateSyncConfig(peers, shard.BeaconChainShardID, node.HarmonyConfig.P2P.WaitForEachPeerToConnect); err != nil {
utils.Logger().Warn().Err(err).Msg("[EPOCHSYNC] cannot create beacon sync config")
continue
}
}
<-time.After(node.epochSync.SyncLoop(node.EpochChain(), true, nil))
<-time.After(node.epochSync.SyncLoop(node.EpochChain(), nil))
}
}
@ -250,7 +284,9 @@ func (node *Node) DoSyncing(bc core.BlockChain, worker *worker.Worker, willJoinC
// doSync keep the node in sync with other peers, willJoinConsensus means the node will try to join consensus after catch up
func (node *Node) doSync(bc core.BlockChain, worker *worker.Worker, willJoinConsensus bool) {
if node.stateSync.GetActivePeerNumber() < legacysync.NumPeersLowBound {
syncInstance := node.SyncInstance()
if syncInstance.GetActivePeerNumber() < legacysync.NumPeersLowBound {
shardID := bc.ShardID()
peers, err := node.SyncingPeerProvider.SyncingPeers(shardID)
if err != nil {
@ -260,28 +296,28 @@ func (node *Node) doSync(bc core.BlockChain, worker *worker.Worker, willJoinCons
Msg("cannot retrieve syncing peers")
return
}
if err := node.stateSync.CreateSyncConfig(peers, shardID); err != nil {
if err := syncInstance.CreateSyncConfig(peers, shardID, node.HarmonyConfig.P2P.WaitForEachPeerToConnect); err != nil {
utils.Logger().Warn().
Err(err).
Interface("peers", peers).
Msg("[SYNC] create peers error")
return
}
utils.Logger().Debug().Int("len", node.stateSync.GetActivePeerNumber()).Msg("[SYNC] Get Active Peers")
utils.Logger().Debug().Int("len", syncInstance.GetActivePeerNumber()).Msg("[SYNC] Get Active Peers")
}
// TODO: treat fake maximum height
if result := node.stateSync.GetSyncStatusDoubleChecked(); !result.IsInSync {
node.IsInSync.UnSet()
if isSynchronized, _, _ := syncInstance.GetParsedSyncStatusDoubleChecked(); !isSynchronized {
node.IsSynchronized.UnSet()
if willJoinConsensus {
node.Consensus.BlocksNotSynchronized()
}
node.stateSync.SyncLoop(bc, worker, false, node.Consensus)
syncInstance.SyncLoop(bc, worker, false, node.Consensus, legacysync.LoopMinTime)
if willJoinConsensus {
node.IsInSync.Set()
node.IsSynchronized.Set()
node.Consensus.BlocksSynchronized()
}
}
node.IsInSync.Set()
node.IsSynchronized.Set()
}
// SupportGRPCSyncServer do gRPC sync server
@ -331,11 +367,16 @@ func (node *Node) supportSyncing() {
go node.SendNewBlockToUnsync()
}
if node.stateSync == nil {
if !node.NodeConfig.StagedSync && node.stateSync == nil {
node.stateSync = node.createStateSync(node.Blockchain())
utils.Logger().Debug().Msg("[SYNC] initialized state sync")
}
if node.NodeConfig.StagedSync && node.stateStagedSync == nil {
node.stateStagedSync = node.createStagedSync(node.Blockchain())
utils.Logger().Debug().Msg("[SYNC] initialized state for staged sync")
}
go node.DoSyncing(node.Blockchain(), node.Worker, joinConsensus)
}
@ -356,6 +397,7 @@ func (node *Node) StartSyncingServer(port int) {
// SendNewBlockToUnsync sends the latest verified block to unsynced, registered nodes
func (node *Node) SendNewBlockToUnsync() {
for {
block := <-node.Consensus.VerifiedNewBlock
blockBytes, err := rlp.EncodeToBytes(block)
@ -374,7 +416,7 @@ func (node *Node) SendNewBlockToUnsync() {
elapseTime := time.Now().UnixNano() - config.timestamp
if elapseTime > broadcastTimeout {
utils.Logger().Warn().Str("peerID", peerID).Msg("[SYNC] SendNewBlockToUnsync to peer timeout")
node.peerRegistrationRecord[peerID].client.Close()
node.peerRegistrationRecord[peerID].client.Close("send new block to peer timeout")
delete(node.peerRegistrationRecord, peerID)
continue
}
@ -383,13 +425,13 @@ func (node *Node) SendNewBlockToUnsync() {
sendBytes = blockWithSigBytes
}
response, err := config.client.PushNewBlock(node.GetSyncID(), sendBytes, false)
// close the connection if cannot push new block to unsync node
// close the connection if cannot push new block to not synchronized node
if err != nil {
node.peerRegistrationRecord[peerID].client.Close()
node.peerRegistrationRecord[peerID].client.Close("cannot push new block to not synchronized node")
delete(node.peerRegistrationRecord, peerID)
}
if response != nil && response.Type == downloader_pb.DownloaderResponse_INSYNC {
node.peerRegistrationRecord[peerID].client.Close()
node.peerRegistrationRecord[peerID].client.Close("node is synchronized")
delete(node.peerRegistrationRecord, peerID)
}
}
@ -403,7 +445,6 @@ func (node *Node) CalculateResponse(request *downloader_pb.DownloaderRequest, in
if node.NodeConfig.IsOffline {
return response, nil
}
switch request.Type {
case downloader_pb.DownloaderRequest_BLOCKHASH:
dnsServerRequestCounterVec.With(dnsReqMetricLabel("block_hash")).Inc()
@ -493,7 +534,7 @@ func (node *Node) CalculateResponse(request *downloader_pb.DownloaderRequest, in
// this is where the out-of-sync node acts as the gRPC server side
case downloader_pb.DownloaderRequest_NEWBLOCK:
dnsServerRequestCounterVec.With(dnsReqMetricLabel("new block")).Inc()
if node.IsInSync.IsSet() {
if node.IsSynchronized.IsSet() {
response.Type = downloader_pb.DownloaderResponse_INSYNC
return response, nil
}
@ -502,7 +543,7 @@ func (node *Node) CalculateResponse(request *downloader_pb.DownloaderRequest, in
utils.Logger().Warn().Err(err).Msg("[SYNC] unable to decode received new block")
return response, err
}
node.stateSync.AddNewBlock(request.PeerHash, block)
node.SyncInstance().AddNewBlock(request.PeerHash, block)
case downloader_pb.DownloaderRequest_REGISTER:
peerID := string(request.PeerHash[:])
@ -528,7 +569,7 @@ func (node *Node) CalculateResponse(request *downloader_pb.DownloaderRequest, in
} else {
response.Type = downloader_pb.DownloaderResponse_FAIL
syncPort := legacysync.GetSyncingPort(port)
client := legdownloader.ClientSetup(ip, syncPort)
client := legdownloader.ClientSetup(ip, syncPort, false)
if client == nil {
utils.Logger().Warn().
Str("ip", ip).
@ -546,8 +587,8 @@ func (node *Node) CalculateResponse(request *downloader_pb.DownloaderRequest, in
}
case downloader_pb.DownloaderRequest_REGISTERTIMEOUT:
if !node.IsInSync.IsSet() {
count := node.stateSync.RegisterNodeInfo()
if !node.IsSynchronized.IsSet() {
count := node.SyncInstance().RegisterNodeInfo()
utils.Logger().Debug().
Int("number", count).
Msg("[SYNC] extra node registered")
@ -752,18 +793,17 @@ func (node *Node) SyncStatus(shardID uint32) (bool, uint64, uint64) {
func (node *Node) legacySyncStatus(shardID uint32) (bool, uint64, uint64) {
switch shardID {
case node.NodeConfig.ShardID:
if node.stateSync == nil {
if node.SyncInstance() == nil {
return false, 0, 0
}
result := node.stateSync.GetSyncStatus()
return result.IsInSync, result.OtherHeight, result.HeightDiff
return node.SyncInstance().GetParsedSyncStatus()
case shard.BeaconChainShardID:
if node.epochSync == nil {
return false, 0, 0
}
result := node.epochSync.GetSyncStatus()
return result.IsInSync, result.OtherHeight, result.HeightDiff
return result.IsSynchronized, result.OtherHeight, result.HeightDiff
default:
// Shard that the node is not working on
@ -785,18 +825,19 @@ func (node *Node) IsOutOfSync(shardID uint32) bool {
func (node *Node) legacyIsOutOfSync(shardID uint32) bool {
switch shardID {
case node.NodeConfig.ShardID:
if node.stateSync == nil {
if !node.NodeConfig.StagedSync && node.stateSync == nil {
return true
} else if node.NodeConfig.StagedSync && node.stateStagedSync == nil {
return true
}
result := node.stateSync.GetSyncStatus()
return !result.IsInSync
return !node.SyncInstance().IsSynchronized()
case shard.BeaconChainShardID:
if node.epochSync == nil {
return true
}
result := node.epochSync.GetSyncStatus()
return !result.IsInSync
return !result.IsSynchronized
default:
return true

@ -10,6 +10,7 @@ import (
"github.com/harmony-one/harmony/crypto/bls"
"github.com/harmony-one/harmony/internal/chain"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/harmony-one/harmony/internal/registry"
"github.com/harmony-one/harmony/internal/shardchain"
"github.com/harmony-one/harmony/internal/utils"
"github.com/harmony-one/harmony/multibls"
@ -36,17 +37,23 @@ func TestNewNode(t *testing.T) {
decider := quorum.NewDecider(
quorum.SuperMajorityVote, shard.BeaconChainShardID,
)
chainconfig := nodeconfig.GetShardConfig(shard.BeaconChainShardID).GetNetworkType().ChainConfig()
collection := shardchain.NewCollection(
nil, testDBFactory, &core.GenesisInitializer{NetworkType: nodeconfig.GetShardConfig(shard.BeaconChainShardID).GetNetworkType()}, engine, &chainconfig,
)
blockchain, err := collection.ShardChain(shard.BeaconChainShardID)
if err != nil {
t.Fatal("cannot get blockchain")
}
reg := registry.New().SetBlockchain(blockchain)
consensus, err := consensus.New(
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), nil, decider, 3, false,
host, shard.BeaconChainShardID, multibls.GetPrivateKeys(blsKey), reg, decider, 3, false,
)
if err != nil {
t.Fatalf("Cannot craeate consensus: %v", err)
}
chainconfig := nodeconfig.GetShardConfig(shard.BeaconChainShardID).GetNetworkType().ChainConfig()
collection := shardchain.NewCollection(
nil, testDBFactory, &core.GenesisInitializer{NetworkType: nodeconfig.GetShardConfig(shard.BeaconChainShardID).GetNetworkType()}, engine, &chainconfig,
)
node := New(host, consensus, engine, collection, nil, nil, nil, nil, nil)
node := New(host, consensus, engine, collection, nil, nil, nil, nil, nil, reg)
if node.Consensus == nil {
t.Error("Consensus is not initialized for the node")
}

@ -5,11 +5,11 @@ import (
"time"
"github.com/harmony-one/harmony/internal/utils"
"github.com/libp2p/go-libp2p-core/discovery"
libp2p_host "github.com/libp2p/go-libp2p-core/host"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
libp2p_dis "github.com/libp2p/go-libp2p-discovery"
libp2p_dht "github.com/libp2p/go-libp2p-kad-dht"
"github.com/libp2p/go-libp2p/core/discovery"
libp2p_host "github.com/libp2p/go-libp2p/core/host"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
libp2p_dis "github.com/libp2p/go-libp2p/p2p/discovery/routing"
"github.com/rs/zerolog"
)

@ -3,14 +3,13 @@ package discovery
// TODO: test this module
import (
"context"
"testing"
"github.com/libp2p/go-libp2p"
)
func TestNewDHTDiscovery(t *testing.T) {
host, err := libp2p.New(context.Background())
host, err := libp2p.New()
if err != nil {
t.Fatal(err)
}

@ -1,11 +1,11 @@
package p2p
import (
"github.com/libp2p/go-libp2p-core/connmgr"
"github.com/libp2p/go-libp2p-core/control"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
libp2p_dht "github.com/libp2p/go-libp2p-kad-dht"
"github.com/libp2p/go-libp2p/core/connmgr"
"github.com/libp2p/go-libp2p/core/control"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
)

@ -12,13 +12,13 @@ import (
"time"
"github.com/libp2p/go-libp2p"
libp2p_crypto "github.com/libp2p/go-libp2p-core/crypto"
libp2p_host "github.com/libp2p/go-libp2p-core/host"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
libp2p_peerstore "github.com/libp2p/go-libp2p-core/peerstore"
"github.com/libp2p/go-libp2p-core/protocol"
libp2p_pubsub "github.com/libp2p/go-libp2p-pubsub"
libp2p_crypto "github.com/libp2p/go-libp2p/core/crypto"
libp2p_host "github.com/libp2p/go-libp2p/core/host"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
libp2p_peerstore "github.com/libp2p/go-libp2p/core/peerstore"
"github.com/libp2p/go-libp2p/core/protocol"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/rs/zerolog"
@ -88,6 +88,7 @@ type HostConfig struct {
MaxConnPerIP int
DisablePrivateIPScan bool
MaxPeers int64
WaitForEachPeerToConnect bool
}
func init() {
@ -117,7 +118,7 @@ func NewHost(cfg HostConfig) (Host, error) {
}
ctx, cancel := context.WithCancel(context.Background())
p2pHost, err := libp2p.New(ctx,
p2pHost, err := libp2p.New(
libp2p.ListenAddrs(listenAddr),
libp2p.Identity(key),
libp2p.EnableNATService(),

@ -1,9 +1,9 @@
//package p2p
// package p2p
package p2p
import (
eth_metrics "github.com/ethereum/go-ethereum/metrics"
"github.com/libp2p/go-libp2p-core/metrics"
"github.com/libp2p/go-libp2p/core/metrics"
)
const (

@ -6,7 +6,7 @@ import (
"sync/atomic"
"github.com/harmony-one/harmony/internal/utils"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
)

@ -7,25 +7,25 @@ import (
"time"
"github.com/libp2p/go-libp2p"
ic "github.com/libp2p/go-libp2p-core/crypto"
"github.com/libp2p/go-libp2p-core/host"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
ic "github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/host"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
"github.com/stretchr/testify/assert"
)
type ConnectCallback func(net network.Network, conn network.Conn) error
type DisconnectCallback func(conn network.Conn) error
type ConnectCallback func(net libp2p_network.Network, conn libp2p_network.Conn) error
type DisconnectCallback func(conn libp2p_network.Conn) error
type fakeHost struct {
onConnections []ConnectCallback
onDisconnects []DisconnectCallback
}
func (fh *fakeHost) Listen(network.Network, ma.Multiaddr) {}
func (fh *fakeHost) ListenClose(network.Network, ma.Multiaddr) {}
func (fh *fakeHost) Connected(net network.Network, conn network.Conn) {
func (fh *fakeHost) Listen(libp2p_network.Network, ma.Multiaddr) {}
func (fh *fakeHost) ListenClose(libp2p_network.Network, ma.Multiaddr) {}
func (fh *fakeHost) Connected(net libp2p_network.Network, conn libp2p_network.Conn) {
for _, function := range fh.onConnections {
if err := function(net, conn); err != nil {
fmt.Println("failed on peer connected callback")
@ -33,7 +33,7 @@ func (fh *fakeHost) Connected(net network.Network, conn network.Conn) {
}
}
func (fh *fakeHost) Disconnected(net network.Network, conn network.Conn) {
func (fh *fakeHost) Disconnected(net libp2p_network.Network, conn libp2p_network.Conn) {
for _, function := range fh.onDisconnects {
if err := function(conn); err != nil {
fmt.Println("failed on peer disconnected callback")
@ -41,8 +41,8 @@ func (fh *fakeHost) Disconnected(net network.Network, conn network.Conn) {
}
}
func (mh *fakeHost) OpenedStream(network.Network, network.Stream) {}
func (mh *fakeHost) ClosedStream(network.Network, network.Stream) {}
func (mh *fakeHost) OpenedStream(libp2p_network.Network, libp2p_network.Stream) {}
func (mh *fakeHost) ClosedStream(libp2p_network.Network, libp2p_network.Stream) {}
func (mh *fakeHost) SetConnectCallback(callback ConnectCallback) {
mh.onConnections = append(mh.onConnections, callback)
}
@ -135,7 +135,7 @@ func newPeer(port int) (host.Host, error) {
}
listenAddr := fmt.Sprintf("/ip4/0.0.0.0/tcp/%d", port)
host, err := libp2p.New(context.Background(), libp2p.ListenAddrStrings(listenAddr), libp2p.DisableRelay(), libp2p.Identity(priv), libp2p.NoSecurity)
host, err := libp2p.New(libp2p.ListenAddrStrings(listenAddr), libp2p.DisableRelay(), libp2p.Identity(priv), libp2p.NoSecurity)
if err != nil {
return nil, err
}
@ -145,20 +145,25 @@ func newPeer(port int) (host.Host, error) {
type fakeConn struct{}
func (conn *fakeConn) ID() string { return "" }
func (conn *fakeConn) NewStream(context.Context) (libp2p_network.Stream, error) { return nil, nil }
func (conn *fakeConn) GetStreams() []libp2p_network.Stream { return nil }
func (conn *fakeConn) Close() error { return nil }
func (conn *fakeConn) LocalPeer() peer.ID { return "" }
func (conn *fakeConn) LocalPrivateKey() ic.PrivKey { return nil }
func (conn *fakeConn) RemotePeer() peer.ID { return "" }
func (conn *fakeConn) RemotePublicKey() ic.PubKey { return nil }
func (conn *fakeConn) ConnState() libp2p_network.ConnectionState {
return libp2p_network.ConnectionState{}
}
func (conn *fakeConn) LocalMultiaddr() ma.Multiaddr { return nil }
func (conn *fakeConn) RemoteMultiaddr() ma.Multiaddr {
addr, _ := ma.NewMultiaddr("/ip6/fe80::7802:31ff:fee9:c093/tcp/50550")
return addr
}
func (conn *fakeConn) ID() string { return "" }
func (conn *fakeConn) NewStream(context.Context) (network.Stream, error) { return nil, nil }
func (conn *fakeConn) GetStreams() []network.Stream { return nil }
func (conn *fakeConn) Stat() network.Stat { return network.Stat{} }
func (conn *fakeConn) Stat() libp2p_network.ConnStats { return libp2p_network.ConnStats{} }
func (conn *fakeConn) Scope() libp2p_network.ConnScope { return nil }
func TestGetRemoteIP(t *testing.T) {
ip, err := getRemoteIP(&fakeConn{})
assert.Nil(t, err)

@ -4,7 +4,7 @@ import (
"container/list"
"time"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/whyrusleeping/timecache"
)

@ -6,9 +6,9 @@ import (
"github.com/ethereum/go-ethereum/event"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
p2ptypes "github.com/harmony-one/harmony/p2p/types"
"github.com/libp2p/go-libp2p-core/network"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p-core/protocol"
"github.com/libp2p/go-libp2p/core/network"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
// StreamManager is the interface for streamManager

@ -8,9 +8,9 @@ import (
"sync/atomic"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
"github.com/libp2p/go-libp2p-core/network"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p-core/protocol"
"github.com/libp2p/go-libp2p/core/network"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
var _ StreamManager = &streamManager{}

@ -10,9 +10,9 @@ import (
"github.com/harmony-one/abool"
"github.com/harmony-one/harmony/internal/utils"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
"github.com/libp2p/go-libp2p-core/network"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p-core/protocol"
"github.com/libp2p/go-libp2p/core/network"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/rs/zerolog"

@ -9,7 +9,7 @@ import (
"time"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
)
const (

@ -16,8 +16,8 @@ import (
"github.com/harmony-one/harmony/p2p/stream/common/streammanager"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
"github.com/hashicorp/go-version"
libp2p_host "github.com/libp2p/go-libp2p-core/host"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
libp2p_host "github.com/libp2p/go-libp2p/core/host"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
"github.com/rs/zerolog"
)

@ -6,8 +6,8 @@ import (
"testing"
"time"
"github.com/libp2p/go-libp2p-core/discovery"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/discovery"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
)
func TestProtocol_Match(t *testing.T) {

@ -12,7 +12,7 @@ import (
protobuf "github.com/golang/protobuf/proto"
syncpb "github.com/harmony-one/harmony/p2p/stream/protocols/sync/message"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
"github.com/pkg/errors"
"github.com/rs/zerolog"
)

@ -10,10 +10,10 @@ import (
protobuf "github.com/golang/protobuf/proto"
syncpb "github.com/harmony-one/harmony/p2p/stream/protocols/sync/message"
sttypes "github.com/harmony-one/harmony/p2p/stream/types"
ic "github.com/libp2p/go-libp2p-core/crypto"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p-core/protocol"
ic "github.com/libp2p/go-libp2p/core/crypto"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
ma "github.com/multiformats/go-multiaddr"
)
@ -204,9 +204,10 @@ func (st *testP2PStream) SetReadDeadline(time.Time) error { return nil }
func (st *testP2PStream) SetWriteDeadline(time.Time) error { return nil }
func (st *testP2PStream) ID() string { return "" }
func (st *testP2PStream) Protocol() protocol.ID { return "" }
func (st *testP2PStream) SetProtocol(protocol.ID) {}
func (st *testP2PStream) Stat() libp2p_network.Stat { return libp2p_network.Stat{} }
func (st *testP2PStream) SetProtocol(protocol.ID) error { return nil }
func (st *testP2PStream) Stat() libp2p_network.Stats { return libp2p_network.Stats{} }
func (st *testP2PStream) Conn() libp2p_network.Conn { return &fakeConn{} }
func (st *testP2PStream) Scope() libp2p_network.StreamScope { return nil }
type testRemoteBaseStream struct {
base *sttypes.BaseStream
@ -229,14 +230,18 @@ func (st *testRemoteBaseStream) WriteBytes(b []byte) error {
type fakeConn struct{}
func (conn *fakeConn) ID() string { return "" }
func (conn *fakeConn) NewStream(context.Context) (libp2p_network.Stream, error) { return nil, nil }
func (conn *fakeConn) GetStreams() []libp2p_network.Stream { return nil }
func (conn *fakeConn) Close() error { return nil }
func (conn *fakeConn) LocalPeer() peer.ID { return "" }
func (conn *fakeConn) LocalPrivateKey() ic.PrivKey { return nil }
func (conn *fakeConn) RemotePeer() peer.ID { return "" }
func (conn *fakeConn) RemotePublicKey() ic.PubKey { return nil }
func (conn *fakeConn) ConnState() libp2p_network.ConnectionState {
return libp2p_network.ConnectionState{}
}
func (conn *fakeConn) LocalMultiaddr() ma.Multiaddr { return nil }
func (conn *fakeConn) RemoteMultiaddr() ma.Multiaddr { return nil }
func (conn *fakeConn) ID() string { return "" }
func (conn *fakeConn) NewStream(context.Context) (libp2p_network.Stream, error) { return nil, nil }
func (conn *fakeConn) GetStreams() []libp2p_network.Stream { return nil }
func (conn *fakeConn) Stat() libp2p_network.Stat { return libp2p_network.Stat{} }
func (conn *fakeConn) Stat() libp2p_network.ConnStats { return libp2p_network.ConnStats{} }
func (conn *fakeConn) Scope() libp2p_network.ConnScope { return nil }
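The test doubles also pick up interface changes that came with the bump: the old shared `network.Stat` type is split into `network.ConnStats` (connections) and `network.Stats` (streams), `Stream.SetProtocol` now returns an error, and connections gain `Scope()` and `ConnState()` hooks. A minimal sketch of just the changed methods on a stub connection (the remaining `network.Conn` methods are omitted, as in the hunks above):

```go
package p2pexample

import (
	libp2p_network "github.com/libp2p/go-libp2p/core/network"
)

// stubConn sketches only the methods whose signatures changed.
type stubConn struct{}

// Stat now returns ConnStats on connections; streams return Stats instead.
func (c *stubConn) Stat() libp2p_network.ConnStats { return libp2p_network.ConnStats{} }

// Scope and ConnState are new resource-manager / security-state hooks;
// zero values are fine for a test double.
func (c *stubConn) Scope() libp2p_network.ConnScope { return nil }
func (c *stubConn) ConnState() libp2p_network.ConnectionState {
	return libp2p_network.ConnectionState{}
}
```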

@ -3,7 +3,7 @@ package sttypes
import (
p2ptypes "github.com/harmony-one/harmony/p2p/types"
"github.com/hashicorp/go-version"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
)
// Protocol is the interface of protocol to be registered to libp2p.

@ -6,7 +6,7 @@ import (
"io"
"sync"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
)

@ -11,7 +11,7 @@ import (
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/hashicorp/go-version"
libp2p_proto "github.com/libp2p/go-libp2p-core/protocol"
libp2p_proto "github.com/libp2p/go-libp2p/core/protocol"
"github.com/pkg/errors"
)

@ -6,7 +6,7 @@ import (
"strings"
"time"
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
madns "github.com/multiformats/go-multiaddr-dns"
)

@ -1,7 +1,7 @@
package p2ptypes
import (
libp2p_peer "github.com/libp2p/go-libp2p-core/peer"
libp2p_peer "github.com/libp2p/go-libp2p/core/peer"
)
// PeerID is the alias for libp2p peer ID

@ -4,7 +4,7 @@ import (
"reflect"
"testing"
libp2p_network "github.com/libp2p/go-libp2p-core/network"
libp2p_network "github.com/libp2p/go-libp2p/core/network"
"github.com/stretchr/testify/require"
)

@ -1,4 +1,4 @@
Version = "2.5.8"
Version = "2.5.9"
[BLSKeys]
KMSConfigFile = ""
@ -100,6 +100,7 @@ Version = "2.5.8"
DiscHighCap = 128
DiscSoftLowCap = 8
Downloader = false
StagedSync = false
Enabled = false
InitStreams = 8
MinPeers = 5

@ -1,4 +1,4 @@
Version = "2.5.8"
Version = "2.5.9"
[BLSKeys]
KMSConfigFile = ""
@ -100,6 +100,7 @@ Version = "2.5.8"
DiscHighCap = 128
DiscSoftLowCap = 8
Downloader = false
StagedSync = false
Enabled = false
InitStreams = 8
MinPeers = 2
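Both generated config files gain the same flag for the staged sync work in this release, alongside the version bump to 2.5.9. A sketch of the resulting block (the `[Sync]` section name is an assumption based on the surrounding Harmony config layout; field values are copied from the hunks):

```toml
[Sync]                 # section name assumed
  DiscHighCap = 128
  DiscSoftLowCap = 8
  Downloader = false
  StagedSync = false   # new in 2.5.9: toggles the staged sync pipeline
  Enabled = false
  InitStreams = 8
  MinPeers = 5         # the localnet variant uses 2
```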

@ -9,7 +9,7 @@ import (
"github.com/coinbase/rosetta-sdk-go/server"
"github.com/coinbase/rosetta-sdk-go/types"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/harmony-one/harmony/block"
"github.com/harmony-one/harmony/eth/rpc"

@ -9,7 +9,7 @@ import (
"github.com/coinbase/rosetta-sdk-go/types"
"github.com/harmony-one/harmony/rosetta/common"
commonRPC "github.com/harmony-one/harmony/rpc/common"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
)
func TestErrors(t *testing.T) {

@ -7,7 +7,7 @@ import (
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/harmony-one/harmony/internal/params"
"github.com/libp2p/go-libp2p-core/peer"
"github.com/libp2p/go-libp2p/core/peer"
)
// BlockArgs is struct to include optional block formatting params.

@ -41,9 +41,9 @@ if [ "$(uname -s)" == "Darwin" ]; then
GOOS=darwin
LIB[libbls384_256.dylib]=${BLS_DIR}/lib/libbls384_256.dylib
LIB[libmcl.dylib]=${MCL_DIR}/lib/libmcl.dylib
LIB[libgmp.10.dylib]=/usr/local/opt/gmp/lib/libgmp.10.dylib
LIB[libgmpxx.4.dylib]=/usr/local/opt/gmp/lib/libgmpxx.4.dylib
LIB[libcrypto.1.1.dylib]=/usr/local/opt/openssl/lib/libcrypto.1.1.dylib
LIB[libgmp.10.dylib]=/opt/homebrew/opt/gmp/lib/libgmp.10.dylib
LIB[libgmpxx.4.dylib]=/opt/homebrew/opt/gmp/lib/libgmpxx.4.dylib
LIB[libcrypto.1.1.dylib]=/opt/homebrew/opt/openssl@1.1/lib/libcrypto.1.1.dylib
else
MD5=md5sum
LIB[libbls384_256.so]=${BLS_DIR}/lib/libbls384_256.so

@ -19,7 +19,7 @@ case "${HMY_PATH+set}" in
fi
;;
esac
: ${OPENSSL_DIR="/usr/local/opt/openssl"}
: ${OPENSSL_DIR="/opt/homebrew/opt/openssl@1.1"}
: ${MCL_DIR="${HMY_PATH}/mcl"}
: ${BLS_DIR="${HMY_PATH}/bls"}
export CGO_CFLAGS="-I${BLS_DIR}/include -I${MCL_DIR}/include"
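Both script hunks encode the same fact: on Apple Silicon Macs, Homebrew installs under `/opt/homebrew` rather than the Intel-era `/usr/local`. Hard-coding the new prefix covers the M2 case this release targets; a more portable variant (an assumption, not what these scripts do) would ask Homebrew directly:

```bash
# Sketch: derive library prefixes from Homebrew instead of hard-coding them.
# `brew --prefix` prints /opt/homebrew on Apple Silicon and /usr/local on Intel.
if command -v brew >/dev/null 2>&1; then
    OPENSSL_DIR="$(brew --prefix openssl@1.1)"
    GMP_DIR="$(brew --prefix gmp)"
else
    OPENSSL_DIR="/opt/homebrew/opt/openssl@1.1"   # Apple Silicon default
    GMP_DIR="/opt/homebrew/opt/gmp"
fi
```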

@ -1,7 +1,6 @@
package slash
import (
"bytes"
"encoding/hex"
"encoding/json"
"math/big"
@ -10,10 +9,8 @@ import (
"github.com/harmony-one/harmony/shard"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rlp"
bls_core "github.com/harmony-one/bls/ffi/go/bls"
consensus_sig "github.com/harmony-one/harmony/consensus/signature"
"github.com/harmony-one/harmony/consensus/votepower"
"github.com/harmony-one/harmony/core/state"
"github.com/harmony-one/harmony/crypto/hash"
common2 "github.com/harmony-one/harmony/internal/common"
@ -24,33 +21,6 @@ import (
"github.com/pkg/errors"
)
// invariant assumes snapshot, current can be rlp.EncodeToBytes
func payDebt(
snapshot, current *staking.ValidatorWrapper,
slashDebt, payment *big.Int,
slashDiff *Application,
) error {
utils.Logger().Info().
RawJSON("snapshot", []byte(snapshot.String())).
RawJSON("current", []byte(current.String())).
Uint64("slash-debt", slashDebt.Uint64()).
Uint64("payment", payment.Uint64()).
RawJSON("slash-track", []byte(slashDiff.String())).
Msg("slash debt payment before application")
slashDiff.TotalSlashed.Add(slashDiff.TotalSlashed, payment)
slashDebt.Sub(slashDebt, payment)
if slashDebt.Cmp(common.Big0) == -1 {
x1, _ := rlp.EncodeToBytes(snapshot)
x2, _ := rlp.EncodeToBytes(current)
utils.Logger().Info().
Str("snapshot-rlp", hex.EncodeToString(x1)).
Str("current-rlp", hex.EncodeToString(x2)).
Msg("slashdebt balance cannot go below zero")
return errSlashDebtCannotBeNegative
}
return nil
}
// Moment ..
type Moment struct {
Epoch *big.Int `json:"epoch"`
@ -130,7 +100,7 @@ func (r Record) MarshalJSON() ([]byte, error) {
common2.MustAddressToBech32(r.Evidence.Offender)
return json.Marshal(struct {
Evidence Evidence `json:"evidence"`
Beneficiary string `json:"beneficiary"`
Reporter string `json:"reporter"`
AddressForBLSKey string `json:"offender"`
}{r.Evidence, reporter, offender})
}
@ -302,7 +272,7 @@ var (
oneDoubleSignerRate = numeric.MustNewDecFromStr("0.02")
)
// applySlashRate returns (amountPostSlash, amountOfReduction, amountOfReduction / 2)
// applySlashRate applies a decimal rate to a big.Int amount
func applySlashRate(amount *big.Int, rate numeric.Dec) *big.Int {
return numeric.NewDecFromBigInt(
amount,
@ -334,149 +304,147 @@ func (r Records) SetDifference(ys Records) Records {
return diff
}
func payDownAsMuchAsCan(
snapshot, current *staking.ValidatorWrapper,
slashDebt, nowAmt *big.Int,
slashDiff *Application,
) error {
if nowAmt.Cmp(common.Big0) == 1 && slashDebt.Cmp(common.Big0) == 1 {
// 0.50_amount > 0.06_debt => slash == 0.0, nowAmt == 0.44
if nowAmt.Cmp(slashDebt) >= 0 {
nowAmt.Sub(nowAmt, slashDebt)
if err := payDebt(
snapshot, current, slashDebt, slashDebt, slashDiff,
); err != nil {
return err
func payDownByDelegationStaked(
delegation *staking.Delegation,
slashDebt, totalSlashed *big.Int,
) {
payDown(delegation.Amount, slashDebt, totalSlashed)
}
func payDownByUndelegation(
undelegation *staking.Undelegation,
slashDebt, totalSlashed *big.Int,
) {
payDown(undelegation.Amount, slashDebt, totalSlashed)
}
func payDownByReward(
delegation *staking.Delegation,
slashDebt, totalSlashed *big.Int,
) {
payDown(delegation.Reward, slashDebt, totalSlashed)
}
func payDown(
balance, debt, totalSlashed *big.Int,
) {
slashAmount := new(big.Int).Set(debt)
if balance.Cmp(debt) < 0 {
slashAmount.Set(balance)
}
} else {
// 0.50_amount < 2.4_debt =>, slash == 1.9, nowAmt == 0.0
if err := payDebt(
snapshot, current, slashDebt, nowAmt, slashDiff,
); err != nil {
return err
balance.Sub(balance, slashAmount)
debt.Sub(debt, slashAmount)
totalSlashed.Add(totalSlashed, slashAmount)
}
func makeSlashList(snapshot, current *staking.ValidatorWrapper) ([][2]int, *big.Int) {
slashIndexPairs := make([][2]int, 0, len(snapshot.Delegations))
slashDelegations := make(map[common.Address]int, len(snapshot.Delegations))
totalStake := big.NewInt(0)
for index, delegation := range snapshot.Delegations {
slashDelegations[delegation.DelegatorAddress] = index
totalStake.Add(totalStake, delegation.Amount)
}
nowAmt.Sub(nowAmt, nowAmt)
for index, delegation := range current.Delegations {
if oldIndex, exist := slashDelegations[delegation.DelegatorAddress]; exist {
slashIndexPairs = append(slashIndexPairs, [2]int{oldIndex, index})
}
}
return nil
return slashIndexPairs, totalStake
}
// delegatorSlashApply applies slashing to all delegators including the validator.
// The validator’s self-owned stake is slashed by 50%.
// The stake of external delegators is slashed by 80% of the leader’s self-owned slashed stake, each one proportionally to their stake.
func delegatorSlashApply(
snapshot, current *staking.ValidatorWrapper,
rate numeric.Dec,
state *state.DB,
rewardBeneficiary common.Address,
doubleSignEpoch *big.Int,
slashTrack *Application,
) error {
// First delegation is validator's own stake
validatorDebt := new(big.Int).Div(snapshot.Delegations[0].Amount, common.Big2)
return delegatorSlashApplyDebt(snapshot, current, state, validatorDebt, rewardBeneficiary, doubleSignEpoch, slashTrack)
}
for _, delegationSnapshot := range snapshot.Delegations {
slashDebt := applySlashRate(delegationSnapshot.Amount, rate)
slashDiff := &Application{big.NewInt(0), big.NewInt(0)}
snapshotAddr := delegationSnapshot.DelegatorAddress
for i := range current.Delegations {
delegationNow := current.Delegations[i]
if nowAmt := delegationNow.Amount; delegationNow.DelegatorAddress == snapshotAddr {
utils.Logger().Info().
RawJSON("delegation-snapshot", []byte(delegationSnapshot.String())).
RawJSON("delegation-current", []byte(delegationNow.String())).
Uint64("initial-slash-debt", slashDebt.Uint64()).
Str("rate", rate.String()).
Msg("attempt to apply slashing based on snapshot amount to current state")
// Current delegation has some money and slashdebt is still not paid off
// so contribute as much as can with current delegation amount
if err := payDownAsMuchAsCan(
snapshot, current, slashDebt, nowAmt, slashDiff,
); err != nil {
return err
}
// delegatorSlashApplyDebt slashes the given debt from the validator's own
// delegation, then slashes external delegators by 80% of the amount actually
// taken from the validator, each one proportionally to their snapshot stake.
func delegatorSlashApplyDebt(
snapshot, current *staking.ValidatorWrapper,
state *state.DB,
validatorDebt *big.Int,
rewardBeneficiary common.Address,
doubleSignEpoch *big.Int,
slashTrack *Application,
) error {
slashIndexPairs, totalStake := makeSlashList(snapshot, current)
validatorDelegation := &current.Delegations[0]
totalExternalStake := new(big.Int).Sub(totalStake, validatorDelegation.Amount)
validatorSlashed := applySlashingToDelegation(validatorDelegation, state, rewardBeneficiary, doubleSignEpoch, validatorDebt)
totalSlashed := new(big.Int).Set(validatorSlashed)
// External delegators
// NOTE Assume did as much as could above, now check the undelegations
for i := range delegationNow.Undelegations {
undelegate := delegationNow.Undelegations[i]
// the epoch matters, only those undelegation
// such that epoch>= doubleSignEpoch should be slashable
if undelegate.Epoch.Cmp(doubleSignEpoch) >= 0 {
if slashDebt.Cmp(common.Big0) <= 0 {
utils.Logger().Info().
RawJSON("delegation-snapshot", []byte(delegationSnapshot.String())).
RawJSON("delegation-current", []byte(delegationNow.String())).
Msg("paid off the slash debt")
break
}
nowAmt := undelegate.Amount
if err := payDownAsMuchAsCan(
snapshot, current, slashDebt, nowAmt, slashDiff,
); err != nil {
return err
}
aggregateDebt := applySlashRate(validatorSlashed, numeric.MustNewDecFromStr("0.8"))
if nowAmt.Cmp(common.Big0) == 0 {
utils.Logger().Info().
RawJSON("delegation-snapshot", []byte(delegationSnapshot.String())).
RawJSON("delegation-current", []byte(delegationNow.String())).
Msg("delegation amount after paying slash debt is 0")
}
}
for _, indexPair := range slashIndexPairs[1:] {
snapshotIndex := indexPair[0]
currentIndex := indexPair[1]
delegationSnapshot := snapshot.Delegations[snapshotIndex]
delegationCurrent := &current.Delegations[currentIndex]
// A*(B/C) => (A*B)/C
// slashDebt = aggregateDebt*(Amount/totalExternalStake)
slashDebt := new(big.Int).Mul(delegationSnapshot.Amount, aggregateDebt)
slashDebt.Div(slashDebt, totalExternalStake)
slashed := applySlashingToDelegation(delegationCurrent, state, rewardBeneficiary, doubleSignEpoch, slashDebt)
totalSlashed.Add(totalSlashed, slashed)
}
// if we still have a slashdebt
// even after taking away from delegation amount
// and even after taking away from undelegate,
// then we need to take from their pending rewards
if slashDebt.Cmp(common.Big0) == 1 {
nowAmt := delegationNow.Reward
utils.Logger().Info().
RawJSON("delegation-snapshot", []byte(delegationSnapshot.String())).
RawJSON("delegation-current", []byte(delegationNow.String())).
Uint64("slash-debt", slashDebt.Uint64()).
Uint64("now-amount-reward", nowAmt.Uint64()).
Msg("needed to dig into reward to pay off slash debt")
if err := payDownAsMuchAsCan(
snapshot, current, slashDebt, nowAmt, slashDiff,
); err != nil {
// finally, kick them off forever
current.Status = effective.Banned
if err := current.SanityCheck(); err != nil {
return err
}
}
state.UpdateValidatorWrapper(current.Address, current)
beneficiaryReward := new(big.Int).Div(totalSlashed, common.Big2)
state.AddBalance(rewardBeneficiary, beneficiaryReward)
slashTrack.TotalBeneficiaryReward.Add(slashTrack.TotalBeneficiaryReward, beneficiaryReward)
slashTrack.TotalSlashed.Add(slashTrack.TotalSlashed, totalSlashed)
return nil
}
// NOTE only need to pay beneficiary here,
// they only get half of what was actually dispersed
halfOfSlashDebt := new(big.Int).Div(slashDiff.TotalSlashed, common.Big2)
slashDiff.TotalBeneficiaryReward.Add(slashDiff.TotalBeneficiaryReward, halfOfSlashDebt)
utils.Logger().Info().
RawJSON("delegation-snapshot", []byte(delegationSnapshot.String())).
RawJSON("delegation-current", []byte(delegationNow.String())).
Uint64("beneficiary-reward", halfOfSlashDebt.Uint64()).
RawJSON("application", []byte(slashDiff.String())).
Msg("completed an application of slashing")
state.AddBalance(rewardBeneficiary, halfOfSlashDebt)
slashTrack.TotalBeneficiaryReward.Add(
slashTrack.TotalBeneficiaryReward, slashDiff.TotalBeneficiaryReward,
)
slashTrack.TotalSlashed.Add(
slashTrack.TotalSlashed, slashDiff.TotalSlashed,
)
// applySlashingToDelegation applies slashing to a single delegation, given the
// amount that should be slashed, and returns the amount actually slashed; the
// caller pays the beneficiary half of the accumulated total.
func applySlashingToDelegation(delegation *staking.Delegation, state *state.DB, rewardBeneficiary common.Address, doubleSignEpoch *big.Int, slashDebt *big.Int) *big.Int {
slashed := big.NewInt(0)
debtCopy := new(big.Int).Set(slashDebt)
payDownByDelegationStaked(delegation, debtCopy, slashed)
// NOTE we have slashed as much as possible from the active stake above; now check the undelegations
for i := range delegation.Undelegations {
if debtCopy.Sign() == 0 {
break
}
undelegation := &delegation.Undelegations[i]
// the epoch matters: only undelegations with epoch >= doubleSignEpoch are slashable
if undelegation.Epoch.Cmp(doubleSignEpoch) >= 0 {
payDownByUndelegation(undelegation, debtCopy, slashed)
}
// after the loops, paid off as much as could
if slashDebt.Cmp(common.Big0) == -1 {
x1, _ := rlp.EncodeToBytes(snapshot)
x2, _ := rlp.EncodeToBytes(current)
utils.Logger().Error().Str("slash-rate", rate.String()).
Str("snapshot-rlp", hex.EncodeToString(x1)).
Str("current-rlp", hex.EncodeToString(x2)).
Msg("slash debt not paid off")
return errors.Wrapf(errSlashDebtCannotBeNegative, "amt %v", slashDebt)
}
if debtCopy.Sign() == 1 {
payDownByReward(delegation, debtCopy, slashed)
}
return nil
return slashed
}
// Apply ..
func Apply(
chain staking.ValidatorSnapshotReader, state *state.DB,
slashes Records, rate numeric.Dec, rewardBeneficiary common.Address,
slashes Records, rewardBeneficiary common.Address,
) (*Application, error) {
slashDiff := &Application{big.NewInt(0), big.NewInt(0)}
for _, slash := range slashes {
@ -498,26 +466,20 @@ func Apply(
errValidatorNotFoundDuringSlash, " %s ", err.Error(),
)
}
// NOTE invariant: first delegation is the validators own
// stake, rest are external delegations.
// Bottom line: everyone will be slashed under the same rule.
// NOTE invariant: the first delegation is the validator's own stake;
// the rest are external delegations.
if err := delegatorSlashApply(
snapshot.Validator, current, rate, state,
snapshot.Validator, current, state,
rewardBeneficiary, slash.Evidence.Epoch, slashDiff,
); err != nil {
return nil, err
}
// finally, kick them off forever
current.Status = effective.Banned
utils.Logger().Info().
RawJSON("delegation-current", []byte(current.String())).
RawJSON("slash", []byte(slash.String())).
Msg("about to update staking info for a validator after a slash")
Msg("slash applyed")
if err := current.SanityCheck(); err != nil {
return nil, err
}
}
return slashDiff, nil
}
@ -526,39 +488,3 @@ func Apply(
func IsBanned(wrapper *staking.ValidatorWrapper) bool {
return wrapper.Status == effective.Banned
}
// Rate is the slashing % rate
func Rate(votingPower *votepower.Roster, records Records) numeric.Dec {
rate := numeric.ZeroDec()
for i := range records {
doubleSignKeys := []bls.SerializedPublicKey{}
for _, pubKey1 := range records[i].Evidence.FirstVote.SignerPubKeys {
for _, pubKey2 := range records[i].Evidence.SecondVote.SignerPubKeys {
if shard.CompareBLSPublicKey(pubKey1, pubKey2) == 0 {
doubleSignKeys = append(doubleSignKeys, pubKey1)
break
}
}
}
for _, key := range doubleSignKeys {
if card, exists := votingPower.Voters[key]; exists &&
bytes.Equal(card.EarningAccount[:], records[i].Evidence.Offender[:]) {
rate = rate.Add(card.GroupPercent)
} else {
utils.Logger().Debug().
RawJSON("roster", []byte(votingPower.String())).
RawJSON("double-sign-record", []byte(records[i].String())).
Msg("did not have offenders voter card in roster as expected")
}
}
}
if rate.LT(oneDoubleSignerRate) {
rate = oneDoubleSignerRate
}
return rate
}
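Taken together, the slash.go changes replace the voting-power-derived `Rate` (deleted above, along with the `rate` parameter of `Apply`) with a fixed two-tier rule: the offender loses 50% of its self-owned stake, external delegators collectively lose 80% of that amount pro rata by snapshot stake, and the beneficiary receives half of everything slashed; each delegation pays first from active stake, then from undelegations at or after the double-sign epoch, then from pending rewards. A worked sketch using the stake amounts from the second `TestDelegatorSlashApply` case below (the `applyRate` helper is illustrative; the tests scale all values by 1e18):

```go
package main

import (
	"fmt"
	"math/big"
)

func applyRate(amount *big.Int, num, den int64) *big.Int {
	return new(big.Int).Div(new(big.Int).Mul(amount, big.NewInt(num)), big.NewInt(den))
}

func main() {
	validatorStake := big.NewInt(50000)
	externalStakes := []*big.Int{big.NewInt(40000), big.NewInt(10000)} // del1, del2

	// Tier 1: the validator loses half of its self-owned stake.
	validatorSlashed := new(big.Int).Div(validatorStake, big.NewInt(2)) // 25000

	// Tier 2: external delegators collectively owe 80% of that, pro rata.
	aggregateDebt := applyRate(validatorSlashed, 8, 10) // 20000
	totalExternal := new(big.Int).Add(externalStakes[0], externalStakes[1])

	totalSlashed := new(big.Int).Set(validatorSlashed)
	for i, stake := range externalStakes {
		// A*(B/C) computed as (A*B)/C, matching the integer math in slash.go.
		debt := new(big.Int).Div(new(big.Int).Mul(stake, aggregateDebt), totalExternal)
		totalSlashed.Add(totalSlashed, debt)
		fmt.Printf("del%d slashed: %v\n", i+1, debt) // 16000, then 4000
	}

	// The beneficiary receives half of everything slashed.
	beneficiaryReward := new(big.Int).Div(totalSlashed, big.NewInt(2))
	fmt.Println("total:", totalSlashed, "beneficiary:", beneficiaryReward) // 45000, 22500
}
```

Note that `payDown` itself mutates its three `*big.Int` arguments in place, moving `min(balance, debt)` out of the balance, off the debt, and onto the running total.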

@ -16,7 +16,6 @@ import (
bls_core "github.com/harmony-one/bls/ffi/go/bls"
blockfactory "github.com/harmony-one/harmony/block/factory"
consensus_sig "github.com/harmony-one/harmony/consensus/signature"
"github.com/harmony-one/harmony/consensus/votepower"
"github.com/harmony-one/harmony/core/state"
"github.com/harmony-one/harmony/core/types"
"github.com/harmony-one/harmony/internal/params"
@ -35,6 +34,8 @@ var (
thirtyKOnes = new(big.Int).Mul(big.NewInt(30000), bigOne)
thirtyFiveKOnes = new(big.Int).Mul(big.NewInt(35000), bigOne)
fourtyKOnes = new(big.Int).Mul(big.NewInt(40000), bigOne)
fiftyKOnes = new(big.Int).Mul(big.NewInt(50000), bigOne)
hundredKOnes = new(big.Int).Mul(big.NewInt(100000), bigOne)
thousandKOnes = new(big.Int).Mul(big.NewInt(1000000), bigOne)
)
@ -307,7 +308,7 @@ func makeSimpleRecords(indexes []int) Records {
return rs
}
func TestPayDownAsMuchAsCan(t *testing.T) {
func TestPayDown(t *testing.T) {
tests := []struct {
debt, amt *big.Int
diff *Application
@ -346,17 +347,7 @@ func TestPayDownAsMuchAsCan(t *testing.T) {
},
}
for i, test := range tests {
vwSnap := defaultValidatorWrapper()
vwCur := defaultCurrentValidatorWrapper()
err := payDownAsMuchAsCan(vwSnap, vwCur, test.debt, test.amt, test.diff)
if assErr := assertError(err, test.expErr); assErr != nil {
t.Errorf("Test %v: %v", i, assErr)
}
if err != nil || test.expErr != nil {
continue
}
payDown(test.amt, test.debt, test.diff.TotalSlashed)
if test.debt.Cmp(test.expDebt) != 0 {
t.Errorf("Test %v: unexpected debt %v / %v", i, test.debt, test.expDebt)
}
@ -374,104 +365,275 @@ func TestPayDownAsMuchAsCan(t *testing.T) {
}
}
func TestDelegatorSlashApply(t *testing.T) {
tests := []slashApplyTestCase{
func TestApplySlashingToDelegator(t *testing.T) {
tests := []applySlashingToDelegatorTestCase{
{
rate: numeric.ZeroDec(),
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
expDels: []expDelegation{
{
delegationIdx: 0,
debt: big.NewInt(0),
expDel: expDelegation{
expAmt: twentyKOnes,
expReward: tenKOnes,
expUndelAmt: []*big.Int{tenKOnes, tenKOnes},
},
{
expAmt: fourtyKOnes,
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
},
},
expSlashed: common.Big0,
expBeneficiaryReward: common.Big0,
},
{
rate: numeric.NewDecWithPrec(25, 2),
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
expDels: []expDelegation{
{
expAmt: tenKOnes,
delegationIdx: 0,
debt: twentyKOnes,
expDel: expDelegation{
expAmt: big.NewInt(0),
expReward: tenKOnes,
expUndelAmt: []*big.Int{tenKOnes, tenKOnes},
},
expSlashed: twentyKOnes,
expBeneficiaryReward: tenKOnes,
},
{
expAmt: fourtyKOnes,
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
delegationIdx: 0,
debt: twentyFiveKOnes,
expDel: expDelegation{
expAmt: big.NewInt(0),
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
expUndelAmt: []*big.Int{tenKOnes, fiveKOnes},
},
expSlashed: twentyFiveKOnes,
expBeneficiaryReward: new(big.Int).Div(twentyFiveKOnes, common.Big2),
},
{
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
delegationIdx: 0,
debt: thirtyKOnes,
expDel: expDelegation{
expAmt: big.NewInt(0),
expReward: tenKOnes,
expUndelAmt: []*big.Int{tenKOnes, big.NewInt(0)},
},
expSlashed: thirtyKOnes,
expBeneficiaryReward: new(big.Int).Div(thirtyKOnes, common.Big2),
},
{
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
delegationIdx: 0,
debt: thirtyFiveKOnes,
expDel: expDelegation{
expAmt: big.NewInt(0),
expReward: fiveKOnes,
expUndelAmt: []*big.Int{tenKOnes, big.NewInt(0)},
},
expSlashed: thirtyFiveKOnes,
expBeneficiaryReward: new(big.Int).Div(thirtyFiveKOnes, common.Big2),
},
{
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
delegationIdx: 0,
debt: fourtyKOnes,
expDel: expDelegation{
expAmt: big.NewInt(0),
expReward: big.NewInt(0),
expUndelAmt: []*big.Int{tenKOnes, big.NewInt(0)},
},
expSlashed: tenKOnes,
expBeneficiaryReward: fiveKOnes,
expSlashed: fourtyKOnes,
expBeneficiaryReward: twentyKOnes,
},
{
rate: numeric.NewDecWithPrec(625, 3),
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
delegationIdx: 0,
debt: fiftyKOnes,
expDel: expDelegation{
expAmt: big.NewInt(0),
expReward: big.NewInt(0),
expUndelAmt: []*big.Int{tenKOnes, big.NewInt(0)},
},
expSlashed: fourtyKOnes,
expBeneficiaryReward: twentyKOnes,
},
}
for i, tc := range tests {
tc.makeData()
tc.apply()
if err := tc.checkResult(); err != nil {
t.Errorf("Test %v: %v", i, err)
}
}
}
func TestDelegatorSlashApply(t *testing.T) {
tests := []slashApplyTestCase{
{
snapshot: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: fourtyKOnes,
},
}),
current: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: fourtyKOnes,
},
}),
expDels: []expDelegation{
{
expAmt: common.Big0,
expAmt: twentyKOnes,
expReward: tenKOnes,
expUndelAmt: []*big.Int{tenKOnes, fiveKOnes},
expUndelAmt: []*big.Int{},
},
},
expSlashed: twentyKOnes,
expBeneficiaryReward: tenKOnes,
},
{
expAmt: fourtyKOnes,
snapshot: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: fourtyKOnes,
},
{
address: "del1",
amount: fourtyKOnes,
},
}),
current: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: fourtyKOnes,
},
{
address: "del1",
amount: fourtyKOnes,
},
}),
expDels: []expDelegation{
{
expAmt: twentyKOnes,
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
},
{
expAmt: new(big.Int).Mul(big.NewInt(24000), bigOne),
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
},
expSlashed: twentyFiveKOnes,
expBeneficiaryReward: new(big.Int).Div(twentyFiveKOnes, common.Big2),
},
expSlashed: new(big.Int).Mul(big.NewInt(36000), bigOne),
expBeneficiaryReward: new(big.Int).Mul(big.NewInt(18000), bigOne),
},
{
rate: numeric.NewDecWithPrec(875, 3),
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
snapshot: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: fiftyKOnes,
},
{
address: "del1",
amount: fourtyKOnes,
},
{
address: "del2",
amount: tenKOnes,
},
}),
current: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: fiftyKOnes,
},
{
address: "del1",
amount: fourtyKOnes,
},
{
address: "del2",
amount: tenKOnes,
},
}),
expDels: []expDelegation{
{
expAmt: common.Big0,
expReward: fiveKOnes,
expUndelAmt: []*big.Int{tenKOnes, common.Big0},
expAmt: twentyFiveKOnes,
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
},
{
expAmt: fourtyKOnes,
expAmt: new(big.Int).Mul(big.NewInt(24000), bigOne),
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
},
{
expAmt: new(big.Int).Mul(big.NewInt(6000), bigOne),
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
},
expSlashed: thirtyFiveKOnes,
expBeneficiaryReward: new(big.Int).Div(thirtyFiveKOnes, common.Big2),
},
expSlashed: new(big.Int).Mul(big.NewInt(45000), bigOne),
expBeneficiaryReward: new(big.Int).Mul(big.NewInt(22500), bigOne),
},
{
rate: numeric.NewDecWithPrec(150, 2),
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
snapshot: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: hundredKOnes,
},
{
address: "del1",
amount: twentyKOnes,
},
{
address: "del2",
amount: twentyKOnes,
},
}),
current: generateValidatorWrapper([]testDelegation{
{
address: "off",
amount: hundredKOnes,
},
{
address: "del1",
amount: common.Big0,
historyUndel: twentyKOnes,
afterSignUndel: common.Big0,
},
{
address: "del2",
amount: common.Big0,
historyUndel: common.Big0,
afterSignUndel: twentyKOnes,
},
}),
expDels: []expDelegation{
{
expAmt: fiftyKOnes,
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
},
{
expAmt: common.Big0,
expReward: common.Big0,
expUndelAmt: []*big.Int{tenKOnes, common.Big0},
expUndelAmt: []*big.Int{twentyKOnes, common.Big0},
},
{
expAmt: fourtyKOnes,
expAmt: common.Big0,
expReward: tenKOnes,
expUndelAmt: []*big.Int{},
expUndelAmt: []*big.Int{common.Big0, common.Big0},
},
},
expSlashed: fourtyKOnes,
expBeneficiaryReward: twentyKOnes,
expSlashed: new(big.Int).Mul(big.NewInt(80000), bigOne),
expBeneficiaryReward: fourtyKOnes,
},
}
for i, tc := range tests {
tc.makeData()
tc.apply()
@ -482,9 +644,23 @@ func TestDelegatorSlashApply(t *testing.T) {
}
}
type applySlashingToDelegatorTestCase struct {
snapshot, current *staking.ValidatorWrapper
state *state.DB
beneficiary common.Address
slashTrack *Application
debt *big.Int
delegationIdx int
gotErr error
expDel expDelegation
expSlashed, expBeneficiaryReward *big.Int
expErr error
}
type slashApplyTestCase struct {
snapshot, current *staking.ValidatorWrapper
rate numeric.Dec
beneficiary common.Address
state *state.DB
@ -496,6 +672,41 @@ type slashApplyTestCase struct {
expErr error
}
func (tc *applySlashingToDelegatorTestCase) makeData() {
tc.beneficiary = leaderAddr
tc.state = makeTestStateDB()
tc.slashTrack = &Application{
TotalSlashed: new(big.Int).Set(common.Big0),
TotalBeneficiaryReward: new(big.Int).Set(common.Big0),
}
}
func (tc *applySlashingToDelegatorTestCase) apply() {
tc.gotErr = delegatorSlashApplyDebt(tc.snapshot, tc.current, tc.state, tc.debt, tc.beneficiary,
big.NewInt(doubleSignEpoch), tc.slashTrack)
}
func (tc *applySlashingToDelegatorTestCase) checkResult() error {
if err := assertError(tc.gotErr, tc.expErr); err != nil {
return err
}
if err := tc.expDel.checkDelegation(tc.current.Delegations[tc.delegationIdx]); err != nil {
return fmt.Errorf("delegations[%v]: %v", tc.delegationIdx, err)
}
if tc.slashTrack.TotalSlashed.Cmp(tc.expSlashed) != 0 {
return fmt.Errorf("unexpected total slash %v / %v", tc.slashTrack.TotalSlashed,
tc.expSlashed)
}
if tc.slashTrack.TotalBeneficiaryReward.Cmp(tc.expBeneficiaryReward) != 0 {
return fmt.Errorf("unexpected beneficiary reward %v / %v", tc.slashTrack.TotalBeneficiaryReward,
tc.expBeneficiaryReward)
}
if bal := tc.state.GetBalance(tc.beneficiary); bal.Cmp(tc.expBeneficiaryReward) != 0 {
return fmt.Errorf("unexpected balance for beneficiary %v / %v", bal, tc.expBeneficiaryReward)
}
return nil
}
func (tc *slashApplyTestCase) makeData() {
tc.beneficiary = leaderAddr
tc.state = makeTestStateDB()
@ -506,8 +717,7 @@ func (tc *slashApplyTestCase) makeData() {
}
func (tc *slashApplyTestCase) apply() {
tc.gotErr = delegatorSlashApply(tc.snapshot, tc.current, tc.rate, tc.state, tc.beneficiary,
big.NewInt(doubleSignEpoch), tc.slashTrack)
tc.gotErr = delegatorSlashApply(tc.snapshot, tc.current, tc.state, tc.beneficiary, big.NewInt(doubleSignEpoch), tc.slashTrack)
}
func (tc *slashApplyTestCase) checkResult() error {
@ -542,6 +752,13 @@ type expDelegation struct {
expUndelAmt []*big.Int
}
type testDelegation struct {
address string
amount *big.Int
historyUndel *big.Int
afterSignUndel *big.Int
}
func (ed expDelegation) checkDelegation(d staking.Delegation) error {
if d.Amount.Cmp(ed.expAmt) != 0 {
return fmt.Errorf("unexpected amount %v / %v", d.Amount, ed.expAmt)
@ -569,16 +786,14 @@ func TestApply(t *testing.T) {
snapshot: defaultSnapValidatorWrapper(),
current: defaultCurrentValidatorWrapper(),
slashes: Records{defaultSlashRecord()},
rate: numeric.NewDecWithPrec(625, 3),
expSlashed: twentyFiveKOnes,
expBeneficiaryReward: new(big.Int).Div(twentyFiveKOnes, common.Big2),
expSlashed: twentyKOnes,
expBeneficiaryReward: tenKOnes,
},
{
// missing snapshot in chain
current: defaultCurrentValidatorWrapper(),
slashes: Records{defaultSlashRecord()},
rate: numeric.NewDecWithPrec(625, 3),
expErr: errors.New("could not find validator"),
},
@ -586,7 +801,6 @@ func TestApply(t *testing.T) {
// missing vWrapper in state
snapshot: defaultSnapValidatorWrapper(),
slashes: Records{defaultSlashRecord()},
rate: numeric.NewDecWithPrec(625, 3),
expErr: errValidatorNotFoundDuringSlash,
},
@ -605,7 +819,6 @@ func TestApply(t *testing.T) {
type applyTestCase struct {
snapshot, current *staking.ValidatorWrapper
slashes Records
rate numeric.Dec
chain *fakeBlockChain
state, stateSnap *state.DB
@ -636,7 +849,7 @@ func (tc *applyTestCase) makeData(t *testing.T) {
}
func (tc *applyTestCase) apply() {
tc.gotDiff, tc.gotErr = Apply(tc.chain, tc.state, tc.slashes, tc.rate, leaderAddr)
tc.gotDiff, tc.gotErr = Apply(tc.chain, tc.state, tc.slashes, leaderAddr)
}
func (tc *applyTestCase) checkResult() error {
@ -646,14 +859,14 @@ func (tc *applyTestCase) checkResult() error {
if (tc.gotErr != nil) || (tc.expErr != nil) {
return nil
}
if tc.gotDiff.TotalBeneficiaryReward.Cmp(tc.expBeneficiaryReward) != 0 {
return fmt.Errorf("unexpected beneficiry reward %v / %v", tc.gotDiff.TotalBeneficiaryReward,
tc.expBeneficiaryReward)
}
if tc.gotDiff.TotalSlashed.Cmp(tc.expSlashed) != 0 {
return fmt.Errorf("unexpected total slash %v / %v", tc.gotDiff.TotalSlashed,
tc.expSlashed)
}
if tc.gotDiff.TotalBeneficiaryReward.Cmp(tc.expBeneficiaryReward) != 0 {
return fmt.Errorf("unexpected beneficiary reward %v / %v", tc.gotDiff.TotalBeneficiaryReward,
tc.expBeneficiaryReward)
}
if err := tc.checkState(); err != nil {
return fmt.Errorf("state check: %v", err)
}
@ -675,87 +888,12 @@ func (tc *applyTestCase) checkState() error {
if err != nil {
return err
}
if tc.rate != numeric.ZeroDec() && reflect.DeepEqual(vwSnap.Delegations, vw.Delegations) {
if reflect.DeepEqual(vwSnap.Delegations, vw.Delegations) {
return fmt.Errorf("status still unchanged")
}
return nil
}
func TestRate(t *testing.T) {
tests := []struct {
votingPower *votepower.Roster
records Records
expRate numeric.Dec
}{
{
votingPower: makeVotingPower(map[bls.SerializedPublicKey]numeric.Dec{
keyPairs[0].Pub(): numeric.NewDecWithPrec(1, 2),
keyPairs[1].Pub(): numeric.NewDecWithPrec(2, 2),
keyPairs[2].Pub(): numeric.NewDecWithPrec(3, 2),
}),
records: Records{
makeEmptyRecordWithSignerKey(keyPairs[0].Pub()),
makeEmptyRecordWithSignerKey(keyPairs[1].Pub()),
makeEmptyRecordWithSignerKey(keyPairs[2].Pub()),
},
expRate: numeric.NewDecWithPrec(6, 2),
},
{
votingPower: makeVotingPower(map[bls.SerializedPublicKey]numeric.Dec{
keyPairs[0].Pub(): numeric.NewDecWithPrec(1, 2),
}),
records: Records{
makeEmptyRecordWithSignerKey(keyPairs[0].Pub()),
},
expRate: oneDoubleSignerRate,
},
{
votingPower: makeVotingPower(map[bls.SerializedPublicKey]numeric.Dec{}),
records: Records{},
expRate: oneDoubleSignerRate,
},
{
votingPower: makeVotingPower(map[bls.SerializedPublicKey]numeric.Dec{
keyPairs[0].Pub(): numeric.NewDecWithPrec(1, 2),
keyPairs[1].Pub(): numeric.NewDecWithPrec(2, 2),
keyPairs[3].Pub(): numeric.NewDecWithPrec(3, 2),
}),
records: Records{
makeEmptyRecordWithSignerKey(keyPairs[0].Pub()),
makeEmptyRecordWithSignerKey(keyPairs[1].Pub()),
makeEmptyRecordWithSignerKey(keyPairs[2].Pub()),
},
expRate: numeric.NewDecWithPrec(3, 2),
},
}
for i, test := range tests {
rate := Rate(test.votingPower, test.records)
if rate.IsNil() || !rate.Equal(test.expRate) {
t.Errorf("Test %v: unexpected rate %v / %v", i, rate, test.expRate)
}
}
}
func makeEmptyRecordWithSignerKey(pub bls.SerializedPublicKey) Record {
var r Record
r.Evidence.SecondVote.SignerPubKeys = []bls.SerializedPublicKey{pub}
r.Evidence.FirstVote.SignerPubKeys = []bls.SerializedPublicKey{pub}
return r
}
func makeVotingPower(m map[bls.SerializedPublicKey]numeric.Dec) *votepower.Roster {
r := &votepower.Roster{
Voters: make(map[bls.SerializedPublicKey]*votepower.AccommodateHarmonyVote),
}
for pub, pct := range m {
r.Voters[pub] = &votepower.AccommodateHarmonyVote{
PureStakedVote: votepower.PureStakedVote{GroupPercent: pct},
}
}
return r
}
func defaultSlashRecord() Record {
return Record{
Evidence: Evidence{
@ -856,6 +994,45 @@ func defaultTestValidator(pubKeys []bls.SerializedPublicKey) staking.Validator {
}
}
func generateValidatorWrapper(delData []testDelegation) *staking.ValidatorWrapper {
pubKeys := []bls.SerializedPublicKey{offPub}
v := defaultTestValidator(pubKeys)
ds := generateDelegations(delData)
return &staking.ValidatorWrapper{
Validator: v,
Delegations: ds,
}
}
func generateDelegations(delData []testDelegation) staking.Delegations {
delegations := make(staking.Delegations, len(delData))
for i, del := range delData {
delegations[i] = makeDelegation(makeTestAddress(del.address), new(big.Int).Set(del.amount))
if del.historyUndel != nil {
delegations[i].Undelegations = append(
delegations[i].Undelegations,
staking.Undelegation{
Amount: new(big.Int).Set(del.historyUndel),
Epoch: big.NewInt(doubleSignEpoch - 1),
},
)
}
if del.afterSignUndel != nil {
delegations[i].Undelegations = append(
delegations[i].Undelegations,
staking.Undelegation{
Amount: new(big.Int).Set(del.afterSignUndel),
Epoch: big.NewInt(doubleSignEpoch + 1),
},
)
}
}
return delegations
}
func defaultTestDelegations() staking.Delegations {
return staking.Delegations{
makeDelegation(offAddr, new(big.Int).Set(fourtyKOnes)),

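One subtlety the new fixtures pin down is the undelegation-epoch rule: `generateDelegations` creates one undelegation at `doubleSignEpoch-1` (`historyUndel`, which a slash must leave untouched) and one at `doubleSignEpoch+1` (`afterSignUndel`, which is slashable), exercising the `epoch >= doubleSignEpoch` check in `applySlashingToDelegation`. A compact sketch of that predicate (the helper name is hypothetical):

```go
package slashexample

import "math/big"

// slashableUndelegation reports whether an undelegation can be drawn on to
// pay a slash incurred at doubleSignEpoch; it mirrors the epoch check in
// applySlashingToDelegation.
func slashableUndelegation(undelEpoch, doubleSignEpoch *big.Int) bool {
	// Undelegations made before the offense stay untouched; only those at
	// or after the double-sign epoch remain slashable.
	return undelEpoch.Cmp(doubleSignEpoch) >= 0
}
```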
@ -5,7 +5,7 @@ import (
harmony_bls "github.com/harmony-one/harmony/crypto/bls"
nodeconfig "github.com/harmony-one/harmony/internal/configs/node"
"github.com/harmony-one/harmony/p2p"
libp2p_crypto "github.com/libp2p/go-libp2p-crypto"
libp2p_crypto "github.com/libp2p/go-libp2p/core/crypto"
"github.com/pkg/errors"
)
