utils.Logger().Info().Msgf("[SYNC] Node is now IN SYNC! (isBeacon: %t, ShardID: %d, otherHeight: %d, currentHeight: %d)", isBeacon, bc.ShardID(), otherHeight, currentHeight)
To reach consensus on the next block, there are 3 phases: announce (i.e. pre-prepare in PBFT), prepare, and commit.
* Announce(leader): The leader broadcasts an ANNOUNCE message along with the candidate for the next block.
* Prepare(validator): The validator validates the block sent by the leader and sends a PREPARE message; if the block is invalid, the validator proposes a view change. If the prepare phase times out, the validator also proposes a view change.
* Prepared(leader): The leader collects 2f+1 PREPARE messages (including its own) and broadcasts a PREPARED message with the aggregated signature (2f+1 is the quorum threshold; see the sketch after this list).
* Commit(validator): The validator checks the validity of the aggregated signature (# of signatures >= 2f+1) and sends a COMMIT message; if the commit phase times out, the validator also proposes a view change.
* Committed(leader): The leader collects 2f+1 COMMIT messages (including its own) and broadcasts a COMMITTED message with the aggregated signature.
* Finalize(leader and validators): Both the leader and validators finalize the block into the blockchain together with the 2f+1 aggregated signatures.
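The 2f+1 counting in the Prepared and Committed steps is the standard BFT quorum for a committee of n = 3f+1 validators. The following is a minimal, illustrative Go sketch of that arithmetic; it assumes equally weighted validators and is not the actual `quorum.Decider` logic.

```golang
package main

import "fmt"

// quorumThreshold returns the minimal number of signatures (2f+1) needed for
// a committee of n validators, where f = (n-1)/3 is the number of tolerated
// faulty nodes under the usual n = 3f+1 assumption.
func quorumThreshold(n int) int {
    f := (n - 1) / 3
    return 2*f + 1
}

func main() {
    for _, n := range []int{4, 100, 250} {
        fmt.Printf("committee=%d quorum=%d\n", n, quorumThreshold(n))
    }
}
```

For example, a committee of 100 nodes tolerates f = 33 faulty nodes and needs 67 signatures before the leader may broadcast PREPARED or COMMITTED.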
### View changing mode
* ViewChange(validator): Whenever a validator receives an invalid block/signature from the leader, it sends a VIEWCHANGE message for view v+1 together with its own prepared message (>= 2f+1 aggregated prepare signatures) from previous views.
* NewView(new leader): When the new leader (uniquely determined) collects enough (2f+1) view change messages, it broadcasts the NEWVIEW message with the aggregated VIEWCHANGE signatures.
* During the view changing process, if the new leader does not send the NEWVIEW message on time, the validators will propose a ViewChange for the next view v+2, and so on (see the escalation sketch below).
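To make the v+1, v+2, ... escalation concrete, here is a hedged sketch of how a validator could derive the new leader for each proposed view and keep escalating while no valid NEWVIEW arrives. The helper names and the round-robin leader rotation are illustrative assumptions, not the consensus package's actual API.

```golang
package main

import (
    "fmt"
    "time"
)

// nextLeader picks the leader for a view by deterministic rotation, so every
// honest validator derives the same new leader for view v+1, v+2, ...
func nextLeader(view uint64, committee []string) string {
    return committee[view%uint64(len(committee))]
}

func main() {
    committee := []string{"node0", "node1", "node2", "node3"}
    currentView := uint64(7)
    timeout := 10 * time.Second

    for attempt := uint64(1); attempt <= 3; attempt++ {
        proposed := currentView + attempt // v+1, v+2, v+3 ...
        fmt.Printf("VIEWCHANGE for view %d -> leader %s (timeout %s)\n",
            proposed, nextLeader(proposed, committee), timeout)
        // In the real protocol the validator would now wait for a NEWVIEW
        // message and only escalate to the next view when the timer expires.
    }
}
```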
## State Machine
The whole process of PBFT can be described as a state machine. We don't separate the roles of leader and validators; instead we use the Consensus structure below to describe the role and phase of a given node joining the consensus process. When a node receives a new message from its peers, its state is updated, i.e. `pbft_state --(upon receiving a new PBFT message)--> new_pbft_state`. Thus the most natural and clear way is to describe the whole process as a state machine. A transition sketch follows the struct below.
```golang
// Consensus is the main struct with all states and data related to consensus process.
type Consensus struct {
    Decider quorum.Decider
    // FBFTLog stores the pbft messages and blocks during FBFT process
    FBFTLog *FBFTLog
    // phase: different phase of FBFT protocol: pre-prepare, prepare, commit, finish etc
    phase FBFTPhase
    // current indicates what state a node is in
    current State
    // epoch: current epoch number
    epoch uint64
    // blockNum: the next blockNumber that FBFT is going to agree on,
    // should be equal to the blockNumber of next block
    blockNum uint64
    // channel to receive consensus message
    MsgChan chan []byte

    // ...

    consensusTimeout map[TimeoutType]*utils.Timeout

    // Signatures collected from validators.
    prepareSigs          map[string]*bls.Sign // key is the bls public key
    commitSigs           map[string]*bls.Sign // key is the bls public key
    aggregatedPrepareSig *bls.Sign
    aggregatedCommitSig  *bls.Sign
    prepareBitmap        *bls_cosi.Mask
    commitBitmap         *bls_cosi.Mask

    // Signatures collected during view change
    // bhpSigs: blockHashPreparedSigs is the signature on m1 type message
    bhpSigs map[string]*bls.Sign
    // nilSigs: there is no prepared message when view change,
    // it's signature on m2 type (i.e. nil) messages
    nilSigs map[string]*bls.Sign
    // viewIDSigs: every validator sign on |viewID|blockHash| in view changing message
    viewIDSigs   map[string]*bls.Sign
    bhpBitmap    *bls_cosi.Mask
    nilBitmap    *bls_cosi.Mask
    viewIDBitmap *bls_cosi.Mask

    // ...

    // Leader's address
    leader p2p.Peer
    // Public keys of the committee including leader and validators
    CommitteePublicKeys map[string]bool
    pubKeyLock          sync.Mutex

    // ...

    // will trigger state syncing when blockNum is low
    blockNumLowChan chan struct{}
    // Channel for DRG protocol to send pRnd (preimage of randomness resulting from combined vrf
    // randomnesses) to consensus. The first 32 bytes are randomness, the rest is for bitmap.
    PRndChannel chan []byte
    // Channel for DRG protocol to send VDF. The first 516 bytes are the VDF/Proof and the last 32
    // bytes are the seed for deriving VDF
    RndChannel  chan [vdfAndSeedSize]byte
    pendingRnds [][vdfAndSeedSize]byte // A list of pending randomness
}
```
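As a companion to the struct above, the following is an illustrative sketch of the `pbft_state --(message)--> new_pbft_state` transition for a validator. The phase and message names are hypothetical stand-ins; the real handlers live in the consensus package and also verify signatures and quorum before moving on.

```golang
package main

import "fmt"

// FBFTPhase is a stand-in for the node's phase within one consensus round.
type FBFTPhase int

const (
    Announce FBFTPhase = iota
    Prepare
    Commit
    Finished
)

// msgType is a stand-in for the consensus message kinds a validator receives.
type msgType int

const (
    msgAnnounce msgType = iota
    msgPrepared
    msgCommitted
)

// transition returns the next phase for a validator given its current phase
// and the message it just received; unknown combinations keep the old phase.
func transition(phase FBFTPhase, msg msgType) FBFTPhase {
    switch {
    case phase == Announce && msg == msgAnnounce:
        return Prepare // block validated, PREPARE sent
    case phase == Prepare && msg == msgPrepared:
        return Commit // 2f+1 prepare signatures verified, COMMIT sent
    case phase == Commit && msg == msgCommitted:
        return Finished // 2f+1 commit signatures verified, block finalized
    default:
        return phase
    }
}

func main() {
    phase := Announce
    for _, m := range []msgType{msgAnnounce, msgPrepared, msgCommitted} {
        phase = transition(phase, m)
        fmt.Println("new phase:", phase)
    }
}
```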
Each service needs to implement a minimal interface with behaviors like Start and Stop so that the service manager can handle those operations; a toy example follows the interface below.
```golang
// ServiceInterface is the collection of functions any service needs to implement.
type ServiceInterface interface {
    StartService()
    StopService()
}
```
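For illustration, here is a toy service satisfying `ServiceInterface`. The concrete service is hypothetical; it only shows the minimal shape the service manager expects.

```golang
package main

import "fmt"

// ServiceInterface is reproduced from the snippet above.
type ServiceInterface interface {
    StartService()
    StopService()
}

// pingService is a toy service the manager could start and stop.
type pingService struct {
    running bool
}

func (p *pingService) StartService() {
    p.running = true
    fmt.Println("ping service started")
}

func (p *pingService) StopService() {
    p.running = false
    fmt.Println("ping service stopped")
}

func main() {
    var s ServiceInterface = &pingService{}
    s.StartService()
    s.StopService()
}
```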
### Creating a service.
Action is the input to operate the Service Manager. We can send an action to the action channel of the service manager to start or stop a service, as sketched after the struct below.
```golang
// Action is type of service action.
type Action struct {
    action ActionType
    // ... (remaining fields elided)
}
```
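A hedged usage sketch of the pattern follows: a caller pushes an `Action` onto the manager's action channel to request that a service be started. The `serviceType` field, the constants, and the channel wiring are illustrative assumptions, not the exact service manager API.

```golang
package main

import "fmt"

// ActionType says whether a service should be started or stopped.
type ActionType int

const (
    Start ActionType = iota
    Stop
)

// Type identifies which service the action targets.
type Type int

const ClientSupport Type = 0

// Action mirrors the shape of the struct above: what to do, and to which service.
type Action struct {
    action      ActionType
    serviceType Type
}

func main() {
    actionChan := make(chan Action, 1)

    // A node that just became leader could request an extra service like this.
    actionChan <- Action{action: Start, serviceType: ClientSupport}

    a := <-actionChan
    fmt.Printf("service manager received: %+v\n", a)
}
```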
We have enabled libp2p-based gossiping using pubsub. Nodes no longer send messages to individual nodes. All message communication is via the SendMessageToGroups function (a sketch follows the topic list below).
* There are 4 topics for sending and receiving messages:
  * **GroupIDBeacon**: This topic serves for consensus within the beaconchain.
  * **GroupIDBeaconClient**: This topic serves for receipt of staking transactions by the beacon chain and broadcast of blocks (by the beacon leader).
  * **GroupIDShard** (_under construction_): This topic serves for consensus-related and pingpong messages within the shard.
  * **GroupIDShardClient** (_under construction_): This topic serves to receive transactions from clients and send confirmed blocks back to the client. The shard leader (only) sends back the confirmed blocks.
* Beacon chain nodes need to subscribe to _TWO_ topics:
  * **GroupIDBeacon**
  * **GroupIDBeaconClient**
* Every new node other than beacon chain nodes, including wallet, needs to subscribe to _THREE_ topics.
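Below is a hedged sketch of the group-broadcast pattern described above. The `GroupID` values, the `broadcaster` interface, and `sendMessageToGroups` are illustrative stand-ins for the node's actual libp2p pubsub wiring behind SendMessageToGroups.

```golang
package main

import "fmt"

// GroupID names a pubsub topic.
type GroupID string

const (
    GroupIDBeacon       GroupID = "harmony/beacon"
    GroupIDBeaconClient GroupID = "harmony/beacon/client"
)

// broadcaster abstracts whatever pubsub layer sits underneath.
type broadcaster interface {
    Publish(topic GroupID, msg []byte) error
}

// sendMessageToGroups mirrors the documented pattern: the same payload is
// gossiped to each listed group rather than sent to individual peers.
func sendMessageToGroups(b broadcaster, groups []GroupID, msg []byte) error {
    for _, g := range groups {
        if err := b.Publish(g, msg); err != nil {
            return err
        }
    }
    return nil
}

// printBroadcaster just logs what would be published.
type printBroadcaster struct{}

func (printBroadcaster) Publish(topic GroupID, msg []byte) error {
    fmt.Printf("publish %d bytes to %s\n", len(msg), topic)
    return nil
}

func main() {
    _ = sendMessageToGroups(printBroadcaster{},
        []GroupID{GroupIDBeacon, GroupIDBeaconClient},
        []byte("consensus message"))
}
```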
* test case # : CS1
* description : beacon chain reach consensus
* test procedure : start beacon chain with 50, 100, 150, 200, 250, 300 nodes, check leader log on number of consensuses
* passing criteria
* dependency
* note
* test case # : DR1
* description : drand generate random number
* test procedure : start beacon chain with 50, 150, 300 nodes, check leader log on the success of generating random number
* passing criteria : random number generated
* dependency
* note
* automated?
---
### transaction stress
* test case # : STX1
* description : txgen send transaction to shard
* test procedure : start beacon chain with 50 nodes, start txgen to send 1,000, 10,000 tx to the shard