* [committee] Move core.ShardingSchedule to shard.Schedule
* [consensus] Remove redundant PublicKeys field of Consensus as Decider maintains that
* [committee] Use committee package to pick PublicKeys
* [committee] Use committee in place of CalculateShardState
* [committee] Remove core/resharding.go, complete usage of committee as implementation replacement
* [committee] Address PR comments
* [consensus] Factor out enums to core/values, begin factoring out consensus mechanisms
* [consensus] Make Mechanism explicit
* [consensus] Add ViewChange to QuorumPhase
* Update core/values/consensus.go
Co-Authored-By: Eugene Kim <ek@harmony.one>
* Update core/values/consensus.go
Co-Authored-By: Eugene Kim <ek@harmony.one>
* [mainnet-release] Address code comments
* [staking][consensus][project] Remove txgen, factor out consensus
* [consensus] Factor out PublicKeys
* [txgen] Bring back txgen
* [project] Undo prior consolidation of error values under core
* [consensus] Update tests using quorum decider
* [consensus] Fix overlooked resets during refactor
* [consensus] Fix wrong check of quorum phase
* [consensus] Address leftover TODO for prepare count
* [consensus] Simplify reset switch
* [consensus] Fix wrong ReadSignature in ViewChange; need the sender's PubKey, not the node's
* [staking] Factor some project errors into the core/values pkg. Thread staking Txs through Finalize
* [staking] Incorporate Chao code from PR 1700
* [staking] Remove dead staking code, create const values, factor out dec from staking
* [staking] Remove voting power for now till discussion, factor out more error values
This solves the problem of validators on different networks connecting
with each other.
* mainnet still uses the original harmony prefix to keep backward
compatibility
* pangaea uses "pangaea" as network prefix
* testnet uses "testnet" as network prefix
All nodes on Pangaea and Testnet need to restart to reconnect with each
other. Mainnet nodes are unaffected.
Signed-off-by: Leo Chen <leo@harmony.one>
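A minimal sketch of the prefix-per-network idea described above, assuming illustrative names (NetworkType, networkPrefix) rather than the actual Harmony API: the p2p group prefix is derived from the configured network type so that nodes on different networks cannot discover each other.

```go
package main

import "fmt"

// NetworkType and the constants below are assumptions for this sketch.
type NetworkType string

const (
	Mainnet NetworkType = "mainnet"
	Testnet NetworkType = "testnet"
	Pangaea NetworkType = "pangaea"
)

// networkPrefix returns the prefix used to namespace p2p group IDs.
func networkPrefix(nt NetworkType) string {
	switch nt {
	case Testnet:
		return "testnet"
	case Pangaea:
		return "pangaea"
	default:
		// Mainnet keeps the original "harmony" prefix for backward compatibility.
		return "harmony"
	}
}

func main() {
	fmt.Println(networkPrefix(Pangaea)) // prints "pangaea"
}
```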
This is done by introducing two concepts: sharding configuration
instance and sharding configuration schedule.
A sharding configuration instance is a particular set of sharding
parameters in effect, namely:
- Number of shards;
- Number of nodes/shard; and
- Number of Harmony-operated nodes per shard.
A sharding configuration schedule is a mapping from an epoch to the
corresponding sharding configuration for that epoch.
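As a rough sketch of the two concepts, the interfaces below use assumed names (Instance, Schedule, InstanceForEpoch and the three accessor methods); the actual shard package may differ.

```go
package shardcfg

import "math/big"

// Instance is one sharding configuration in effect.
type Instance interface {
	NumShards() uint32                    // number of shards
	NumNodesPerShard() int                // nodes per shard
	NumHarmonyOperatedNodesPerShard() int // Harmony-operated nodes per shard
}

// Schedule maps an epoch to the sharding configuration for that epoch.
type Schedule interface {
	InstanceForEpoch(epoch *big.Int) Instance
}
```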
Two schedules are provided and will be maintained in the long term:
Mainnet sharding schedule (4 shards, 150 nodes/shard, 112
Harmony-operated nodes/shard) and public testnet sharding schedule (2
shards, 150 nodes/shard, 150 Harmony-operated nodes/shard).
The Harmony node binary uses these for -network_type=mainnet and
-network_type=testnet respectively.
In addition, for -network_type=devnet, a fixed sharding schedule can be
specified with direct control over all three parameters (-dn_num_shards,
-dn_shard_size, and -dn_hmy_size).
The mainnet schedule code includes a commented-out example schedule.
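A hedged sketch of how a devnet run might assemble a fixed schedule from the three flags; the flag names come from the text above, while the struct name and the default values here are illustrative only.

```go
package main

import (
	"flag"
	"fmt"
)

// fixedShardingConfig holds the three devnet parameters (name is illustrative).
type fixedShardingConfig struct {
	numShards int
	shardSize int
	hmySize   int
}

func main() {
	// Defaults below are placeholders, not the binary's real defaults.
	numShards := flag.Int("dn_num_shards", 2, "number of shards (devnet)")
	shardSize := flag.Int("dn_shard_size", 10, "nodes per shard (devnet)")
	hmySize := flag.Int("dn_hmy_size", 10, "Harmony-operated nodes per shard (devnet)")
	flag.Parse()

	// A fixed schedule: the same configuration applies to every epoch.
	cfg := fixedShardingConfig{*numShards, *shardSize, *hmySize}
	fmt.Printf("devnet sharding: %+v\n", cfg)
}
```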
Warn level was chosen for the current behavior: alert about uncaught
failures but do not alter the code path (yet). More thorough error
handling will come later.
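For illustration only, a minimal Go pattern matching that policy: log at Warn level on an uncaught failure and continue on the same code path. The helper name and the use of the standard-library logger are assumptions, not the project's actual logging code.

```go
package main

import (
	"errors"
	"log"
)

// handleStep runs fn and, on failure, only warns; it does not change the
// code path that follows (yet).
func handleStep(fn func() error) {
	if err := fn(); err != nil {
		log.Printf("WARN: uncaught failure: %v", err)
	}
	// ...continue as before, regardless of the error...
}

func main() {
	handleStep(func() error { return errors.New("example failure") })
}
```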
* Manage shard chains as first class citizen
* Make core.BlockChain's read/write semantics cleaner
* Reimplement the block reward in terms of on-chain committee info, not the
hardcoded genesis committee.
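As an illustration of the idea in the last item (not the actual reward code), the sketch below splits a block reward over a committee read from on-chain shard state; the type names and the even split are assumptions for the example.

```go
package main

import (
	"fmt"
	"math/big"
)

// Member and Committee stand in for the committee info read from shard state.
type Member struct{ Address string }
type Committee struct{ Members []Member }

// splitReward divides the block reward evenly among the committee members.
func splitReward(c Committee, reward *big.Int) map[string]*big.Int {
	shares := make(map[string]*big.Int)
	if len(c.Members) == 0 {
		return shares
	}
	per := new(big.Int).Div(reward, big.NewInt(int64(len(c.Members))))
	for _, m := range c.Members {
		shares[m.Address] = new(big.Int).Set(per)
	}
	return shares
}

func main() {
	c := Committee{Members: []Member{{"one1aaa"}, {"one1bbb"}}}
	fmt.Println(splitReward(c, big.NewInt(24)))
}
```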