# Docker deployment of a Rosetta enabled Harmony node

## Docker Image

You can build the docker image with the included Dockerfile using the following command:

```bash
docker build -t harmonyone/explorer-node .
```

Or you can pull the image from Docker Hub with the following command:

```bash
docker pull harmonyone/explorer-node:latest
```
## Starting the node

You can start the node with the following command:

```bash
docker run -d -p 9700:9700 -v "$(pwd)/data:/root/data" harmonyone/explorer-node --run.shard=0
```

This command creates a container for the harmony node on shard 0 in detached mode, binding port 9700 (the rosetta port) on the container to the host and mounting the shared `./data` directory on the host to `/root/data` on the container. Note that the container uses `/root/data` for all data storage (this is where the `harmony_db_*` directories will be stored).

You can view your container with the following command:

```bash
docker ps
```

You can ensure that your node is running with the following curl command:

```bash
curl -X POST --data '{
    "metadata": {}
}' http://localhost:9700/network/list
```
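Beyond `/network/list`, the standard Rosetta Data API also exposes `/network/status`, which reports the current and genesis block and is a quick way to see whether the node is syncing. A sketch of such a check; the `network_identifier` values below are assumptions, so copy the exact object returned by `/network/list` on your own node:

```bash
# The network_identifier values here are placeholders; use the object
# returned by /network/list on your node.
curl -X POST --data '{
    "network_identifier": {
        "blockchain": "Harmony",
        "network": "Mainnet"
    },
    "metadata": {}
}' http://localhost:9700/network/status
```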
You can start the node in offline mode with the following command:

```bash
docker run -d -p 9700:9700 -v "$(pwd)/data:/root/data" harmonyone/explorer-node --run.shard=0 --run.offline
```

In offline mode, the node will not connect to any p2p peers or sync.
## Stopping the node

First get your `CONTAINER ID` using the following command:

```bash
docker ps
```

Note that if you do not see your node in the list, then your node is not running. You can verify this with the `docker ps -a` command.

Once you have your `CONTAINER ID`, you can stop the node with the following command:

```bash
docker stop [CONTAINER ID]
```
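If you would rather not look up the `CONTAINER ID` by hand, here is a sketch that stops every running container started from the explorer-node image; it assumes the image name matches what you built or pulled above:

```bash
# -q prints only container IDs; --filter ancestor matches containers
# created from the given image; xargs -r skips docker stop when none match.
docker ps -q --filter "ancestor=harmonyone/explorer-node" | xargs -r docker stop
```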
## Details

Note that all arguments provided when running the docker image are forwarded directly to the harmony node binary.

Note that the following args are appended to the provided args when running the image:
`--http.ip "0.0.0.0" --ws.ip "0.0.0.0" --http.rosetta --node_type "explorer" --datadir "./data" --log.dir "./data/logs"`.
This effectively makes them args that you cannot easily change.
## Running the node on testnet

All the args on the image run are forwarded to the harmony node binary. Therefore, you can simply add `-n testnet` to run the node on testnet. For example:

```bash
docker run -d -p 9700:9700 -v "$(pwd)/data:/root/data" harmonyone/explorer-node --run.shard=0 -n testnet
```
## Running the node with http RPC capabilities

Similar to running a node on testnet, one can simply add `--http` to enable the RPC server. You must also forward the host port to the container's RPC server port:

```bash
docker run -d -p 9700:9700 -p 9500:9500 -v "$(pwd)/data:/root/data" harmonyone/explorer-node --run.shard=0 -n testnet --http
```
## Running the node with web socket RPC capabilities

Similar to running a node on testnet, one can simply add `--ws` to enable the WebSocket RPC server. You must also forward the host port to the container's WebSocket server port:

```bash
docker run -d -p 9700:9700 -p 9800:9800 -v "$(pwd)/data:/root/data" harmonyone/explorer-node --run.shard=0 -n testnet --ws
```
## Running the node in non-archival mode

One can append `--run.archive=false` to the docker run command to run the node in non-archival mode. For example:

```bash
docker run -d -p 9700:9700 -v "$(pwd)/data:/root/data" harmonyone/explorer-node --run.shard=0 -n testnet --run.archive=false
```
## Running a node with a rcloned DB

Note that all node data will be stored in the `/root/data` directory within the container. Therefore, you can rclone the `harmony_db_*` directories to some directory on the host (e.g. `./data`) and mount that volume on the docker run. This way, the node will use the DB in the volume shared between the container and host. For example:

```bash
docker run -d -p 9700:9700 -v "$(pwd)/data:/root/data" harmonyone/explorer-node --run.shard=0
```

Note that the directory structure for `/root/data` (== `./data`) should look something like:

```
.
├── explorer_storage_127.0.0.1_9000
├── harmony_db_0
├── harmony_db_1
├── logs
│   ├── node_execution.log
│   └── zerolog-harmony.log
└── transactions.rlp
```
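The layout above can be sanity-checked from the host with a short script. The directory names are taken from the tree shown, and `DATA_DIR` is an assumption; which entries exist will depend on the shard and how long the node has run:

```shell
#!/bin/sh
# Check that the mounted data directory contains the expected
# subdirectories from the tree above. DATA_DIR is an assumption.
DATA_DIR="${DATA_DIR:-./data}"
for d in harmony_db_0 harmony_db_1 logs; do
    if [ -d "$DATA_DIR/$d" ]; then
        echo "found: $d"
    else
        echo "missing: $d"
    fi
done
```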
## Inspecting Logs

If you mount `./data` on the host to `/root/data` in the container, you can view the harmony node logs at `./data/logs/` on your host machine.
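For a quick look without opening the files, here is a sketch that prints the most recent lines of the main log; the file name `zerolog-harmony.log` is taken from the directory tree above and may differ by node version:

```shell
#!/bin/sh
# Show the 50 most recent lines of the node log from the host.
# The path assumes the ./data volume mount used throughout this README.
LOG="./data/logs/zerolog-harmony.log"
if [ -f "$LOG" ]; then
    tail -n 50 "$LOG"
else
    echo "log not found: $LOG"
fi
```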
### View rosetta request logs

You can view all the rosetta endpoint requests with the following command:

```bash
docker logs [CONTAINER ID]
```

The `[CONTAINER ID]` can be found with this command:

```bash
docker ps
```