* Do not exit ChainDownloader in case of a generic CancellationException
The only correct way to halt a ChainDownloader is via its cancel method; no special action should be taken on a generic CancellationException, which could be triggered by peer tasks running in the pipeline, since in that case the error is transient and the download should restart.
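A minimal sketch of the intended handling, assuming hypothetical names (ChainDownloaderSketch, runPipeline and the cancelled flag are illustrative, not Besu's actual ChainDownloader):

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative sketch only; not Besu's actual ChainDownloader. */
final class ChainDownloaderSketch {
  private final AtomicBoolean cancelled = new AtomicBoolean(false);

  /** The only sanctioned way to stop the download. */
  void cancel() {
    cancelled.set(true);
  }

  void start() {
    while (!cancelled.get()) {
      try {
        runPipeline(); // one full download attempt
        return;        // finished normally
      } catch (final CancellationException e) {
        // A peer task inside the pipeline was cancelled: treat it as transient
        // and loop around to restart the download, unless cancel() was called.
      }
    }
  }

  private void runPipeline() {
    // ... download stages that may surface CancellationException from peer tasks ...
  }
}
```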
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
When resuming fast sync after a previous incomplete run, we have to clear the old world state data to be sure children are persisted before their parents.
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
This fixes an issue with the Istanbul protocol, which extends the Eth protocol, where devp2p messages were being sent with a null capability.
Signed-off-by: Jason Frame <jasonwframe@gmail.com>
* Add option to enforce tx replay protection for local txs
* Only enforce replay protection if the current milestone supports it (see the sketch after this entry)
* Moved changelog entry to next release
Signed-off-by: Meredith Baxter <meredith.baxter@palm.io>
Signed-off-by: Sally MacFarlane <sally.macfarlane@consensys.net>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
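The conditional check can be pictured with a small sketch; ReplayProtectionCheck and its parameters are illustrative names, not Besu's actual validator API:

```java
import java.math.BigInteger;
import java.util.Optional;

/** Illustrative sketch only; not Besu's actual transaction validator. */
final class ReplayProtectionCheck {
  private final boolean enforceForLocalTransactions; // e.g. set from a CLI option

  ReplayProtectionCheck(final boolean enforceForLocalTransactions) {
    this.enforceForLocalTransactions = enforceForLocalTransactions;
  }

  /**
   * A locally submitted transaction is acceptable unless enforcement is enabled,
   * the current milestone supports replay protection (EIP-155), and the
   * transaction carries no chain id in its signature.
   */
  boolean isAcceptable(
      final Optional<BigInteger> transactionChainId,
      final boolean milestoneSupportsReplayProtection) {
    if (!enforceForLocalTransactions || !milestoneSupportsReplayProtection) {
      return true;
    }
    return transactionChainId.isPresent();
  }
}
```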
Added integration test to ensure DevP2P-over-TLS can handle large fragmented messages
TLS fragments large messages into 16KB records.
Actual fix was done in an earlier commit: f2f7ac9af1
Fixes https://github.com/hyperledger/besu/issues/3254
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
* Remove duplicate code (TODO) in FlexiblePrivacyPrecompiledContract
Signed-off-by: Romeet Puhar <Romeet27@outlook.com>
* Increasing the visibility of the 'gas fee below the configured minimum' message
Signed-off-by: Romeet Puhar <38156169+RP27@users.noreply.github.com>
* Increasing the visibility of logging for when the gas fee of a transaction being sent is below the configured minimum
Signed-off-by: RP27 <38156169+RP27@users.noreply.github.com>
* Reverting unintended change
Signed-off-by: RP27 <38156169+RP27@users.noreply.github.com>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
* New NatMethod enum
Signed-off-by: Frank Li <b439988l@gmail.com>
* more tests
Signed-off-by: Frank Li <b439988l@gmail.com>
* more test
Signed-off-by: Frank Li <b439988l@gmail.com>
* Fast sync should traverse the world state depth first
1. The pending requests queue in the world state downloader is now a different data structure. We now use a list of priority queues.
2. The node requests now know about the parent request that was responsible for spawning them.
3. When a node has data available and is asked to persist before its children are persisted, the node will not do anything. Instead, it will wait for all its children to persist first and will persist together with the last child.
Storing children before parents gives us the following benefits:
* We don't need to store pending requests on disk any more.
* When restarting download from a new pivot, we do not need to walk and check the whole tree any more.
And the following drawbacks:
* We now keep pending nodes in memory for which we have already downloaded data but which we have not yet stored in the database.
Overall expectations on performance:
We still need to download every single state node at least once, so there is no improvement there. We will save a significant amount of time in case we change pivots. And we save a lot of filesystem reads and writes because tasks no longer need to be written to disk.
We want to avoid having too many pending unsaved nodes in memory, so that we do not run out of it. If we were always handling only one request to our peers at a time, we would not need to worry, and we would just use a simple depth-first search. Because we batch our requests, we might produce too many pending unprocessed nodes in memory if we are not careful about the order in which requests are processed. That is where the priority on node requests comes from. We always want to process nodes lower in the tree before nodes higher in the tree, and preferably we want to process children of the same parent first, so that we can save the current unsaved parent as soon as possible.
At the moment I have left several artefacts in the code that I use for debugging the behaviour. I am planning to get rid of most of these counters; feel free to point them out in the review. There is, for instance, a weird counter in the NodeDataRequest class that I am using to monitor the total number of unsaved nodes. If the pending unsaved node count rises too high, a warning is printed to the logs. At the time of writing, I would expect the counter to generally stay below 10,000 and not rise above 20,000 nodes. If you saw the number rise to, for instance, 100,000, that would signify a bug.
Similarly, because of the order in which nodes are processed, we no longer need to store a huge number of requests on disk, and the whole list fits comfortably into memory. Without batching, we would not have more than a thousand requests waiting at any time. Because of the batching, the number of requests can occasionally rise all the way up to 300,000, but it should usually stay under 200,000.
Note that at any time there should not be more pending unsaved nodes than pending requests; such a situation would be a bug and should be reported.
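The persist-together-with-the-last-child rule may be easier to see in a small sketch. Everything below is hypothetical (NodeRequestSketch, NodeStorage and onDataReceived are illustrative names, not Besu's actual classes) and ignores thread safety:

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative sketch of "children persist before parents"; not Besu's NodeDataRequest. */
final class NodeRequestSketch {
  /** Stand-in for the world state key-value store. */
  interface NodeStorage {
    void put(byte[] nodeData);
  }

  private final Optional<NodeRequestSketch> parent;
  private final AtomicInteger pendingChildren = new AtomicInteger(0);
  private byte[] data; // downloaded but not yet written to the database

  NodeRequestSketch(final Optional<NodeRequestSketch> parent) {
    this.parent = parent;
  }

  /** Called when this node's data arrives; childCount is decoded from that data. */
  void onDataReceived(final byte[] nodeData, final int childCount, final NodeStorage storage) {
    this.data = nodeData;
    pendingChildren.addAndGet(childCount);
    tryPersist(storage);
  }

  private void tryPersist(final NodeStorage storage) {
    if (data == null || pendingChildren.get() > 0) {
      return; // children must reach the database before their parent
    }
    storage.put(data);
    // The last child to persist also triggers its parent's write, and so on upwards.
    parent.ifPresent(p -> {
      if (p.pendingChildren.decrementAndGet() == 0) {
        p.tryPersist(storage);
      }
    });
  }
}
```

Combined with the priority queues that favour deeper nodes and siblings of the same parent, this keeps the set of downloaded-but-unsaved nodes small.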
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Fixed failing test
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Improving test coverage
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
Messages larger than the 16KB TLS record limit were being truncated.
Implementation based on 06f94311d8, but with Protobufs removed.
Note: this is an early access feature and is behind a feature flag.
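Because TLS never carries more than 16KB of application data per record, the receiver has to reassemble a large message from several records before handing it on. A hedged illustration of that idea using Netty's length-field framing (this is not Besu's actual wire setup; the handler names and sizes are only an example):

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;

/** Illustrative pipeline sketch; Besu's real RLPx/TLS setup differs. */
class FramedTlsInitializerSketch extends ChannelInitializer<SocketChannel> {
  @Override
  protected void initChannel(final SocketChannel ch) {
    // ... the TLS handler (SslHandler) would be added here first ...
    ch.pipeline()
        // Outbound: prepend a 4-byte length so the peer knows the full message size.
        .addLast(new LengthFieldPrepender(4))
        // Inbound: buffer until the whole frame has arrived, even if TLS split it
        // across many 16KB records, then strip the length prefix.
        .addLast(new LengthFieldBasedFrameDecoder(10 * 1024 * 1024, 0, 4, 0, 4));
    // Handlers added after this point always see complete, un-fragmented messages.
  }
}
```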
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
* Add missing Magneto ETC hard fork entry in test
This corresponds to 61bf0d9ca
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Change ETC bootnode public keys
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Add ETC Mystique hard fork spec (ECIP-1104)
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* 6.0.1
Signed-off-by: Sally MacFarlane <sally.macfarlane@consensys.net>
* Removing includeUninitialized parameter
The new version of client-java-api-6.0.1.jar does not have the includeUninitialized parameter on the method.
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* 6.0.1 compiles successfully
Signed-off-by: Sally MacFarlane <sally.macfarlane@consensys.net>
Co-authored-by: Jiri Peinlich <jiri.peinlich@gmail.com>
Implemented a getPositiveInt function in JsonUtil to validate a positive JSON number value. Using this function, blockperiodseconds is now validated whenever it is retrieved from the genesis config, including in transitions.
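A minimal sketch of such a helper, assuming Jackson's ObjectNode as used by Besu's JsonUtil; the exact signature and OptionalInt return type here are illustrative, not necessarily the real one:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.util.OptionalInt;

/** Illustrative sketch; not necessarily Besu's exact JsonUtil implementation. */
final class JsonUtilSketch {

  static OptionalInt getPositiveInt(final ObjectNode node, final String key) {
    final JsonNode value = node.get(key);
    if (value == null || value.isNull()) {
      return OptionalInt.empty(); // absent key: leave the default handling to the caller
    }
    if (!value.canConvertToInt()) {
      throw new IllegalArgumentException("Expected integer value for key '" + key + "'");
    }
    final int intValue = value.asInt();
    if (intValue <= 0) {
      throw new IllegalArgumentException(
          "Expected positive value for key '" + key + "', but got " + intValue);
    }
    return OptionalInt.of(intValue);
  }
}
```

With a helper like this, reading blockperiodseconds from the genesis config fails loudly on zero or negative values instead of silently accepting them.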
Signed-off-by: George Patterson <g-patt@outlook.com>
* Stream JSON RPC responses to avoid creating a big JSON string in memory for large responses
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Adapt code to the latest developments on results with Optionals
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Log an error if there is an IOException during the streaming of the response
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Remove the intermediate String object creation, writing directly to a Buffer
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Implement response streaming for web socket
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Fix log messages
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Move inner classes to the outer level, to avoid overly large class files
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Fix copyright
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
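The idea behind the streaming commits above can be sketched as follows; JsonRpcStreamingSketch and writeResponse are illustrative names, and the real change streams into the Vert.x HTTP and WebSocket responses rather than a bare OutputStream:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.IOException;
import java.io.OutputStream;

/** Illustrative sketch only; not the actual Besu JSON-RPC response writer. */
final class JsonRpcStreamingSketch {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  static void writeResponse(final Object jsonRpcResponse, final OutputStream out) {
    try {
      // Jackson serializes directly into the stream, so memory stays bounded even
      // when the response (e.g. a large trace or block range) would be huge as a String.
      MAPPER.writeValue(out, jsonRpcResponse);
    } catch (final IOException e) {
      // Matches the commit above: report streaming failures instead of swallowing them.
      System.err.println("Error streaming JSON-RPC response: " + e.getMessage());
    }
  }
}
```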
* Lots of errorprone fixes
* Some license updates
* Some mockito updates
* Upgrade the rocksdb version
* Prometheus left at 0.9.0, as 0.10.0+ introduces OpenMetrics-related changes that break unit tests.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* Initial commit of executePayload to main
* Additional coverage; change findLatestValidAncestor to take Hash rather than Block; fix comments
Signed-off-by: garyschulte <garyschulte@gmail.com>