* Refactor to retrieve blocks asynchronously, and change peer when retrying to get a block
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Apply suggested changes
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Remove deprecated class
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Add more logs around the block synchronizer
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* First try to download the block from the peer that announced it
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Avoid re-downloading non-announced blocks
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Conditionally log at trace level
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Use the max number of peers as the retry limit when downloading a block, so that all peers are tried
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
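A minimal, generic sketch of the retry-with-another-peer idea described above. The peer and download types are plain placeholders (not Besu's EthPeer or peer-task APIs), and the caller passes in at most "max number of peers" candidates so every peer gets one attempt:

```java
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.function.BiFunction;

final class RetryWithNextPeer {

  /** Try each candidate peer in turn until the download succeeds or all peers were tried. */
  static <P, T> CompletableFuture<T> fetch(
      final Collection<P> candidatePeers,
      final String blockHash,
      final BiFunction<P, String, CompletableFuture<T>> downloadFromPeer) {
    return attempt(new ArrayDeque<>(candidatePeers), blockHash, downloadFromPeer);
  }

  private static <P, T> CompletableFuture<T> attempt(
      final Queue<P> remainingPeers,
      final String blockHash,
      final BiFunction<P, String, CompletableFuture<T>> downloadFromPeer) {
    final P peer = remainingPeers.poll();
    if (peer == null) {
      return CompletableFuture.failedFuture(
          new IllegalStateException("No more peers to try for block " + blockHash));
    }
    return downloadFromPeer
        .apply(peer, blockHash)
        .thenApply(CompletableFuture::completedFuture)
        // on failure, asynchronously fall back to the next peer instead of giving up
        .exceptionally(error -> attempt(remainingPeers, blockHash, downloadFromPeer))
        .thenCompose(future -> future);
  }
}
```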
* Use the shared Slf4jLambdaHelper, instead of custom helper
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
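For the conditional trace logging mentioned above, the idea is to defer computing expensive log arguments behind the level check. This is a self-contained sketch in the spirit of the shared Slf4jLambdaHelper; the method name and exact signature here are illustrative, not necessarily the helper's real API:

```java
import java.util.Arrays;
import java.util.function.Supplier;
import org.slf4j.Logger;

final class LambdaLogSketch {
  private LambdaLogSketch() {}

  /** Evaluate the argument suppliers only when TRACE is actually enabled. */
  static void traceLambda(final Logger log, final String message, final Supplier<?>... args) {
    if (log.isTraceEnabled()) {
      log.trace(message, Arrays.stream(args).map(Supplier::get).toArray());
    }
  }
}

// Usage: the potentially expensive toString is skipped entirely when TRACE is off.
// LambdaLogSketch.traceLambda(LOG, "Imported block {}", block::toString);
```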
Add block choice rule to Clique
1. Choose the block with the most total difficulty.
2. Then choose the block with the lowest block number.
3. Then choose the block whose validator had the least recent in-turn block assignment.
4. Then choose the block with the lowest hash.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
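The four rules above can be read as a lexicographic comparator. The following is only a sketch of that ordering: BlockInfo and its accessors are illustrative placeholders rather than Besu's actual header/metadata types, and "least recent in-turn assignment" is modelled as the block number at which the proposer was last in turn.

```java
import java.util.Comparator;

/** Placeholder view of the data the rule needs; not Besu's real types. */
record BlockInfo(long totalDifficulty, long number, long lastInTurnBlockOfProposer, String hash) {}

final class CliqueBlockChoiceSketch {
  /** "Better" blocks sort first. */
  static final Comparator<BlockInfo> BEST_BLOCK_FIRST =
      Comparator.comparingLong(BlockInfo::totalDifficulty).reversed() // 1. most total difficulty
          .thenComparingLong(BlockInfo::number)                       // 2. lowest block number
          .thenComparingLong(BlockInfo::lastInTurnBlockOfProposer)    // 3. least recent in-turn assignment
          .thenComparing(BlockInfo::hash);                            // 4. lowest hash (hex strings of equal length)
}
```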
* add jdk8 module for Optional parsing
Signed-off-by: Frank Li <b439988l@gmail.com>
* register module at mapper initialization
Signed-off-by: Frank Li <b439988l@gmail.com>
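For reference, registering Jackson's jdk8 datatype module (the jackson-datatype-jdk8 artifact) at mapper initialization is what enables java.util.Optional fields to be parsed; a minimal example:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jdk8.Jdk8Module;

// Register the module once, when the mapper is created, so java.util.Optional
// fields are (de)serialized correctly instead of being treated as plain beans.
final ObjectMapper mapper = new ObjectMapper().registerModule(new Jdk8Module());
```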
* Bump SLF4J version
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace log4j2 API with SLF4j API
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace usage of LogManager#getFormatterLogger
This is to keep compatibility with SLF4J. If necessary, a specific formatter can be created for the RlpBlockImporter class
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Unset the default logging value for the retesteth
This is because it's not possible to resolve the root logger level into a Log4J2 field
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Prevent creation of Logger context outside SLF4J
org.hyperledger.besu.cli.BesuCommand#setAllLevels was taken from
https://github.com/apache/logging-log4j2/blob/rel%2F2.17.1/log4j-core/src/main/java/org/apache/logging/log4j/core/config/Configurator.java#L309
Signed-off-by: Diego López León <dieguitoll@gmail.com>
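Roughly, the copied helper walks the active Log4j2 configuration and sets the level on the named logger config and on every descendant config before pushing the update. A paraphrased sketch of the linked Configurator code (not a verbatim copy):

```java
import java.util.Map;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

final class LogLevels {
  /** Set the level of parentLogger and of every logger config underneath it. */
  static void setAllLevels(final LoggerContext context, final String parentLogger, final Level level) {
    final Configuration config = context.getConfiguration();
    config.getLoggerConfig(parentLogger).setLevel(level);
    for (final Map.Entry<String, LoggerConfig> entry : config.getLoggers().entrySet()) {
      if (entry.getKey().startsWith(parentLogger)) {
        entry.getValue().setLevel(level);
      }
    }
    context.updateLoggers(); // apply the new levels to all active loggers
  }
}
```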
* Add FATAL level deprecation message
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* [Sonar] Fix java:S2139
Exceptions should be either logged or rethrown but not both
Signed-off-by: Diego López León <dieguitoll@gmail.com>
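An illustration of the java:S2139 pattern, using a file read as a hypothetical stand-in operation: the fix is to stop logging at the throw site and let a single handler higher up log the failure once.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class ImportExample {
  private static final Logger LOG = LoggerFactory.getLogger(ImportExample.class);

  // Violates java:S2139: the same failure is logged here and again wherever the
  // rethrown exception is eventually handled, producing duplicate noise.
  static byte[] readLoggedAndRethrown(final Path path) {
    try {
      return Files.readAllBytes(path);
    } catch (final IOException e) {
      LOG.error("Failed to read {}", path, e);
      throw new UncheckedIOException(e);
    }
  }

  // Compliant: add context and rethrow; a single handler further up logs it once.
  static byte[] read(final Path path) {
    try {
      return Files.readAllBytes(path);
    } catch (final IOException e) {
      throw new UncheckedIOException("Failed to read " + path, e);
    }
  }
}
```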
* [Sonar] Fix java:S3457
Printf-style format strings should be used correctly
Signed-off-by: Diego López León <dieguitoll@gmail.com>
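A small illustration of java:S3457 with SLF4J: printf-style specifiers and string concatenation in log calls are replaced by `{}` placeholders, which are only substituted when the level is enabled. The class and message here are made up for the example.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class FormatExample {
  private static final Logger LOG = LoggerFactory.getLogger(FormatExample.class);

  void report(final long blockNumber, final String hash) {
    // Flagged by java:S3457: SLF4J does not interpret %d, and the concatenation
    // is evaluated even when DEBUG is disabled.
    LOG.debug("Imported block %d with hash " + hash, blockNumber);

    // Correct: SLF4J {} placeholders, arguments substituted lazily.
    LOG.debug("Imported block {} with hash {}", blockNumber, hash);
  }
}
```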
* Add changelog
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Do not exit ChainDownloader in case of a generic CancellationException
The right way of halting a ChainDownloader is only via its cancel method;
no special action should be taken on a generic CancellationException, which
could be triggered by peer tasks that run in the pipeline, since in that
case the error is transient and the download should restart.
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
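A sketch of the intended behaviour, not Besu's actual ChainDownloader: a CancellationException coming out of the pipeline is treated as transient and triggers a restart unless cancel() was explicitly called.

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

final class DownloaderLoopSketch {
  private final AtomicBoolean cancelled = new AtomicBoolean(false);

  /** The only supported way to stop the downloader. */
  void cancel() {
    cancelled.set(true);
  }

  void onPipelineComplete(final CompletableFuture<Void> pipelineFuture) {
    pipelineFuture.whenComplete((result, error) -> {
      // (real code may need to unwrap a CompletionException to see the actual cause)
      if (cancelled.get()) {
        shutDown();        // explicit cancel(): exit cleanly
      } else if (error instanceof CancellationException) {
        restartDownload(); // transient cancellation from a pipeline task: keep downloading
      }
    });
  }

  private void restartDownload() { /* re-run the download pipeline */ }

  private void shutDown() { /* release resources and exit */ }
}
```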
When picking up the fast sync after previous incomplete run of fast sync we have to clear the old worldstate data to be sure children are persisted before parents.
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
This fixes an issue with the Istanbul protocol, which extends the Eth protocol, where devp2p messages were being sent with a null capability
Signed-off-by: Jason Frame <jasonwframe@gmail.com>
* Add option to enforce tx replay protection for local txs
* Only enforce replay protection if the current milestone supports it
* moved changelog entry to next release
Signed-off-by: Meredith Baxter <meredith.baxter@palm.io>
Signed-off-by: Sally MacFarlane <sally.macfarlane@consensys.net>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
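A condensed sketch of the rule described above; the method and parameter names are placeholders, not Besu's validator API. Replay protection (a present chain id, per EIP-155) is only enforced for local transactions when both the option is enabled and the current milestone supports it.

```java
import java.util.Optional;

final class LocalTxReplayProtectionSketch {

  static boolean acceptLocalTransaction(
      final Optional<Long> txChainId,                 // empty when the tx carries no replay protection
      final boolean enforceReplayProtection,          // the new option for locally submitted txs
      final boolean milestoneSupportsReplayProtection // false before the EIP-155 fork
      ) {
    if (!enforceReplayProtection || !milestoneSupportsReplayProtection) {
      return true; // nothing to enforce
    }
    return txChainId.isPresent();
  }
}
```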
Added integration test to ensure DevP2P-over-TLS can handle large fragmented messages
TLS fragments large messages into 16KB records.
Actual fix was done in an earlier commit: f2f7ac9af1
Fixes https://github.com/hyperledger/besu/issues/3254
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
* Remove duplicate code (TODO) in FlexiblePrivacyPrecompiledContract
Signed-off-by: Romeet Puhar <Romeet27@outlook.com>
* Increasing the visibility of the message logged when a gas fee is below the configured minimum
Signed-off-by: Romeet Puhar <38156169+RP27@users.noreply.github.com>
* Increasing the visibility of the logging for when the gas fee of a transaction being sent is below the configured minimum
Signed-off-by: RP27 <38156169+RP27@users.noreply.github.com>
* Reverting unintended change
Signed-off-by: RP27 <38156169+RP27@users.noreply.github.com>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
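An illustration only of what "increasing visibility" means here; the class, message text, and level are placeholders rather than the exact Besu log line:

```java
import java.math.BigInteger;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class MinGasPriceCheck {
  private static final Logger LOG = LoggerFactory.getLogger(MinGasPriceCheck.class);

  boolean accept(final String txHash, final BigInteger gasPrice, final BigInteger minGasPrice) {
    if (gasPrice.compareTo(minGasPrice) < 0) {
      // previously easy to miss at TRACE/DEBUG; surfacing it makes rejected local txs visible
      LOG.info(
          "Transaction {} rejected: gas price {} is below the configured minimum {}",
          txHash, gasPrice, minGasPrice);
      return false;
    }
    return true;
  }
}
```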
* new NatMethod enum
Signed-off-by: Frank Li <b439988l@gmail.com>
* more tests
Signed-off-by: Frank Li <b439988l@gmail.com>
* more test
Signed-off-by: Frank Li <b439988l@gmail.com>
* Fast sync should traverse the world state depth first
1. The pending requests queue in the world state downloader is now a different data structure. We now use a list of priority queues.
2. The node requests now know about the parent request that was responsible for spawning them.
3. When a node has data available and is asked to persist before its children are persisted, the node will not do anything. Instead, it will wait for all its children to persist first and will persist together with the last child.
Storing children before parents gives us the following benefits:
* We don't need to store pending requests on disk any more.
* When restarting download from a new pivot, we do not need to walk and check the whole tree any more.
And the following drawbacks:
* We now have pending nodes in memory for which we already downloaded data, but we do not store them in the database yet.
Overall expectations on performance:
We still need to download every single state node at least once, so there is no improvement there. We will save a significant amount of time in case we change pivots. And we save lots of reads/writes on the filesystem because tasks no longer need to be written to disk.
We want to avoid having too many pending unsaved nodes in memory, so that we do not run out of it. If we were always handling only one request to our peers at a time, we would not need to worry and could just use a simple depth-first search. Because we batch our requests, we might produce too many pending unprocessed nodes in memory if we are not careful about the order in which requests are processed. That is where the priority on node requests comes from: we always want to process nodes lower in the tree before nodes higher in the tree, and preferably we want to process children of the same parent first, so that we can save the current unsaved parent as soon as possible.
At the moment, I have still left in the code several artefacts that I use for debugging the behaviour. I am planning to get rid of most of these counters; feel free to point them out in the review. There is, for instance, a weird counter in the NodeDataRequest class that I am using to monitor the total number of unsaved nodes. If the pending unsaved node count rises too high, a warning is printed to the logs. At the time of writing, I would expect the counter to generally stay below 10,000 and not rise above 20,000 nodes. If the number rose to, for instance, 100,000, that would signify a bug.
Similarly, because of the order in which nodes are processed, we no longer need to store a huge number of requests on disk, and the whole list fits comfortably in memory. Without batching, we would not have more than a thousand requests waiting around. Because of batching, the number of requests can occasionally rise all the way up to 300,000, but it should usually stay under 200,000.
Note that at any time there should not be more pending unsaved nodes than pending requests. Such a situation would be a bug to be reported.
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
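A sketch of the parent/child bookkeeping described above, under the assumption of a per-request counter of unsaved children; the class and method names are illustrative, not the actual NodeDataRequest API.

```java
import java.util.concurrent.atomic.AtomicInteger;

abstract class PendingNode {
  private final PendingNode parent;                                  // null for the root request
  private final AtomicInteger unsavedChildren = new AtomicInteger(0);
  private byte[] data;                                               // downloaded but not yet persisted

  PendingNode(final PendingNode parent) {
    this.parent = parent;
  }

  /** Called when this node's data arrives; missingChildCount is the number of referenced
   *  child nodes that still have to be downloaded (0 for leaves or already-present nodes). */
  void onDataReceived(final byte[] data, final int missingChildCount) {
    this.data = data;
    unsavedChildren.set(missingChildCount);
    maybePersist();
  }

  private void maybePersist() {
    // persist only once data is present AND every child is already on disk
    if (data != null && unsavedChildren.get() == 0) {
      persist(data);
      if (parent != null) {
        parent.childPersisted(); // the parent re-checks its own condition
      }
    }
  }

  private void childPersisted() {
    if (unsavedChildren.decrementAndGet() == 0) {
      maybePersist();
    }
  }

  abstract void persist(byte[] data);
}
```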
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Fixed failing test
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Improving test coverage
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
Messages larger than the 16KB TLS record limit were being truncated.
Implementation based on 06f94311d8, but with Protobufs removed.
Note that this is an early access feature and is behind a feature flag.
Signed-off-by: Simon Dudley <simon.dudley@consensys.net>
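As a sketch of the general technique (not the actual Besu handler): length-prefixed framing on top of the TLS channel lets the inbound decoder buffer bytes across 16KB TLS records until a complete message has arrived, so nothing is truncated at a record boundary. The frame-length parameters below are assumptions.

```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;

final class FramingSetup {
  private static final int MAX_MESSAGE_BYTES = 10 * 1024 * 1024; // assumed upper bound

  static void addFraming(final ChannelPipeline pipeline) {
    // outbound: prepend a 4-byte length field to every message
    pipeline.addLast(new LengthFieldPrepender(4));
    // inbound: accumulate TLS record payloads until the announced length is available,
    // then strip the length field and emit one complete message
    pipeline.addLast(new LengthFieldBasedFrameDecoder(MAX_MESSAGE_BYTES, 0, 4, 0, 4));
  }
}
```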