Change the unit test execution to use the JUnit 5 JUnitPlatform. This
allows for a mix of JUnit 4 and JUnit 5 tests and for a gradual
migration to JUnit 5 instead of a big bang. One class depended on
JUnit 4 exceptions and was updated. Two tests that depend on
native libraries fail gracefully on macOS (and only macOS).
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Currently, the way Besu spots whether a transaction has already been processed is to check whether the transaction is present in the transaction pool, which by default holds 4K transactions, while the number of pending transactions on mainnet is much higher, on the order of hundreds of thousands. So even if a transaction has already been processed, the chance that it gets reprocessed is very high, resulting in a lot of useless work that affects Besu performance.
A trivial solution could be to just raise the transaction pool size, but that is not always advisable, because it is critical to keep block production fast, and increasing the pool size could negatively affect the performance of the strategy chosen to select transactions to include in the block.
A better option, implemented here, is to leverage data that we already have and that keeps the history of the transactions exchanged with other peers. This data is just a collection of transaction hashes that we have received or seen; if a transaction is in that collection, it has already been processed by Besu, so it can be skipped directly.
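A minimal sketch of the idea, with hypothetical names and String hashes standing in for Besu's actual Hash and Transaction types:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: skip transactions whose hash is already in the
// collection of hashes exchanged with peers.
class SeenTransactionHashes {
  private final Set<String> exchangedHashes = ConcurrentHashMap.newKeySet();

  // Called whenever a transaction hash is sent to or received from a peer.
  void recordExchanged(final String txHash) {
    exchangedHashes.add(txHash);
  }

  // If we have already exchanged this hash, the transaction was already
  // processed by the node and can be skipped without re-validating it.
  boolean alreadyProcessed(final String txHash) {
    return exchangedHashes.contains(txHash);
  }
}
```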
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
Currently Besu has limited support for sending NewPooledTransactionHashes messages and for the other aspects of reducing transaction synchronization traffic described in the Ethereum Wire Protocol version 66.
Specifically:
Besu only uses NewPooledTransactionHashes for new local transactions, while it could be extended to any transaction added to the transaction pool
Besu does not limit full transaction messages to a small fraction of the connected peers and send only the new transaction hashes to the remaining peers
This PR extends eth/66 support and does some code refactoring, removing some redundant code and renaming some classes to make clear they are related to the NewPooledTransactionHashes message.
The main changes are:
Do not keep a separate tracker for transaction hashes, since we can reuse PeerTransactionTracker, which tracks the full-transaction exchange history and sending queue for each peer. So PeerPendingTransactionTracker has been removed. --tx-pool-hashes-max-size is now deprecated, has no effect, and will be removed in a future release.
When a new peer connects, if it supports eth/65 or eth/66 then we send all the transaction hashes we have in the pool, otherwise we send the full transactions.
When new transactions are added to the pool, we send full transactions to peers without eth/65 or eth/66 support, or to a small fraction of all peers, and we send only transaction hashes to the remaining peers that support eth/65 or eth/66. Both transactions and transaction hashes are only sent if not already exchanged with that specific peer (see the sketch below).
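A rough sketch of that broadcast split; the peer interface, method names, and the sqrt fraction are illustrative assumptions, not Besu's actual API or configuration:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative sketch of splitting a transaction broadcast between full
// transactions and transaction hashes, depending on peer capabilities.
class TransactionBroadcastSketch {
  interface Peer {
    boolean supportsNewPooledTransactionHashes(); // eth/65 or eth/66
    void sendFullTransactions(List<String> txs);
    void sendTransactionHashes(List<String> txHashes);
  }

  void broadcast(final List<Peer> peers, final List<String> txs, final List<String> txHashes) {
    // Peers without eth/65 or eth/66 support always get the full transactions.
    peers.stream()
        .filter(Predicate.not(Peer::supportsNewPooledTransactionHashes))
        .forEach(peer -> peer.sendFullTransactions(txs));

    final List<Peer> capablePeers =
        peers.stream()
            .filter(Peer::supportsNewPooledTransactionHashes)
            .collect(Collectors.toList());
    // A small fraction of capable peers (sqrt here, purely as an example) also
    // gets the full transactions...
    final int fullTxPeers = (int) Math.ceil(Math.sqrt(capablePeers.size()));
    capablePeers.stream().limit(fullTxPeers).forEach(peer -> peer.sendFullTransactions(txs));
    // ...while the remaining capable peers only get the hashes.
    capablePeers.stream().skip(fullTxPeers).forEach(peer -> peer.sendTransactionHashes(txHashes));
  }
}
```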
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
PR that adds the different request tasks necessary for snap sync, as well as a utility to manage the ranges of requests
Signed-off-by: Karim TAAM <karim.t2am@gmail.com>
* Tune transaction worker queue capacity to adapt to mainnet
Currently Besu handles hundreds of transaction messages per second,
so having a queue of 1M messages results in a lot of expired messages;
it makes sense to reduce the capacity and drop incoming messages that could not
be handled anyway.
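A small sketch of the drop-on-full behaviour, using a generic bounded queue rather than Besu's actual worker queue:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: with a bounded queue, messages that arrive faster than
// they can be processed are dropped immediately instead of sitting in the queue
// until they expire.
class BoundedMessageQueue<T> {
  private final BlockingQueue<T> queue;

  BoundedMessageQueue(final int capacity) {
    this.queue = new ArrayBlockingQueue<>(capacity);
  }

  // Returns false (message dropped) when the queue is full.
  boolean enqueue(final T message) {
    return queue.offer(message);
  }

  T take() throws InterruptedException {
    return queue.take();
  }
}
```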
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Tune the number of transactions seen by a peer to adapt to mainnet traffic
Currently on mainnet there are many thousands of pending transactions
exchanged between peers, but Besu has a short memory of what has been
exchanged with a specific peer, with the result that the same transaction
is often exchanged back and forth with the same peer, especially when the
peer is another Besu.
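One way to picture the change is a larger bounded "seen" set per peer; the class, size, and eviction policy below are illustrative assumptions, not Besu's actual implementation:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: remember more of the hashes already exchanged with a
// peer, evicting the oldest entries, so the same transaction is not sent back
// and forth with that peer.
final class SeenByPeerCache {
  private final Set<String> seen;

  SeenByPeerCache(final int maxEntries) {
    this.seen =
        Collections.newSetFromMap(
            new LinkedHashMap<String, Boolean>(maxEntries, 0.75f, true) {
              @Override
              protected boolean removeEldestEntry(final Map.Entry<String, Boolean> eldest) {
                return size() > maxEntries;
              }
            });
  }

  // Returns true if the hash was not already known for this peer.
  boolean markSeen(final String txHash) {
    return seen.add(txHash);
  }

  boolean hasSeen(final String txHash) {
    return seen.contains(txHash);
  }
}
```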
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Improve trace log of transaction exchanges with other peers
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Update CHANGELOG
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* speculative use of backward sync in forkchoiceupdated
* reorg newPayload tests to not return INVALID_TERMINAL_BLOCK for as-yet-unknown parent hashes
Signed-off-by: garyschulte <garyschulte@gmail.com>
* Refactor TransactionPool to accept MiningParameters
* Check for zero GasPrice Frontier Transactions
* If you are not mining, your node could fill up with pending transactions.
* Make low-or-no-gas transactions viable for local transactions (see the sketch below).
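A minimal sketch of the acceptance rule described in these points; the class and parameters are hypothetical, not Besu's actual validator:

```java
import java.math.BigInteger;

// Illustrative sketch: accept low- or zero-gas-price transactions from local
// senders, while remote transactions must still meet the configured minimum.
final class GasPriceAcceptanceRule {
  private final BigInteger minGasPrice;

  GasPriceAcceptanceRule(final BigInteger minGasPrice) {
    this.minGasPrice = minGasPrice;
  }

  boolean accept(final BigInteger txGasPrice, final boolean isLocalSender) {
    if (isLocalSender) {
      // Low- or no-gas transactions from local senders stay viable.
      return true;
    }
    return txGasPrice.compareTo(minGasPrice) >= 0;
  }
}
```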
Signed-off-by: Antony Denyer <git@antonydenyer.co.uk>
Co-authored-by: garyschulte <garyschulte@gmail.com>
* handle TTD and mining pre- and post-merge without creating a regression in behavior for Clique networks
Signed-off-by: garyschulte <garyschulte@gmail.com>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
List of components: controller builder, protocol schedule,
coordinator, block creator and processor.
Code originally written by garyschulte on the merge branch.
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Add PostMergeContext that will stop syncing after the switch to PoS
Code originally written by garyschulte and gezero on the merge branch.
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Unit test for PostMergeContext
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Add CHANGELOG entry
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Addressing comments
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Fix test
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* More PostMergeContext unit tests
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Extend validateAndProcessBlock to return an error message in case of failure
This message can then be used to send a more detailed response to the RPC
caller of ExecutePayload API.
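A sketch of the shape such a result could take; the class below is hypothetical, not the actual Besu type:

```java
import java.util.Optional;

// Illustrative sketch: on failure the result carries a human-readable message
// that the RPC layer can forward to the ExecutePayload caller.
final class BlockProcessingResult {
  private final boolean successful;
  private final Optional<String> errorMessage;

  private BlockProcessingResult(final boolean successful, final Optional<String> errorMessage) {
    this.successful = successful;
    this.errorMessage = errorMessage;
  }

  static BlockProcessingResult success() {
    return new BlockProcessingResult(true, Optional.empty());
  }

  static BlockProcessingResult failed(final String reason) {
    return new BlockProcessingResult(false, Optional.of(reason));
  }

  boolean isSuccessful() {
    return successful;
  }

  Optional<String> getErrorMessage() {
    return errorMessage;
  }
}
```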
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
When asking for the forkId for the chain head sometimes we may return
null. In those cases return an empty list.
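A minimal sketch of the guard, with illustrative names:

```java
import java.util.Collections;
import java.util.List;

// Illustrative sketch: when no forkId can be computed for the chain head,
// return an empty list rather than null.
final class ForkIdGuard {
  static <T> List<T> asList(final T forkId) {
    return forkId == null ? Collections.emptyList() : Collections.singletonList(forkId);
  }
}
```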
Fixes #3343
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* Refactor to async retrieve blocks, and change peer when retrying to get a block
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Apply suggested changes
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Remove deprecated class
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Add more logs around the block synchronizer
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* First try to download the block from the peer that announced it
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Avoid re-downloading non-announced blocks
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Conditionally log at trace level
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Use max number of peers when retrying to download a block to try all peers
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Use the shared Slf4jLambdaHelper, instead of custom helper
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
* Bump SLF4J version
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace log4j2 API with SLF4j API
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace usage of LogManager#getFormatterLogger
This is for keeping compatibility with SLF4J. If necessary, a specific formatter can be created for the RlpBlockImporter class
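Roughly, call sites move from printf-style patterns to SLF4J placeholders; the class and messages below are illustrative, not the actual Besu code:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch of the substitution: Log4j2's formatter logger accepted
// printf-style patterns, while SLF4J uses {} placeholders.
final class RlpImportLoggingExample {
  private static final Logger LOG = LoggerFactory.getLogger(RlpImportLoggingExample.class);

  void report(final long blocks, final double seconds) {
    // Before (formatter logger): logger.info("Imported %,d blocks in %01.3fs", ...)
    // After (SLF4J): format explicitly where special formatting is needed.
    LOG.info("Imported {} blocks in {}s", blocks, String.format("%01.3f", seconds));
  }
}
```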
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Unset the default logging value for the retesteth
This is because it's not possible to resolve the root logger level into a Log4J2 field
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Prevent creation of Logger context outside SLF4J
org.hyperledger.besu.cli.BesuCommand#setAllLevels was taken from
https://github.com/apache/logging-log4j2/blob/rel%2F2.17.1/log4j-core/src/main/java/org/apache/logging/log4j/core/config/Configurator.java#L309
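A simplified sketch of the adapted logic, using the public log4j-core API; see the linked source for the original:

```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

// Illustrative sketch: update the existing LoggerConfigs through the current
// LoggerContext instead of letting Log4j2 create a new context.
final class LogLevelSetter {
  static void setAllLevels(final String parentLogger, final Level level) {
    final LoggerContext context = LoggerContext.getContext(false);
    final Configuration configuration = context.getConfiguration();
    configuration.getLoggerConfig(parentLogger).setLevel(level);
    for (final LoggerConfig loggerConfig : configuration.getLoggers().values()) {
      if (loggerConfig.getName().startsWith(parentLogger)) {
        loggerConfig.setLevel(level);
      }
    }
    context.updateLoggers();
  }
}
```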
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Add FATAL level deprecation message
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* [Sonar] Fix java:S2139
Exceptions should be either logged or rethrown but not both
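For illustration, the rule amounts to picking one of the two:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustration of java:S2139: either log the exception or rethrow it, not both,
// so the same failure is not reported twice up the call stack.
final class S2139Example {
  private static final Logger LOG = LoggerFactory.getLogger(S2139Example.class);

  void nonCompliant(final Runnable task) {
    try {
      task.run();
    } catch (final RuntimeException e) {
      LOG.error("Task failed", e); // logged here...
      throw e; // ...and likely logged again by whoever catches it
    }
  }

  void compliant(final Runnable task) {
    try {
      task.run();
    } catch (final RuntimeException e) {
      // Rethrow with context and let a single, higher-level handler log it.
      throw new IllegalStateException("Task failed", e);
    }
  }
}
```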
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* [Sonar] Fix java:S3457
Printf-style format strings should be used correctly
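For illustration:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustration of java:S3457: SLF4J uses {} placeholders, so printf-style
// specifiers or string concatenation in the message template are mistakes.
final class S3457Example {
  private static final Logger LOG = LoggerFactory.getLogger(S3457Example.class);

  void log(final String peerId, final int count) {
    // Non-compliant: "%s" is never substituted by SLF4J, and concatenation
    // builds the message even when the level is disabled.
    LOG.debug("Peer %s sent " + count + " messages", peerId);

    // Compliant: parameterized message with matching placeholders.
    LOG.debug("Peer {} sent {} messages", peerId, count);
  }
}
```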
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Add changelog
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Do not exit ChainDownloader in case of a generic CancellationException
The right way of halting a ChainDownloader is only via the cancel method;
no special action should be taken on a generic CancellationException, which
could be triggered by peer tasks that run in the pipeline, since in that
case the error is transient and the download should restart.
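A sketch of the intended control flow, with hypothetical names:

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: only an explicit cancel() stops the downloader; a
// CancellationException bubbling up from a peer task is treated as transient
// and the download is restarted.
final class ChainDownloadLoopSketch {
  private final AtomicBoolean cancelled = new AtomicBoolean(false);

  void cancel() {
    cancelled.set(true);
  }

  void run(final Runnable downloadPipeline) {
    while (!cancelled.get()) {
      try {
        downloadPipeline.run();
        return; // completed normally
      } catch (final CancellationException e) {
        if (cancelled.get()) {
          return; // halted via cancel(): exit for good
        }
        // Transient cancellation from a peer task inside the pipeline: restart.
      }
    }
  }
}
```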
Signed-off-by: Fabio Di Fabio <fabio.difabio@consensys.net>
When resuming fast sync after a previous incomplete run, we have to clear the old world state data to be sure children are persisted before parents.
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
This fixes an issue with the Istanbul protocol, which extends the Eth protocol, where devp2p messages were being sent with a null capability
Signed-off-by: Jason Frame <jasonwframe@gmail.com>
* Add option to enforce tx replay protection for local txs
* Only enforce replay protection if the current milestone supports it (see the sketch below)
* moved changelog entry to next release
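A minimal sketch of the check, with hypothetical names:

```java
import java.util.Optional;

// Illustrative sketch: when the option is enabled and the current milestone
// supports EIP-155 replay protection, reject local transactions that are not
// replay protected (i.e. signed without a chain id).
final class ReplayProtectionRule {
  private final boolean enforceForLocalTxs;

  ReplayProtectionRule(final boolean enforceForLocalTxs) {
    this.enforceForLocalTxs = enforceForLocalTxs;
  }

  boolean isAcceptable(
      final Optional<Long> txChainId,
      final boolean isLocalSender,
      final boolean milestoneSupportsReplayProtection) {
    if (!enforceForLocalTxs || !isLocalSender || !milestoneSupportsReplayProtection) {
      return true;
    }
    return txChainId.isPresent();
  }
}
```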
Signed-off-by: Meredith Baxter <meredith.baxter@palm.io>
Signed-off-by: Sally MacFarlane <sally.macfarlane@consensys.net>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>
* Fast sync should traverse the world state depth first
1. The pending requests queue in the world state downloader is now a different data structure. We now use a list of priority queues.
2. The node requests now know about the parent request that was responsible for spawning them.
3. When a node has data available and is asked to persist before its children are persisted, the node will not do anything. Instead, it will wait for all its children to persist first and will persist together with the last child.
Storing children before parents gives us the following benefits:
* We don't need to store pending requests on disk any more.
* When restarting download from a new pivot, we do not need to walk and check the whole tree any more.
And the following drawbacks:
* We now have pending nodes in memory for which we already downloaded data, but we do not store them in the database yet.
Overall expectations on performance:
We still need to download every single state node at least once, so there is no improvement there. We will save a significant amount of time in case we change pivots. And we save lots of reads/writes on the filesystem because tasks no longer need to be written to disk.
We want to avoid having too many pending unsaved nodes in memory, not to run out of it. If we were always handling only one request to our peers at the same time, we would not need to be worried, and we would just use a simple depth first search. Because we batch our requests, we might produce too many pending unprocessed nodes in memory if we are not careful about the order of processing requests. That is where the priority on node request comes from. We want to always process nodes lower in the tree before nodes higher in the tree, and preferably we want to first process children from the same parent so that we can save the current unsaved parent as soon as possible.
At the moment, I still left in the code several artefacts that I use for debugging the behaviour. I am planning to get rid of most of these counters, feel free to point them out in the review. There is for instance a weird counter in the NodeDataRequest class that I am using to monitor the total amount of unsaved nodes. In case pending unsaved node count rises too high, there is a warning printed into the logs. At the moment of writing, I would expect the counter to stay below 10 000 generally and not rise above 20 000 nodes. If you saw the number rise to for instance 100 000 that would signify a bug.
Similarly, because of the order of processing of the nodes, we no longer need to store a huge number of requests on disk, and the whole list fits comfortably into memory. Without batching, we would not have more than a thousand requests waiting around. Because of the batching, we can see the number of requests occasionally rise all the way up to 300 000, but it should usually stay under 200 000.
Note that at any time there should not be more pending unsaved blocks than pending requests. Such a situation would be a bug to be reported.
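A condensed sketch of the two mechanisms described above (parent links with deferred persistence, and depth-based priority); the names below are illustrative, not the actual NodeDataRequest API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: requests know their parent, deeper nodes sort first in a
// priority queue, and a node is persisted only once all of its children have
// been persisted, so children always reach the database before their parent.
final class NodeRequestSketch implements Comparable<NodeRequestSketch> {
  interface Persistence {
    void store(byte[] nodeData);
  }

  final NodeRequestSketch parent;
  final int depth;
  private final List<NodeRequestSketch> pendingChildren = new ArrayList<>();
  private byte[] data; // downloaded but possibly not yet persisted

  NodeRequestSketch(final NodeRequestSketch parent) {
    this.parent = parent;
    this.depth = parent == null ? 0 : parent.depth + 1;
    if (parent != null) {
      parent.pendingChildren.add(this);
    }
  }

  // Deeper nodes come first out of a PriorityQueue, approximating depth-first order.
  @Override
  public int compareTo(final NodeRequestSketch other) {
    return Integer.compare(other.depth, this.depth);
  }

  void onDataReceived(final byte[] nodeData, final Persistence persistence) {
    this.data = nodeData;
    tryPersist(persistence);
  }

  private void tryPersist(final Persistence persistence) {
    if (data == null || !pendingChildren.isEmpty()) {
      return; // wait until the data is here and every child has been persisted
    }
    persistence.store(data);
    if (parent != null) {
      parent.pendingChildren.remove(this);
      parent.tryPersist(persistence); // the last child triggers the parent
    }
  }
}
```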
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Fixed failing test
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Improving test coverage
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* lots of errorprone fixes
* some license updates
* some mockito updates
* upgrade the rocksdb version
* Prometheus left at 0.9.0 as 0.10.0+ introduces OpenMetrics
related changes that break unit tests.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* compile issues sorted, some tests failing
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* closing runnerBehind closes the vertx shared with runnerAhead, which now throws an exception
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* checkpoint when 4/5 websocket login tests pass
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* exp moved to attribute from principal
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* fixed more tests
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* fixed more tests
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* exception handling test improvement
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* exception handling test improvement
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* static renamed
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* We want the old implementation of host()
The new Vert.x handles the forwarded headers and modifies host(). In the
process Vert.x loses track of the port from the Host header in case the port was
not a string.
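A sketch of the workaround under that assumption, reading the raw Host header and falling back to Vert.x's host():

```java
import io.vertx.core.http.HttpServerRequest;

// Illustrative sketch: read the raw Host header rather than host(), which the
// forwarded-header handling may rewrite without the original port.
final class HostHeaderHelper {
  static String hostAndPort(final HttpServerRequest request) {
    final String hostHeader = request.getHeader("Host");
    return hostHeader != null ? hostHeader : request.host();
  }
}
```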
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* adding dependency on jackson-databind for tests
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* making sure changes are spotless
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Dealing with regression
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* fixing last failing vert.x test hopefully
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* removed commented out code
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* reverts debugging adjustment
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* removed commented out code
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* minor whitespace cleanup
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* (internal) Refactor 'onchain' to 'flexible' where applicable (#3075)
* CLI option name change
Signed-off-by: Frank Li <b439988l@gmail.com>
* refactor privacyparameters.java and add deprecation warning
Signed-off-by: Frank Li <b439988l@gmail.com>
* more refactoring
Signed-off-by: Frank Li <b439988l@gmail.com>
* add to everything.toml
Signed-off-by: Frank Li <b439988l@gmail.com>
* bugs
Signed-off-by: Frank Li <b439988l@gmail.com>
* more missing variable names
Signed-off-by: Frank Li <b439988l@gmail.com>
* more classes
Signed-off-by: Frank Li <b439988l@gmail.com>
* more classes
Signed-off-by: Frank Li <b439988l@gmail.com>
* fix compile error
Signed-off-by: Frank Li <b439988l@gmail.com>
* add new test to invalidate passing both commands
Signed-off-by: Frank Li <b439988l@gmail.com>
* more refactoring + more tests
Signed-off-by: Frank Li <b439988l@gmail.com>
* new batch
Signed-off-by: Frank Li <b439988l@gmail.com>
* final batch?
Signed-off-by: Frank Li <b439988l@gmail.com>
* failing unit test
Signed-off-by: Frank Li <b439988l@gmail.com>
* revert incorrect refactoring back to onchain
Signed-off-by: Frank Li <b439988l@gmail.com>
* fix unit test
Signed-off-by: Frank Li <b439988l@gmail.com>
* comment
Signed-off-by: Frank Li <b439988l@gmail.com>
* comment
Signed-off-by: Frank Li <b439988l@gmail.com>
* support both privx methods
Signed-off-by: Frank Li <b439988l@gmail.com>
* add to changelog
Signed-off-by: Frank Li <b439988l@gmail.com>
* address comment
Signed-off-by: Frank Li <b439988l@gmail.com>
* add plugin privacy
Signed-off-by: Frank Li <b439988l@gmail.com>
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* checkpoint when 4/5 websocket login tests pass
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* trying to figure out how to decouple this test
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* spotless
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* removes Orion from integration test
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* spotless
Signed-off-by: Justin Florentine <justin+github@florentine.us>
* Add jackson dependency to merge module
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Migrate JWTAuthOptions creations for public keys
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Check http client response status
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace Orion with Tessera in tests
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Change Tessera expected error messages in tests
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Change executor of integrationTests to allow spawning Docker processes
Signed-off-by: Diego López León <dieguitoll@gmail.com>
Co-authored-by: Justin Florentine <justin+github@florentine.us>
Co-authored-by: Jiri Peinlich <jiri.peinlich@gmail.com>
Co-authored-by: Frank Li <39414003+frankisawesome@users.noreply.github.com>
Co-authored-by: Sally MacFarlane <sally.macfarlane@consensys.net>