* build: Update jacoco version to 0.8.11
* build: Enforce Java 21 and above check for build
* CI: Use Java 21 in GitHub CI workflows
* CI: Use Java 21 in CircleCI workflows
* build: Update gradle verification metadata for jacoco 0.8.11
* refactor: Fix javadoc-related warnings that apply to Java 21
* fix(test): BackwardSyncAlgSpec: slightly increase timeout so it passes in CI
---------
Signed-off-by: Usman Saleem <usman@usmans.info>
Upgrade spotless to 1.22.0 and reformat.
This is required for Java 21 support.
Signed-off-by: Danno Ferrin <danno@numisight.com>
Co-authored-by: Danno Ferrin <danno@numisight.com>
Co-authored-by: Sally MacFarlane <macfarla.github@gmail.com>
* migrated services tests to JUnit 5
* removed some JUnit 4 engine imports
* updated some plugins tests since they extend the testutil KV storage
* one more form of EPL v2
Signed-off-by: Sally MacFarlane <macfarla.github@gmail.com>
---------
Signed-off-by: Sally MacFarlane <macfarla.github@gmail.com>
- Added missing javadocs so that javadoc doclint passes against JDK 17 (invoked by the Besu Gradle build).
- Exclude the following packages from javadoc lint:
org.hyperledger.besu.privacy.contracts.generated
org.hyperledger.besu.tests.acceptance.*
- Temporarily exclude the ethereum and evm submodules from doc lint checks.
- Run the javadoc task via GitHub Actions (using Java 17) to report any javadoc errors during PR builds.
- Update plugin-api build.gradle with the new hash, as the javadoc comments caused it to change.
Signed-off-by: Usman Saleem <usman@usmans.info>
* Bump SLF4J version
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace the Log4j2 API with the SLF4J API
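For illustration, a minimal sketch of the call-site change involved (the class and log message here are hypothetical, not actual Besu code):
```java
// Before (Log4j2 API):
//   import org.apache.logging.log4j.LogManager;
//   import org.apache.logging.log4j.Logger;
//   private static final Logger LOG = LogManager.getLogger(ExampleService.class);

// After (SLF4J API, still backed by Log4j2 at runtime):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExampleService {
  private static final Logger LOG = LoggerFactory.getLogger(ExampleService.class);

  void onPeerConnected(final String peerId) {
    // Both APIs use {} placeholders, so most log statements migrate unchanged.
    LOG.debug("Connected to peer {}", peerId);
  }
}
```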
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace usage of LogManager#getFormatterLogger
This keeps compatibility with SLF4J. If necessary, a specific formatter can be created for the RlpBlockImporter class.
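A hedged sketch of what that replacement can look like (illustrative class and message, not the actual RlpBlockImporter code):
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BlockImportReporter {
  private static final Logger LOG = LoggerFactory.getLogger(BlockImportReporter.class);

  void reportProgress(final long blocks, final double seconds) {
    // LogManager.getFormatterLogger produced a printf-style logger:
    //   LOG.info("Imported %,d blocks in %,.3f seconds", blocks, seconds);
    // SLF4J only understands {} placeholders, so either switch to them ...
    LOG.info("Imported {} blocks in {} seconds", blocks, seconds);
    // ... or pre-format explicitly when the printf layout really matters.
    LOG.info(String.format("Imported %,d blocks in %,.3f seconds", blocks, seconds));
  }
}
```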
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Unset the default logging value for retesteth
This is because it is not possible to resolve the root logger level into a Log4j2 field.
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Prevent creation of Logger context outside SLF4J
org.hyperledger.besu.cli.BesuCommand#setAllLevels was taken from
https://github.com/apache/logging-log4j2/blob/rel%2F2.17.1/log4j-core/src/main/java/org/apache/logging/log4j/core/config/Configurator.java#L309
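Roughly, the borrowed helper looks like this (paraphrased from the linked Configurator source; treat it as a sketch rather than an exact copy):
```java
import java.util.Map;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

final class LogLevelSetter {
  /** Applies the given level to a parent logger and every logger config under it. */
  static void setAllLevels(final String parentLogger, final Level level) {
    final LoggerContext context = LoggerContext.getContext(false);
    final Configuration config = context.getConfiguration();
    config.getLoggerConfig(parentLogger).setLevel(level);
    for (final Map.Entry<String, LoggerConfig> entry : config.getLoggers().entrySet()) {
      if (entry.getKey().startsWith(parentLogger)) {
        entry.getValue().setLevel(level);
      }
    }
    context.updateLoggers();
  }
}
```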
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Add FATAL level deprecation message
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* [Sonar] Fix java:S2139
Exceptions should be either logged or rethrown but not both
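A small hypothetical illustration of the fix pattern (names are made up):
```java
import java.io.IOException;
import java.nio.file.Path;

final class ImportExample {
  // java:S2139 flags catch blocks that both log the exception and rethrow it,
  // which makes the same failure show up twice in the logs.
  void importOrFail(final Path path) {
    try {
      importBlocks(path);
    } catch (final IOException e) {
      // Instead of LOG.error(...) followed by a throw, wrap once with context
      // and let the caller decide how (and whether) to log it.
      throw new IllegalStateException("Unable to import blocks from " + path, e);
    }
  }

  private void importBlocks(final Path path) throws IOException {
    // Placeholder for the real import logic.
  }
}
```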
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* [Sonar] Fix java:S3457
Printf-style format strings should be used correctly
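A hypothetical before/after of what this check catches with SLF4J:
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class FormatExample {
  private static final Logger LOG = LoggerFactory.getLogger(FormatExample.class);

  void report(final long height, final String hash) {
    // java:S3457: don't mix printf-style specifiers or string concatenation
    // into the logging call; SLF4J only expands {} placeholders.
    //   LOG.info("Imported block %d with hash " + hash, height);   // flagged
    LOG.info("Imported block {} with hash {}", height, hash);
  }
}
```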
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Add changelog
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Fast sync should traverse the world state depth first
1. The pending requests queue in the world state downloader is now a different data structure. We now use a list of priority queues.
2. The node requests now know about the parent request that was responsible for spawning them.
3. When a node has data available and is asked to persist before its children are persisted, the node will not do anything. Instead, it will wait for all its children to persist first and will persist together with the last child.
Storing children before parents gives us the following benefits:
* We don't need to store pending requests on disk any more.
* When restarting download from a new pivot, we do not need to walk and check the whole tree any more.
And the following drawbacks:
* We now have pending nodes in memory for which we already downloaded data, but we do not store them in the database yet.
Overall expectations on performance:
We still need to download every single state node at least once, so there is no improvement there. We will save a significant amount of time whenever we change pivots. And we save a lot of filesystem reads and writes because tasks no longer need to be written to disk.
We want to avoid having too many pending unsaved nodes in memory so that we do not run out of it. If we were only ever handling one request to our peers at a time, we would not need to worry and could just use a simple depth-first search. Because we batch our requests, we might produce too many pending unprocessed nodes in memory if we are not careful about the order in which requests are processed. That is where the priority on node requests comes from: we always want to process nodes lower in the tree before nodes higher in the tree, and preferably process children of the same parent first so that we can save the current unsaved parent as soon as possible.
At the moment I have left several artefacts in the code that I use for debugging the behaviour. I am planning to get rid of most of these counters; feel free to point them out in the review. There is, for instance, a somewhat odd counter in the NodeDataRequest class that I am using to monitor the total number of unsaved nodes. If the pending unsaved node count rises too high, a warning is printed to the logs. At the time of writing I would expect the counter to generally stay below 10,000 and not rise above 20,000 nodes. If you saw the number rise to, for instance, 100,000, that would signify a bug.
Similarly, because of the order in which nodes are processed, we no longer need to store a huge number of requests on disk, and the whole list fits comfortably in memory. Without batching we would not have more than about a thousand requests waiting around. Because of the batching, the number of requests occasionally rises all the way up to 300,000, but it should usually stay under 200,000.
Note that at any time there should not be more pending unsaved nodes than pending requests. Such a situation would be a bug and should be reported.
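As a rough, illustrative sketch of the ordering described above (a single depth-keyed priority queue standing in for the actual list of priority queues, and none of the real Besu types):
```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.concurrent.atomic.AtomicInteger;

/** Simplified model: deeper nodes are processed first so unsaved parents drain quickly. */
final class NodeRequestSketch {
  final NodeRequestSketch parent;
  final int depth;
  final AtomicInteger pendingChildren = new AtomicInteger(0);
  byte[] data; // downloaded but not yet persisted

  NodeRequestSketch(final NodeRequestSketch parent) {
    this.parent = parent;
    this.depth = parent == null ? 0 : parent.depth + 1;
  }

  /** Persists only once the data is present and every child has been persisted. */
  void tryPersist() {
    if (data == null || pendingChildren.get() > 0) {
      return; // wait: the last child to persist will call back into this parent
    }
    // ... write `data` to the database here ...
    if (parent != null && parent.pendingChildren.decrementAndGet() == 0) {
      parent.tryPersist(); // persisting the last child also persists the parent
    }
  }
}

final class PendingRequestsSketch {
  // Deeper requests come out first, so children of the same parent are drained
  // before new subtrees are opened and unsaved parents do not pile up in memory.
  private final PriorityQueue<NodeRequestSketch> queue =
      new PriorityQueue<>(Comparator.comparingInt((NodeRequestSketch r) -> r.depth).reversed());

  void enqueue(final NodeRequestSketch request) {
    if (request.parent != null) {
      request.parent.pendingChildren.incrementAndGet();
    }
    queue.add(request);
  }

  NodeRequestSketch next() {
    return queue.poll();
  }
}
```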
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Fixed failing test
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Improving test coverage
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
Remove all but 4 log4j2.xml config files
* The main config for the besu CLI app
* The config for the evmTool CLI app
* The config for acceptance tests
* A config in testUtil
If any tests depend on a log4j file they should import testutil.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Adding the Log4j "jul" (java.util.logging) adapter resulted in many
messages like this at startup:
`main INFO Registered Log4j as the java.util.logging.LogManager.`
These come from the Log4j status logger. We can get rid of those by
setting the status attribute on all configurations to a higher logging
level. WARN is the next higher level.
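Concretely, that attribute sits on the root element of each remaining log4j2.xml (shown here with WARN, as described above):
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- status="WARN" silences the Log4j status logger's own INFO messages,
     e.g. "Registered Log4j as the java.util.logging.LogManager." -->
<Configuration status="WARN">
  <!-- appenders and loggers unchanged -->
</Configuration>
```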
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Upgrade to ErrorProne 2.4.0
* public constructors on abstract classes are removed
* Javadoc must have meaningful documentation
* lambdas should not be variables
* Added Entry and Type to the list of confusing inner class names
* no assert keyword in tests
* Obsolete JDK classes produce errors now
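Two hypothetical fragments for the checks that needed the most touch-ups:
```java
final class ErrorProneExamples {

  // "lambdas should not be variables": prefer a named method (or a method
  // reference) over storing a lambda in a constant.
  //   static final Function<String, Integer> LENGTH = s -> s.length();   // flagged
  static int length(final String s) {
    return s.length();
  }

  // "no assert keyword in tests": use the test framework's assertions
  // (e.g. assertThat(actual).isEqualTo(expected)) instead of `assert`,
  // which silently does nothing unless the JVM runs with -ea.
}
```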
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* add the SPDX-License-Identifier header and update the check for it; remove the license check from Spotless
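For reference, roughly the header form being added (Besu is Apache-2.0 licensed; placement, package, and class here are placeholders):
```java
/*
 * ... existing Apache 2.0 license header text ...
 *
 * SPDX-License-Identifier: Apache-2.0
 */
package org.example;

public final class ExampleClass {}
```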
Signed-off-by: Joshua Fernandes <joshua.fernandes@consensys.net>
* Change CheckSpdxHeader to a task.
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
Windows has some slightly different Java NIO semantics regarding size after writing and whether files are deleted on close (they are not).
The first issue is that we shouldn't be using Integer.SIZE when we mean
Integer.BYTES.
The second issue is that we cannot count on these work files showing up in, or
being deleted from, the file system consistently across platforms. The
ordering is consistent within a platform but not across platforms. The test was
rewritten to check the read and write file numbers instead.
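The SIZE/BYTES distinction, in a tiny hypothetical example:
```java
import java.nio.ByteBuffer;

final class SizeVsBytes {
  static ByteBuffer lengthPrefix(final int length) {
    // Integer.SIZE is 32 (bits); Integer.BYTES is 4 (bytes).
    // Allocating with Integer.SIZE silently over-allocates by a factor of eight.
    return ByteBuffer.allocate(Integer.BYTES).putInt(0, length);
  }
}
```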
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
Replaces the RocksDB based queue for pending world state download tasks with one that uses a simple file. Added tasks are appended to the file while the reader starts from the beginning of the file and reads forwards.
Periodically a new file is started to limit the disk space used. The reader deletes files it has completed reading.
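A greatly simplified sketch of that shape of queue (not the actual implementation; the real reader works incrementally rather than loading whole files):
```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayDeque;
import java.util.Deque;

/** Append-only, file-backed FIFO: writes go to the newest file, reads drain the oldest. */
final class FileBackedTaskQueue {
  private static final int MAX_TASKS_PER_FILE = 100_000; // roll over past this

  private final Path directory;
  private final Deque<Path> files = new ArrayDeque<>();
  private final Deque<String> readBuffer = new ArrayDeque<>();
  private int tasksInNewestFile = 0;

  FileBackedTaskQueue(final Path directory) {
    this.directory = directory;
  }

  synchronized void enqueue(final String task) throws IOException {
    if (files.isEmpty() || tasksInNewestFile >= MAX_TASKS_PER_FILE) {
      files.addLast(Files.createTempFile(directory, "tasks-", ".dat"));
      tasksInNewestFile = 0;
    }
    // Added tasks are appended to the end of the newest file.
    Files.writeString(files.peekLast(), task + "\n", StandardCharsets.UTF_8,
        StandardOpenOption.APPEND);
    tasksInNewestFile++;
  }

  /** Returns the next task, or null when the queue is empty. */
  synchronized String dequeue() throws IOException {
    while (readBuffer.isEmpty() && !files.isEmpty()) {
      // The reader works from the oldest file and deletes it once fully read,
      // which is what bounds the disk space used.
      final Path oldest = files.removeFirst();
      readBuffer.addAll(Files.readAllLines(oldest, StandardCharsets.UTF_8));
      Files.delete(oldest);
    }
    return readBuffer.pollFirst();
  }
}
```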
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
The biggest change is that UnusedVariable and UnusedMethod now default to WARN.
Since our build is a no-warning build, this means we either need to turn them
off or fix them. I mostly opted for the latter. Test code was mostly fixed,
unused loggers were deleted, and other shipped code was mostly suppressed.
Two less noisy fixes: stop using `SortedSet`, and use zero-based comparable
results instead of -1, 0, and 1. A compiler nit in ErrorProne was also
suppressed; per its description it won't affect us.
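For the comparable-results point, a hypothetical before/after:
```java
final class ComparatorExample {
  // A comparator's contract only cares about the sign relative to zero, so use
  // the JDK helper instead of a hand-rolled -1 / 0 / 1 chain:
  //   return height1 < height2 ? -1 : height1 > height2 ? 1 : 0;   // old style
  static int compareByHeight(final long height1, final long height2) {
    return Long.compare(height1, height2);
  }
}
```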
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>