* migrated services to JUnit 5
* removed some JUnit 4 engine imports
* updated some plugin tests, since these extend from the testutil KV storage
* one more form of EPL v2
Signed-off-by: Sally MacFarlane <macfarla.github@gmail.com>
---------
Signed-off-by: Sally MacFarlane <macfarla.github@gmail.com>
* update to 2.4.1
* update use of the DNS daemon with Vert.x
* fix issue with Bytes.repeat
* update ANTLR version
* fix DNS tests
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
---------
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
Co-authored-by: Sally MacFarlane <macfarla.github@gmail.com>
- Added missing javadocs so that javadoc doclint passes against JDK 17 (invoked by the Besu Gradle build); a sketch of the style is shown after this list.
- Exclude the following packages from javadoc lint:
  org.hyperledger.besu.privacy.contracts.generated
  org.hyperledger.besu.tests.acceptance.*
- Temporarily exclude the ethereum and evm submodules from doc lint checks.
- Run the javadoc task via GitHub Actions (using Java 17) to report any javadoc errors during PR builds.
- Update plugin-api build.gradle with the new hash, as the javadoc comments caused it to change.
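A minimal sketch of the kind of doclint-clean javadoc being added; the class and method below are illustrative placeholders, not actual Besu APIs:

```java
/** Illustrative placeholder class, not an actual Besu type. */
public class ExampleService {

  /**
   * Looks up a value by key.
   *
   * @param key the key to look up; must not be null
   * @return the stored value, or null if the key is absent
   */
  public String lookup(final String key) {
    return null; // placeholder body
  }
}
```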
Signed-off-by: Usman Saleem <usman@usmans.info>
Change the unit test execution to use the JUnit 5 JUnitPlatform. This
allows a mix of JUnit 4 and JUnit 5 tests and a gradual migration to
JUnit 5 instead of a big bang. One class depended on JUnit 4 exceptions
and was updated. Two tests depending on native libraries fail
gracefully on macOS (and only macOS).
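A hedged sketch of the kind of change implied for the class that depended on JUnit 4 exceptions: the JUnit 4 `@Test(expected = ...)` style is replaced by JUnit 5's `assertThrows`. The test below is illustrative, not an actual Besu test.

```java
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class ExampleExceptionTest {

  @Test
  void throwsOnInvalidInput() {
    // JUnit 4 style would be @Test(expected = NumberFormatException.class);
    // JUnit 5 asserts the exception explicitly at the call site.
    assertThrows(NumberFormatException.class, () -> Integer.parseInt("not a number"));
  }
}
```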
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* Bump SLF4J version
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace log4j2 API with SLF4j API
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Replace usage of LogManager#getFormatterLogger
This keeps compatibility with SLF4J. If necessary, a specific formatter can be created for the RlpBlockImporter class.
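A sketch of the substitution described, with an illustrative class name: Log4j2's formatter logger interpreted printf-style format strings, so with SLF4J those call sites either switch to `{}` placeholders or keep printf formatting via `String.format`.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ImportExample {
  private static final Logger LOG = LoggerFactory.getLogger(ImportExample.class);

  void report(final long blocks, final double seconds) {
    // Before (Log4j2 formatter logger): LOG.info("Imported %,d blocks in %01.3fs", blocks, seconds);
    // After, with SLF4J: either use {} placeholders...
    LOG.info("Imported {} blocks in {}s", blocks, seconds);
    // ...or keep printf formatting by delegating to String.format.
    LOG.info(String.format("Imported %,d blocks in %01.3fs", blocks, seconds));
  }
}
```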
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Unset the default logging value for the retesteth
This is because it's not possible to resolve the root logger level into a Log4j2 field.
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Prevent creation of Logger context outside SLF4J
org.hyperledger.besu.cli.BesuCommand#setAllLevels was taken from
https://github.com/apache/logging-log4j2/blob/rel%2F2.17.1/log4j-core/src/main/java/org/apache/logging/log4j/core/config/Configurator.java#L309
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Add FATAL level deprecation message
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* [Sonar] Fix java:S2139
Exceptions should be either logged or rethrown but not both
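A hedged illustration of the rule, with made-up method names: handle a failure in exactly one way, either log it and recover, or rethrow it with context, so the same error is not reported twice.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SonarS2139Example {
  private static final Logger LOG = LoggerFactory.getLogger(SonarS2139Example.class);

  // Non-compliant: the failure is logged here and will be logged again by
  // whichever caller eventually handles the rethrown exception.
  long parseAndRethrow(final String value) {
    try {
      return Long.parseLong(value);
    } catch (final NumberFormatException e) {
      LOG.error("Invalid value {}", value, e);
      throw e;
    }
  }

  // Compliant: rethrow with context and let a single caller decide how to log it.
  long parse(final String value) {
    try {
      return Long.parseLong(value);
    } catch (final NumberFormatException e) {
      throw new IllegalArgumentException("Invalid value " + value, e);
    }
  }
}
```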
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* [Sonar] Fix java:S3457
Printf-style format strings should be used correctly
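A hedged illustration with made-up method names: the format string must match the logging API in use, and with SLF4J that means `{}` placeholders rather than printf-style specifiers.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SonarS3457Example {
  private static final Logger LOG = LoggerFactory.getLogger(SonarS3457Example.class);

  void log(final String peer, final int count) {
    // Non-compliant: printf-style specifiers are not interpreted by SLF4J,
    // so this logs the literal "%d" and "%s".
    LOG.info("Received %d headers from %s", count, peer);

    // Compliant: SLF4J placeholder syntax.
    LOG.info("Received {} headers from {}", count, peer);
  }
}
```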
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Add changelog
Signed-off-by: Diego López León <dieguitoll@gmail.com>
* Fast sync should traverse the world state depth first
1. The pending requests queue in the world state downloader is now a different data structure. We now use a list of priority queues.
2. The node requests now know about the parent request that was responsible for spawning them.
3. When a node has data available and is asked to persist before its children are persisted, the node will not do anything. Instead, it will wait for all its children to persist first and will persist together with the last child.
Storing children before parents gives us the following benefits:
* We don't need to store pending requests on disk any more.
* When restarting download from a new pivot, we do not need to walk and check the whole tree any more.
And the following drawbacks:
* We now have pending nodes in memory for which we already downloaded data, but we do not store them in the database yet.
Overall expectations on performance:
We still need to download every single state node at least once, so there is no improvement there. We will save a significant amount of time when we change pivots. And we save many filesystem reads and writes because tasks no longer need to be written to disk.
We want to avoid having too many pending unsaved nodes in memory so that we do not run out of it. If we were always handling only one request to our peers at a time, we would not need to worry and could just use a simple depth-first search. Because we batch our requests, we might produce too many pending unprocessed nodes in memory if we are not careful about the order of processing requests. That is where the priority on node requests comes from. We want to always process nodes lower in the tree before nodes higher in the tree, and preferably we want to process children of the same parent first so that we can save the current unsaved parent as soon as possible.
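As an aside, a rough sketch of the ordering idea described above, using hypothetical types rather than the real Besu classes (the actual change keeps a list of priority queues, not the single queue shown here): requests record the parent that spawned them and their depth, and the pending queue hands out the deepest requests first so that children complete, and release their parent, as early as possible.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

class WorldStateOrderingSketch {

  /** Hypothetical stand-in for a node data request; not a Besu class. */
  static final class Request {
    final Request parent;   // request that spawned this one, null for the root
    final int depth;        // distance from the state root
    int pendingChildren;    // children that must persist before this node can

    Request(final Request parent, final int depth) {
      this.parent = parent;
      this.depth = depth;
    }
  }

  // Deeper nodes come out first: children are processed (and persisted) before
  // their parents, so completed subtrees can be flushed to the database early.
  private final PriorityQueue<Request> pending =
      new PriorityQueue<>(Comparator.comparingInt((Request r) -> r.depth).reversed());

  void enqueue(final Request request) {
    pending.add(request);
  }

  Request next() {
    return pending.poll();
  }
}
```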
At the moment, I have left several artefacts in the code that I use for debugging the behaviour. I plan to get rid of most of these counters; feel free to point them out in the review. There is, for instance, a weird counter in the NodeDataRequest class that I am using to monitor the total number of unsaved nodes. If the pending unsaved node count rises too high, a warning is printed to the logs. At the time of writing, I would expect the counter to generally stay below 10 000 and not rise above 20 000 nodes. If you saw the number rise to, for instance, 100 000, that would signify a bug.
Similarly, because of the order in which nodes are processed, we no longer need to store a huge number of requests on disk, and the whole list fits comfortably in memory. Without batching, we would not have more than a thousand requests waiting around. Because of the batching, the number of requests occasionally rises all the way up to 300 000, but it should usually stay under 200 000.
Note that at any time there should not be more pending unsaved nodes than pending requests; such a situation would be a bug and should be reported.
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Fixed failing test
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Improving test coverage
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Addressing review comments
Signed-off-by: Jiri Peinlich <jiri.peinlich@gmail.com>
* Upgrade to Apache Tuweni 2.0
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Remove intermediate repository
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Remove all occurrences of toBytes
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Migrate to tuweni-bytes
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* add changelog
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* correct reference tests
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Initial API changes
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* more changes
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Change APIs for VM ops
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Use constant UInt256.ONE
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Slightly optimize the address <-> word transformation
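For context, a hedged sketch of the address <-> word conversion in question, using Apache Tuweni types; this is illustrative, not the actual optimized Besu code: a 20-byte address left-pads to a 32-byte EVM word, and converting back drops the 12-byte prefix.

```java
import org.apache.tuweni.bytes.Bytes;
import org.apache.tuweni.bytes.Bytes32;

class AddressWordSketch {
  static Bytes32 toWord(final Bytes address) {
    // Pad the 20-byte address on the left with zeros to form a 32-byte word.
    return Bytes32.leftPad(address);
  }

  static Bytes toAddress(final Bytes32 word) {
    // Drop the 12 leading bytes to recover the 20-byte address.
    return word.slice(12, 20);
  }
}
```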
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* spotless
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
Remove all but four log4j2.xml config files:
* The main config for the Besu CLI app
* The config for the evmTool CLI app
* The config for acceptance tests
* A config in testUtil
If any tests depend on a log4j config file, they should import testutil.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Adding the Log4j "jul" (java.util.logging) adapter resulted in many
messages like this at startup:
`main INFO Registered Log4j as the java.util.logging.LogManager.`
These come from the Log4j status logger. We can get rid of those by
setting the status attribute on all configurations to a higher logging
level. WARN is the next higher level.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Upgrade to ErrorProne 2.4.0
* public constructors on abstract classes are removed
* Javadoc must have meaningful documentation
* lambdas should not be variables
* Entry and Type were added to the list of confusing inner class names
* no assert keyword in tests
* Obsolete JDK classes produce errors now
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Removes as many Gradle 7.0 compatibility issues as possible
* `baseName` -> `archiveBaseName`
* `extension` -> `archiveExtension`
* `destinationDir` -> `destinationDirectory`
* `runtime` -> `runtimeOnly`
* Change some log4j-api and log4j-core dependencies
* Remove an unneeded and outdated plugin (`net.ltgt.apt`)
* tweak the plugin-api change detector's property annotations.
Warnings still exist for one external plugin, used for license file
checking, whose source code we do not control.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Update dependencies to their most current versions
- except picocli, which is a major version update
Alphabetize dependencies
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* Add the SPDX-License-Identifier header and update the check for it; remove the license check from Spotless
Signed-off-by: Joshua Fernandes <joshua.fernandes@consensys.net>
* Change CheckSpdxHeader to a task.
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
* Ensure `plugin-api` module gets published at the correct maven path
* Move `plugins` to `plugin-api`
Signed-off-by: Edward Evans <edward.evans@consensys.net>
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
* Return plugin-api to the main repo
* Spotless
* Migrate all external plugin-api references to the project in this repo
* Add licence header
* Update repo reference for publish, even if commented
* Use real configuration for publishing plugin-api
This was tested by running the
`:plugins:publishMavenJavaPublicationToMavenLocal` task and checking the
local Maven repo to make sure it was using the correct paths
Signed-off-by: Edward Evans <edward.evans@consensys.net>
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
* Rich Data for Events Plugin
* plugin-api -> api
* use BinaryData as a root and add getValue to UInt256Value
* bring rich data changes in line with proposed APIs.
Undo orthogonal naming changes.
* add size
* use log and transaction interfaces from rich data api
* update to released plugin api version and add new event listener.
* add tests
* fix acceptance tests
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
Windows has some slightly different Java NIO semantics regarding size
after writing and whether files are deleted on close (they are not).
The first issue is that we shouldn't be using Integer.SIZE when we mean
Integer.BYTES.
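For reference, `Integer.SIZE` is the width in bits (32) while `Integer.BYTES` is the width in bytes (4), so using SIZE where a byte count is meant overstates lengths by a factor of eight. A tiny illustrative example:

```java
class SizeVsBytes {
  static int prefixedLength(final byte[] payload) {
    // Integer.SIZE is 32 (bits); Integer.BYTES is 4 (bytes).
    // A 4-byte length prefix needs BYTES, not SIZE.
    return Integer.BYTES + payload.length;
  }
}
```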
The second issue is that we cannot count on these work files showing up
in, or being deleted from, the file system consistently across
platforms. The ordering is consistent within a platform but not across
platforms. The test was rewritten to check the read and write file
numbers instead.
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
Replaces the RocksDB-based queue for pending world state download tasks with one that uses a simple file. Added tasks are appended to the file, while the reader starts from the beginning of the file and reads forward.
Periodically a new file is started to limit the disk space used; the reader deletes files it has finished reading.
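A heavily simplified, single-threaded sketch of the append/read-forward idea, omitting the file rolling, deletion, and crash-safety the real implementation needs; the class and method names are illustrative:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

class FileBackedTaskQueueSketch {
  private final DataOutputStream writer;
  private final DataInputStream reader;

  FileBackedTaskQueueSketch(final String path) throws IOException {
    this.writer = new DataOutputStream(new FileOutputStream(path, true)); // append at the end
    this.reader = new DataInputStream(new FileInputStream(path));         // read from the start
  }

  void enqueue(final byte[] task) throws IOException {
    // Each task is written as a length prefix followed by its bytes.
    writer.writeInt(task.length);
    writer.write(task);
    writer.flush();
  }

  byte[] dequeue() throws IOException {
    try {
      final byte[] task = new byte[reader.readInt()];
      reader.readFully(task);
      return task;
    } catch (final EOFException e) {
      return null; // nothing pending yet
    }
  }
}
```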
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
Move RocksDBStats to its own module. This also brings metrics into
metrics:core, since none of our other modules have nested modules; they
have peer modules instead.
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>
The biggest change is that UnusedVariable and UnusedMethod went to WARN
by default. Since our build is a no-warning build, this means we either
need to turn them off or fix them; I mostly opted for the latter. Test
code was mostly fixed, unused loggers were deleted, and other shipped
code was mostly suppressed.
Two less noisy fixes: stop using `SortedSet`, and use zero-based
comparable results instead of -1, 0, and 1. Also, a compiler nit in
ErrorProne was suppressed; per the description, it won't affect us.
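One reading of the zero-based comparison fix, shown as a hedged illustration: callers should test the sign of a `compareTo` result against zero rather than expecting exactly -1 or 1, which the contract does not guarantee.

```java
class CompareToZeroExample {
  static boolean isBefore(final String a, final String b) {
    // Before: a.compareTo(b) == -1   (compareTo only promises a negative value, not -1)
    // After: compare the sign against zero.
    return a.compareTo(b) < 0;
  }
}
```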
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>