* Use UnformattedDataImpl as a DelegatingBytes subclass, so it can be used throughout and reduce the churn of new objects
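A rough sketch of the pattern (the constructor shown is illustrative, not necessarily Besu's exact signature): a DelegatingBytes subclass wraps an existing Bytes value instead of copying it into a new object.

```java
import org.apache.tuweni.bytes.Bytes;
import org.apache.tuweni.bytes.DelegatingBytes;

// Illustrative sketch: expose unformatted data as a thin view over an existing
// Bytes instance, so no new byte copies are created when it is passed around.
public class UnformattedDataImpl extends DelegatingBytes {

  public UnformattedDataImpl(final Bytes delegate) {
    super(delegate); // delegate, don't copy
  }
}
```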
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Wrap errors with an Enclave exception only when processing a privacy marker transaction
Signed-off-by: Antony Denyer <email@antonydenyer.co.uk>
Co-authored-by: Lucas Saldanha <lucascrsaldanha@gmail.com>
Refactor uses of BlockchainQueries so that they use a single instance
instead of creating one in every place it is needed.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* [BESU-122] Index tx log bloom bits and use the index for queries.
This comes in two parts: first a CLI program to generate the log bloom
indexes, then updating BlockchainQueries to use the indexes if present.
First, to create the bloom index on a synced node (for example Goerli):
`bin/besu --network=goerli --data-path /tmp/goerli operator generate-log-bloom-cache`
There are options for where to start and stop. I estimate 15-30 minutes
for mainnet.
The RPCs should magically use the indexes now. Note that the last
fragment of 100K blocks is not indexed and uses the old paths.
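As a rough sketch of the fragment layout described above (a purely illustrative helper, using the 100K-block fragment size mentioned here):

```java
// Illustrative sketch only: map a block number to its bloom-cache fragment and
// fall back to the old, unindexed path for the trailing (incomplete) fragment.
public final class BloomCacheFragments {
  static final long FRAGMENT_SIZE = 100_000L;

  static long fragmentIndex(final long blockNumber) {
    return blockNumber / FRAGMENT_SIZE;
  }

  static boolean isIndexed(final long blockNumber, final long chainHeadNumber) {
    // Blocks in the head's (partial) fragment are not indexed yet.
    return fragmentIndex(blockNumber) < fragmentIndex(chainHeadNumber);
  }
}
```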
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
Vert.x by default runs all calls to executeBlocking in order. As a side
effect all requests are handled single threaded, even across multiple
clients. Because JSON-RPC requests carry identifiers, responses do not need
to be returned in order and can be answered as they complete. This also
allows multiple threads to handle requests, increasing throughput.
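A minimal sketch of the change, using Vert.x's `executeBlocking` overload that takes an `ordered` flag (the request handler here is a stand-in):

```java
import io.vertx.core.Vertx;

public class UnorderedBlockingExample {
  public static void main(String[] args) {
    final Vertx vertx = Vertx.vertx();

    // ordered = false: blocking handlers may run concurrently and complete out
    // of order; JSON-RPC ids let clients match responses to requests.
    vertx.executeBlocking(
        promise -> promise.complete(handleJsonRpcRequest()),
        false,
        result -> {
          System.out.println("response: " + result.result());
          vertx.close();
        });
  }

  private static String handleJsonRpcRequest() {
    return "{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":\"0x0\"}"; // stand-in
  }
}
```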
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* NewBlockHeaders performance improvement
When sending new block headers to the websocket subscribers we serialized
the block once per subscriber. Each serialization involved crypto calls, so
we were CPU bound on redundant calculations.
We can memoize the result and only serialize it once per block.
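A minimal sketch of the memoization, using Guava's `Suppliers.memoize` (the serialization method is a stand-in, not the actual Besu code):

```java
import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

public class NewBlockHeadersPayload {
  private final Supplier<String> serializedHeader;

  public NewBlockHeadersPayload(final Object blockHeader) {
    // Serialize (and hash) at most once, no matter how many subscribers ask.
    this.serializedHeader = Suppliers.memoize(() -> serialize(blockHeader));
  }

  public String payloadForSubscriber() {
    return serializedHeader.get(); // every subscriber reuses the same result
  }

  private static String serialize(final Object blockHeader) {
    return blockHeader.toString(); // stand-in for the expensive serialization
  }
}
```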
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* Add warning when enabling PRIV/EEA APIs with privacy disabled
* Prevent execution of PRIV/EEA methods when privacy is disabled (see the sketch below)
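A minimal sketch of the guard, with hypothetical names rather than Besu's actual wiring:

```java
// Hypothetical sketch: a PRIV/EEA JSON-RPC method refuses to run when privacy
// is disabled, returning a clear error instead of failing later.
public class PrivMethodGuardSketch {
  private final boolean privacyEnabled;

  public PrivMethodGuardSketch(final boolean privacyEnabled) {
    this.privacyEnabled = privacyEnabled;
  }

  public String response(final String request) {
    if (!privacyEnabled) {
      return "{\"error\":\"privacy is not enabled\"}";
    }
    return execute(request);
  }

  private String execute(final String request) {
    return "{\"result\":\"0x0\"}"; // stand-in for the real method body
  }
}
```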
Signed-off-by: Lucas Saldanha <lucas.saldanha@consensys.net>
* Multi-Tenancy: No longer specify a public key when requesting a payload from Orion, so that all private keys are tried when decrypting the encrypted payload.
Signed-off-by: Stefan Pingel <stefan.pingel@consensys.net>
Add support for external GPU mining via the stratum protocol.
Three new CLI Options support this: `--miner-stratum-enabled`,
`--miner-stratum-host`, and `--miner-stratum-port`.
To use stratum, first enable mining with the `--miner-enabled` option and add
the `--miner-stratum-enabled` option. This disables local CPU mining and opens up
a stratum server, configurable via `--miner-stratum-host` (default is
`0.0.0.0`) and `--miner-stratum-port` (default is 8008). This server supports
`stratum+tcp` mining, and the JSON-RPC services (if enabled) also support the
`eth_getWork` and `eth_submitWork` calls (for the `getwork` or `http` schemes).
This is known to work with ethminer.
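For example, combining these flags (host and port shown are the defaults; the usual mining options, such as a coinbase address, may also be required):
`bin/besu --network=goerli --data-path /tmp/goerli --miner-enabled --miner-stratum-enabled --miner-stratum-host=0.0.0.0 --miner-stratum-port=8008`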
Signed-off-by: Antoine Toulme <antoine@lunar-ocean.com>
* Rework how filter and log query parameters are created/used
We used a `FilterParameter` that held strings in places where we could
create strongly typed objects. We also used it in places where we only
wanted a subset of its descriptiveness, namely, the `LogsQuery` part of
it.
* deserialize directly into `LogsQuery`, which is useful for log pub/sub
* narrow uses of `FilterParameter` to `LogsQuery` where possible
* make `FilterParameter` hold strongly typed `Address`es and `LogTopic`s (see the sketch below)
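A minimal sketch of what "strongly typed" means here, with illustrative types rather than Besu's exact APIs:

```java
import java.util.List;

// Illustrative sketch only: hold parsed Address/LogTopic values instead of raw
// strings, so hex validation happens once, at deserialization time.
public final class LogsQuerySketch {

  public static final class Address {
    final byte[] bytes;
    Address(final byte[] bytes) { this.bytes = bytes; }
  }

  public static final class LogTopic {
    final byte[] bytes;
    LogTopic(final byte[] bytes) { this.bytes = bytes; }
  }

  private final List<Address> addresses;
  private final List<List<LogTopic>> topics; // a null entry = wildcard at that position

  public LogsQuerySketch(final List<Address> addresses, final List<List<LogTopic>> topics) {
    this.addresses = addresses;
    this.topics = topics;
  }
}
```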
Signed-off-by: Ratan Rai Sur <ratan.r.sur@gmail.com>
Rename eea_getTransactionCount to priv_getEeaTransactionCount
Signed-off-by: Stefan Pingel <stefan.pingel@consensys.net>
Signed-off-by: Jason Frame <jasonwframe@gmail.com>
Use the bloombits for logs queries, so we only have to walk headers
and not every receipt on a large query.
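A rough sketch of the idea (method names are illustrative, not Besu's exact API): check the header's logs bloom first, and only load receipts for blocks that might match.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: walk headers and consult the logs bloom before reading receipts.
public final class BloomFilteredLogWalk {

  interface Header {
    boolean bloomMightContain(byte[] topic); // cheap, header-only check

    List<String> loadMatchingLogs(byte[] topic); // expensive: reads receipts
  }

  static List<String> logsFor(final List<Header> headers, final byte[] topic) {
    return headers.stream()
        .filter(h -> h.bloomMightContain(topic)) // skip blocks the bloom rules out
        .flatMap(h -> h.loadMatchingLogs(topic).stream()) // only now touch receipts
        .collect(Collectors.toList());
  }
}
```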
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
An error was detected (PAN-3248) whereby, if "null" appeared in a log topic
filter, it and all subsequent filters were lost (and thus were not used to
filter responses), so Besu returned too many results because the filters were
less restrictive than requested.
This was traced to an issue in the TopicParameterDeserialiser, which is
resolved in this commit.
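A minimal sketch of the corrected behaviour (matching logic simplified, not the actual deserializer): a null entry is a wildcard for that position, and the filters after it must still be applied.

```java
import java.util.List;
import java.util.Objects;

// Illustrative sketch: a null topic matches anything at that position, but the
// later positions are still checked (the bug dropped them).
public final class TopicMatchSketch {

  static boolean matches(final List<String> logTopics, final List<String> queryTopics) {
    for (int i = 0; i < queryTopics.size(); i++) {
      final String wanted = queryTopics.get(i);
      if (wanted == null) {
        continue; // wildcard: keep checking the remaining positions
      }
      if (i >= logTopics.size() || !Objects.equals(wanted, logTopics.get(i))) {
        return false;
      }
    }
    return true;
  }
}
```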
Signed-off-by: Trent Mohay <37158202+rain-on@users.noreply.github.com>
Upgrade dependencies except rocksdb (needs burn-in testing),
picocli (reorders options), gradle (causes build server breakage), and
web3j (test failures).
* Awaitility removed its Duration object and instead uses java.time (example below)
* Jackson stopped throwing a checked exception for one API
* spotless now enforces gradle formatting checks (yea!)
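For instance, with the Awaitility change a test passes a `java.time.Duration` directly (a minimal sketch, not code from this repository):

```java
import static org.awaitility.Awaitility.await;

import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;

public class AwaitilityDurationExample {
  public static void main(String[] args) {
    final AtomicBoolean done = new AtomicBoolean();
    new Thread(() -> done.set(true)).start();

    // Awaitility's own Duration class is gone; java.time.Duration is used instead.
    await().atMost(Duration.ofSeconds(5)).untilTrue(done);
  }
}
```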
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
We had two mostly identical classes used for GraphQL and JSON-RPC/WS.
This PR merges them into one class.
* Move from org.hyperledger.besu.ethereum.api.json.internal.queries to
org.hyperledger.besu.ethereum.api.query
* Add one method from the GraphQL version
(generateLogWithMetadataForTransaction)
* Remove the GraphQL version and point GraphQL at the shared version.
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
We should not send a sync status notification for every forking block state
update; currently we send status updates for detected forks as well as new
canonical heads. Instead we should only send a syncing message when the
status changes or when we reorg the chain.
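A minimal sketch of notifying only on change (names are illustrative):

```java
import java.util.Objects;
import java.util.function.Consumer;

// Illustrative sketch: remember the last status sent and notify subscribers
// only when the syncing status actually changes.
public final class SyncStatusNotifier {
  private final Consumer<String> subscriber;
  private String lastStatus;

  public SyncStatusNotifier(final Consumer<String> subscriber) {
    this.subscriber = subscriber;
  }

  public synchronized void onStatus(final String status) {
    if (!Objects.equals(status, lastStatus)) { // skip duplicate fork-block updates
      lastStatus = status;
      subscriber.accept(status);
    }
  }
}
```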
Signed-off-by: Danno Ferrin <danno.ferrin@gmail.com>
* Add SPDX license identifiers and update the check for them; remove the license check from spotless
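For reference, the header the check looks for is a single comment at the top of each file, e.g. (assuming the project's Apache-2.0 license):

```java
// SPDX-License-Identifier: Apache-2.0
```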
Signed-off-by: Joshua Fernandes <joshua.fernandes@consensys.net>
* Change CheckSpdxHeader to a task.
Signed-off-by: Adrian Sutton <adrian.sutton@consensys.net>