feat: AnyTrust and Celestia support as DA for Arbitrum batches (#10144)
* Initial version of x-level messages indexer
* fixes for cspell and credo
* new state of x-level messages
* Monitoring of new L1-to-L2 messages on L1
* new batches discovery
* fetcher workers in separate modules
* proper name
* Fix for responses without "id", e.g. "Too Many Requests"
* update DB with new batches and corresponding data
* update DB with confirmed blocks
* fixes for cspell and credo
* tracking commitments confirmations for L1 to L2 messages
* Proper use of max function
* tracking completion of L2 to L1 messages
* catchup historical messages to L2
* incorrect version of committed file
* catchup historical messages from L2 and completion of L1-to-L2 messages
* historical batches catchup
* status for historical l2-to-l1 messages
* address matching issue
* catchup historical executions of L2-to-L1 messages
* db query to find unconfirmed blocks gaps
* first changes to catchup historical confirmations
* finalized catchup of historical confirmations
* 4844 blobs support
* fix for the issue with multiple confirmations
* limit amount of batches to handle at once
* Use latest L1 block by fetchers if start block is not configured
* merge issue fix
* missed file
* historical messages discovery
* reduce logs severity
* first iteration to improve documentation for new functionality
* second iteration to improve documentation for new functionality
* third iteration to improve documentation for new functionality
* fourth iteration to improve documentation for new functionality
* fifth iteration to improve documentation for new functionality
* final iteration to improve documentation for new functionality
* Arbitrum related info in Transaction and Block views
* Views to get info about batches and messages
* usage of committed for batches instead of confirmed
* merge issues addressed
* merge issues addressed
* code review issues addressed
* code review issues addressed
* fix merge issue
* raising exception in the case of DB inconsistency
* fix formatting issue
* termination case for RollupMessagesCatchup
* code review comments addressed
* code review comments addressed
* consistency in primary keys
* dialyzer fix
* code review comments addressed
* missed doc comment
* code review comments addressed
* changes after merge
* formatting issue fix
* block and transaction views extended
* updated indices creation as per code review comments
* code review comment addressed
* fix merge issue
* configuration of intervals as time variables
* TODO added to reflect improvement ability
* database fields refactoring
* association renaming
* associations and fields in api response renamed
* format issue addressed
* feat: APIv2 endpoints for Arbitrum messages and batches (#9963)
* Arbitrum related info in Transaction and Block views
* Views to get info about batches and messages
* usage of committed for batches instead of confirmed
* merge issues addressed
* changes after merge
* formatting issue fix
* code review comment addressed
* associations and fields in api response renamed
* format issue addressed
* feat: Arbitrum-specific fields in the block and transaction API endpoints (#10067)
* Arbitrum related info in Transaction and Block views
* Views to get info about batches and messages
* usage of committed for batches instead of confirmed
* merge issues addressed
* changes after merge
* formatting issue fix
* block and transaction views extended
* code review comment addressed
* associations and fields in api response renamed
* format issue addressed
* fix credo issue
* fix test issues
* ethereumjsonrpc test fail investigation
* test issues fixes
* initial version to get DA information from batch transactions
* merge issues fix
* keep discovered da information in db
* show the batch data source in API response
* formatting, spelling and credo issues
* Documentation and specs improved
* covered a case with empty extra data
* API endpoints updated
* changed order of params for celestia
* more robust string hash identification
* duplicated alias removed
* missed field in the type documentation
* mapset used instead of map
* comments for unfolding results of getKeysetCreationBlock call
* common function to get data key for Celestia blobs
parent a5a7ebbba2
commit 3c268d2196
@@ -0,0 +1,105 @@
defmodule Explorer.Chain.Arbitrum.DaMultiPurposeRecord do
  @moduledoc """
    Models a multi-purpose record related to Data Availability for Arbitrum.

    Changes in the schema should be reflected in the bulk import module:
    - Explorer.Chain.Import.Runner.Arbitrum.DaMultiPurposeRecords

    Migrations:
    - Explorer.Repo.Arbitrum.Migrations.AddDaInfo
  """

  use Explorer.Schema

  alias Explorer.Chain.Hash

  alias Explorer.Chain.Arbitrum.L1Batch

  @optional_attrs ~w(batch_number)a

  @required_attrs ~w(data_key data_type data)a

  @allowed_attrs @optional_attrs ++ @required_attrs

  @typedoc """
    Descriptor of a multi-purpose record related to Data Availability for Arbitrum rollups:
    * `data_key` - The hash of the data key.
    * `data_type` - The type of the data.
    * `data` - The data.
    * `batch_number` - The number of the Arbitrum batch associated with the data for the
      records where applicable.
  """
  @type to_import :: %{
          data_key: binary(),
          data_type: non_neg_integer(),
          data: map(),
          batch_number: non_neg_integer() | nil
        }

  @typedoc """
    * `data_key` - The hash of the data key.
    * `data_type` - The type of the data.
    * `data` - The data to be stored as JSON in the database.
    * `batch_number` - The number of the Arbitrum batch associated with the data for the
      records where applicable.
    * `batch` - An instance of `Explorer.Chain.Arbitrum.L1Batch` referenced by `batch_number`.
  """
  @primary_key false
  typed_schema "arbitrum_da_multi_purpose" do
    field(:data_key, Hash.Full)
    field(:data_type, :integer)
    field(:data, :map)

    belongs_to(:batch, L1Batch,
      foreign_key: :batch_number,
      references: :number,
      type: :integer
    )

    timestamps()
  end

  @doc """
    Validates that the `attrs` are valid.
  """
  @spec changeset(Ecto.Schema.t(), map()) :: Ecto.Schema.t()
  def changeset(%__MODULE__{} = da_records, attrs \\ %{}) do
    da_records
    |> cast(attrs, @allowed_attrs)
    |> validate_required(@required_attrs)
    |> foreign_key_constraint(:batch_number)
    |> unique_constraint(:data_key)
  end
end

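For orientation, the changeset above can be exercised directly; a minimal sketch, with every attribute value invented for illustration (data_type 0 is the blob-descriptor type used by the DA fetchers further down):

alias Explorer.Chain.Arbitrum.DaMultiPurposeRecord

# Hypothetical attributes; `data_key` casts from a 0x-prefixed 32-byte hex string.
attrs = %{
  data_key: "0x" <> String.duplicate("ab", 32),
  data_type: 0,
  data: %{"height" => 123_456},
  batch_number: 42
}

changeset = DaMultiPurposeRecord.changeset(%DaMultiPurposeRecord{}, attrs)
# changeset.valid? is true once data_key, data_type and data are all present
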
defmodule Explorer.Chain.Arbitrum.DaMultiPurposeRecord.Helper do
  @moduledoc """
    Helper functions to work with `Explorer.Chain.Arbitrum.DaMultiPurposeRecord` data.
  """

  alias Explorer.Chain.Hash

  @doc """
    Calculates the data key for `Explorer.Chain.Arbitrum.DaMultiPurposeRecord` that contains Celestia blob data.

    ## Parameters
    - `height`: The height of the block in the Celestia network.
    - `tx_commitment`: The transaction commitment.

    ## Returns
    - A binary representing the calculated data key for the record containing
      Celestia blob data.
  """
  @spec calculate_celestia_data_key(binary() | non_neg_integer(), binary() | Explorer.Chain.Hash.t()) :: binary()
  def calculate_celestia_data_key(height, tx_commitment) when is_binary(height) do
    calculate_celestia_data_key(String.to_integer(height), tx_commitment)
  end

  def calculate_celestia_data_key(height, %Hash{} = tx_commitment) when is_integer(height) do
    calculate_celestia_data_key(height, tx_commitment.bytes)
  end

  def calculate_celestia_data_key(height, tx_commitment) when is_integer(height) and is_binary(tx_commitment) do
    :crypto.hash(:sha256, :binary.encode_unsigned(height) <> tx_commitment)
  end
end
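The helper boils down to sha256 over the big-endian-encoded Celestia block height concatenated with the 32-byte transaction commitment; the same derivation stands alone (height and commitment below are placeholders):

height = 1_431_655
# Placeholder 32-byte commitment; in practice it comes from the batch accompanying data.
tx_commitment = :binary.copy(<<0xAB>>, 32)

data_key = :crypto.hash(:sha256, :binary.encode_unsigned(height) <> tx_commitment)
# byte_size(data_key) => 32, matching the bytea primary key in the migration below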
@@ -0,0 +1,106 @@
defmodule Explorer.Chain.Import.Runner.Arbitrum.DaMultiPurposeRecords do
  @moduledoc """
    Bulk imports of Explorer.Chain.Arbitrum.DaMultiPurposeRecord.
  """

  require Ecto.Query

  alias Ecto.{Changeset, Multi, Repo}
  alias Explorer.Chain.Arbitrum.DaMultiPurposeRecord
  alias Explorer.Chain.Import
  alias Explorer.Prometheus.Instrumenter

  import Ecto.Query, only: [from: 2]

  @behaviour Import.Runner

  # milliseconds
  @timeout 60_000

  @type imported :: [DaMultiPurposeRecord.t()]

  @impl Import.Runner
  def ecto_schema_module, do: DaMultiPurposeRecord

  @impl Import.Runner
  def option_key, do: :arbitrum_da_multi_purpose_records

  @impl Import.Runner
  @spec imported_table_row() :: %{:value_description => binary(), :value_type => binary()}
  def imported_table_row do
    %{
      value_type: "[#{ecto_schema_module()}.t()]",
      value_description: "List of `t:#{ecto_schema_module()}.t/0`s"
    }
  end

  @impl Import.Runner
  @spec run(Multi.t(), list(), map()) :: Multi.t()
  def run(multi, changes_list, %{timestamps: timestamps} = options) do
    insert_options =
      options
      |> Map.get(option_key(), %{})
      |> Map.take(~w(on_conflict timeout)a)
      |> Map.put_new(:timeout, @timeout)
      |> Map.put(:timestamps, timestamps)

    Multi.run(multi, :insert_da_multi_purpose_records, fn repo, _ ->
      Instrumenter.block_import_stage_runner(
        fn -> insert(repo, changes_list, insert_options) end,
        :block_referencing,
        :arbitrum_da_multi_purpose_records,
        :arbitrum_da_multi_purpose_records
      )
    end)
  end

  @impl Import.Runner
  def timeout, do: @timeout

  @spec insert(Repo.t(), [map()], %{required(:timeout) => timeout(), required(:timestamps) => Import.timestamps()}) ::
          {:ok, [DaMultiPurposeRecord.t()]}
          | {:error, [Changeset.t()]}
  def insert(repo, changes_list, %{timeout: timeout, timestamps: timestamps} = options) when is_list(changes_list) do
    on_conflict = Map.get_lazy(options, :on_conflict, &default_on_conflict/0)

    # Enforce Arbitrum.DaMultiPurposeRecord ShareLocks order (see docs: sharelock.md)
    ordered_changes_list = Enum.sort_by(changes_list, & &1.data_key)

    {:ok, inserted} =
      Import.insert_changes_list(
        repo,
        ordered_changes_list,
        for: DaMultiPurposeRecord,
        returning: true,
        timeout: timeout,
        timestamps: timestamps,
        conflict_target: :data_key,
        on_conflict: on_conflict
      )

    {:ok, inserted}
  end

  defp default_on_conflict do
    from(
      rec in DaMultiPurposeRecord,
      update: [
        set: [
          # don't update `data_key` as it is a primary key and used for the conflict target
          data_type: fragment("EXCLUDED.data_type"),
          data: fragment("EXCLUDED.data"),
          batch_number: fragment("EXCLUDED.batch_number"),
          inserted_at: fragment("LEAST(?, EXCLUDED.inserted_at)", rec.inserted_at),
          updated_at: fragment("GREATEST(?, EXCLUDED.updated_at)", rec.updated_at)
        ]
      ],
      where:
        fragment(
          "(EXCLUDED.data_type, EXCLUDED.data, EXCLUDED.batch_number) IS DISTINCT FROM (?, ?, ?)",
          rec.data_type,
          rec.data,
          rec.batch_number
        )
    )
  end
end
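As with the other Import.Runner modules, records reach this runner through the chain-wide import entry point under the key returned by option_key/0; a sketch assuming the usual Explorer.Chain.Import.all/1 entry point and an invented record:

records = [
  %{
    data_key: :crypto.hash(:sha256, "example"),
    data_type: 0,
    data: %{"height" => 123_456},
    batch_number: 42
  }
]

Explorer.Chain.Import.all(%{arbitrum_da_multi_purpose_records: %{params: records}})
# On a data_key conflict, default_on_conflict/0 overwrites data_type, data and
# batch_number only when something actually changed (the IS DISTINCT FROM guard).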
@@ -0,0 +1,25 @@
defmodule Explorer.Repo.Arbitrum.Migrations.AddDaInfo do
  use Ecto.Migration

  def change do
    execute(
      "CREATE TYPE arbitrum_da_containers_types AS ENUM ('in_blob4844', 'in_calldata', 'in_celestia', 'in_anytrust')",
      "DROP TYPE arbitrum_da_containers_types"
    )

    alter table(:arbitrum_l1_batches) do
      add(:batch_container, :arbitrum_da_containers_types)
    end

    create table(:arbitrum_da_multi_purpose, primary_key: false) do
      add(:data_key, :bytea, null: false, primary_key: true)
      add(:data_type, :integer, null: false)
      add(:data, :map, null: false)
      add(:batch_number, :integer)
      timestamps(null: false, type: :utc_datetime_usec)
    end

    create(index(:arbitrum_da_multi_purpose, [:data_type, :data_key]))
    create(index(:arbitrum_da_multi_purpose, [:data_type, :batch_number]))
  end
end
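Both composite indexes lead with data_type, so lookups are expected to filter on the record type first; a hypothetical Ecto sketch matching the (data_type, data_key) index, where data_type 1 is the AnyTrust keyset type introduced by the fetchers below:

import Ecto.Query

# Placeholder 32-byte keyset hash, for illustration only.
keyset_hash = :binary.copy(<<1>>, 32)

# Fetch a stored AnyTrust keyset (data_type 1) by its hash.
Explorer.Repo.one(
  from(r in Explorer.Chain.Arbitrum.DaMultiPurposeRecord,
    where: r.data_type == 1 and r.data_key == ^keyset_hash,
    select: r.data
  )
)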
@@ -0,0 +1,414 @@
defmodule Indexer.Fetcher.Arbitrum.DA.Anytrust do
  @moduledoc """
    Provides functionality for handling AnyTrust data availability information
    within the Arbitrum rollup context.
  """

  import Indexer.Fetcher.Arbitrum.Utils.Logging, only: [log_error: 1, log_info: 1, log_debug: 1]

  import Explorer.Helper, only: [decode_data: 2]

  alias Indexer.Fetcher.Arbitrum.Utils.{Db, Rpc}
  alias Indexer.Fetcher.Arbitrum.Utils.Helper, as: ArbitrumHelper
  alias Indexer.Helper, as: IndexerHelper

  alias Explorer.Chain.Arbitrum

  @enforce_keys [
    :batch_number,
    :keyset_hash,
    :data_hash,
    :timeout,
    :signers_mask,
    :bls_signature
  ]
  defstruct @enforce_keys

  @typedoc """
    AnyTrust DA info struct:
    * `batch_number` - The batch number in the Arbitrum rollup associated with the
      AnyTrust data blob.
    * `keyset_hash` - The hash identifying a keyset that defines the rules (threshold
      and committee members) to issue the DA certificate.
    * `data_hash` - The hash of the data blob stored by the AnyTrust committee.
    * `timeout` - Expiration timeout for the data blob.
    * `signers_mask` - Mask identifying committee members who guaranteed data availability.
    * `bls_signature` - Aggregated BLS signature of the committee members.
  """
  @type t :: %__MODULE__{
          batch_number: non_neg_integer(),
          keyset_hash: binary(),
          data_hash: binary(),
          timeout: DateTime.t(),
          signers_mask: non_neg_integer(),
          bls_signature: binary()
        }

  @typedoc """
    AnyTrust DA certificate struct:
    * `keyset_hash` - The hash identifying a keyset that defines the rules (threshold
      and committee members) to issue the DA certificate.
    * `data_hash` - The hash of the data blob stored by the AnyTrust committee.
    * `timeout` - Expiration timeout for the data blob.
    * `signers_mask` - Mask identifying committee members who guaranteed data availability.
    * `bls_signature` - Aggregated BLS signature of the committee members.
  """
  @type certificate :: %{
          :keyset_hash => String.t(),
          :data_hash => String.t(),
          :timeout => DateTime.t(),
          :signers_mask => non_neg_integer(),
          :bls_signature => String.t()
        }

  @typedoc """
    AnyTrust committee member public key struct:
    * `trusted` - A boolean indicating whether the member is trusted.
    * `key` - The public key of the member.
    * `proof` - The proof of the member's public key.
  """
  @type signer :: %{
          :trusted => boolean(),
          :key => String.t(),
          optional(:proof) => String.t()
        }

  @typedoc """
    AnyTrust committee struct:
    * `threshold` - The threshold of honest members for the keyset.
    * `pubkeys` - A list of public keys of the committee members.
  """
  @type keyset :: %{
          :threshold => non_neg_integer(),
          :pubkeys => [signer()]
        }

  # keccak256("SetValidKeyset(bytes32,bytes)")
  @set_valid_keyset_event "0xabca9b7986bc22ad0160eb0cb88ae75411eacfba4052af0b457a9335ef655722"
  @set_valid_keyset_event_unindexed_params [:bytes]

  @doc """
    Parses batch accompanying data to extract AnyTrust data availability information.

    This function decodes the provided binary data to extract information related to
    AnyTrust data availability.

    ## Parameters
    - `batch_number`: The batch number associated with the AnyTrust data.
    - `binary_data`: The binary data to be parsed, containing AnyTrust data fields.

    ## Returns
    - `{:ok, :in_anytrust, da_info}` if the parsing is successful, where `da_info` is
      the AnyTrust data availability information struct.
    - `{:error, nil, nil}` if the parsing fails.
  """
  @spec parse_batch_accompanying_data(non_neg_integer(), binary()) ::
          {:ok, :in_anytrust, __MODULE__.t()} | {:error, nil, nil}
  def parse_batch_accompanying_data(batch_number, <<
        keyset_hash::binary-size(32),
        data_hash::binary-size(32),
        timeout::big-unsigned-integer-size(64),
        _version::size(8),
        signers_mask::big-unsigned-integer-size(64),
        bls_signature::binary-size(96)
      >>) do
    # https://github.com/OffchainLabs/nitro/blob/ad9ab00723e13cf98307b9b65774ad455594ef7b/arbstate/das_reader.go#L95-L151
    {:ok, :in_anytrust,
     %__MODULE__{
       batch_number: batch_number,
       keyset_hash: keyset_hash,
       data_hash: data_hash,
       timeout: IndexerHelper.timestamp_to_datetime(timeout),
       signers_mask: signers_mask,
       bls_signature: bls_signature
     }}
  end

  def parse_batch_accompanying_data(_, _) do
    log_error("Cannot parse AnyTrust DA message.")
    {:error, nil, nil}
  end

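The first clause above expects exactly 177 bytes: a 32-byte keyset hash, a 32-byte data hash, an 8-byte big-endian timeout, one version byte, an 8-byte signers mask, and a 96-byte BLS signature; a synthetic round trip, with every field value made up:

alias Indexer.Fetcher.Arbitrum.DA.Anytrust

cert =
  :binary.copy(<<1>>, 32) <>                            # keyset_hash
    :binary.copy(<<2>>, 32) <>                          # data_hash
    <<1_717_000_000::big-unsigned-integer-size(64)>> <> # timeout (unix seconds)
    <<0>> <>                                            # version
    <<0b1011::big-unsigned-integer-size(64)>> <>        # signers_mask
    :binary.copy(<<3>>, 96)                             # bls_signature

{:ok, :in_anytrust, da_info} = Anytrust.parse_batch_accompanying_data(561_651, cert)
# da_info.signers_mask => 11; any other input shape falls through to {:error, nil, nil}
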
  @doc """
    Prepares AnyTrust data availability information for import.

    This function prepares a list of data structures for import into the database,
    ensuring that AnyTrust DA information and related keysets are included. It
    verifies if the keyset associated with the AnyTrust DA certificate is already
    known or needs to be fetched from L1.

    To avoid fetching the same keyset multiple times, the function uses a cache.

    ## Parameters
    - `source`: The initial list of data to be imported.
    - `da_info`: The AnyTrust DA info struct containing details about the data blob.
    - `l1_connection_config`: A map containing the address of the Sequencer Inbox contract
      and configuration parameters for the JSON RPC connection.
    - `cache`: A set of unique elements used to cache the checked keysets.

    ## Returns
    - A tuple containing:
      - An updated list of data structures ready for import, including the DA
        certificate (`data_type` is `0`) and potentially a new keyset (`data_type`
        is `1`) if required.
      - The updated cache with the checked keysets.
  """
  @spec prepare_for_import(
          list(),
          __MODULE__.t(),
          %{
            :sequencer_inbox_address => String.t(),
            :json_rpc_named_arguments => EthereumJSONRPC.json_rpc_named_arguments()
          },
          MapSet.t()
        ) ::
          {[Arbitrum.DaMultiPurposeRecord.to_import()], MapSet.t()}
  def prepare_for_import(source, %__MODULE__{} = da_info, l1_connection_config, cache) do
    data = %{
      keyset_hash: ArbitrumHelper.bytes_to_hex_str(da_info.keyset_hash),
      data_hash: ArbitrumHelper.bytes_to_hex_str(da_info.data_hash),
      timeout: da_info.timeout,
      signers_mask: da_info.signers_mask,
      bls_signature: ArbitrumHelper.bytes_to_hex_str(da_info.bls_signature)
    }

    res = [
      %{
        data_type: 0,
        data_key: da_info.data_hash,
        data: data,
        batch_number: da_info.batch_number
      }
    ]

    {check_result, keyset_map, updated_cache} = check_if_new_keyset(da_info.keyset_hash, l1_connection_config, cache)

    updated_res =
      case check_result do
        :new_keyset ->
          [
            %{
              data_type: 1,
              data_key: da_info.keyset_hash,
              data: keyset_map,
              batch_number: nil
            }
            | res
          ]

        _ ->
          res
      end

    {updated_res ++ source, updated_cache}
  end

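For a certificate whose keyset is not yet known, the function above prepends a keyset record (data_type 1, no batch) ahead of the certificate record (data_type 0); the resulting import list has roughly this shape, every value below being an invented placeholder:

keyset_hash = :binary.copy(<<1>>, 32)
data_hash = :binary.copy(<<2>>, 32)

[
  %{
    data_type: 1,
    data_key: keyset_hash,
    data: %{threshold: 2, pubkeys: [%{trusted: true, key: "0xaabb"}]},
    batch_number: nil
  },
  %{
    data_type: 0,
    data_key: data_hash,
    data: %{
      keyset_hash: "0x0101",
      data_hash: "0x0202",
      timeout: ~U[2024-06-01 00:00:00Z],
      signers_mask: 11,
      bls_signature: "0x0303"
    },
    batch_number: 561_651
  }
]
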
  # Verifies the existence of an AnyTrust committee keyset in the database and fetches it from L1 if not found.
  #
  # To avoid fetching the same keyset multiple times, the function uses a cache.
  #
  # ## Parameters
  # - `keyset_hash`: A binary representing the hash of the keyset.
  # - `l1_connection_config`: A map containing the address of the Sequencer Inbox
  #                           contract and configuration parameters for the JSON RPC
  #                           connection.
  # - `cache`: A set of unique elements used to cache the checked keysets.
  #
  # ## Returns
  # - `{:new_keyset, keyset_info, updated_cache}` if the keyset is not found and fetched from L1.
  # - `{:existing_keyset, nil, cache}` if the keyset is found in the cache or database.
  @spec check_if_new_keyset(
          binary(),
          %{
            :sequencer_inbox_address => binary(),
            :json_rpc_named_arguments => EthereumJSONRPC.json_rpc_named_arguments()
          },
          MapSet.t()
        ) ::
          {:new_keyset, __MODULE__.keyset(), MapSet.t()}
          | {:existing_keyset, nil, MapSet.t()}
  defp check_if_new_keyset(keyset_hash, l1_connection_config, cache) do
    if MapSet.member?(cache, keyset_hash) do
      {:existing_keyset, nil, cache}
    else
      updated_cache = MapSet.put(cache, keyset_hash)

      case Db.anytrust_keyset_exists?(keyset_hash) do
        true ->
          {:existing_keyset, nil, updated_cache}

        false ->
          {:new_keyset, get_keyset_info_from_l1(keyset_hash, l1_connection_config), updated_cache}
      end
    end
  end

  # Retrieves and decodes AnyTrust committee keyset information from L1 using the provided keyset hash.
  #
  # This function fetches the block number when the keyset was applied, retrieves
  # the raw keyset data from L1, and decodes it to extract the threshold and public
  # keys information.
  #
  # ## Parameters
  # - `keyset_hash`: The hash of the keyset to be retrieved.
  # - A map containing:
  #   - `:sequencer_inbox_address`: The address of the Sequencer Inbox contract.
  #   - `:json_rpc_named_arguments`: Configuration parameters for the JSON RPC connection.
  #
  # ## Returns
  # - A map describing an AnyTrust committee.
  @spec get_keyset_info_from_l1(
          binary(),
          %{
            :sequencer_inbox_address => binary(),
            :json_rpc_named_arguments => EthereumJSONRPC.json_rpc_named_arguments()
          }
        ) :: __MODULE__.keyset()
  defp get_keyset_info_from_l1(keyset_hash, %{
         sequencer_inbox_address: sequencer_inbox_address,
         json_rpc_named_arguments: json_rpc_named_arguments
       }) do
    keyset_applied_block_number =
      Rpc.get_block_number_for_keyset(sequencer_inbox_address, keyset_hash, json_rpc_named_arguments)

    log_debug("Keyset applied block number: #{keyset_applied_block_number}")

    raw_keyset_data =
      get_keyset_raw_data(keyset_hash, keyset_applied_block_number, sequencer_inbox_address, json_rpc_named_arguments)

    decode_keyset(raw_keyset_data)
  end

  # Retrieves the raw data of a keyset by querying logs for the `SetValidKeyset` event.
  #
  # This function fetches logs for the `SetValidKeyset` event within a specific block
  # emitted by the Sequencer Inbox contract and extracts the keyset data if available.
  #
  # ## Parameters
  # - `keyset_hash`: The hash of the keyset to retrieve.
  # - `block_number`: The block number to search for the logs.
  # - `sequencer_inbox_address`: The address of the Sequencer Inbox contract.
  # - `json_rpc_named_arguments`: Configuration parameters for the JSON RPC connection.
  #
  # ## Returns
  # - The raw data of the keyset if found, otherwise `nil`.
  @spec get_keyset_raw_data(
          binary(),
          non_neg_integer(),
          binary(),
          EthereumJSONRPC.json_rpc_named_arguments()
        ) :: binary() | nil
  defp get_keyset_raw_data(keyset_hash, block_number, sequencer_inbox_address, json_rpc_named_arguments) do
    {:ok, logs} =
      IndexerHelper.get_logs(
        block_number,
        block_number,
        sequencer_inbox_address,
        [@set_valid_keyset_event, ArbitrumHelper.bytes_to_hex_str(keyset_hash)],
        json_rpc_named_arguments
      )

    if length(logs) > 0 do
      log_info("Found #{length(logs)} SetValidKeyset logs")

      set_valid_keyset_event_parse(List.first(logs))
    else
      log_error("No SetValidKeyset logs found in the block #{block_number}")
      nil
    end
  end

  defp set_valid_keyset_event_parse(event) do
    [keyset_data] = decode_data(event["data"], @set_valid_keyset_event_unindexed_params)

    keyset_data
  end

  # Decodes an AnyTrust committee keyset from a binary input.
  #
  # This function extracts the threshold of committee members configured for the
  # keyset and the number of member public keys from the binary input, then decodes
  # the specified number of public keys.
  #
  # Implemented as per: https://github.com/OffchainLabs/nitro/blob/ad9ab00723e13cf98307b9b65774ad455594ef7b/arbstate/das_reader.go#L217-L248
  #
  # ## Parameters
  # - A binary input containing the threshold value, the number of public keys,
  #   and the public keys themselves.
  #
  # ## Returns
  # - A map describing an AnyTrust committee.
  @spec decode_keyset(binary()) :: __MODULE__.keyset()
  defp decode_keyset(<<
         threshold::big-unsigned-integer-size(64),
         num_keys::big-unsigned-integer-size(64),
         rest::binary
       >>)
       when num_keys <= 64 do
    {pubkeys, _} = decode_pubkeys(rest, num_keys, [])

    %{
      threshold: threshold,
      pubkeys: pubkeys
    }
  end

  # Decodes a list of AnyTrust committee member public keys from a binary input.
  #
  # This function recursively processes a binary input to extract a specified number
  # of public keys.
  #
  # ## Parameters
  # - `data`: The binary input containing the public keys.
  # - `num_keys`: The number of public keys to decode.
  # - `acc`: An accumulator list to collect the decoded public keys.
  #
  # ## Returns
  # - A tuple of the decoded AnyTrust committee member public keys and an empty
  #   binary, if successful.
  # - `{:error, "Insufficient data to decode public keys"}` if the input is
  #   insufficient to decode the specified number of keys.
  @spec decode_pubkeys(binary(), non_neg_integer(), [
          signer()
        ]) :: {:error, String.t()} | {[signer()], binary()}
  defp decode_pubkeys(<<>>, 0, acc), do: {Enum.reverse(acc), <<>>}
  defp decode_pubkeys(<<>>, _num_keys, _acc), do: {:error, "Insufficient data to decode public keys"}

  defp decode_pubkeys(data, num_keys, acc) when num_keys > 0 do
    <<high_byte, low_byte, rest::binary>> = data
    pubkey_len = high_byte * 256 + low_byte

    <<pubkey_data::binary-size(pubkey_len), remaining::binary>> = rest
    pubkey = parse_pubkey(pubkey_data)
    decode_pubkeys(remaining, num_keys - 1, [pubkey | acc])
  end

  # Parses a public key of an AnyTrust committee member from a binary input.
  #
  # This function extracts either the public key (for trusted sources) or the proof
  # bytes and key bytes (for untrusted sources).
  #
  # Implemented as per: https://github.com/OffchainLabs/nitro/blob/35bd2aa59611702e6403051af581fddda7c17f74/blsSignatures/blsSignatures.go#L206C6-L242
  #
  # ## Parameters
  # - A binary input containing the proof length and the rest of the data.
  #
  # ## Returns
  # - A map describing an AnyTrust committee member public key.
  @spec parse_pubkey(binary()) :: signer()
  defp parse_pubkey(<<proof_len::size(8), rest::binary>>) do
    if proof_len == 0 do
      # Trusted source, no proof bytes, the rest is the key
      %{trusted: true, key: ArbitrumHelper.bytes_to_hex_str(rest)}
    else
      <<proof_bytes::binary-size(proof_len), key_bytes::binary>> = rest

      %{
        trusted: false,
        proof: ArbitrumHelper.bytes_to_hex_str(proof_bytes),
        key: ArbitrumHelper.bytes_to_hex_str(key_bytes)
      }
    end
  end
end
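decode_keyset/1 and decode_pubkeys/3 read an 8-byte threshold, an 8-byte key count, then length-prefixed keys: two bytes of big-endian length, and within each key a one-byte proof length where zero marks a trusted key. Since those helpers are private, here is a standalone sketch of the same layout over synthetic bytes:

# One trusted committee member: proof_len 0 followed by a fake 32-byte key.
pubkey = <<0>> <> :binary.copy(<<0xAA>>, 32)

keyset_binary =
  <<1::big-unsigned-integer-size(64),                   # threshold
    1::big-unsigned-integer-size(64),                   # number of keys
    byte_size(pubkey)::big-unsigned-integer-size(16)>> <> pubkey

<<threshold::big-unsigned-integer-size(64), num_keys::big-unsigned-integer-size(64),
  len::big-unsigned-integer-size(16), _key::binary-size(len)>> = keyset_binary
# {threshold, num_keys, len} => {1, 1, 33}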
@@ -0,0 +1,113 @@
defmodule Indexer.Fetcher.Arbitrum.DA.Celestia do
  @moduledoc """
    Provides functionality for parsing and preparing Celestia data availability
    information associated with Arbitrum rollup batches.
  """

  import Indexer.Fetcher.Arbitrum.Utils.Logging, only: [log_error: 1]
  import Explorer.Chain.Arbitrum.DaMultiPurposeRecord.Helper, only: [calculate_celestia_data_key: 2]

  alias Indexer.Fetcher.Arbitrum.Utils.Helper, as: ArbitrumHelper

  alias Explorer.Chain.Arbitrum

  @enforce_keys [:batch_number, :height, :tx_commitment, :raw]
  defstruct @enforce_keys

  @typedoc """
    Celestia Blob Pointer struct:
    * `batch_number` - The batch number in the Arbitrum rollup associated with the
      Celestia data.
    * `height` - The height of the block in Celestia.
    * `tx_commitment` - Data commitment in Celestia.
    * `raw` - Unparsed blob pointer data containing the data root, proof, etc.
  """
  @type t :: %__MODULE__{
          batch_number: non_neg_integer(),
          height: non_neg_integer(),
          tx_commitment: binary(),
          raw: binary()
        }

  @typedoc """
    Celestia Blob Descriptor struct:
    * `height` - The height of the block in Celestia.
    * `tx_commitment` - Data commitment in Celestia.
    * `raw` - Unparsed blob pointer data containing the data root, proof, etc.
  """
  @type blob_descriptor :: %{
          :height => non_neg_integer(),
          :tx_commitment => String.t(),
          :raw => String.t()
        }

  @doc """
    Parses the batch accompanying data for Celestia.

    This function extracts Celestia blob descriptor information, representing
    the information required to address a data blob and prove data availability,
    from a binary input associated with a given batch number.

    ## Parameters
    - `batch_number`: The batch number in the Arbitrum rollup associated with the Celestia data.
    - `binary`: A binary input containing the Celestia blob descriptor data.

    ## Returns
    - `{:ok, :in_celestia, da_info}` if the data is successfully parsed.
    - `{:error, nil, nil}` if the data cannot be parsed.
  """
  @spec parse_batch_accompanying_data(non_neg_integer(), binary()) ::
          {:ok, :in_celestia, __MODULE__.t()} | {:error, nil, nil}
  def parse_batch_accompanying_data(
        batch_number,
        <<
          height::big-unsigned-integer-size(64),
          _start_index::binary-size(8),
          _shares_length::binary-size(8),
          _key::big-unsigned-integer-size(64),
          _num_leaves::big-unsigned-integer-size(64),
          _tuple_root_nonce::big-unsigned-integer-size(64),
          tx_commitment::binary-size(32),
          _data_root::binary-size(32),
          _side_nodes_length::big-unsigned-integer-size(64),
          _rest::binary
        >> = raw
      ) do
    # https://github.com/celestiaorg/nitro-contracts/blob/celestia/blobstream/src/bridge/SequencerInbox.sol#L334-L360
    {:ok, :in_celestia, %__MODULE__{batch_number: batch_number, height: height, tx_commitment: tx_commitment, raw: raw}}
  end

  def parse_batch_accompanying_data(_, _) do
    log_error("Cannot parse Celestia DA message.")
    {:error, nil, nil}
  end

  @doc """
    Prepares Celestia Blob data for import.

    ## Parameters
    - `source`: The initial list of data to be imported.
    - `da_info`: The Celestia blob descriptor struct containing details about the data blob.

    ## Returns
    - An updated list of data structures ready for import, including the Celestia blob descriptor.
  """
  @spec prepare_for_import(list(), __MODULE__.t()) :: [Arbitrum.DaMultiPurposeRecord.to_import()]
  def prepare_for_import(source, %__MODULE__{} = da_info) do
    data = %{
      height: da_info.height,
      tx_commitment: ArbitrumHelper.bytes_to_hex_str(da_info.tx_commitment),
      raw: ArbitrumHelper.bytes_to_hex_str(da_info.raw)
    }

    [
      %{
        data_type: 0,
        data_key: calculate_celestia_data_key(da_info.height, da_info.tx_commitment),
        data: data,
        batch_number: da_info.batch_number
      }
      | source
    ]
  end
end
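The blob pointer layout above is at least 120 bytes before any trailing proof data: the 8-byte height, five more 8-byte fields, the 32-byte commitment, the 32-byte data root, and an 8-byte side-nodes length; a synthetic parse, with all bytes invented:

alias Indexer.Fetcher.Arbitrum.DA.Celestia

height = 2_000_000
tx_commitment = :binary.copy(<<0xCC>>, 32)

blob_pointer =
  <<height::big-unsigned-integer-size(64)>> <>
    :binary.copy(<<0>>, 40) <>           # start index, shares length, key, leaves, nonce
    tx_commitment <>
    :binary.copy(<<0>>, 32) <>           # data root
    <<0::big-unsigned-integer-size(64)>> # side nodes length (no trailing proof here)

{:ok, :in_celestia, da_info} = Celestia.parse_batch_accompanying_data(123, blob_pointer)
# da_info.height => 2_000_000; da_info.raw keeps the whole 120-byte pointer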
@@ -0,0 +1,143 @@
defmodule Indexer.Fetcher.Arbitrum.DA.Common do
  @moduledoc """
    This module provides common functionality for handling data availability (DA)
    information in the Arbitrum rollup.
  """

  import Indexer.Fetcher.Arbitrum.Utils.Logging, only: [log_error: 1]

  alias Indexer.Fetcher.Arbitrum.DA.{Anytrust, Celestia}

  alias Explorer.Chain.Arbitrum

  @doc """
    Examines the batch accompanying data to determine its type and parse it accordingly.

    This function examines the batch accompanying data to identify its type and then
    parses it based on the identified type if necessary.

    ## Parameters
    - `batch_number`: The batch number in the Arbitrum rollup.
    - `batch_accompanying_data`: The binary data accompanying the batch.

    ## Returns
    - `{status, da_type, da_info}` where `da_type` is one of `:in_blob4844`,
      `:in_calldata`, `:in_celestia`, `:in_anytrust`, or `nil` if the accompanying
      data cannot be parsed or is of an unsupported type. `da_info` contains the DA
      info descriptor for Celestia or AnyTrust.
  """
  @spec examine_batch_accompanying_data(non_neg_integer(), binary()) ::
          {:ok, :in_blob4844, nil}
          | {:ok, :in_calldata, nil}
          | {:ok, :in_celestia, Celestia.t()}
          | {:ok, :in_anytrust, Anytrust.t()}
          | {:error, nil, nil}
  def examine_batch_accompanying_data(batch_number, batch_accompanying_data) do
    case batch_accompanying_data do
      nil -> {:ok, :in_blob4844, nil}
      _ -> parse_data_availability_info(batch_number, batch_accompanying_data)
    end
  end

  @doc """
    Prepares data availability (DA) information for import.

    This function processes a list of DA information, either from Celestia or AnyTrust,
    preparing it for database import.

    ## Parameters
    - `da_info`: A list of DA information structs.
    - `l1_connection_config`: A map containing the address of the Sequencer Inbox contract
      and configuration parameters for the JSON RPC connection.

    ## Returns
    - A list of data structures ready for import, each containing:
      - `:data_key`: A binary key identifying the data.
      - `:data_type`: An integer indicating the type of data, which can be `0`
        for data blob descriptors and `1` for AnyTrust keyset descriptors.
      - `:data`: A map containing the DA information.
      - `:batch_number`: The batch number associated with the data, or `nil`.
  """
  @spec prepare_for_import([Celestia.t() | Anytrust.t() | map()], %{
          :sequencer_inbox_address => String.t(),
          :json_rpc_named_arguments => EthereumJSONRPC.json_rpc_named_arguments()
        }) :: [Arbitrum.DaMultiPurposeRecord.to_import()]
  def prepare_for_import([], _), do: []

  def prepare_for_import(da_info, l1_connection_config) do
    da_info
    |> Enum.reduce({[], MapSet.new()}, fn info, {acc, cache} ->
      case info do
        %Celestia{} ->
          {Celestia.prepare_for_import(acc, info), cache}

        %Anytrust{} ->
          Anytrust.prepare_for_import(acc, info, l1_connection_config, cache)

        _ ->
          {acc, cache}
      end
    end)
    |> Kernel.elem(0)
  end

  @doc """
    Determines if data availability information requires import.

    This function checks the type of data availability (DA) and returns whether
    the data should be imported based on its type.

    ## Parameters
    - `da_type`: The type of data availability, which can be `:in_blob4844`, `:in_calldata`,
      `:in_celestia`, `:in_anytrust`, or `nil`.

    ## Returns
    - `true` if the DA type is `:in_celestia` or `:in_anytrust`, indicating that the data
      requires import.
    - `false` for all other DA types, indicating that the data does not require import.
  """
  @spec required_import?(:in_blob4844 | :in_calldata | :in_celestia | :in_anytrust | nil) :: boolean()
  def required_import?(da_type) do
    da_type in [:in_celestia, :in_anytrust]
  end

  # Parses data availability information based on the header flag.
  @spec parse_data_availability_info(non_neg_integer(), binary()) ::
          {:ok, :in_calldata, nil}
          | {:ok, :in_celestia, Celestia.t()}
          | {:ok, :in_anytrust, Anytrust.t()}
          | {:error, nil, nil}
  defp parse_data_availability_info(batch_number, <<
         header_flag::size(8),
         rest::binary
       >>) do
    # https://github.com/OffchainLabs/nitro-contracts/blob/90037b996509312ef1addb3f9352457b8a99d6a6/src/bridge/SequencerInbox.sol#L69-L81
    case header_flag do
      0 ->
        {:ok, :in_calldata, nil}

      12 ->
        Celestia.parse_batch_accompanying_data(batch_number, rest)

      32 ->
        log_error("ZERO HEAVY messages are not supported.")
        {:error, nil, nil}

      128 ->
        log_error("DAS messages are not supported.")
        {:error, nil, nil}

      136 ->
        Anytrust.parse_batch_accompanying_data(batch_number, rest)

      _ ->
        log_error("Unknown header flag found during an attempt to parse DA data: #{header_flag}")
        {:error, nil, nil}
    end
  end

  defp parse_data_availability_info(_, _) do
    log_error("Failed to parse data availability information.")
    {:error, nil, nil}
  end
end
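Tying it together, examine_batch_accompanying_data/2 dispatches on the first byte of the accompanying data (or its absence); a short sketch of the two cases that need no further parsing:

alias Indexer.Fetcher.Arbitrum.DA.Common

# Batches posted as EIP-4844 blobs carry no accompanying data at all.
{:ok, :in_blob4844, nil} = Common.examine_batch_accompanying_data(100, nil)

# A zero header flag means the batch payload sits directly in calldata.
{:ok, :in_calldata, nil} = Common.examine_batch_accompanying_data(101, <<0, 1, 2, 3>>)

# Neither variant carries a DA descriptor, so nothing is imported for them.
false = Common.required_import?(:in_blob4844)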