Optimism chain type (#9460)

* Transaction page L1 fields
* Path fix
* Reduce the number of files from 19 to 5 in logs rotate config
* Customize optimism-goerli deployment
* Optimism branding
* Remove testnet logo text. OG uses customized label
* Fix Circles theme
* L1 tx fields fix for Optimism BedRock update
* Remove redundant line
* Add gas_price handling for elixir_to_params and change function ordering
* Remove l1TxOrigin handling for another version of RPC
* Add GA
* Fix realtime fetcher test
* Update Changelog
* Fix internal transactions processing for non-consensus blocks
* Lose consensus only for consensus=true blocks
* Fix handling transaction RPC responses without some fields
* Fix tests except for indexer module
* Add Optimism BedRock support (Txn Batches, Output Roots, Deposits, Withdrawals) (#6980)
* Add op_output_roots table
* Add OptimismOutputRoots runner
* Add initial code for output roots fetcher
* Add checks to init function
* Partially add logs and L1 reorgs handling
* Add reorgs handling
* Add RPC retries
* Write output roots to database
* Log output roots handling
* Update indexer README
* Add API v2 for Optimism Output Roots
* Add op_withdrawals table
* Add OptimismWithdrawals runner
* Prepare realtime optimism withdrawals fetching
* Add realtime optimism withdrawals fetching
* Define checks in init function
* log.first_topic can be nil
* Show total count of output roots in API v2
* Add msg_nonce gaps filler
* Refactoring
* Intermediate refactoring
* Add historical withdrawals handling and refactor
* Finish op_withdrawals table filling
* Small refactoring
* Add op_withdrawal_events table
* Add OptimismWithdrawalEvents runner
* Add OptimismWithdrawalEvent fetcher
* Update indexer README
* Add API v2 for Optimism Withdrawals
* Add env variables to common-blockscout.env and Makefile
* Set `from` as address object instead of just address hash for withdrawal
* mix format
* Add op_transaction_batches table
* Add OptimismTxnBatches runner
* Add a draft for OptimismTxnBatch fetcher
* Add a draft for OptimismTxnBatch
* Extend a draft for OptimismTxnBatch
* Extend OptimismTxnBatch
* Finish OptimismTxnBatch (without reorgs handling yet)
* Optimize OptimismTxnBatch fetcher
* Remove duplicated txn batches
* Add zlib_inflate_handler for empty case
* Add reorgs handling for txn batches
* Fix reorgs handling for txn batches
* Small refactor
* Finish Indexer.Fetcher.OptimismTxnBatch (without refactoring yet)
* Apply new ex_rlp version
* Add API v2 for Optimism Txn Batches
* Add env variables to common-blockscout.env and Makefile
* Refactor OptimismTxnBatch fetcher for mix credo
* Replace binary_slice with binary_part function to run with Elixir 1.13
* Update changelog
* Update indexer readme
* Rename op_withdrawals.l2_tx_hash field to l2_transaction_hash
* Rename l1_tx_hash fields to l1_transaction_hash
* Rename *tx* fields to *transaction* fields
* Rename env variables
* Rename env variables
* Add an indexer helper
* Add an indexer helper
* Small refactoring
* Fix tx_count for txn batches view
* Use EthereumJSONRPC.Block.ByHash instead of the raw call
* Infinity timeout for blocks query
* Small refactoring
* Refactor init function for two modules
* Small refactoring
* Rename l1_transaction_timestamp field to l1_timestamp
* Rename withdrawal_hash field to hash
* Refactor for decode_data function
* Refactor for mix credo
* Add INDEXER_OPTIMISM_L1_BATCH_BLOCKS_CHUNK_SIZE env and small refactoring
* Add INDEXER_OPTIMISM_L1_BATCH_BLOCKS_CHUNK_SIZE env to other files
* Add an index for l1_block_number field
* Add an index for l1_block_number field
* Remove redundant :ok
* Use faster way to count rows in a table
* Refactor reorgs monitor functions
* Clarify frame structure
* Reduce storage consumption for optimism transaction batches
* Reuse CacheHelper.estimated_count_from function
* Bedrock optimism deposits (#6993)
* Create `op_deposits` table
* Add OptimismDeposit runner
* WIP Fetcher
* Finish fetcher
* Integrate deposits into APIv2
* Add envs
* Fix requests
* Remove debug
* Update envs names
* Rename `tx` -> `transaction`
* Reuse `decode_data/2`
* Fix review
* Add `uninstall_filter`
* Fix formatting
* Switch to realtime mode more carefully
* Fix review: allow nil in timestamp, add progress logging, improve check_interval calculation
* Fix logging and env
* Fix Association.NotLoaded error
* Replace switching to realtime mode log
* Remove excess start_block
* Fix reorg logging
* Fix `from_block` > `to_block` and add realtime logging
* Fix block boundaries

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>

* mix format
* Return total count of L2 entity by a separate API request
* Filter by consensus blocks
* Parallelize tx count operation and small refactoring
* Use read replica for L2 entities in API
* Parse block_number and tx_hash for Optimism Deposits module
* Return page_size back to 50
* Small fixes and refactoring
* Update apps/block_scout_web/lib/block_scout_web/api_router.ex

  Co-authored-by: Maxim Filonov <53992153+sl1depengwyn@users.noreply.github.com>

* Small optimization
* Use ecto association instead of explicit join for txn batches
* Refactoring
* Use Stream instead of Enum
* Small refactoring
* Add assoc for transaction batches in OptimismFrameSequence
* Use common reorg monitor for Optimism modules
* Rename Explorer.Helpers to Explorer.Helper
* Don't start an optimism module if the main optimism module is not started
* Don't start reorg monitor for optimism modules when it is not needed
* Small refactoring
* Remove debug broadcasting
* Add Optimism BedRock Deposits to the main page in API (#7200)
* Add Optimism BedRock Deposits to the main page in API
* Update changelog
* Pass the number of deposits instead of only one item at once

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>

* Refactor for credo
* Output L1 fields in API v2 for transaction page
* Update changelog
* Use helper
* Refactor Indexer.Fetcher.Optimism
* Fix l1_timestamp issue in OptimismTxnBatch fetcher
* Reset Logger metadata before Indexer.Transform.OptimismWithdrawals.parse function finishes
* Fix IDs ordering in remove_duplicates function of Indexer.Fetcher.OptimismTxnBatch
* Consider rewriting of the first frame in Indexer.Fetcher.OptimismTxnBatch
* Fix Indexer.Fetcher.OptimismTxnBatch (consider chunking)
* Fix Indexer.Fetcher.OptimismTxnBatch
* Fix handling invalid frame sequences in Indexer.Fetcher.OptimismTxnBatch
* Read Optimism finalization period from a smart contract
* Fixes for dialyzer
* Fix for EthereumJSONRPC tests
* Fixes for Explorer tests
* Fixes for Explorer tests
* Fix of block/realtime/fetcher_test.exs
* mix format and small fixes for block_scout_web tests
* Reset GA cache
* Fix handling nil in PendingBlockOperation.estimated_count()

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>
Co-authored-by: Maxim Filonov <53992153+sl1depengwyn@users.noreply.github.com>

* Fix autocomplete
* Fix merging conflicts
* Add exit handler to Indexer.Fetcher.OptimismWithdrawal
* Fix transactions ordering in Indexer.Fetcher.OptimismTxnBatch
* Update changelog
* Refactor to fix credo
* Mix credo fix
* Fix transaction batches module for L2 OP stack (#7827)
* Fix mixed transactions handling in Indexer.Fetcher.OptimismTxnBatch
* Ignore duplicated frame
* Update changelog
* Add sorting to the future frames list
* Change list order

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>

* Remove unused aliases
* Ignore previously handled frame by OP transaction batches module (#8122)
* Ignore duplicated frame
* Update changelog

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>

* Return alias for Explorer.Chain.Cache.Helper in chain.ex
* Ignore invalid frame by OP transaction batches module (#8208)
* Update changelog
* Ignore invalid frame
* Update changelog

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>

* Fix Indexer.Fetcher.OptimismTxnBatch
* Fix API v2 for OP Withdrawals
* Refactor optimism fetchers init
* Add log for switching from fallback url
* Fix for Indexer.Fetcher.OptimismTxnBatch
* Add OP withdrawal status to transaction page in API (#8702)
* Add OP withdrawal status to transaction page in API
* Update changelog
* Small refactoring
* Update .dialyzer-ignore

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>

* Add start pause to `Indexer.Fetcher.OptimismTxnBatch`
* Small refactor of `Indexer.Fetcher.OptimismTxnBatch`
* Consider consensus block only when retrieving OP withdrawal transaction status (#8811)
* Consider consensus block only when retrieving OP withdrawal transaction status
* Update changelog
* Clear GA cache

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>

* Hotfix for optimism_withdrawal_transaction_status function
* Return all OP Withdrawals bound to L2 transaction
* Try to import config
* Remove unused functions from Explorer.Chain
* Refactor for mix credo
* Fix order of proxy standards: 1167, 1967
* Fixes in Optimism due to changed log topics type
* Fix for EthereumJSONRPC tests
* Clear GA cache and update cspell.json
* Fix indexer tests
* Return current exchange rate in api/v2/stats
* Fix log decoding bug
* Temp disable build of image for arm64
* Rewrite Indexer.Fetcher.OptimismTxnBatch module
* Add handling of span batches
* Add support of latest block for Optimism modules
* Update changelog and spelling
* Rewrite Indexer.Fetcher.OptimismTxnBatch module
* Add handling of span batches
* Add support of latest block for Optimism modules
* Refactoring
* Partially add specs and docs for public functions
* Refactoring
* add an entry to CHANGELOG.md
* apply review (use origin entity instead of joined entity in with tx status)
* Fixes after rebase
* Remove old UI customizations
* Optimism chain type
* Change structure of folders
* Fixes after review
* Fix CHANGELOG
* Fixes after 2nd review
* Process 3rd review: add tests for fee/2 function
* Process 4th review
* Review fix: move Op related functions from chain.ex
* Review fix: make OptimismFinalizationPeriod configurable
* Process review comment
* System.get_env("CHAIN_TYPE") => Application.get_env(:explorer, :chain_type)

---------

Co-authored-by: POA <33550681+poa@users.noreply.github.com>
Co-authored-by: Qwerty5Uiop <alex000010@bk.ru>
Co-authored-by: varasev <33550681+varasev@users.noreply.github.com>
Co-authored-by: Maxim Filonov <53992153+sl1depengwyn@users.noreply.github.com>
Co-authored-by: rlgns98kr <rlgns98kr@gmail.com>

pull/9531/head
parent 7467d4d075
commit 9819522ea1
@ -0,0 +1,45 @@
name: Release for Optimism

on:
  release:
    types: [published]

env:
  OTP_VERSION: ${{ vars.OTP_VERSION }}
  ELIXIR_VERSION: ${{ vars.ELIXIR_VERSION }}

jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    env:
      RELEASE_VERSION: ${{ vars.RELEASE_VERSION }}
    steps:
      - uses: actions/checkout@v4
      - name: Setup repo
        uses: ./.github/actions/setup-repo
        with:
          docker-username: ${{ secrets.DOCKER_USERNAME }}
          docker-password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and push Docker image for Optimism
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./docker/Dockerfile
          push: true
          tags: blockscout/blockscout-optimism:latest, blockscout/blockscout-optimism:${{ env.RELEASE_VERSION }}
          platforms: |
            linux/amd64
            linux/arm64/v8
          build-args: |
            CACHE_EXCHANGE_RATES_PERIOD=
            API_V1_READ_METHODS_DISABLED=false
            DISABLE_WEBAPP=false
            API_V1_WRITE_METHODS_DISABLED=false
            CACHE_TOTAL_GAS_USAGE_COUNTER_ENABLED=
            ADMIN_PANEL_ENABLED=false
            CACHE_ADDRESS_WITH_BALANCES_UPDATE_INTERVAL=
            BLOCKSCOUT_VERSION=v${{ env.RELEASE_VERSION }}-beta
            RELEASE_VERSION=${{ env.RELEASE_VERSION }}
            CHAIN_TYPE=optimism
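
At runtime, the CHAIN_TYPE baked into this image is read through application config rather than straight from the environment (see the last commit in the list above: System.get_env("CHAIN_TYPE") => Application.get_env(:explorer, :chain_type)). A minimal sketch of that pattern, with a hypothetical module name, assuming config/runtime.exs maps the env var into config:

defmodule Example.ChainType do
  # Sketch only: assumes config/runtime.exs contains something like
  #   config :explorer, chain_type: System.get_env("CHAIN_TYPE")
  # so the rest of the app never touches the environment directly.
  def optimism? do
    Application.get_env(:explorer, :chain_type) == "optimism"
  end
end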
@ -0,0 +1,22 @@
defmodule BlockScoutWeb.OptimismDepositChannel do
  @moduledoc """
  Establishes pub/sub channel for live updates of Optimism deposit events.
  """

  use BlockScoutWeb, :channel

  intercept(["deposits"])

  def join("optimism_deposits:new_deposits", _params, socket) do
    {:ok, %{}, socket}
  end

  def handle_out(
        "deposits",
        %{deposits: deposits},
        %Phoenix.Socket{handler: BlockScoutWeb.UserSocketV2} = socket
      ) do
    push(socket, "deposits", %{deposits: Enum.count(deposits)})

    {:noreply, socket}
  end
end
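
The channel intercepts the "deposits" broadcast and, for API V2 sockets, pushes only the deposit count rather than the full payload. A sketch of the producing side, assuming the standard Phoenix endpoint broadcast (the actual broadcast site lives in the notifier/indexer code, outside this diff):

# Hypothetical broadcast that handle_out/3 above would intercept;
# `deposits` is the list of newly imported Optimism deposits.
BlockScoutWeb.Endpoint.broadcast(
  "optimism_deposits:new_deposits",
  "deposits",
  %{deposits: deposits}
)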
@ -0,0 +1,111 @@
defmodule BlockScoutWeb.API.V2.OptimismController do
  use BlockScoutWeb, :controller

  import BlockScoutWeb.Chain,
    only: [
      next_page_params: 3,
      paging_options: 1,
      split_list_by_page: 1
    ]

  alias Explorer.Chain
  alias Explorer.Chain.Optimism.{Deposit, OutputRoot, TxnBatch, Withdrawal}

  action_fallback(BlockScoutWeb.API.V2.FallbackController)

  def txn_batches(conn, params) do
    {batches, next_page} =
      params
      |> paging_options()
      |> Keyword.put(:api?, true)
      |> TxnBatch.list()
      |> split_list_by_page()

    next_page_params = next_page_params(next_page, batches, params)

    conn
    |> put_status(200)
    |> render(:optimism_txn_batches, %{
      batches: batches,
      next_page_params: next_page_params
    })
  end

  def txn_batches_count(conn, _params) do
    items_count(conn, TxnBatch)
  end

  def output_roots(conn, params) do
    {roots, next_page} =
      params
      |> paging_options()
      |> Keyword.put(:api?, true)
      |> OutputRoot.list()
      |> split_list_by_page()

    next_page_params = next_page_params(next_page, roots, params)

    conn
    |> put_status(200)
    |> render(:optimism_output_roots, %{
      roots: roots,
      next_page_params: next_page_params
    })
  end

  def output_roots_count(conn, _params) do
    items_count(conn, OutputRoot)
  end

  def deposits(conn, params) do
    {deposits, next_page} =
      params
      |> paging_options()
      |> Keyword.put(:api?, true)
      |> Deposit.list()
      |> split_list_by_page()

    next_page_params = next_page_params(next_page, deposits, params)

    conn
    |> put_status(200)
    |> render(:optimism_deposits, %{
      deposits: deposits,
      next_page_params: next_page_params
    })
  end

  def deposits_count(conn, _params) do
    items_count(conn, Deposit)
  end

  def withdrawals(conn, params) do
    {withdrawals, next_page} =
      params
      |> paging_options()
      |> Keyword.put(:api?, true)
      |> Withdrawal.list()
      |> split_list_by_page()

    next_page_params = next_page_params(next_page, withdrawals, params)

    conn
    |> put_status(200)
    |> render(:optimism_withdrawals, %{
      withdrawals: withdrawals,
      next_page_params: next_page_params
    })
  end

  def withdrawals_count(conn, _params) do
    items_count(conn, Withdrawal)
  end

  defp items_count(conn, module) do
    count = Chain.get_table_rows_total_count(module, api?: true)

    conn
    |> put_status(200)
    |> render(:optimism_items_count, %{count: count})
  end
end
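
The routes for these actions are added in apps/block_scout_web/lib/block_scout_web/api_router.ex (mentioned in the commit list above but not shown in this diff). A plausible wiring sketch with hypothetical paths, showing how each list action pairs with its count endpoint:

# Hypothetical api_router.ex scope; actual paths and pipelines may differ.
scope "/optimism" do
  get("/txn-batches", V2.OptimismController, :txn_batches)
  get("/txn-batches/count", V2.OptimismController, :txn_batches_count)
  get("/output-roots", V2.OptimismController, :output_roots)
  get("/output-roots/count", V2.OptimismController, :output_roots_count)
  get("/deposits", V2.OptimismController, :deposits)
  get("/deposits/count", V2.OptimismController, :deposits_count)
  get("/withdrawals", V2.OptimismController, :withdrawals)
  get("/withdrawals/count", V2.OptimismController, :withdrawals_count)
end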
@ -0,0 +1,149 @@
defmodule BlockScoutWeb.API.V2.OptimismView do
  use BlockScoutWeb, :view

  import Ecto.Query, only: [from: 2]

  alias BlockScoutWeb.API.V2.Helper
  alias Explorer.{Chain, Repo}
  alias Explorer.Chain.{Block, Transaction}
  alias Explorer.Chain.Optimism.Withdrawal

  def render("optimism_txn_batches.json", %{
        batches: batches,
        next_page_params: next_page_params
      }) do
    items =
      batches
      |> Enum.map(fn batch ->
        Task.async(fn ->
          tx_count =
            Repo.replica().aggregate(
              from(
                t in Transaction,
                inner_join: b in Block,
                on: b.hash == t.block_hash and b.consensus == true,
                where: t.block_number == ^batch.l2_block_number
              ),
              :count,
              timeout: :infinity
            )

          %{
            "l2_block_number" => batch.l2_block_number,
            "tx_count" => tx_count,
            "l1_tx_hashes" => batch.frame_sequence.l1_transaction_hashes,
            "l1_timestamp" => batch.frame_sequence.l1_timestamp
          }
        end)
      end)
      |> Task.yield_many(:infinity)
      |> Enum.map(fn {_task, {:ok, item}} -> item end)

    %{
      items: items,
      next_page_params: next_page_params
    }
  end

  def render("optimism_output_roots.json", %{
        roots: roots,
        next_page_params: next_page_params
      }) do
    %{
      items:
        Enum.map(roots, fn r ->
          %{
            "l2_output_index" => r.l2_output_index,
            "l2_block_number" => r.l2_block_number,
            "l1_tx_hash" => r.l1_transaction_hash,
            "l1_timestamp" => r.l1_timestamp,
            "l1_block_number" => r.l1_block_number,
            "output_root" => r.output_root
          }
        end),
      next_page_params: next_page_params
    }
  end

  def render("optimism_deposits.json", %{
        deposits: deposits,
        next_page_params: next_page_params
      }) do
    %{
      items:
        Enum.map(deposits, fn deposit ->
          %{
            "l1_block_number" => deposit.l1_block_number,
            "l2_tx_hash" => deposit.l2_transaction_hash,
            "l1_block_timestamp" => deposit.l1_block_timestamp,
            "l1_tx_hash" => deposit.l1_transaction_hash,
            "l1_tx_origin" => deposit.l1_transaction_origin,
            "l2_tx_gas_limit" => deposit.l2_transaction.gas
          }
        end),
      next_page_params: next_page_params
    }
  end

  def render("optimism_deposits.json", %{deposits: deposits}) do
    Enum.map(deposits, fn deposit ->
      %{
        "l1_block_number" => deposit.l1_block_number,
        "l1_block_timestamp" => deposit.l1_block_timestamp,
        "l1_tx_hash" => deposit.l1_transaction_hash,
        "l2_tx_hash" => deposit.l2_transaction_hash
      }
    end)
  end

  def render("optimism_withdrawals.json", %{
        withdrawals: withdrawals,
        next_page_params: next_page_params,
        conn: conn
      }) do
    %{
      items:
        Enum.map(withdrawals, fn w ->
          msg_nonce =
            Bitwise.band(
              Decimal.to_integer(w.msg_nonce),
              0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
            )

          msg_nonce_version = Bitwise.bsr(Decimal.to_integer(w.msg_nonce), 240)

          {from_address, from_address_hash} =
            with false <- is_nil(w.from),
                 {:ok, address} <-
                   Chain.hash_to_address(
                     w.from,
                     [necessity_by_association: %{:names => :optional, :smart_contract => :optional}, api?: true],
                     false
                   ) do
              {address, address.hash}
            else
              _ -> {nil, nil}
            end

          {status, challenge_period_end} = Withdrawal.status(w)

          %{
            "msg_nonce_raw" => Decimal.to_string(w.msg_nonce, :normal),
            "msg_nonce" => msg_nonce,
            "msg_nonce_version" => msg_nonce_version,
            "from" => Helper.address_with_info(conn, from_address, from_address_hash, w.from),
            "l2_tx_hash" => w.l2_transaction_hash,
            "l2_timestamp" => w.l2_timestamp,
            "status" => status,
            "l1_tx_hash" => w.l1_transaction_hash,
            "challenge_period_end" => challenge_period_end
          }
        end),
      next_page_params: next_page_params
    }
  end

  def render("optimism_items_count.json", %{count: count}) do
    count
  end
end
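
The msg_nonce handling above relies on Optimism's packed nonce encoding: a 256-bit integer whose top 16 bits carry the encoding version and whose low 240 bits carry the sequential nonce, hence the 60-F mask (240 bits) and the shift by 240. The same decoding as a standalone sketch (`msg_nonce` is a hypothetical variable holding the Decimal stored in op_withdrawals):

nonce = Decimal.to_integer(msg_nonce)

# Low 240 bits: the sequential message nonce value.
value = Bitwise.band(nonce, Bitwise.bsl(1, 240) - 1)

# Top 16 bits: the encoding version.
version = Bitwise.bsr(nonce, 240)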
@ -0,0 +1,54 @@
defmodule Explorer.Chain.Cache.OptimismFinalizationPeriod do
  @moduledoc """
  Caches Optimism Finalization period.
  """

  require Logger

  use Explorer.Chain.MapCache,
    name: :optimism_finalization_period,
    key: :period

  import EthereumJSONRPC, only: [json_rpc: 2, quantity_to_integer: 1]

  alias EthereumJSONRPC.Contract
  alias Indexer.Fetcher.Optimism
  alias Indexer.Fetcher.Optimism.OutputRoot

  defp handle_fallback(:period) do
    optimism_l1_rpc = Application.get_all_env(:indexer)[Optimism][:optimism_l1_rpc]
    output_oracle = Application.get_all_env(:indexer)[OutputRoot][:output_oracle]

    # call FINALIZATION_PERIOD_SECONDS() public getter of L2OutputOracle contract on L1
    request = Contract.eth_call_request("0xf4daa291", output_oracle, 0, nil, nil)

    case json_rpc(request, json_rpc_named_arguments(optimism_l1_rpc)) do
      {:ok, value} ->
        {:update, quantity_to_integer(value)}

      {:error, reason} ->
        Logger.debug([
          "Couldn't fetch Optimism finalization period, reason: #{inspect(reason)}"
        ])

        {:return, nil}
    end
  end

  defp handle_fallback(_key), do: {:return, nil}

  defp json_rpc_named_arguments(optimism_l1_rpc) do
    [
      transport: EthereumJSONRPC.HTTP,
      transport_options: [
        http: EthereumJSONRPC.HTTP.HTTPoison,
        url: optimism_l1_rpc,
        http_options: [
          recv_timeout: :timer.minutes(10),
          timeout: :timer.minutes(10),
          hackney: [pool: :ethereum_jsonrpc]
        ]
      ]
    ]
  end
end
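
Explorer.Chain.MapCache generates a get_period/0 accessor for this cache; on a miss it falls back to handle_fallback(:period) above, where {:update, value} stores the fetched period and {:return, nil} leaves the cache empty so a later read retries the L1 call. Callers therefore supply their own default, as Explorer.Chain.Optimism.Withdrawal does further below:

# Cached finalization period in seconds, falling back to 7 days
# (the default Withdrawal.status/1 uses when the L1 call has failed).
challenge_period =
  case Explorer.Chain.Cache.OptimismFinalizationPeriod.get_period() do
    nil -> 604_800
    period -> period
  end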
@ -0,0 +1,106 @@
defmodule Explorer.Chain.Import.Runner.Optimism.Deposits do
  @moduledoc """
  Bulk imports `t:Explorer.Chain.Optimism.Deposit.t/0`.
  """

  require Ecto.Query

  alias Ecto.{Changeset, Multi, Repo}
  alias Explorer.Chain.Import
  alias Explorer.Chain.Optimism.Deposit
  alias Explorer.Prometheus.Instrumenter

  import Ecto.Query, only: [from: 2]

  @behaviour Import.Runner

  # milliseconds
  @timeout 60_000

  @type imported :: [Deposit.t()]

  @impl Import.Runner
  def ecto_schema_module, do: Deposit

  @impl Import.Runner
  def option_key, do: :optimism_deposits

  @impl Import.Runner
  def imported_table_row do
    %{
      value_type: "[#{ecto_schema_module()}.t()]",
      value_description: "List of `t:#{ecto_schema_module()}.t/0`s"
    }
  end

  @impl Import.Runner
  def run(multi, changes_list, %{timestamps: timestamps} = options) do
    insert_options =
      options
      |> Map.get(option_key(), %{})
      |> Map.take(~w(on_conflict timeout)a)
      |> Map.put_new(:timeout, @timeout)
      |> Map.put(:timestamps, timestamps)

    Multi.run(multi, :insert_optimism_deposits, fn repo, _ ->
      Instrumenter.block_import_stage_runner(
        fn -> insert(repo, changes_list, insert_options) end,
        :block_referencing,
        :optimism_deposits,
        :optimism_deposits
      )
    end)
  end

  @impl Import.Runner
  def timeout, do: @timeout

  @spec insert(Repo.t(), [map()], %{required(:timeout) => timeout(), required(:timestamps) => Import.timestamps()}) ::
          {:ok, [Deposit.t()]}
          | {:error, [Changeset.t()]}
  def insert(repo, changes_list, %{timeout: timeout, timestamps: timestamps} = options) when is_list(changes_list) do
    on_conflict = Map.get_lazy(options, :on_conflict, &default_on_conflict/0)

    # Enforce Deposit ShareLocks order (see docs: sharelock.md)
    ordered_changes_list = Enum.sort_by(changes_list, & &1.l2_transaction_hash)

    {:ok, inserted} =
      Import.insert_changes_list(
        repo,
        ordered_changes_list,
        for: Deposit,
        returning: true,
        timeout: timeout,
        timestamps: timestamps,
        conflict_target: :l2_transaction_hash,
        on_conflict: on_conflict
      )

    {:ok, inserted}
  end

  defp default_on_conflict do
    from(
      deposit in Deposit,
      update: [
        set: [
          # don't update `l2_transaction_hash` as it is a primary key and used for the conflict target
          l1_block_number: fragment("EXCLUDED.l1_block_number"),
          l1_block_timestamp: fragment("EXCLUDED.l1_block_timestamp"),
          l1_transaction_hash: fragment("EXCLUDED.l1_transaction_hash"),
          l1_transaction_origin: fragment("EXCLUDED.l1_transaction_origin"),
          inserted_at: fragment("LEAST(?, EXCLUDED.inserted_at)", deposit.inserted_at),
          updated_at: fragment("GREATEST(?, EXCLUDED.updated_at)", deposit.updated_at)
        ]
      ],
      where:
        fragment(
          "(EXCLUDED.l1_block_number, EXCLUDED.l1_block_timestamp, EXCLUDED.l1_transaction_hash, EXCLUDED.l1_transaction_origin) IS DISTINCT FROM (?, ?, ?, ?)",
          deposit.l1_block_number,
          deposit.l1_block_timestamp,
          deposit.l1_transaction_hash,
          deposit.l1_transaction_origin
        )
    )
  end
end
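
This runner, like the five siblings below, follows one shape: sort the changes to honor ShareLocks ordering, bulk upsert against a conflict target, and skip no-op updates via IS DISTINCT FROM. Runners are not called directly; they are registered under their option_key and driven by Explorer.Chain.Import inside a block-import Multi. A hedged usage sketch, assuming Import.all/1 dispatches on the runner's option_key (hash values are placeholders, not real data):

{:ok, _imported} =
  Explorer.Chain.Import.all(%{
    optimism_deposits: %{
      params: [
        %{
          l1_block_number: 17_000_000,
          l1_block_timestamp: ~U[2023-05-01 00:00:00.000000Z],
          l1_transaction_hash: l1_transaction_hash,
          l1_transaction_origin: l1_transaction_origin,
          l2_transaction_hash: l2_transaction_hash
        }
      ]
    }
  })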
@ -0,0 +1,102 @@
defmodule Explorer.Chain.Import.Runner.Optimism.FrameSequences do
  @moduledoc """
  Bulk imports `t:Explorer.Chain.Optimism.FrameSequence.t/0`.
  """

  require Ecto.Query

  alias Ecto.{Changeset, Multi, Repo}
  alias Explorer.Chain.Import
  alias Explorer.Chain.Optimism.FrameSequence
  alias Explorer.Prometheus.Instrumenter

  import Ecto.Query, only: [from: 2]

  @behaviour Import.Runner

  # milliseconds
  @timeout 60_000

  @type imported :: [FrameSequence.t()]

  @impl Import.Runner
  def ecto_schema_module, do: FrameSequence

  @impl Import.Runner
  def option_key, do: :optimism_frame_sequences

  @impl Import.Runner
  def imported_table_row do
    %{
      value_type: "[#{ecto_schema_module()}.t()]",
      value_description: "List of `t:#{ecto_schema_module()}.t/0`s"
    }
  end

  @impl Import.Runner
  def run(multi, changes_list, %{timestamps: timestamps} = options) do
    insert_options =
      options
      |> Map.get(option_key(), %{})
      |> Map.take(~w(on_conflict timeout)a)
      |> Map.put_new(:timeout, @timeout)
      |> Map.put(:timestamps, timestamps)

    Multi.run(multi, :insert_frame_sequences, fn repo, _ ->
      Instrumenter.block_import_stage_runner(
        fn -> insert(repo, changes_list, insert_options) end,
        :block_referencing,
        :optimism_frame_sequences,
        :optimism_frame_sequences
      )
    end)
  end

  @impl Import.Runner
  def timeout, do: @timeout

  @spec insert(Repo.t(), [map()], %{required(:timeout) => timeout(), required(:timestamps) => Import.timestamps()}) ::
          {:ok, [FrameSequence.t()]}
          | {:error, [Changeset.t()]}
  def insert(repo, changes_list, %{timeout: timeout, timestamps: timestamps} = options) when is_list(changes_list) do
    on_conflict = Map.get_lazy(options, :on_conflict, &default_on_conflict/0)

    # Enforce FrameSequence ShareLocks order (see docs: sharelock.md)
    ordered_changes_list = Enum.sort_by(changes_list, & &1.id)

    {:ok, inserted} =
      Import.insert_changes_list(
        repo,
        ordered_changes_list,
        for: FrameSequence,
        returning: true,
        timeout: timeout,
        timestamps: timestamps,
        conflict_target: :id,
        on_conflict: on_conflict
      )

    {:ok, inserted}
  end

  defp default_on_conflict do
    from(
      fs in FrameSequence,
      update: [
        set: [
          # don't update `id` as it is a primary key and used for the conflict target
          l1_transaction_hashes: fragment("EXCLUDED.l1_transaction_hashes"),
          l1_timestamp: fragment("EXCLUDED.l1_timestamp"),
          inserted_at: fragment("LEAST(?, EXCLUDED.inserted_at)", fs.inserted_at),
          updated_at: fragment("GREATEST(?, EXCLUDED.updated_at)", fs.updated_at)
        ]
      ],
      where:
        fragment(
          "(EXCLUDED.l1_transaction_hashes, EXCLUDED.l1_timestamp) IS DISTINCT FROM (?, ?)",
          fs.l1_transaction_hashes,
          fs.l1_timestamp
        )
    )
  end
end
@ -0,0 +1,108 @@
defmodule Explorer.Chain.Import.Runner.Optimism.OutputRoots do
  @moduledoc """
  Bulk imports `t:Explorer.Chain.Optimism.OutputRoot.t/0`.
  """

  require Ecto.Query

  alias Ecto.{Changeset, Multi, Repo}
  alias Explorer.Chain.Import
  alias Explorer.Chain.Optimism.OutputRoot
  alias Explorer.Prometheus.Instrumenter

  import Ecto.Query, only: [from: 2]

  @behaviour Import.Runner

  # milliseconds
  @timeout 60_000

  @type imported :: [OutputRoot.t()]

  @impl Import.Runner
  def ecto_schema_module, do: OutputRoot

  @impl Import.Runner
  def option_key, do: :optimism_output_roots

  @impl Import.Runner
  def imported_table_row do
    %{
      value_type: "[#{ecto_schema_module()}.t()]",
      value_description: "List of `t:#{ecto_schema_module()}.t/0`s"
    }
  end

  @impl Import.Runner
  def run(multi, changes_list, %{timestamps: timestamps} = options) do
    insert_options =
      options
      |> Map.get(option_key(), %{})
      |> Map.take(~w(on_conflict timeout)a)
      |> Map.put_new(:timeout, @timeout)
      |> Map.put(:timestamps, timestamps)

    Multi.run(multi, :insert_output_roots, fn repo, _ ->
      Instrumenter.block_import_stage_runner(
        fn -> insert(repo, changes_list, insert_options) end,
        :block_referencing,
        :optimism_output_roots,
        :optimism_output_roots
      )
    end)
  end

  @impl Import.Runner
  def timeout, do: @timeout

  @spec insert(Repo.t(), [map()], %{required(:timeout) => timeout(), required(:timestamps) => Import.timestamps()}) ::
          {:ok, [OutputRoot.t()]}
          | {:error, [Changeset.t()]}
  def insert(repo, changes_list, %{timeout: timeout, timestamps: timestamps} = options) when is_list(changes_list) do
    on_conflict = Map.get_lazy(options, :on_conflict, &default_on_conflict/0)

    # Enforce OutputRoot ShareLocks order (see docs: sharelock.md)
    ordered_changes_list = Enum.sort_by(changes_list, & &1.l2_output_index)

    {:ok, inserted} =
      Import.insert_changes_list(
        repo,
        ordered_changes_list,
        for: OutputRoot,
        returning: true,
        timeout: timeout,
        timestamps: timestamps,
        conflict_target: :l2_output_index,
        on_conflict: on_conflict
      )

    {:ok, inserted}
  end

  defp default_on_conflict do
    from(
      root in OutputRoot,
      update: [
        set: [
          # don't update `l2_output_index` as it is a primary key and used for the conflict target
          l2_block_number: fragment("EXCLUDED.l2_block_number"),
          l1_transaction_hash: fragment("EXCLUDED.l1_transaction_hash"),
          l1_timestamp: fragment("EXCLUDED.l1_timestamp"),
          l1_block_number: fragment("EXCLUDED.l1_block_number"),
          output_root: fragment("EXCLUDED.output_root"),
          inserted_at: fragment("LEAST(?, EXCLUDED.inserted_at)", root.inserted_at),
          updated_at: fragment("GREATEST(?, EXCLUDED.updated_at)", root.updated_at)
        ]
      ],
      where:
        fragment(
          "(EXCLUDED.l2_block_number, EXCLUDED.l1_transaction_hash, EXCLUDED.l1_timestamp, EXCLUDED.l1_block_number, EXCLUDED.output_root) IS DISTINCT FROM (?, ?, ?, ?, ?)",
          root.l2_block_number,
          root.l1_transaction_hash,
          root.l1_timestamp,
          root.l1_block_number,
          root.output_root
        )
    )
  end
end
@ -0,0 +1,100 @@
defmodule Explorer.Chain.Import.Runner.Optimism.TxnBatches do
  @moduledoc """
  Bulk imports `t:Explorer.Chain.Optimism.TxnBatch.t/0`.
  """

  require Ecto.Query

  alias Ecto.{Changeset, Multi, Repo}
  alias Explorer.Chain.Import
  alias Explorer.Chain.Optimism.TxnBatch
  alias Explorer.Prometheus.Instrumenter

  import Ecto.Query, only: [from: 2]

  @behaviour Import.Runner

  # milliseconds
  @timeout 60_000

  @type imported :: [TxnBatch.t()]

  @impl Import.Runner
  def ecto_schema_module, do: TxnBatch

  @impl Import.Runner
  def option_key, do: :optimism_txn_batches

  @impl Import.Runner
  def imported_table_row do
    %{
      value_type: "[#{ecto_schema_module()}.t()]",
      value_description: "List of `t:#{ecto_schema_module()}.t/0`s"
    }
  end

  @impl Import.Runner
  def run(multi, changes_list, %{timestamps: timestamps} = options) do
    insert_options =
      options
      |> Map.get(option_key(), %{})
      |> Map.take(~w(on_conflict timeout)a)
      |> Map.put_new(:timeout, @timeout)
      |> Map.put(:timestamps, timestamps)

    Multi.run(multi, :insert_txn_batches, fn repo, _ ->
      Instrumenter.block_import_stage_runner(
        fn -> insert(repo, changes_list, insert_options) end,
        :block_referencing,
        :optimism_txn_batches,
        :optimism_txn_batches
      )
    end)
  end

  @impl Import.Runner
  def timeout, do: @timeout

  @spec insert(Repo.t(), [map()], %{required(:timeout) => timeout(), required(:timestamps) => Import.timestamps()}) ::
          {:ok, [TxnBatch.t()]}
          | {:error, [Changeset.t()]}
  def insert(repo, changes_list, %{timeout: timeout, timestamps: timestamps} = options) when is_list(changes_list) do
    on_conflict = Map.get_lazy(options, :on_conflict, &default_on_conflict/0)

    # Enforce TxnBatch ShareLocks order (see docs: sharelock.md)
    ordered_changes_list = Enum.sort_by(changes_list, & &1.l2_block_number)

    {:ok, inserted} =
      Import.insert_changes_list(
        repo,
        ordered_changes_list,
        for: TxnBatch,
        returning: true,
        timeout: timeout,
        timestamps: timestamps,
        conflict_target: :l2_block_number,
        on_conflict: on_conflict
      )

    {:ok, inserted}
  end

  defp default_on_conflict do
    from(
      tb in TxnBatch,
      update: [
        set: [
          # don't update `l2_block_number` as it is a primary key and used for the conflict target
          frame_sequence_id: fragment("EXCLUDED.frame_sequence_id"),
          inserted_at: fragment("LEAST(?, EXCLUDED.inserted_at)", tb.inserted_at),
          updated_at: fragment("GREATEST(?, EXCLUDED.updated_at)", tb.updated_at)
        ]
      ],
      where:
        fragment(
          "(EXCLUDED.frame_sequence_id) IS DISTINCT FROM (?)",
          tb.frame_sequence_id
        )
    )
  end
end
@ -0,0 +1,105 @@
defmodule Explorer.Chain.Import.Runner.Optimism.WithdrawalEvents do
  @moduledoc """
  Bulk imports `t:Explorer.Chain.Optimism.WithdrawalEvent.t/0`.
  """

  require Ecto.Query

  alias Ecto.{Changeset, Multi, Repo}
  alias Explorer.Chain.Import
  alias Explorer.Chain.Optimism.WithdrawalEvent
  alias Explorer.Prometheus.Instrumenter

  import Ecto.Query, only: [from: 2]

  @behaviour Import.Runner

  # milliseconds
  @timeout 60_000

  @type imported :: [WithdrawalEvent.t()]

  @impl Import.Runner
  def ecto_schema_module, do: WithdrawalEvent

  @impl Import.Runner
  def option_key, do: :optimism_withdrawal_events

  @impl Import.Runner
  def imported_table_row do
    %{
      value_type: "[#{ecto_schema_module()}.t()]",
      value_description: "List of `t:#{ecto_schema_module()}.t/0`s"
    }
  end

  @impl Import.Runner
  def run(multi, changes_list, %{timestamps: timestamps} = options) do
    insert_options =
      options
      |> Map.get(option_key(), %{})
      |> Map.take(~w(on_conflict timeout)a)
      |> Map.put_new(:timeout, @timeout)
      |> Map.put(:timestamps, timestamps)

    Multi.run(multi, :insert_withdrawal_events, fn repo, _ ->
      Instrumenter.block_import_stage_runner(
        fn -> insert(repo, changes_list, insert_options) end,
        :block_referencing,
        :optimism_withdrawal_events,
        :optimism_withdrawal_events
      )
    end)
  end

  @impl Import.Runner
  def timeout, do: @timeout

  @spec insert(Repo.t(), [map()], %{required(:timeout) => timeout(), required(:timestamps) => Import.timestamps()}) ::
          {:ok, [WithdrawalEvent.t()]}
          | {:error, [Changeset.t()]}
  def insert(repo, changes_list, %{timeout: timeout, timestamps: timestamps} = options) when is_list(changes_list) do
    on_conflict = Map.get_lazy(options, :on_conflict, &default_on_conflict/0)

    # Enforce WithdrawalEvent ShareLocks order (see docs: sharelock.md)
    ordered_changes_list = Enum.sort_by(changes_list, &{&1.withdrawal_hash, &1.l1_event_type})

    {:ok, inserted} =
      Import.insert_changes_list(
        repo,
        ordered_changes_list,
        for: WithdrawalEvent,
        returning: true,
        timeout: timeout,
        timestamps: timestamps,
        conflict_target: [:withdrawal_hash, :l1_event_type],
        on_conflict: on_conflict
      )

    {:ok, inserted}
  end

  defp default_on_conflict do
    from(
      we in WithdrawalEvent,
      update: [
        set: [
          # don't update `withdrawal_hash` as it is a part of the composite primary key and used for the conflict target
          # don't update `l1_event_type` as it is a part of the composite primary key and used for the conflict target
          l1_timestamp: fragment("EXCLUDED.l1_timestamp"),
          l1_transaction_hash: fragment("EXCLUDED.l1_transaction_hash"),
          l1_block_number: fragment("EXCLUDED.l1_block_number"),
          inserted_at: fragment("LEAST(?, EXCLUDED.inserted_at)", we.inserted_at),
          updated_at: fragment("GREATEST(?, EXCLUDED.updated_at)", we.updated_at)
        ]
      ],
      where:
        fragment(
          "(EXCLUDED.l1_timestamp, EXCLUDED.l1_transaction_hash, EXCLUDED.l1_block_number) IS DISTINCT FROM (?, ?, ?)",
          we.l1_timestamp,
          we.l1_transaction_hash,
          we.l1_block_number
        )
    )
  end
end
@ -0,0 +1,104 @@
defmodule Explorer.Chain.Import.Runner.Optimism.Withdrawals do
  @moduledoc """
  Bulk imports `t:Explorer.Chain.Optimism.Withdrawal.t/0`.
  """

  require Ecto.Query

  alias Ecto.{Changeset, Multi, Repo}
  alias Explorer.Chain.Import
  alias Explorer.Chain.Optimism.Withdrawal, as: OptimismWithdrawal
  alias Explorer.Prometheus.Instrumenter

  import Ecto.Query, only: [from: 2]

  @behaviour Import.Runner

  # milliseconds
  @timeout 60_000

  @type imported :: [OptimismWithdrawal.t()]

  @impl Import.Runner
  def ecto_schema_module, do: OptimismWithdrawal

  @impl Import.Runner
  def option_key, do: :optimism_withdrawals

  @impl Import.Runner
  def imported_table_row do
    %{
      value_type: "[#{ecto_schema_module()}.t()]",
      value_description: "List of `t:#{ecto_schema_module()}.t/0`s"
    }
  end

  @impl Import.Runner
  def run(multi, changes_list, %{timestamps: timestamps} = options) do
    insert_options =
      options
      |> Map.get(option_key(), %{})
      |> Map.take(~w(on_conflict timeout)a)
      |> Map.put_new(:timeout, @timeout)
      |> Map.put(:timestamps, timestamps)

    Multi.run(multi, :insert_withdrawals, fn repo, _ ->
      Instrumenter.block_import_stage_runner(
        fn -> insert(repo, changes_list, insert_options) end,
        :block_referencing,
        :optimism_withdrawals,
        :optimism_withdrawals
      )
    end)
  end

  @impl Import.Runner
  def timeout, do: @timeout

  @spec insert(Repo.t(), [map()], %{required(:timeout) => timeout(), required(:timestamps) => Import.timestamps()}) ::
          {:ok, [OptimismWithdrawal.t()]}
          | {:error, [Changeset.t()]}
  def insert(repo, changes_list, %{timeout: timeout, timestamps: timestamps} = options) when is_list(changes_list) do
    on_conflict = Map.get_lazy(options, :on_conflict, &default_on_conflict/0)

    # Enforce OptimismWithdrawal ShareLocks order (see docs: sharelock.md)
    ordered_changes_list = Enum.sort_by(changes_list, & &1.msg_nonce)

    {:ok, inserted} =
      Import.insert_changes_list(
        repo,
        ordered_changes_list,
        for: OptimismWithdrawal,
        returning: true,
        timeout: timeout,
        timestamps: timestamps,
        conflict_target: :msg_nonce,
        on_conflict: on_conflict
      )

    {:ok, inserted}
  end

  defp default_on_conflict do
    from(
      withdrawal in OptimismWithdrawal,
      update: [
        set: [
          # don't update `msg_nonce` as it is a primary key and used for the conflict target
          hash: fragment("EXCLUDED.hash"),
          l2_transaction_hash: fragment("EXCLUDED.l2_transaction_hash"),
          l2_block_number: fragment("EXCLUDED.l2_block_number"),
          inserted_at: fragment("LEAST(?, EXCLUDED.inserted_at)", withdrawal.inserted_at),
          updated_at: fragment("GREATEST(?, EXCLUDED.updated_at)", withdrawal.updated_at)
        ]
      ],
      where:
        fragment(
          "(EXCLUDED.hash, EXCLUDED.l2_transaction_hash, EXCLUDED.l2_block_number) IS DISTINCT FROM (?, ?, ?)",
          withdrawal.hash,
          withdrawal.l2_transaction_hash,
          withdrawal.l2_block_number
        )
    )
  end
end
@ -0,0 +1,86 @@
defmodule Explorer.Chain.Optimism.Deposit do
  @moduledoc "Models a deposit for Optimism."

  use Explorer.Schema

  import Explorer.Chain, only: [join_association: 3, select_repo: 1]

  alias Explorer.Chain.{Hash, Transaction}
  alias Explorer.PagingOptions

  @default_paging_options %PagingOptions{page_size: 50}

  @required_attrs ~w(l1_block_number l1_transaction_hash l1_transaction_origin l2_transaction_hash)a
  @optional_attrs ~w(l1_block_timestamp)a
  @allowed_attrs @required_attrs ++ @optional_attrs

  @type t :: %__MODULE__{
          l1_block_number: non_neg_integer(),
          l1_block_timestamp: DateTime.t(),
          l1_transaction_hash: Hash.t(),
          l1_transaction_origin: Hash.t(),
          l2_transaction_hash: Hash.t(),
          l2_transaction: %Ecto.Association.NotLoaded{} | Transaction.t()
        }

  @primary_key false
  schema "op_deposits" do
    field(:l1_block_number, :integer)
    field(:l1_block_timestamp, :utc_datetime_usec)
    field(:l1_transaction_hash, Hash.Full)
    field(:l1_transaction_origin, Hash.Address)

    belongs_to(:l2_transaction, Transaction,
      foreign_key: :l2_transaction_hash,
      primary_key: true,
      references: :hash,
      type: Hash.Full
    )

    timestamps()
  end

  def changeset(%__MODULE__{} = deposit, attrs \\ %{}) do
    deposit
    |> cast(attrs, @allowed_attrs)
    |> validate_required(@required_attrs)
    |> foreign_key_constraint(:l2_transaction_hash)
  end

  def last_deposit_l1_block_number_query do
    from(d in __MODULE__,
      select: {d.l1_block_number, d.l1_transaction_hash},
      order_by: [desc: d.l1_block_number],
      limit: 1
    )
  end

  @doc """
  Lists `t:Explorer.Chain.Optimism.Deposit.t/0`s in descending order based on l1_block_number and l2_transaction_hash.
  """
  @spec list :: [__MODULE__.t()]
  def list(options \\ []) do
    paging_options = Keyword.get(options, :paging_options, @default_paging_options)

    base_query =
      from(d in __MODULE__,
        order_by: [desc: d.l1_block_number, desc: d.l2_transaction_hash]
      )

    base_query
    |> join_association(:l2_transaction, :required)
    |> page_deposits(paging_options)
    |> limit(^paging_options.page_size)
    |> select_repo(options).all()
  end

  defp page_deposits(query, %PagingOptions{key: nil}), do: query

  defp page_deposits(query, %PagingOptions{key: {block_number, l2_tx_hash}}) do
    from(d in query,
      where: d.l1_block_number < ^block_number,
      or_where: d.l1_block_number == ^block_number and d.l2_transaction_hash < ^l2_tx_hash
    )
  end
end
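
page_deposits/2 implements keyset pagination: the paging key is the (l1_block_number, l2_transaction_hash) pair of the last row on the previous page, so each page continues strictly below it. A usage sketch (`last_row` is a hypothetical variable holding that last row):

alias Explorer.Chain.Optimism.Deposit
alias Explorer.PagingOptions

first_page = Deposit.list()

# Next page, keyed by the last row of the previous one.
next_page =
  Deposit.list(
    paging_options: %PagingOptions{
      page_size: 50,
      key: {last_row.l1_block_number, last_row.l2_transaction_hash}
    }
  )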
@ -0,0 +1,33 @@
defmodule Explorer.Chain.Optimism.FrameSequence do
  @moduledoc "Models a frame sequence for Optimism."

  use Explorer.Schema

  alias Explorer.Chain.Hash
  alias Explorer.Chain.Optimism.TxnBatch

  @required_attrs ~w(id l1_transaction_hashes l1_timestamp)a

  @type t :: %__MODULE__{
          l1_transaction_hashes: [Hash.t()],
          l1_timestamp: DateTime.t(),
          transaction_batches: %Ecto.Association.NotLoaded{} | [TxnBatch.t()]
        }

  @primary_key {:id, :integer, autogenerate: false}
  schema "op_frame_sequences" do
    field(:l1_transaction_hashes, {:array, Hash.Full})
    field(:l1_timestamp, :utc_datetime_usec)

    has_many(:transaction_batches, TxnBatch, foreign_key: :frame_sequence_id)

    timestamps()
  end

  def changeset(%__MODULE__{} = sequences, attrs \\ %{}) do
    sequences
    |> cast(attrs, @required_attrs)
    |> validate_required(@required_attrs)
    |> unique_constraint(:id)
  end
end
@ -0,0 +1,67 @@
defmodule Explorer.Chain.Optimism.OutputRoot do
  @moduledoc "Models an output root for Optimism."

  use Explorer.Schema

  import Explorer.Chain, only: [select_repo: 1]

  alias Explorer.Chain.Hash
  alias Explorer.PagingOptions

  @default_paging_options %PagingOptions{page_size: 50}

  @required_attrs ~w(l2_output_index l2_block_number l1_transaction_hash l1_timestamp l1_block_number output_root)a

  @type t :: %__MODULE__{
          l2_output_index: non_neg_integer(),
          l2_block_number: non_neg_integer(),
          l1_transaction_hash: Hash.t(),
          l1_timestamp: DateTime.t(),
          l1_block_number: non_neg_integer(),
          output_root: Hash.t()
        }

  @primary_key false
  schema "op_output_roots" do
    field(:l2_output_index, :integer, primary_key: true)
    field(:l2_block_number, :integer)
    field(:l1_transaction_hash, Hash.Full)
    field(:l1_timestamp, :utc_datetime_usec)
    field(:l1_block_number, :integer)
    field(:output_root, Hash.Full)

    timestamps()
  end

  def changeset(%__MODULE__{} = output_roots, attrs \\ %{}) do
    output_roots
    |> cast(attrs, @required_attrs)
    |> validate_required(@required_attrs)
  end

  @doc """
  Lists `t:Explorer.Chain.Optimism.OutputRoot.t/0`s in descending order based on output root index.
  """
  @spec list :: [__MODULE__.t()]
  def list(options \\ []) do
    paging_options = Keyword.get(options, :paging_options, @default_paging_options)

    base_query =
      from(r in __MODULE__,
        order_by: [desc: r.l2_output_index],
        select: r
      )

    base_query
    |> page_output_roots(paging_options)
    |> limit(^paging_options.page_size)
    |> select_repo(options).all()
  end

  defp page_output_roots(query, %PagingOptions{key: nil}), do: query

  defp page_output_roots(query, %PagingOptions{key: {index}}) do
    from(r in query, where: r.l2_output_index < ^index)
  end
end
@ -0,0 +1,61 @@
defmodule Explorer.Chain.Optimism.TxnBatch do
  @moduledoc "Models a batch of transactions for Optimism."

  use Explorer.Schema

  import Explorer.Chain, only: [join_association: 3, select_repo: 1]

  alias Explorer.Chain.Optimism.FrameSequence
  alias Explorer.PagingOptions

  @default_paging_options %PagingOptions{page_size: 50}

  @required_attrs ~w(l2_block_number frame_sequence_id)a

  @type t :: %__MODULE__{
          l2_block_number: non_neg_integer(),
          frame_sequence_id: non_neg_integer(),
          frame_sequence: %Ecto.Association.NotLoaded{} | FrameSequence.t()
        }

  @primary_key false
  schema "op_transaction_batches" do
    field(:l2_block_number, :integer, primary_key: true)
    belongs_to(:frame_sequence, FrameSequence, foreign_key: :frame_sequence_id, references: :id, type: :integer)

    timestamps()
  end

  def changeset(%__MODULE__{} = batches, attrs \\ %{}) do
    batches
    |> cast(attrs, @required_attrs)
    |> validate_required(@required_attrs)
    |> foreign_key_constraint(:frame_sequence_id)
  end

  @doc """
  Lists `t:Explorer.Chain.Optimism.TxnBatch.t/0`s in descending order based on l2_block_number.
  """
  @spec list :: [__MODULE__.t()]
  def list(options \\ []) do
    paging_options = Keyword.get(options, :paging_options, @default_paging_options)

    base_query =
      from(tb in __MODULE__,
        order_by: [desc: tb.l2_block_number]
      )

    base_query
    |> join_association(:frame_sequence, :required)
    |> page_txn_batches(paging_options)
    |> limit(^paging_options.page_size)
    |> select_repo(options).all()
  end

  defp page_txn_batches(query, %PagingOptions{key: nil}), do: query

  defp page_txn_batches(query, %PagingOptions{key: {block_number}}) do
    from(tb in query, where: tb.l2_block_number < ^block_number)
  end
end
@ -0,0 +1,163 @@
defmodule Explorer.Chain.Optimism.Withdrawal do
  @moduledoc "Models an Optimism withdrawal."

  use Explorer.Schema

  import Explorer.Chain, only: [select_repo: 1]

  alias Explorer.Chain.{Block, Hash, Transaction}
  alias Explorer.Chain.Cache.OptimismFinalizationPeriod
  alias Explorer.Chain.Optimism.{OutputRoot, WithdrawalEvent}
  alias Explorer.{PagingOptions, Repo}

  @default_paging_options %PagingOptions{page_size: 50}

  @required_attrs ~w(msg_nonce hash l2_transaction_hash l2_block_number)a

  @type t :: %__MODULE__{
          msg_nonce: Decimal.t(),
          hash: Hash.t(),
          l2_transaction_hash: Hash.t(),
          l2_block_number: non_neg_integer()
        }

  @primary_key false
  schema "op_withdrawals" do
    field(:msg_nonce, :decimal, primary_key: true)
    field(:hash, Hash.Full)
    field(:l2_transaction_hash, Hash.Full)
    field(:l2_block_number, :integer)

    timestamps()
  end

  def changeset(%__MODULE__{} = withdrawals, attrs \\ %{}) do
    withdrawals
    |> cast(attrs, @required_attrs)
    |> validate_required(@required_attrs)
  end

  @doc """
  Lists `t:Explorer.Chain.Optimism.Withdrawal.t/0`s in descending order based on message nonce.
  """
  @spec list :: [__MODULE__.t()]
  def list(options \\ []) do
    paging_options = Keyword.get(options, :paging_options, @default_paging_options)

    base_query =
      from(w in __MODULE__,
        order_by: [desc: w.msg_nonce],
        left_join: l2_tx in Transaction,
        on: w.l2_transaction_hash == l2_tx.hash,
        left_join: l2_block in Block,
        on: w.l2_block_number == l2_block.number,
        left_join: we in WithdrawalEvent,
        on: we.withdrawal_hash == w.hash and we.l1_event_type == :WithdrawalFinalized,
        select: %{
          msg_nonce: w.msg_nonce,
          hash: w.hash,
          l2_block_number: w.l2_block_number,
          l2_timestamp: l2_block.timestamp,
          l2_transaction_hash: w.l2_transaction_hash,
          l1_transaction_hash: we.l1_transaction_hash,
          from: l2_tx.from_address_hash
        }
      )

    base_query
    |> page_optimism_withdrawals(paging_options)
    |> limit(^paging_options.page_size)
    |> select_repo(options).all()
  end

  defp page_optimism_withdrawals(query, %PagingOptions{key: nil}), do: query

  defp page_optimism_withdrawals(query, %PagingOptions{key: {nonce}}) do
    from(w in query, where: w.msg_nonce < ^nonce)
  end

  @doc """
  Gets withdrawal statuses for an Optimism Withdrawal transaction.
  For each withdrawal associated with this transaction,
  returns the status and the corresponding L1 transaction hash if the status is `Relayed`.
  """
  @spec transaction_statuses(Hash.t()) :: [{non_neg_integer(), String.t(), Hash.t() | nil}]
  def transaction_statuses(l2_transaction_hash) do
    query =
      from(w in __MODULE__,
        where: w.l2_transaction_hash == ^l2_transaction_hash,
        left_join: l2_block in Block,
        on: w.l2_block_number == l2_block.number and l2_block.consensus == true,
        left_join: we in WithdrawalEvent,
        on: we.withdrawal_hash == w.hash and we.l1_event_type == :WithdrawalFinalized,
        select: %{
          hash: w.hash,
          l2_block_number: w.l2_block_number,
          l1_transaction_hash: we.l1_transaction_hash,
          msg_nonce: w.msg_nonce
        }
      )

    query
    |> Repo.replica().all()
    |> Enum.map(fn w ->
      msg_nonce =
        Bitwise.band(
          Decimal.to_integer(w.msg_nonce),
          0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
        )

      {status, _} = status(w)
      {msg_nonce, status, w.l1_transaction_hash}
    end)
  end

  @doc """
  Gets Optimism Withdrawal status and remaining time to unlock (when the status is `In challenge period`).
  """
  @spec status(map()) :: {String.t(), DateTime.t() | nil}
  def status(w) when is_nil(w.l1_transaction_hash) do
    l1_timestamp =
      Repo.replica().one(
        from(
          we in WithdrawalEvent,
          select: we.l1_timestamp,
          where: we.withdrawal_hash == ^w.hash and we.l1_event_type == :WithdrawalProven
        )
      )

    if is_nil(l1_timestamp) do
      last_root_l2_block_number =
        Repo.replica().one(
          from(root in OutputRoot,
            select: root.l2_block_number,
            order_by: [desc: root.l2_output_index],
            limit: 1
          )
        ) || 0

      if w.l2_block_number > last_root_l2_block_number do
        {"Waiting for state root", nil}
      else
        {"Ready to prove", nil}
      end
    else
      challenge_period =
        case OptimismFinalizationPeriod.get_period() do
          nil -> 604_800
          period -> period
        end

      if DateTime.compare(l1_timestamp, DateTime.add(DateTime.utc_now(), -challenge_period, :second)) == :lt do
        {"Ready for relay", nil}
      else
        {"In challenge period", DateTime.add(l1_timestamp, challenge_period, :second)}
      end
    end
  end

  def status(_w) do
    {"Relayed", nil}
  end
end
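
status/1 encodes the withdrawal lifecycle: with no WithdrawalProven event, the withdrawal is either "Waiting for state root" (its L2 block is above the latest output root) or "Ready to prove"; once proven, it stays "In challenge period" until l1_timestamp plus the finalization period, then becomes "Ready for relay"; a WithdrawalFinalized event yields "Relayed". A reading sketch for transaction_statuses/1 output (`l2_tx_hash` is a hypothetical variable):

# Each element is {msg_nonce, status, l1_transaction_hash | nil};
# the L1 hash is present only for finalized ("Relayed") withdrawals.
l2_tx_hash
|> Explorer.Chain.Optimism.Withdrawal.transaction_statuses()
|> Enum.each(fn {nonce, status, l1_hash} ->
  IO.puts("withdrawal #{nonce}: #{status} (#{inspect(l1_hash)})")
end)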
@ -0,0 +1,34 @@ |
defmodule Explorer.Chain.Optimism.WithdrawalEvent do
  @moduledoc "Models an Optimism withdrawal event."

  use Explorer.Schema

  alias Explorer.Chain.Hash

  @required_attrs ~w(withdrawal_hash l1_event_type l1_timestamp l1_transaction_hash l1_block_number)a

  @type t :: %__MODULE__{
          withdrawal_hash: Hash.t(),
          l1_event_type: String.t(),
          l1_timestamp: DateTime.t(),
          l1_transaction_hash: Hash.t(),
          l1_block_number: non_neg_integer()
        }

  @primary_key false
  schema "op_withdrawal_events" do
    field(:withdrawal_hash, Hash.Full, primary_key: true)
    field(:l1_event_type, Ecto.Enum, values: [:WithdrawalProven, :WithdrawalFinalized], primary_key: true)
    field(:l1_timestamp, :utc_datetime_usec)
    field(:l1_transaction_hash, Hash.Full)
    field(:l1_block_number, :integer)

    timestamps()
  end

  def changeset(%__MODULE__{} = withdrawal_events, attrs \\ %{}) do
    withdrawal_events
    |> cast(attrs, @required_attrs)
    |> validate_required(@required_attrs)
  end
end
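A quick usage sketch of the changeset above; the attribute values are placeholders, not real chain data:

alias Explorer.Chain.Optimism.WithdrawalEvent

changeset =
  WithdrawalEvent.changeset(%WithdrawalEvent{}, %{
    withdrawal_hash: "0x" <> String.duplicate("0", 64),
    l1_event_type: :WithdrawalProven,
    l1_timestamp: DateTime.utc_now(),
    l1_transaction_hash: "0x" <> String.duplicate("0", 64),
    l1_block_number: 17_000_000
  })

# all required attributes are present and castable, so this is true
changeset.valid?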
@@ -0,0 +1,14 @@
defmodule Explorer.Repo.Migrations.TransactionColumnsToSupportL2 do
  use Ecto.Migration

  def change do
    alter table(:transactions) do
      add(:l1_fee, :numeric, precision: 100, null: true)
      add(:l1_fee_scalar, :decimal, null: true)
      add(:l1_gas_price, :numeric, precision: 100, null: true)
      add(:l1_gas_used, :numeric, precision: 100, null: true)
      add(:l1_tx_origin, :bytea, null: true)
      add(:l1_block_number, :integer, null: true)
    end
  end
end
@@ -0,0 +1,16 @@
defmodule Explorer.Repo.Migrations.AddOpOutputRootsTable do
  use Ecto.Migration

  def change do
    create table(:op_output_roots, primary_key: false) do
      add(:l2_output_index, :bigint, null: false, primary_key: true)
      add(:l2_block_number, :bigint, null: false)
      add(:l1_tx_hash, :bytea, null: false)
      add(:l1_timestamp, :"timestamp without time zone", null: false)
      add(:l1_block_number, :bigint, null: false)
      add(:output_root, :bytea, null: false)

      timestamps(null: false, type: :utc_datetime_usec)
    end
  end
end
@@ -0,0 +1,14 @@
defmodule Explorer.Repo.Migrations.AddOpWithdrawalsTable do
  use Ecto.Migration

  def change do
    create table(:op_withdrawals, primary_key: false) do
      add(:msg_nonce, :numeric, precision: 100, null: false, primary_key: true)
      add(:withdrawal_hash, :bytea, null: false)
      add(:l2_tx_hash, :bytea, null: false)
      add(:l2_block_number, :bigint, null: false)

      timestamps(null: false, type: :utc_datetime_usec)
    end
  end
end
@@ -0,0 +1,22 @@
defmodule Explorer.Repo.Migrations.AddOpWithdrawalEventsTable do
  use Ecto.Migration

  def change do
    execute(
      "CREATE TYPE withdrawal_event_type AS ENUM ('WithdrawalProven', 'WithdrawalFinalized')",
      "DROP TYPE withdrawal_event_type"
    )

    create table(:op_withdrawal_events, primary_key: false) do
      add(:withdrawal_hash, :bytea, null: false, primary_key: true)
      add(:l1_event_type, :withdrawal_event_type, null: false, primary_key: true)
      add(:l1_timestamp, :"timestamp without time zone", null: false)
      add(:l1_tx_hash, :bytea, null: false)
      add(:l1_block_number, :bigint, null: false)

      timestamps(null: false, type: :utc_datetime_usec)
    end

    create(index(:op_withdrawal_events, :l1_timestamp))
  end
end
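The composite primary key on (withdrawal_hash, l1_event_type) allows at most one WithdrawalProven and one WithdrawalFinalized row per withdrawal, so event fetching can be made idempotent with an upsert. A hypothetical sketch (conflict handling is an assumption, not code from this changeset):

rows = [
  %{
    withdrawal_hash: <<0::256>>,
    l1_event_type: "WithdrawalProven",
    l1_timestamp: DateTime.utc_now(),
    l1_tx_hash: <<0::256>>,
    l1_block_number: 0,
    inserted_at: DateTime.utc_now(),
    updated_at: DateTime.utc_now()
  }
]

# re-inserting the same (withdrawal_hash, l1_event_type) pair replaces the row
Explorer.Repo.insert_all("op_withdrawal_events", rows,
  on_conflict: :replace_all,
  conflict_target: [:withdrawal_hash, :l1_event_type]
)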
@@ -0,0 +1,14 @@
defmodule Explorer.Repo.Migrations.AddOpTransactionBatchesTable do
  use Ecto.Migration

  def change do
    create table(:op_transaction_batches, primary_key: false) do
      add(:l2_block_number, :bigint, null: false, primary_key: true)
      add(:epoch_number, :bigint, null: false)
      add(:l1_tx_hashes, {:array, :bytea}, null: false)
      add(:l1_tx_timestamp, :"timestamp without time zone", null: false)

      timestamps(null: false, type: :utc_datetime_usec)
    end
  end
end
@@ -0,0 +1,17 @@
defmodule Explorer.Repo.Migrations.CreateOpDeposits do
  use Ecto.Migration

  def change do
    create table(:op_deposits, primary_key: false) do
      add(:l1_block_number, :bigint, null: false)
      add(:l1_block_timestamp, :"timestamp without time zone", null: true)
      add(:l1_transaction_hash, :bytea, null: false)
      add(:l1_transaction_origin, :bytea, null: false)
      add(:l2_transaction_hash, :bytea, null: false, primary_key: true)

      timestamps(null: false, type: :utc_datetime_usec)
    end

    create(index(:op_deposits, [:l1_block_number]))
  end
end
@@ -0,0 +1,12 @@
defmodule Explorer.Repo.Migrations.RenameFields do
  use Ecto.Migration

  def change do
    rename(table(:op_transaction_batches), :l1_tx_hashes, to: :l1_transaction_hashes)
    rename(table(:op_transaction_batches), :l1_tx_timestamp, to: :l1_timestamp)
    rename(table(:op_output_roots), :l1_tx_hash, to: :l1_transaction_hash)
    rename(table(:op_withdrawals), :l2_tx_hash, to: :l2_transaction_hash)
    rename(table(:op_withdrawals), :withdrawal_hash, to: :hash)
    rename(table(:op_withdrawal_events), :l1_tx_hash, to: :l1_transaction_hash)
  end
end
@@ -0,0 +1,8 @@
defmodule Explorer.Repo.Migrations.AddOpIndexes do
  use Ecto.Migration

  def change do
    create(index(:op_output_roots, [:l1_block_number]))
    create(index(:op_withdrawal_events, [:l1_block_number]))
  end
end
@@ -0,0 +1,24 @@
defmodule Explorer.Repo.Migrations.AddOpFrameSequencesTable do
  use Ecto.Migration

  def change do
    create table(:op_frame_sequences, primary_key: true) do
      add(:l1_transaction_hashes, {:array, :bytea}, null: false)
      add(:l1_timestamp, :"timestamp without time zone", null: false)

      timestamps(null: false, type: :utc_datetime_usec)
    end

    alter table(:op_transaction_batches) do
      remove(:l1_transaction_hashes)
      remove(:l1_timestamp)

      add(
        :frame_sequence_id,
        references(:op_frame_sequences, on_delete: :restrict, on_update: :update_all, type: :bigint),
        null: false,
        after: :epoch_number
      )
    end
  end
end
@@ -0,0 +1,15 @@
defmodule Explorer.Repo.Optimism.Migrations.ModifyCollatedGasPriceConstraint do
  use Ecto.Migration

  def change do
    execute("ALTER TABLE transactions DROP CONSTRAINT collated_gas_price")

    create(
      constraint(
        :transactions,
        :collated_gas_price,
        check: "block_hash IS NULL OR gas_price IS NOT NULL OR max_fee_per_gas IS NOT NULL"
      )
    )
  end
end
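The relaxed check above admits three shapes of transaction row. A quick truth-table sketch of the same predicate in Elixir (the address value is a placeholder):

valid? = fn block_hash, gas_price, max_fee_per_gas ->
  is_nil(block_hash) or not is_nil(gas_price) or not is_nil(max_fee_per_gas)
end

true = valid?.(nil, nil, nil)        # pending transaction
true = valid?.("0xabc", 1, nil)      # collated legacy transaction
true = valid?.("0xabc", nil, 2)      # collated EIP-1559 transaction
false = valid?.("0xabc", nil, nil)   # rejected by the constraint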
@@ -0,0 +1,7 @@
defmodule Explorer.Repo.Migrations.AddOpWithdrawalIndex do
  use Ecto.Migration

  def change do
    create(index(:op_withdrawals, :l2_transaction_hash))
  end
end
@@ -0,0 +1,9 @@
defmodule Explorer.Repo.Migrations.RemoveOpEpochNumberField do
  use Ecto.Migration

  def change do
    alter table(:op_transaction_batches) do
      remove(:epoch_number)
    end
  end
end
@@ -0,0 +1,406 @@
defmodule Indexer.Fetcher.Optimism do
  @moduledoc """
  Contains common functions for Optimism* fetchers.
  """

  use GenServer
  use Indexer.Fetcher

  require Logger

  import EthereumJSONRPC,
    only: [
      fetch_block_number_by_tag_op_version: 2,
      json_rpc: 2,
      integer_to_quantity: 1,
      quantity_to_integer: 1,
      request: 1
    ]

  import Explorer.Helper, only: [parse_integer: 1]

  alias EthereumJSONRPC.Block.ByNumber
  alias Explorer.Chain.Events.{Publisher, Subscriber}
  alias Indexer.{BoundQueue, Helper}

  @fetcher_name :optimism
  @block_check_interval_range_size 100
  @eth_get_logs_range_size 1000
  @finite_retries_number 3

  def child_spec(start_link_arguments) do
    spec = %{
      id: __MODULE__,
      start: {__MODULE__, :start_link, start_link_arguments},
      restart: :transient,
      type: :worker
    }

    Supervisor.child_spec(spec, [])
  end

  def start_link(args, gen_server_options \\ []) do
    GenServer.start_link(__MODULE__, args, Keyword.put_new(gen_server_options, :name, __MODULE__))
  end

  @impl GenServer
  def init(_args) do
    Logger.metadata(fetcher: @fetcher_name)

    modules_using_reorg_monitor = [
      Indexer.Fetcher.Optimism.TxnBatch,
      Indexer.Fetcher.Optimism.OutputRoot,
      Indexer.Fetcher.Optimism.WithdrawalEvent
    ]

    reorg_monitor_not_needed =
      modules_using_reorg_monitor
      |> Enum.all?(fn module ->
        is_nil(Application.get_all_env(:indexer)[module][:start_block_l1])
      end)

    if reorg_monitor_not_needed do
      :ignore
    else
      optimism_l1_rpc = Application.get_all_env(:indexer)[Indexer.Fetcher.Optimism][:optimism_l1_rpc]

      json_rpc_named_arguments = json_rpc_named_arguments(optimism_l1_rpc)

      {:ok, %{}, {:continue, json_rpc_named_arguments}}
    end
  end

  @impl GenServer
  def handle_continue(json_rpc_named_arguments, _state) do
    {:ok, block_check_interval, _} = get_block_check_interval(json_rpc_named_arguments)
    Process.send(self(), :reorg_monitor, [])

    {:noreply,
     %{block_check_interval: block_check_interval, json_rpc_named_arguments: json_rpc_named_arguments, prev_latest: 0}}
  end

  @impl GenServer
  def handle_info(
        :reorg_monitor,
        %{
          block_check_interval: block_check_interval,
          json_rpc_named_arguments: json_rpc_named_arguments,
          prev_latest: prev_latest
        } = state
      ) do
    {:ok, latest} = get_block_number_by_tag("latest", json_rpc_named_arguments, Helper.infinite_retries_number())

    if latest < prev_latest do
      Logger.warning("Reorg detected: previous latest block ##{prev_latest}, current latest block ##{latest}.")

      Publisher.broadcast([{:optimism_reorg_block, latest}], :realtime)
    end

    Process.send_after(self(), :reorg_monitor, block_check_interval)

    {:noreply, %{state | prev_latest: latest}}
  end

  @doc """
  Calculates the average block time in milliseconds (based on the latest 100 blocks) divided by 2.
  Sends the corresponding requests to the RPC node.

  Returns `{:ok, block_check_interval, last_safe_block}`
  where `last_safe_block` is the number of the recent `safe` or `latest` block (depending on which one is available).
  Returns `{:error, description}` in case of an error.
  """
  @spec get_block_check_interval(list()) :: {:ok, non_neg_integer(), non_neg_integer()} | {:error, any()}
  def get_block_check_interval(json_rpc_named_arguments) do
    {last_safe_block, _} = get_safe_block(json_rpc_named_arguments)

    first_block = max(last_safe_block - @block_check_interval_range_size, 1)

    with {:ok, first_block_timestamp} <-
           get_block_timestamp_by_number(first_block, json_rpc_named_arguments, Helper.infinite_retries_number()),
         {:ok, last_safe_block_timestamp} <-
           get_block_timestamp_by_number(last_safe_block, json_rpc_named_arguments, Helper.infinite_retries_number()) do
      block_check_interval =
        ceil((last_safe_block_timestamp - first_block_timestamp) / (last_safe_block - first_block) * 1000 / 2)

      Logger.info("Block check interval is calculated as #{block_check_interval} ms.")
      {:ok, block_check_interval, last_safe_block}
    else
      {:error, error} ->
        {:error, "Failed to calculate block check interval due to #{inspect(error)}"}
    end
  end
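  # Worked example for the interval formula above (illustrative numbers):
  # if the 100 most recent blocks span 1200 seconds, the average block time
  # is 12 s, and half of it, expressed in milliseconds, becomes the polling
  # interval:
  #
  #   6000 = ceil((1_700_001_200 - 1_700_000_000) / (200 - 100) * 1000 / 2)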
  @doc """
  Fetches the block number by its tag (e.g. `latest` or `safe`) using an RPC request.
  Performs up to the specified number of retries if the first attempt returns an error.
  """
  @spec get_block_number_by_tag(binary(), list(), non_neg_integer()) :: {:ok, non_neg_integer()} | {:error, atom()}
  def get_block_number_by_tag(tag, json_rpc_named_arguments, retries \\ @finite_retries_number) do
    error_message = &"Cannot fetch #{tag} block number. Error: #{inspect(&1)}"

    Helper.repeated_call(
      &fetch_block_number_by_tag_op_version/2,
      [tag, json_rpc_named_arguments],
      error_message,
      retries
    )
  end

  @doc """
  Tries to get the `safe` block number from the RPC node.
  If it's not available, gets the `latest` one.

  Returns a tuple of `{block_number, is_latest}`
  where `is_latest` is true if `safe` is not available.
  """
  @spec get_safe_block(list()) :: {non_neg_integer(), boolean()}
  def get_safe_block(json_rpc_named_arguments) do
    case get_block_number_by_tag("safe", json_rpc_named_arguments) do
      {:ok, safe_block} ->
        {safe_block, false}

      {:error, :not_found} ->
        {:ok, latest_block} =
          get_block_number_by_tag("latest", json_rpc_named_arguments, Helper.infinite_retries_number())

        {latest_block, true}
    end
  end

  defp get_block_timestamp_by_number_inner(number, json_rpc_named_arguments) do
    result =
      %{id: 0, number: number}
      |> ByNumber.request(false)
      |> json_rpc(json_rpc_named_arguments)

    with {:ok, block} <- result,
         false <- is_nil(block),
         timestamp <- Map.get(block, "timestamp"),
         false <- is_nil(timestamp) do
      {:ok, quantity_to_integer(timestamp)}
    else
      {:error, message} ->
        {:error, message}

      true ->
        {:error, "RPC returned nil."}
    end
  end

  @doc """
  Fetches the block timestamp by its number using an RPC request.
  Performs up to the specified number of retries if the first attempt returns an error.
  """
  @spec get_block_timestamp_by_number(non_neg_integer(), list(), non_neg_integer()) ::
          {:ok, non_neg_integer()} | {:error, any()}
  def get_block_timestamp_by_number(number, json_rpc_named_arguments, retries \\ @finite_retries_number) do
    func = &get_block_timestamp_by_number_inner/2
    args = [number, json_rpc_named_arguments]
    error_message = &"Cannot fetch block ##{number} or its timestamp. Error: #{inspect(&1)}"
    Helper.repeated_call(func, args, error_message, retries)
  end

  @doc """
  Fetches logs emitted by the specified contract (address)
  within the specified block range and with the given first topic from the RPC node.
  Performs up to the specified number of retries if the first attempt returns an error.
  """
  @spec get_logs(
          non_neg_integer() | binary(),
          non_neg_integer() | binary(),
          binary(),
          binary() | list(),
          list(),
          non_neg_integer()
        ) :: {:ok, list()} | {:error, term()}
  def get_logs(from_block, to_block, address, topic0, json_rpc_named_arguments, retries) do
    processed_from_block = if is_integer(from_block), do: integer_to_quantity(from_block), else: from_block
    processed_to_block = if is_integer(to_block), do: integer_to_quantity(to_block), else: to_block

    req =
      request(%{
        id: 0,
        method: "eth_getLogs",
        params: [
          %{
            :fromBlock => processed_from_block,
            :toBlock => processed_to_block,
            :address => address,
            :topics => [topic0]
          }
        ]
      })

    error_message = &"Cannot fetch logs for the block range #{from_block}..#{to_block}. Error: #{inspect(&1)}"

    Helper.repeated_call(&json_rpc/2, [req, json_rpc_named_arguments], error_message, retries)
  end
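  # Illustrative call of get_logs/6 (the RPC URL, contract address, and
  # block numbers are placeholders, not real deployment values; the topic
  # is the OutputProposed signature used elsewhere in this changeset):
  #
  #   args = Indexer.Fetcher.Optimism.json_rpc_named_arguments("https://l1.example/rpc")
  #
  #   {:ok, logs} =
  #     Indexer.Fetcher.Optimism.get_logs(
  #       17_000_000,
  #       17_000_999,
  #       "0x0000000000000000000000000000000000000000",
  #       "0xa7aaf2512769da4e444e3de247be2564225c2e7a8f74cfe528e46e17d24868e2",
  #       args,
  #       3
  #     )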
  @doc """
  Fetches transaction data by its hash using an RPC request.
  Performs up to the specified number of retries if the first attempt returns an error.
  """
  @spec get_transaction_by_hash(binary() | nil, list(), non_neg_integer()) :: {:ok, any()} | {:error, any()}
  def get_transaction_by_hash(hash, json_rpc_named_arguments, retries_left \\ @finite_retries_number)

  def get_transaction_by_hash(hash, _json_rpc_named_arguments, _retries_left) when is_nil(hash), do: {:ok, nil}

  def get_transaction_by_hash(hash, json_rpc_named_arguments, retries) do
    req =
      request(%{
        id: 0,
        method: "eth_getTransactionByHash",
        params: [hash]
      })

    error_message = &"eth_getTransactionByHash failed. Error: #{inspect(&1)}"

    Helper.repeated_call(&json_rpc/2, [req, json_rpc_named_arguments], error_message, retries)
  end

  def get_logs_range_size do
    @eth_get_logs_range_size
  end

  @doc """
  Forms JSON RPC named arguments for the given RPC URL.
  """
  @spec json_rpc_named_arguments(binary()) :: list()
  def json_rpc_named_arguments(optimism_l1_rpc) do
    [
      transport: EthereumJSONRPC.HTTP,
      transport_options: [
        http: EthereumJSONRPC.HTTP.HTTPoison,
        url: optimism_l1_rpc,
        http_options: [
          recv_timeout: :timer.minutes(10),
          timeout: :timer.minutes(10),
          hackney: [pool: :ethereum_jsonrpc]
        ]
      ]
    ]
  end

  def init_continue(env, contract_address, caller)
      when caller in [Indexer.Fetcher.Optimism.WithdrawalEvent, Indexer.Fetcher.Optimism.OutputRoot] do
    {contract_name, table_name, start_block_note} =
      if caller == Indexer.Fetcher.Optimism.WithdrawalEvent do
        {"Optimism Portal", "op_withdrawal_events", "Withdrawals L1"}
      else
        {"Output Oracle", "op_output_roots", "Output Roots"}
      end

    with {:start_block_l1_undefined, false} <- {:start_block_l1_undefined, is_nil(env[:start_block_l1])},
         {:reorg_monitor_started, true} <- {:reorg_monitor_started, !is_nil(Process.whereis(Indexer.Fetcher.Optimism))},
         optimism_l1_rpc = Application.get_all_env(:indexer)[Indexer.Fetcher.Optimism][:optimism_l1_rpc],
         {:rpc_l1_undefined, false} <- {:rpc_l1_undefined, is_nil(optimism_l1_rpc)},
         {:contract_is_valid, true} <- {:contract_is_valid, Helper.address_correct?(contract_address)},
         start_block_l1 = parse_integer(env[:start_block_l1]),
         false <- is_nil(start_block_l1),
         true <- start_block_l1 > 0,
         {last_l1_block_number, last_l1_transaction_hash} <- caller.get_last_l1_item(),
         {:start_block_l1_valid, true} <-
           {:start_block_l1_valid, start_block_l1 <= last_l1_block_number || last_l1_block_number == 0},
         json_rpc_named_arguments = json_rpc_named_arguments(optimism_l1_rpc),
         {:ok, last_l1_tx} <- get_transaction_by_hash(last_l1_transaction_hash, json_rpc_named_arguments),
         {:l1_tx_not_found, false} <- {:l1_tx_not_found, !is_nil(last_l1_transaction_hash) && is_nil(last_l1_tx)},
         {:ok, block_check_interval, last_safe_block} <- get_block_check_interval(json_rpc_named_arguments) do
      start_block = max(start_block_l1, last_l1_block_number)

      Subscriber.to(:optimism_reorg_block, :realtime)

      Process.send(self(), :continue, [])

      {:noreply,
       %{
         contract_address: contract_address,
         block_check_interval: block_check_interval,
         start_block: start_block,
         end_block: last_safe_block,
         json_rpc_named_arguments: json_rpc_named_arguments
       }}
    else
      {:start_block_l1_undefined, true} ->
        # the process shouldn't start if the start block is not defined
        {:stop, :normal, %{}}

      {:reorg_monitor_started, false} ->
        Logger.error("Cannot start this process as the reorg monitor in Indexer.Fetcher.Optimism is not started.")
        {:stop, :normal, %{}}

      {:rpc_l1_undefined, true} ->
        Logger.error("L1 RPC URL is not defined.")
        {:stop, :normal, %{}}

      {:contract_is_valid, false} ->
        Logger.error("#{contract_name} contract address is invalid or not defined.")
        {:stop, :normal, %{}}

      {:start_block_l1_valid, false} ->
        Logger.error("Invalid L1 Start Block value. Please check the value and the #{table_name} table.")
        {:stop, :normal, %{}}

      {:error, error_data} ->
        Logger.error(
          "Cannot get the last L1 transaction from RPC by its hash, the last safe/latest block, or the block timestamp by its number due to an RPC error: #{inspect(error_data)}"
        )

        {:stop, :normal, %{}}

      {:l1_tx_not_found, true} ->
        Logger.error(
          "Cannot find the last L1 transaction from RPC by its hash. Probably there was a reorg on the L1 chain. Please check the #{table_name} table."
        )

        {:stop, :normal, %{}}

      _ ->
        Logger.error("#{start_block_note} Start Block is invalid or zero.")
        {:stop, :normal, %{}}
    end
  end

  def repeated_request(req, error_message, json_rpc_named_arguments, retries) do
    Helper.repeated_call(&json_rpc/2, [req, json_rpc_named_arguments], error_message, retries)
  end

  def reorg_block_pop(fetcher_name) do
    table_name = reorg_table_name(fetcher_name)

    case BoundQueue.pop_front(reorg_queue_get(table_name)) do
      {:ok, {block_number, updated_queue}} ->
        :ets.insert(table_name, {:queue, updated_queue})
        block_number

      {:error, :empty} ->
        nil
    end
  end

  def reorg_block_push(fetcher_name, block_number) do
    table_name = reorg_table_name(fetcher_name)
    {:ok, updated_queue} = BoundQueue.push_back(reorg_queue_get(table_name), block_number)
    :ets.insert(table_name, {:queue, updated_queue})
  end

  defp reorg_queue_get(table_name) do
    if :ets.whereis(table_name) == :undefined do
      :ets.new(table_name, [
        :set,
        :named_table,
        :public,
        read_concurrency: true,
        write_concurrency: true
      ])
    end

    with info when info != :undefined <- :ets.info(table_name),
         [{_, value}] <- :ets.lookup(table_name, :queue) do
      value
    else
      _ -> %BoundQueue{}
    end
  end

  defp reorg_table_name(fetcher_name) do
    :"#{fetcher_name}#{:_reorgs}"
  end
end
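The reorg helpers above back each fetcher with its own named ETS table holding a BoundQueue of reorged block numbers. An illustrative round trip (the fetcher name is arbitrary):

Indexer.Fetcher.Optimism.reorg_block_push(:optimism_output_roots, 123)

# popping returns the queued block number, then nil once the queue is empty
123 = Indexer.Fetcher.Optimism.reorg_block_pop(:optimism_output_roots)
nil = Indexer.Fetcher.Optimism.reorg_block_pop(:optimism_output_roots)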
@@ -0,0 +1,565 @@
defmodule Indexer.Fetcher.Optimism.Deposit do
  @moduledoc """
  Fills the op_deposits DB table.
  """

  use GenServer
  use Indexer.Fetcher

  require Logger

  import Ecto.Query

  import EthereumJSONRPC, only: [integer_to_quantity: 1, quantity_to_integer: 1, request: 1]
  import Explorer.Helper, only: [decode_data: 2, parse_integer: 1]

  alias EthereumJSONRPC.Block.ByNumber
  alias EthereumJSONRPC.Blocks
  alias Explorer.{Chain, Repo}
  alias Explorer.Chain.Events.Publisher
  alias Explorer.Chain.Optimism.Deposit
  alias Indexer.Fetcher.Optimism
  alias Indexer.Helper

  defstruct [
    :batch_size,
    :start_block,
    :from_block,
    :safe_block,
    :optimism_portal,
    :json_rpc_named_arguments,
    mode: :catch_up,
    filter_id: nil,
    check_interval: nil
  ]

  # 32-byte signature of the event TransactionDeposited(address indexed from, address indexed to, uint256 indexed version, bytes opaqueData)
  @transaction_deposited_event "0xb3813568d9991fc951961fcb4c784893574240a28925604d09fc577c55bb7c32"
  @retry_interval_minutes 3
  @retry_interval :timer.minutes(@retry_interval_minutes)
  @address_prefix "0x000000000000000000000000"
  @batch_size 500
  @fetcher_name :optimism_deposits

  def child_spec(start_link_arguments) do
    spec = %{
      id: __MODULE__,
      start: {__MODULE__, :start_link, start_link_arguments},
      restart: :transient,
      type: :worker
    }

    Supervisor.child_spec(spec, [])
  end

  def start_link(args, gen_server_options \\ []) do
    GenServer.start_link(__MODULE__, args, Keyword.put_new(gen_server_options, :name, __MODULE__))
  end

  @impl GenServer
  def init(_args) do
    {:ok, %{}, {:continue, :ok}}
  end

  @impl GenServer
  def handle_continue(:ok, state) do
    Logger.metadata(fetcher: @fetcher_name)

    env = Application.get_all_env(:indexer)[__MODULE__]
    optimism_env = Application.get_all_env(:indexer)[Indexer.Fetcher.Optimism]
    optimism_portal = optimism_env[:optimism_l1_portal]
    optimism_l1_rpc = optimism_env[:optimism_l1_rpc]

    with {:start_block_l1_undefined, false} <- {:start_block_l1_undefined, is_nil(env[:start_block_l1])},
         {:optimism_portal_valid, true} <- {:optimism_portal_valid, Helper.address_correct?(optimism_portal)},
         {:rpc_l1_undefined, false} <- {:rpc_l1_undefined, is_nil(optimism_l1_rpc)},
         start_block_l1 <- parse_integer(env[:start_block_l1]),
         false <- is_nil(start_block_l1),
         true <- start_block_l1 > 0,
         {last_l1_block_number, last_l1_tx_hash} <- get_last_l1_item(),
         json_rpc_named_arguments = Optimism.json_rpc_named_arguments(optimism_l1_rpc),
         {:ok, last_l1_tx} <- Optimism.get_transaction_by_hash(last_l1_tx_hash, json_rpc_named_arguments),
         {:l1_tx_not_found, false} <- {:l1_tx_not_found, !is_nil(last_l1_tx_hash) && is_nil(last_l1_tx)},
         {safe_block, _} = Optimism.get_safe_block(json_rpc_named_arguments),
         {:start_block_l1_valid, true} <-
           {:start_block_l1_valid,
            (start_block_l1 <= last_l1_block_number || last_l1_block_number == 0) && start_block_l1 <= safe_block} do
      start_block = max(start_block_l1, last_l1_block_number)

      if start_block > safe_block do
        Process.send(self(), :switch_to_realtime, [])
      else
        Process.send(self(), :fetch, [])
      end

      {:noreply,
       %__MODULE__{
         start_block: start_block,
         from_block: start_block,
         safe_block: safe_block,
         optimism_portal: optimism_portal,
         json_rpc_named_arguments: json_rpc_named_arguments,
         batch_size: parse_integer(env[:batch_size]) || @batch_size
       }}
    else
      {:start_block_l1_undefined, true} ->
        # the process shouldn't start if the start block is not defined
        {:stop, :normal, state}

      {:start_block_l1_valid, false} ->
        Logger.error("Invalid L1 Start Block value. Please check the value and the op_deposits table.")
        {:stop, :normal, state}

      {:rpc_l1_undefined, true} ->
        Logger.error("L1 RPC URL is not defined.")
        {:stop, :normal, state}

      {:optimism_portal_valid, false} ->
        Logger.error("OptimismPortal contract address is invalid or undefined.")
        {:stop, :normal, state}

      {:error, error_data} ->
        Logger.error("Cannot get the last L1 transaction from RPC by its hash due to an RPC error: #{inspect(error_data)}")

        {:stop, :normal, state}

      {:l1_tx_not_found, true} ->
        Logger.error(
          "Cannot find the last L1 transaction from RPC by its hash. Probably there was a reorg on the L1 chain. Please check the op_deposits table."
        )

        {:stop, :normal, state}

      _ ->
        Logger.error("Optimism deposits L1 Start Block is invalid or zero.")
        {:stop, :normal, state}
    end
  end
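  # Mode overview (summary of the clauses below): the fetcher starts in
  # :catch_up mode and pages through [start_block..safe_block] with
  # eth_getLogs; once it reaches the safe block, it installs a log filter
  # via eth_newFilter and switches to :realtime mode, polling
  # eth_getFilterChanges every check_interval milliseconds.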
  @impl GenServer
  def handle_info(
        :fetch,
        %__MODULE__{
          start_block: start_block,
          from_block: from_block,
          safe_block: safe_block,
          optimism_portal: optimism_portal,
          json_rpc_named_arguments: json_rpc_named_arguments,
          mode: :catch_up,
          batch_size: batch_size
        } = state
      ) do
    to_block = min(from_block + batch_size, safe_block)

    with {:logs, {:ok, logs}} <-
           {:logs,
            Optimism.get_logs(
              from_block,
              to_block,
              optimism_portal,
              @transaction_deposited_event,
              json_rpc_named_arguments,
              3
            )},
         _ = Helper.log_blocks_chunk_handling(from_block, to_block, start_block, safe_block, nil, :L1),
         deposits = events_to_deposits(logs, json_rpc_named_arguments),
         {:import, {:ok, _imported}} <-
           {:import, Chain.import(%{optimism_deposits: %{params: deposits}, timeout: :infinity})} do
      Publisher.broadcast(%{optimism_deposits: deposits}, :realtime)

      Helper.log_blocks_chunk_handling(
        from_block,
        to_block,
        start_block,
        safe_block,
        "#{Enum.count(deposits)} TransactionDeposited event(s)",
        :L1
      )

      if to_block == safe_block do
        Logger.info("Fetched all L1 blocks (#{start_block}..#{safe_block}), switching to realtime mode.")
        Process.send(self(), :switch_to_realtime, [])
        {:noreply, state}
      else
        Process.send(self(), :fetch, [])
        {:noreply, %{state | from_block: to_block + 1}}
      end
    else
      {:logs, {:error, _error}} ->
        Logger.error("Cannot fetch logs. Retrying in #{@retry_interval_minutes} minutes...")
        Process.send_after(self(), :fetch, @retry_interval)
        {:noreply, state}

      {:import, {:error, error}} ->
        Logger.error("Cannot import logs due to #{inspect(error)}. Retrying in #{@retry_interval_minutes} minutes...")
        Process.send_after(self(), :fetch, @retry_interval)
        {:noreply, state}

      {:import, {:error, step, failed_value, _changes_so_far}} ->
        Logger.error(
          "Failed to import #{inspect(failed_value)} during #{step}. Retrying in #{@retry_interval_minutes} minutes..."
        )

        Process.send_after(self(), :fetch, @retry_interval)
        {:noreply, state}
    end
  end

  @impl GenServer
  def handle_info(
        :switch_to_realtime,
        %__MODULE__{
          from_block: from_block,
          safe_block: safe_block,
          optimism_portal: optimism_portal,
          json_rpc_named_arguments: json_rpc_named_arguments,
          batch_size: batch_size,
          mode: :catch_up
        } = state
      ) do
    with {:check_interval, {:ok, check_interval, new_safe}} <-
           {:check_interval, Optimism.get_block_check_interval(json_rpc_named_arguments)},
         {:catch_up, _, false} <- {:catch_up, new_safe, new_safe - safe_block + 1 > batch_size},
         {:logs, {:ok, logs}} <-
           {:logs,
            Optimism.get_logs(
              max(safe_block, from_block),
              "latest",
              optimism_portal,
              @transaction_deposited_event,
              json_rpc_named_arguments,
              3
            )},
         {:ok, filter_id} <-
           get_new_filter(
             max(safe_block, from_block),
             "latest",
             optimism_portal,
             @transaction_deposited_event,
             json_rpc_named_arguments
           ) do
      handle_new_logs(logs, json_rpc_named_arguments)
      Process.send(self(), :fetch, [])
      {:noreply, %{state | mode: :realtime, filter_id: filter_id, check_interval: check_interval}}
    else
      {:catch_up, new_safe, true} ->
        Process.send(self(), :fetch, [])
        {:noreply, %{state | safe_block: new_safe}}

      {:logs, {:error, error}} ->
        Logger.error("Failed to get logs while switching to realtime mode, reason: #{inspect(error)}")
        Process.send_after(self(), :switch_to_realtime, @retry_interval)
        {:noreply, state}

      {:error, _error} ->
        Logger.error("Failed to set the logs filter. Retrying in #{@retry_interval_minutes} minutes...")
        Process.send_after(self(), :switch_to_realtime, @retry_interval)
        {:noreply, state}

      {:check_interval, {:error, _error}} ->
        Logger.error("Failed to calculate the check_interval. Retrying in #{@retry_interval_minutes} minutes...")
        Process.send_after(self(), :switch_to_realtime, @retry_interval)
        {:noreply, state}
    end
  end

  @impl GenServer
  def handle_info(
        :fetch,
        %__MODULE__{
          json_rpc_named_arguments: json_rpc_named_arguments,
          mode: :realtime,
          filter_id: filter_id,
          check_interval: check_interval
        } = state
      ) do
    case get_filter_changes(filter_id, json_rpc_named_arguments) do
      {:ok, logs} ->
        handle_new_logs(logs, json_rpc_named_arguments)
        Process.send_after(self(), :fetch, check_interval)
        {:noreply, state}

      {:error, :filter_not_found} ->
        Logger.error("The old filter was not found on the node. Creating a new filter...")
        Process.send(self(), :update_filter, [])
        {:noreply, state}

      {:error, _error} ->
        Logger.error("Failed to get filter changes. Retrying in #{@retry_interval_minutes} minutes...")
        Process.send_after(self(), :fetch, @retry_interval)
        {:noreply, state}
    end
  end

  @impl GenServer
  def handle_info(
        :update_filter,
        %__MODULE__{
          optimism_portal: optimism_portal,
          json_rpc_named_arguments: json_rpc_named_arguments,
          mode: :realtime
        } = state
      ) do
    {last_l1_block_number, _} = get_last_l1_item()

    case get_new_filter(
           last_l1_block_number + 1,
           "latest",
           optimism_portal,
           @transaction_deposited_event,
           json_rpc_named_arguments
         ) do
      {:ok, filter_id} ->
        Process.send(self(), :fetch, [])
        {:noreply, %{state | filter_id: filter_id}}

      {:error, _error} ->
        Logger.error("Failed to set the logs filter. Retrying in #{@retry_interval_minutes} minutes...")
        Process.send_after(self(), :update_filter, @retry_interval)
        {:noreply, state}
    end
  end

  @impl GenServer
  def handle_info({ref, _result}, state) do
    Process.demonitor(ref, [:flush])
    {:noreply, state}
  end

  @impl GenServer
  def terminate(
        _reason,
        %__MODULE__{
          json_rpc_named_arguments: json_rpc_named_arguments
        } = state
      ) do
    if state.filter_id do
      Logger.info("Optimism deposits fetcher is terminating, uninstalling the filter.")
      uninstall_filter(state.filter_id, json_rpc_named_arguments)
    end
  end

  @impl GenServer
  def terminate(:normal, _state) do
    :ok
  end

  defp handle_new_logs(logs, json_rpc_named_arguments) do
    {reorgs, logs_to_parse, min_block, max_block, cnt} =
      logs
      |> Enum.reduce({MapSet.new(), [], nil, 0, 0}, fn
        %{"removed" => true, "blockNumber" => block_number}, {reorgs, logs_to_parse, min_block, max_block, cnt} ->
          {MapSet.put(reorgs, block_number), logs_to_parse, min_block, max_block, cnt}

        %{"blockNumber" => block_number} = log, {reorgs, logs_to_parse, min_block, max_block, cnt} ->
          {
            reorgs,
            [log | logs_to_parse],
            min(min_block, quantity_to_integer(block_number)),
            max(max_block, quantity_to_integer(block_number)),
            cnt + 1
          }
      end)

    handle_reorgs(reorgs)

    unless Enum.empty?(logs_to_parse) do
      deposits = events_to_deposits(logs_to_parse, json_rpc_named_arguments)
      {:ok, _imported} = Chain.import(%{optimism_deposits: %{params: deposits}, timeout: :infinity})

      Publisher.broadcast(%{optimism_deposits: deposits}, :realtime)

      Helper.log_blocks_chunk_handling(
        min_block,
        max_block,
        min_block,
        max_block,
        "#{cnt} TransactionDeposited event(s)",
        :L1
      )
    end
  end

  defp events_to_deposits(logs, json_rpc_named_arguments) do
    timestamps =
      logs
      |> Enum.reduce(MapSet.new(), fn %{"blockNumber" => block_number_quantity}, acc ->
        block_number = quantity_to_integer(block_number_quantity)
        MapSet.put(acc, block_number)
      end)
      |> MapSet.to_list()
      |> get_block_timestamps_by_numbers(json_rpc_named_arguments)
      |> case do
        {:ok, timestamps} ->
          timestamps

        {:error, error} ->
          Logger.error(
            "Failed to get L1 block timestamps for deposits due to #{inspect(error)}. Timestamps will be set to null."
          )

          %{}
      end

    Enum.map(logs, &event_to_deposit(&1, timestamps))
  end

  defp event_to_deposit(
         %{
           "blockHash" => "0x" <> stripped_block_hash,
           "blockNumber" => block_number_quantity,
           "transactionHash" => transaction_hash,
           "logIndex" => "0x" <> stripped_log_index,
           "topics" => [_, @address_prefix <> from_stripped, @address_prefix <> to_stripped, _],
           "data" => opaque_data
         },
         timestamps
       ) do
    {_, prefixed_block_hash} = (String.pad_leading("", 64, "0") <> stripped_block_hash) |> String.split_at(-64)
    {_, prefixed_log_index} = (String.pad_leading("", 64, "0") <> stripped_log_index) |> String.split_at(-64)

    deposit_id_hash =
      "#{prefixed_block_hash}#{prefixed_log_index}"
      |> Base.decode16!(case: :mixed)
      |> ExKeccak.hash_256()
      |> Base.encode16(case: :lower)

    source_hash =
      "#{String.pad_leading("", 64, "0")}#{deposit_id_hash}"
      |> Base.decode16!(case: :mixed)
      |> ExKeccak.hash_256()

    [
      <<
        msg_value::binary-size(32),
        value::binary-size(32),
        gas_limit::binary-size(8),
        is_creation::binary-size(1),
        data::binary
      >>
    ] = decode_data(opaque_data, [:bytes])

    rlp_encoded =
      ExRLP.encode(
        [
          source_hash,
          from_stripped |> Base.decode16!(case: :mixed),
          to_stripped |> Base.decode16!(case: :mixed),
          msg_value |> String.replace_leading(<<0>>, <<>>),
          value |> String.replace_leading(<<0>>, <<>>),
          gas_limit |> String.replace_leading(<<0>>, <<>>),
          is_creation |> String.replace_leading(<<0>>, <<>>),
          data
        ],
        encoding: :hex
      )

    l2_tx_hash =
      "0x" <> ("7e#{rlp_encoded}" |> Base.decode16!(case: :mixed) |> ExKeccak.hash_256() |> Base.encode16(case: :lower))

    block_number = quantity_to_integer(block_number_quantity)

    %{
      l1_block_number: block_number,
      l1_block_timestamp: Map.get(timestamps, block_number),
      l1_transaction_hash: transaction_hash,
      l1_transaction_origin: "0x" <> from_stripped,
      l2_transaction_hash: l2_tx_hash
    }
  end
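  # Sketch of the derivation implemented above, as understood from the
  # OP Stack deposit scheme (placeholder zero values, runnable with the
  # same ExKeccak dependency the function already uses):
  #
  #   block_hash = <<0::256>>
  #   log_index = <<0::256>>
  #   deposit_id_hash = ExKeccak.hash_256(block_hash <> log_index)
  #   source_hash = ExKeccak.hash_256(<<0::256>> <> deposit_id_hash)
  #
  # The L2 transaction hash is then the keccak256 of the RLP-encoded
  # deposit payload prefixed with the 0x7E deposit transaction type byte.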
  defp handle_reorgs(reorgs) do
    if MapSet.size(reorgs) > 0 do
      Logger.warning("L1 reorg detected. The following L1 blocks were removed: #{inspect(MapSet.to_list(reorgs))}")

      {deleted_count, _} = Repo.delete_all(from(d in Deposit, where: d.l1_block_number in ^reorgs))

      if deleted_count > 0 do
        Logger.warning(
          "As L1 reorg was detected, all affected rows were removed from the op_deposits table. Number of removed rows: #{deleted_count}."
        )
      end
    end
  end

  defp get_block_timestamps_by_numbers(numbers, json_rpc_named_arguments, retries \\ 3) do
    id_to_params =
      numbers
      |> Stream.map(fn number -> %{number: number} end)
      |> Stream.with_index()
      |> Enum.into(%{}, fn {params, id} -> {id, params} end)

    request = Blocks.requests(id_to_params, &ByNumber.request(&1, false))
    error_message = &"Cannot fetch timestamps for blocks #{inspect(numbers)}. Error: #{inspect(&1)}"

    case Optimism.repeated_request(request, error_message, json_rpc_named_arguments, retries) do
      {:ok, response} ->
        %Blocks{blocks_params: blocks_params} = Blocks.from_responses(response, id_to_params)

        {:ok,
         blocks_params
         |> Enum.reduce(%{}, fn %{number: number, timestamp: timestamp}, acc -> Map.put_new(acc, number, timestamp) end)}

      err ->
        err
    end
  end

  defp get_new_filter(from_block, to_block, address, topic0, json_rpc_named_arguments, retries \\ 3) do
    processed_from_block = if is_integer(from_block), do: integer_to_quantity(from_block), else: from_block
    processed_to_block = if is_integer(to_block), do: integer_to_quantity(to_block), else: to_block

    req =
      request(%{
        id: 0,
        method: "eth_newFilter",
        params: [
          %{
            fromBlock: processed_from_block,
            toBlock: processed_to_block,
            address: address,
            topics: [topic0]
          }
        ]
      })

    error_message = &"Cannot create a new log filter. Error: #{inspect(&1)}"

    Optimism.repeated_request(req, error_message, json_rpc_named_arguments, retries)
  end

  defp get_filter_changes(filter_id, json_rpc_named_arguments, retries \\ 3) do
    req =
      request(%{
        id: 0,
        method: "eth_getFilterChanges",
        params: [filter_id]
      })

    error_message = &"Cannot fetch filter changes. Error: #{inspect(&1)}"

    case Optimism.repeated_request(req, error_message, json_rpc_named_arguments, retries) do
      {:error, %{code: _, message: "filter not found"}} -> {:error, :filter_not_found}
      response -> response
    end
  end
  defp uninstall_filter(filter_id, json_rpc_named_arguments, retries \\ 1) do
    req =
      request(%{
        id: 0,
        method: "eth_uninstallFilter",
        params: [filter_id]
      })

    error_message = &"Cannot uninstall filter. Error: #{inspect(&1)}"

    Optimism.repeated_request(req, error_message, json_rpc_named_arguments, retries)
  end
  defp get_last_l1_item do
    Deposit.last_deposit_l1_block_number_query()
    |> Repo.one()
    |> Kernel.||({0, nil})
  end
end
@@ -0,0 +1,187 @@
defmodule Indexer.Fetcher.Optimism.OutputRoot do
  @moduledoc """
  Fills the op_output_roots DB table.
  """

  use GenServer
  use Indexer.Fetcher

  require Logger

  import Ecto.Query

  import EthereumJSONRPC, only: [quantity_to_integer: 1]

  alias Explorer.{Chain, Helper, Repo}
  alias Explorer.Chain.Optimism.OutputRoot
  alias Indexer.Fetcher.Optimism
  alias Indexer.Helper, as: IndexerHelper

  @fetcher_name :optimism_output_roots

  # 32-byte signature of the event OutputProposed(bytes32 indexed outputRoot, uint256 indexed l2OutputIndex, uint256 indexed l2BlockNumber, uint256 l1Timestamp)
  @output_proposed_event "0xa7aaf2512769da4e444e3de247be2564225c2e7a8f74cfe528e46e17d24868e2"

  def child_spec(start_link_arguments) do
    spec = %{
      id: __MODULE__,
      start: {__MODULE__, :start_link, start_link_arguments},
      restart: :transient,
      type: :worker
    }

    Supervisor.child_spec(spec, [])
  end

  def start_link(args, gen_server_options \\ []) do
    GenServer.start_link(__MODULE__, args, Keyword.put_new(gen_server_options, :name, __MODULE__))
  end

  @impl GenServer
  def init(_args) do
    {:ok, %{}, {:continue, :ok}}
  end

  @impl GenServer
  def handle_continue(:ok, _state) do
    Logger.metadata(fetcher: @fetcher_name)

    env = Application.get_all_env(:indexer)[__MODULE__]

    Optimism.init_continue(env, env[:output_oracle], __MODULE__)
  end
  @impl GenServer
  def handle_info(
        :continue,
        %{
          contract_address: output_oracle,
          block_check_interval: block_check_interval,
          start_block: start_block,
          end_block: end_block,
          json_rpc_named_arguments: json_rpc_named_arguments
        } = state
      ) do
    # credo:disable-for-next-line
    time_before = Timex.now()

    chunks_number = ceil((end_block - start_block + 1) / Optimism.get_logs_range_size())
    chunk_range = Range.new(0, max(chunks_number - 1, 0), 1)

    last_written_block =
      chunk_range
      |> Enum.reduce_while(start_block - 1, fn current_chunk, _ ->
        chunk_start = start_block + Optimism.get_logs_range_size() * current_chunk
        chunk_end = min(chunk_start + Optimism.get_logs_range_size() - 1, end_block)

        if chunk_end >= chunk_start do
          IndexerHelper.log_blocks_chunk_handling(chunk_start, chunk_end, start_block, end_block, nil, :L1)

          {:ok, result} =
            Optimism.get_logs(
              chunk_start,
              chunk_end,
              output_oracle,
              @output_proposed_event,
              json_rpc_named_arguments,
              IndexerHelper.infinite_retries_number()
            )

          output_roots = events_to_output_roots(result)

          {:ok, _} =
            Chain.import(%{
              optimism_output_roots: %{params: output_roots},
              timeout: :infinity
            })

          IndexerHelper.log_blocks_chunk_handling(
            chunk_start,
            chunk_end,
            start_block,
            end_block,
            "#{Enum.count(output_roots)} OutputProposed event(s)",
            :L1
          )
        end

        reorg_block = Optimism.reorg_block_pop(@fetcher_name)

        if !is_nil(reorg_block) && reorg_block > 0 do
          {deleted_count, _} = Repo.delete_all(from(r in OutputRoot, where: r.l1_block_number >= ^reorg_block))

          log_deleted_rows_count(reorg_block, deleted_count)

          {:halt, if(reorg_block <= chunk_end, do: reorg_block - 1, else: chunk_end)}
        else
          {:cont, chunk_end}
        end
      end)

    new_start_block = last_written_block + 1

    {:ok, new_end_block} =
      Optimism.get_block_number_by_tag("latest", json_rpc_named_arguments, IndexerHelper.infinite_retries_number())

    delay =
      if new_end_block == last_written_block do
        # there is no new block, so wait for some time to let the chain issue the new block
        max(block_check_interval - Timex.diff(Timex.now(), time_before, :milliseconds), 0)
      else
        0
      end

    Process.send_after(self(), :continue, delay)

    {:noreply, %{state | start_block: new_start_block, end_block: new_end_block}}
  end
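  # Chunking example for the loop above (illustrative numbers): with
  # start_block = 1_000, end_block = 3_500, and a logs range size of 1_000,
  # chunks_number = ceil(2_501 / 1_000) = 3, and the chunks handled are
  # 1_000..1_999, 2_000..2_999, and 3_000..3_500.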
  @impl GenServer
  def handle_info({:chain_event, :optimism_reorg_block, :realtime, block_number}, state) do
    Optimism.reorg_block_push(@fetcher_name, block_number)
    {:noreply, state}
  end

  @impl GenServer
  def handle_info({ref, _result}, state) do
    Process.demonitor(ref, [:flush])
    {:noreply, state}
  end

  defp events_to_output_roots(events) do
    Enum.map(events, fn event ->
      [l1_timestamp] = Helper.decode_data(event["data"], [{:uint, 256}])
      {:ok, l1_timestamp} = DateTime.from_unix(l1_timestamp)

      %{
        l2_output_index: quantity_to_integer(Enum.at(event["topics"], 2)),
        l2_block_number: quantity_to_integer(Enum.at(event["topics"], 3)),
        l1_transaction_hash: event["transactionHash"],
        l1_timestamp: l1_timestamp,
        l1_block_number: quantity_to_integer(event["blockNumber"]),
        output_root: Enum.at(event["topics"], 1)
      }
    end)
  end
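  # Topic layout assumed by the mapping above, following the OutputProposed
  # signature noted near @output_proposed_event:
  #
  #   topics[0] = event signature hash
  #   topics[1] = outputRoot
  #   topics[2] = l2OutputIndex
  #   topics[3] = l2BlockNumber
  #   data      = ABI-encoded l1Timestamp (uint256)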
  defp log_deleted_rows_count(reorg_block, count) do
    if count > 0 do
      Logger.warning(
        "As L1 reorg was detected, all rows with l1_block_number >= #{reorg_block} were removed from the op_output_roots table. Number of removed rows: #{count}."
      )
    end
  end

  def get_last_l1_item do
    query =
      from(root in OutputRoot,
        select: {root.l1_block_number, root.l1_transaction_hash},
        order_by: [desc: root.l2_output_index],
        limit: 1
      )

    query
    |> Repo.one()
    |> Kernel.||({0, nil})
  end
end
@@ -0,0 +1,850 @@
defmodule Indexer.Fetcher.Optimism.TxnBatch do
  @moduledoc """
  Fills the op_transaction_batches DB table.
  """

  use GenServer
  use Indexer.Fetcher

  require Logger

  import Ecto.Query

  import EthereumJSONRPC, only: [fetch_blocks_by_range: 2, json_rpc: 2, quantity_to_integer: 1]

  import Explorer.Helper, only: [parse_integer: 1]

  alias EthereumJSONRPC.Block.ByHash
  alias EthereumJSONRPC.Blocks
  alias Explorer.{Chain, Repo}
  alias Explorer.Chain.Block
  alias Explorer.Chain.Events.Subscriber
  alias Explorer.Chain.Optimism.FrameSequence
  alias Explorer.Chain.Optimism.TxnBatch, as: OptimismTxnBatch
  alias Indexer.Fetcher.Optimism
  alias Indexer.Helper
  alias Varint.LEB128

  @fetcher_name :optimism_txn_batches

  # the Optimism chain block time is a constant (2 seconds)
  @op_chain_block_time 2

  def child_spec(start_link_arguments) do
    spec = %{
      id: __MODULE__,
      start: {__MODULE__, :start_link, start_link_arguments},
      restart: :transient,
      type: :worker
    }

    Supervisor.child_spec(spec, [])
  end

  def start_link(args, gen_server_options \\ []) do
    GenServer.start_link(__MODULE__, args, Keyword.put_new(gen_server_options, :name, __MODULE__))
  end

  @impl GenServer
  def init(args) do
    {:ok, %{json_rpc_named_arguments_l2: args[:json_rpc_named_arguments]}, {:continue, nil}}
  end

  @impl GenServer
  def handle_continue(_, state) do
    Logger.metadata(fetcher: @fetcher_name)
    # a two-second pause is needed to avoid exceeding the Supervisor restart intensity when the DB has issues
    Process.send_after(self(), :init_with_delay, 2000)
    {:noreply, state}
  end

  @impl GenServer
  def handle_info(:init_with_delay, %{json_rpc_named_arguments_l2: json_rpc_named_arguments_l2} = state) do
    env = Application.get_all_env(:indexer)[__MODULE__]

    with {:start_block_l1_undefined, false} <- {:start_block_l1_undefined, is_nil(env[:start_block_l1])},
         {:genesis_block_l2_invalid, false} <-
           {:genesis_block_l2_invalid, is_nil(env[:genesis_block_l2]) or env[:genesis_block_l2] < 0},
         {:reorg_monitor_started, true} <- {:reorg_monitor_started, !is_nil(Process.whereis(Indexer.Fetcher.Optimism))},
         optimism_l1_rpc = Application.get_all_env(:indexer)[Indexer.Fetcher.Optimism][:optimism_l1_rpc],
         {:rpc_l1_undefined, false} <- {:rpc_l1_undefined, is_nil(optimism_l1_rpc)},
         {:batch_inbox_valid, true} <- {:batch_inbox_valid, Helper.address_correct?(env[:batch_inbox])},
         {:batch_submitter_valid, true} <- {:batch_submitter_valid, Helper.address_correct?(env[:batch_submitter])},
         start_block_l1 = parse_integer(env[:start_block_l1]),
         false <- is_nil(start_block_l1),
         true <- start_block_l1 > 0,
         chunk_size = parse_integer(env[:blocks_chunk_size]),
         {:chunk_size_valid, true} <- {:chunk_size_valid, !is_nil(chunk_size) && chunk_size > 0},
         json_rpc_named_arguments = Optimism.json_rpc_named_arguments(optimism_l1_rpc),
         {last_l1_block_number, last_l1_transaction_hash, last_l1_tx} = get_last_l1_item(json_rpc_named_arguments),
         {:start_block_l1_valid, true} <-
           {:start_block_l1_valid, start_block_l1 <= last_l1_block_number || last_l1_block_number == 0},
         {:l1_tx_not_found, false} <- {:l1_tx_not_found, !is_nil(last_l1_transaction_hash) && is_nil(last_l1_tx)},
         {:ok, block_check_interval, last_safe_block} <- Optimism.get_block_check_interval(json_rpc_named_arguments) do
      start_block = max(start_block_l1, last_l1_block_number)

      Subscriber.to(:optimism_reorg_block, :realtime)

      Process.send(self(), :continue, [])

      {:noreply,
       %{
         batch_inbox: String.downcase(env[:batch_inbox]),
         batch_submitter: String.downcase(env[:batch_submitter]),
         block_check_interval: block_check_interval,
         start_block: start_block,
         end_block: last_safe_block,
         chunk_size: chunk_size,
         incomplete_channels: %{},
         genesis_block_l2: env[:genesis_block_l2],
         json_rpc_named_arguments: json_rpc_named_arguments,
         json_rpc_named_arguments_l2: json_rpc_named_arguments_l2
       }}
    else
      {:start_block_l1_undefined, true} ->
        # the process shouldn't start if the start block is not defined
        {:stop, :normal, state}

      {:genesis_block_l2_invalid, true} ->
        Logger.error("L2 genesis block number is undefined or invalid.")
        {:stop, :normal, state}

      {:reorg_monitor_started, false} ->
        Logger.error("Cannot start this process as the reorg monitor in Indexer.Fetcher.Optimism is not started.")
        {:stop, :normal, state}

      {:rpc_l1_undefined, true} ->
        Logger.error("L1 RPC URL is not defined.")
        {:stop, :normal, state}

      {:batch_inbox_valid, false} ->
        Logger.error("Batch Inbox address is invalid or not defined.")
        {:stop, :normal, state}

      {:batch_submitter_valid, false} ->
        Logger.error("Batch Submitter address is invalid or not defined.")
        {:stop, :normal, state}

      {:start_block_l1_valid, false} ->
        Logger.error("Invalid L1 Start Block value. Please check the value and the op_transaction_batches table.")
        {:stop, :normal, state}

      {:chunk_size_valid, false} ->
        Logger.error("Invalid blocks chunk size value.")
        {:stop, :normal, state}

      {:error, error_data} ->
        Logger.error("Cannot get the block timestamp by its number due to an RPC error: #{inspect(error_data)}")

        {:stop, :normal, state}

      {:l1_tx_not_found, true} ->
        Logger.error(
          "Cannot find the last L1 transaction from RPC by its hash. Probably there was a reorg on the L1 chain. Please check the op_transaction_batches table."
        )

        {:stop, :normal, state}

      _ ->
        Logger.error("Batch Start Block is invalid or zero.")
        {:stop, :normal, state}
    end
  end
  @impl GenServer
  def handle_info(
        :continue,
        %{
          batch_inbox: batch_inbox,
          batch_submitter: batch_submitter,
          block_check_interval: block_check_interval,
          start_block: start_block,
          end_block: end_block,
          chunk_size: chunk_size,
          incomplete_channels: incomplete_channels,
          genesis_block_l2: genesis_block_l2,
          json_rpc_named_arguments: json_rpc_named_arguments,
          json_rpc_named_arguments_l2: json_rpc_named_arguments_l2
        } = state
      ) do
    time_before = Timex.now()

    chunks_number = ceil((end_block - start_block + 1) / chunk_size)
    chunk_range = Range.new(0, max(chunks_number - 1, 0), 1)

    {last_written_block, new_incomplete_channels} =
      chunk_range
      |> Enum.reduce_while({start_block - 1, incomplete_channels}, fn current_chunk, {_, incomplete_channels_acc} ->
        chunk_start = start_block + chunk_size * current_chunk
        chunk_end = min(chunk_start + chunk_size - 1, end_block)

        new_incomplete_channels =
          if chunk_end >= chunk_start do
            Helper.log_blocks_chunk_handling(chunk_start, chunk_end, start_block, end_block, nil, :L1)

            {:ok, new_incomplete_channels, batches, sequences} =
              get_txn_batches(
                Range.new(chunk_start, chunk_end),
                batch_inbox,
                batch_submitter,
                genesis_block_l2,
                incomplete_channels_acc,
                json_rpc_named_arguments,
                json_rpc_named_arguments_l2,
                Helper.infinite_retries_number()
              )

            {batches, sequences} = remove_duplicates(batches, sequences)

            {:ok, _} =
              Chain.import(%{
                optimism_frame_sequences: %{params: sequences},
                optimism_txn_batches: %{params: batches},
                timeout: :infinity
              })

            Helper.log_blocks_chunk_handling(
              chunk_start,
              chunk_end,
              start_block,
              end_block,
              "#{Enum.count(sequences)} batch(es) containing #{Enum.count(batches)} block(s).",
              :L1
            )

            new_incomplete_channels
          else
            incomplete_channels_acc
          end

        reorg_block = Optimism.reorg_block_pop(@fetcher_name)

        if !is_nil(reorg_block) && reorg_block > 0 do
          new_incomplete_channels = handle_l1_reorg(reorg_block, new_incomplete_channels)
          {:halt, {if(reorg_block <= chunk_end, do: reorg_block - 1, else: chunk_end), new_incomplete_channels}}
        else
          {:cont, {chunk_end, new_incomplete_channels}}
        end
      end)

    new_start_block = last_written_block + 1

    {:ok, new_end_block} =
      Optimism.get_block_number_by_tag("latest", json_rpc_named_arguments, Helper.infinite_retries_number())

    delay =
      if new_end_block == last_written_block do
        # there is no new block, so wait for some time to let the chain issue the new block
        max(block_check_interval - Timex.diff(Timex.now(), time_before, :milliseconds), 0)
      else
        0
      end

    Process.send_after(self(), :continue, delay)

    {:noreply,
     %{
       state
       | start_block: new_start_block,
         end_block: new_end_block,
         incomplete_channels: new_incomplete_channels
     }}
  end

  @impl GenServer
  def handle_info({:chain_event, :optimism_reorg_block, :realtime, block_number}, state) do
    Optimism.reorg_block_push(@fetcher_name, block_number)
    {:noreply, state}
  end

  @impl GenServer
  def handle_info({ref, _result}, state) do
    Process.demonitor(ref, [:flush])
    {:noreply, state}
  end
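
  # The lookup below is DB-first with an RPC fallback: hashes already present
  # in the blocks table are resolved locally, and only the misses are fetched
  # from the L2 JSON-RPC node, batched in chunks of 50 requests per call.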

  defp get_block_numbers_by_hashes([], _json_rpc_named_arguments_l2) do
    %{}
  end

  defp get_block_numbers_by_hashes(hashes, json_rpc_named_arguments_l2) do
    query =
      from(
        b in Block,
        select: {b.hash, b.number},
        where: b.hash in ^hashes
      )

    number_by_hash =
      query
      |> Repo.all(timeout: :infinity)
      |> Enum.reduce(%{}, fn {hash, number}, acc ->
        Map.put(acc, hash.bytes, number)
      end)

    requests =
      hashes
      |> Enum.filter(fn hash -> is_nil(Map.get(number_by_hash, hash)) end)
      |> Enum.with_index()
      |> Enum.map(fn {hash, id} ->
        ByHash.request(%{hash: "0x" <> Base.encode16(hash, case: :lower), id: id}, false)
      end)

    chunk_size = 50
    chunks_number = ceil(Enum.count(requests) / chunk_size)
    chunk_range = Range.new(0, chunks_number - 1, 1)

    chunk_range
    |> Enum.reduce([], fn current_chunk, acc ->
      {:ok, resp} =
        requests
        |> Enum.slice(chunk_size * current_chunk, chunk_size)
        |> json_rpc(json_rpc_named_arguments_l2)

      acc ++ resp
    end)
    |> Enum.map(fn %{result: result} -> result end)
    |> Enum.reduce(number_by_hash, fn block, acc ->
      if is_nil(block) do
        acc
      else
        block_number = quantity_to_integer(Map.get(block, "number"))
        "0x" <> hash = Map.get(block, "hash")
        {:ok, hash} = Base.decode16(hash, case: :lower)
        Map.put(acc, hash, block_number)
      end
    end)
  end

  defp get_block_timestamp_by_number(block_number, blocks_params) do
    block = Enum.find(blocks_params, %{timestamp: nil}, fn b -> b.number == block_number end)
    block.timestamp
  end

  defp get_last_l1_item(json_rpc_named_arguments) do
    l1_transaction_hashes =
      Repo.one(
        from(
          tb in OptimismTxnBatch,
          inner_join: fs in FrameSequence,
          on: fs.id == tb.frame_sequence_id,
          select: fs.l1_transaction_hashes,
          order_by: [desc: tb.l2_block_number],
          limit: 1
        )
      )

    last_l1_transaction_hash =
      if is_nil(l1_transaction_hashes) do
        nil
      else
        List.last(l1_transaction_hashes)
      end

    if is_nil(last_l1_transaction_hash) do
      {0, nil, nil}
    else
      {:ok, last_l1_tx} = Optimism.get_transaction_by_hash(last_l1_transaction_hash, json_rpc_named_arguments)
      last_l1_block_number = quantity_to_integer(Map.get(last_l1_tx || %{}, "blockNumber", 0))
      {last_l1_block_number, last_l1_transaction_hash, last_l1_tx}
    end
  end

  defp get_txn_batches(
         block_range,
         batch_inbox,
         batch_submitter,
         genesis_block_l2,
         incomplete_channels,
         json_rpc_named_arguments,
         json_rpc_named_arguments_l2,
         retries_left
       ) do
    case fetch_blocks_by_range(block_range, json_rpc_named_arguments) do
      {:ok, %Blocks{transactions_params: transactions_params, blocks_params: blocks_params, errors: []}} ->
        transactions_params
        |> txs_filter(batch_submitter, batch_inbox)
        |> get_txn_batches_inner(
          blocks_params,
          genesis_block_l2,
          incomplete_channels,
          json_rpc_named_arguments_l2
        )

      {_, message_or_errors} ->
        message =
          case message_or_errors do
            %Blocks{errors: errors} -> errors
            msg -> msg
          end

        retries_left = retries_left - 1

        error_message = "Cannot fetch blocks #{inspect(block_range)}. Error(s): #{inspect(message)}"

        if retries_left <= 0 do
          Logger.error(error_message)
          {:error, message}
        else
          Logger.error("#{error_message} Retrying...")
          :timer.sleep(3000)

          get_txn_batches(
            block_range,
            batch_inbox,
            batch_submitter,
            genesis_block_l2,
            incomplete_channels,
            json_rpc_named_arguments,
            json_rpc_named_arguments_l2,
            retries_left
          )
        end
    end
  end
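
  # Each batcher transaction carries one frame of a compressed channel. The
  # reducer below buffers frames per channel id; once a channel has all of
  # its frames (see channel_complete?/1), it is decompressed and parsed into
  # batches, otherwise it stays in the incomplete_channels accumulator.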

  defp get_txn_batches_inner(
         transactions_filtered,
         blocks_params,
         genesis_block_l2,
         incomplete_channels,
         json_rpc_named_arguments_l2
       ) do
    transactions_filtered
    |> Enum.reduce({:ok, incomplete_channels, [], []}, fn t, {_, incomplete_channels_acc, batches_acc, sequences_acc} ->
      frame = input_to_frame(t.input)

      channel = Map.get(incomplete_channels_acc, frame.channel_id, %{frames: %{}})

      channel_frames =
        Map.put(channel.frames, frame.number, %{
          data: frame.data,
          is_last: frame.is_last,
          block_number: t.block_number,
          tx_hash: t.hash
        })

      l1_timestamp =
        if frame.is_last do
          get_block_timestamp_by_number(t.block_number, blocks_params)
        else
          Map.get(channel, :l1_timestamp)
        end

      channel =
        channel
        |> Map.put_new(:id, frame.channel_id)
        |> Map.put(:frames, channel_frames)
        |> Map.put(:timestamp, DateTime.utc_now())
        |> Map.put(:l1_timestamp, l1_timestamp)

      if channel_complete?(channel) do
        handle_channel(
          channel,
          incomplete_channels_acc,
          batches_acc,
          sequences_acc,
          genesis_block_l2,
          json_rpc_named_arguments_l2
        )
      else
        {:ok, Map.put(incomplete_channels_acc, frame.channel_id, channel), batches_acc, sequences_acc}
      end
    end)
  end

  defp handle_channel(
         channel,
         incomplete_channels_acc,
         batches_acc,
         sequences_acc,
         genesis_block_l2,
         json_rpc_named_arguments_l2
       ) do
    frame_sequence_last = List.first(sequences_acc)
    frame_sequence_id = next_frame_sequence_id(frame_sequence_last)

    {bytes, l1_transaction_hashes} =
      0..(Enum.count(channel.frames) - 1)
      |> Enum.reduce({<<>>, []}, fn frame_number, {bytes_acc, tx_hashes_acc} ->
        frame = Map.get(channel.frames, frame_number)
        {bytes_acc <> frame.data, [frame.tx_hash | tx_hashes_acc]}
      end)

    batches_parsed =
      parse_frame_sequence(
        bytes,
        frame_sequence_id,
        channel.l1_timestamp,
        genesis_block_l2,
        json_rpc_named_arguments_l2
      )

    if batches_parsed == :error do
      Logger.error("Cannot parse frame sequence from these L1 transaction(s): #{inspect(l1_transaction_hashes)}")
    end

    seq = %{
      id: frame_sequence_id,
      l1_transaction_hashes: Enum.reverse(l1_transaction_hashes),
      l1_timestamp: channel.l1_timestamp
    }

    new_incomplete_channels_acc =
      incomplete_channels_acc
      |> Map.delete(channel.id)
      |> remove_expired_channels()

    if batches_parsed == :error or Enum.empty?(batches_parsed) do
      {:ok, new_incomplete_channels_acc, batches_acc, sequences_acc}
    else
      {:ok, new_incomplete_channels_acc, batches_acc ++ batches_parsed, [seq | sequences_acc]}
    end
  end
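
  # On an L1 reorg, every buffered frame mined at or above the reorged block
  # number is dropped; a channel left with no frames at all is removed, so it
  # can be re-assembled from the canonical chain on a later pass.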

  defp handle_l1_reorg(reorg_block, incomplete_channels) do
    incomplete_channels
    |> Enum.reduce(incomplete_channels, fn {channel_id, %{frames: frames} = channel}, acc ->
      updated_frames =
        frames
        |> Enum.filter(fn {_frame_number, %{block_number: block_number}} ->
          block_number < reorg_block
        end)
        |> Enum.into(%{})

      if Enum.empty?(updated_frames) do
        Map.delete(acc, channel_id)
      else
        Map.put(acc, channel_id, Map.put(channel, :frames, updated_frames))
      end
    end)
  end

  @doc """
  Removes rows from the op_transaction_batches and op_frame_sequences tables starting from the L2 reorg block.
  """
  @spec handle_l2_reorg(non_neg_integer()) :: any()
  def handle_l2_reorg(reorg_block) do
    frame_sequence_ids =
      Repo.all(
        from(
          tb in OptimismTxnBatch,
          select: tb.frame_sequence_id,
          where: tb.l2_block_number >= ^reorg_block
        ),
        timeout: :infinity
      )

    {deleted_count, _} = Repo.delete_all(from(tb in OptimismTxnBatch, where: tb.l2_block_number >= ^reorg_block))

    Repo.delete_all(from(fs in FrameSequence, where: fs.id in ^frame_sequence_ids))

    if deleted_count > 0 do
      Logger.warning(
        "As an L2 reorg was detected, all rows with l2_block_number >= #{reorg_block} were removed from the op_transaction_batches table. Number of removed rows: #{deleted_count}."
      )
    end
  end
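
  # A channel is complete when its highest frame is flagged is_last and the
  # frame numbers form the contiguous range 0..N-1. Illustrative examples
  # (added commentary):
  #
  #   channel_complete?(%{frames: %{0 => %{is_last: false}, 1 => %{is_last: true}}})
  #   # => true (both frames arrived and frame 1 is marked last)
  #
  #   channel_complete?(%{frames: %{0 => %{is_last: false}, 2 => %{is_last: true}}})
  #   # => false (frame 2 is marked last, but frame 1 is still missing)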

  defp channel_complete?(channel) do
    last_frame_number =
      channel.frames
      |> Map.keys()
      |> Enum.max()

    Map.get(channel.frames, last_frame_number).is_last and last_frame_number == Enum.count(channel.frames) - 1
  end
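
  # Incomplete channels are garbage-collected after 24 hours (86_400 seconds);
  # this mirrors the channel-timeout idea from the OP Stack derivation rules,
  # though here it is only a local buffer limit, not the on-chain timeout.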

  defp remove_expired_channels(channels_map) do
    now = DateTime.utc_now()

    Enum.reduce(channels_map, channels_map, fn {channel_id, %{timestamp: timestamp}}, channels_acc ->
      if DateTime.diff(now, timestamp) >= 86400 do
        Map.delete(channels_acc, channel_id)
      else
        channels_acc
      end
    end)
  end
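
  # Illustrative example (hypothetical input, added commentary): a frame with
  # a zero derivation version byte, a channel id of sixteen 0x11 bytes, frame
  # number 0, a 3-byte payload "abc" and is_last = 1 decodes as:
  #
  #   input = "0x00" <> String.duplicate("11", 16) <> "0000" <> "00000003" <> "616263" <> "01"
  #   input_to_frame(input)
  #   # => %{number: 0, data: "abc", is_last: true, channel_id: :binary.copy(<<0x11>>, 16)}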

  defp input_to_frame("0x" <> input) do
    input_binary = Base.decode16!(input, case: :mixed)

    # the structure of the input is as follows:
    #
    # input = derivation_version ++ channel_id ++ frame_number ++ frame_data_length ++ frame_data ++ is_last
    #
    # derivation_version = uint8
    # channel_id = bytes16
    # frame_number = uint16
    # frame_data_length = uint32
    # frame_data = bytes
    # is_last = bool (uint8)

    derivation_version_length = 1
    channel_id_length = 16
    frame_number_size = 2
    frame_data_length_size = 4
    is_last_size = 1

    # the first byte must be zero (the so-called Derivation Version)
    [0] = :binary.bin_to_list(binary_part(input_binary, 0, derivation_version_length))

    # channel id has 16 bytes
    channel_id = binary_part(input_binary, derivation_version_length, channel_id_length)

    # frame number consists of 2 bytes
    frame_number_offset = derivation_version_length + channel_id_length
    frame_number = :binary.decode_unsigned(binary_part(input_binary, frame_number_offset, frame_number_size))

    # frame data length consists of 4 bytes
    frame_data_length_offset = frame_number_offset + frame_number_size

    frame_data_length =
      :binary.decode_unsigned(binary_part(input_binary, frame_data_length_offset, frame_data_length_size))

    input_length_must_be =
      derivation_version_length + channel_id_length + frame_number_size + frame_data_length_size + frame_data_length +
        is_last_size

    input_length_current = byte_size(input_binary)

    if input_length_current == input_length_must_be do
      # frame data is a byte array of frame_data_length size
      frame_data_offset = frame_data_length_offset + frame_data_length_size
      frame_data = binary_part(input_binary, frame_data_offset, frame_data_length)

      # is_last is a 1-byte item
      is_last_offset = frame_data_offset + frame_data_length
      is_last = :binary.decode_unsigned(binary_part(input_binary, is_last_offset, is_last_size)) > 0

      %{number: frame_number, data: frame_data, is_last: is_last, channel_id: channel_id}
    else
      # workaround to remove a leading extra byte
      # for example, the case for the Base Goerli batch L1 transaction: https://goerli.etherscan.io/tx/0xa43fa9da683a6157a114e3175a625b5aed85d8c573aae226768c58a924a17be0
      input_to_frame("0x" <> Base.encode16(binary_part(input_binary, 1, input_length_current - 1)))
    end
  end

  defp next_frame_sequence_id(last_known_sequence) when is_nil(last_known_sequence) do
    last_known_id =
      Repo.one(
        from(
          fs in FrameSequence,
          select: fs.id,
          order_by: [desc: fs.id],
          limit: 1
        )
      )

    if is_nil(last_known_id) do
      1
    else
      last_known_id + 1
    end
  end

  defp next_frame_sequence_id(last_known_sequence) do
    last_known_sequence.id + 1
  end

  defp parse_frame_sequence(
         bytes,
         id,
         l1_timestamp,
         genesis_block_l2,
         json_rpc_named_arguments_l2
       ) do
    uncompressed_bytes = zlib_decompress(bytes)

    batches =
      Enum.reduce_while(Stream.iterate(0, &(&1 + 1)), {uncompressed_bytes, []}, fn _i, {remainder, batch_acc} ->
        try do
          {decoded, new_remainder} = ExRLP.decode(remainder, stream: true)

          <<version>> = binary_part(decoded, 0, 1)
          content = binary_part(decoded, 1, byte_size(decoded) - 1)

          new_batch_acc =
            cond do
              version == 0 ->
                handle_v0_batch(content, id, l1_timestamp, batch_acc)

              version <= 2 ->
                # parsing the span batch
                handle_v1_batch(content, id, l1_timestamp, genesis_block_l2, batch_acc)

              true ->
                Logger.error("Unsupported batch version ##{version}")
                :error
            end

          if byte_size(new_remainder) > 0 and new_batch_acc != :error do
            {:cont, {new_remainder, new_batch_acc}}
          else
            {:halt, new_batch_acc}
          end
        rescue
          _ -> {:halt, :error}
        end
      end)

    if batches == :error do
      :error
    else
      batches = Enum.reverse(batches)

      numbers_by_hashes =
        batches
        |> Stream.filter(&Map.has_key?(&1, :parent_hash))
        |> Enum.map(fn batch -> batch.parent_hash end)
        |> get_block_numbers_by_hashes(json_rpc_named_arguments_l2)

      Enum.map(batches, &parent_hash_to_l2_block_number(&1, numbers_by_hashes))
    end
  end

  defp handle_v0_batch(content, frame_sequence_id, l1_timestamp, batch_acc) do
    content_decoded = ExRLP.decode(content)

    batch = %{
      parent_hash: Enum.at(content_decoded, 0),
      frame_sequence_id: frame_sequence_id,
      l1_timestamp: l1_timestamp
    }

    [batch | batch_acc]
  end
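
  # Span batches (versions 1 and 2) encode a whole range of L2 blocks at once.
  # The first LEB128 value is the span start timestamp relative to the L2
  # genesis, so the first block of the span is rel_timestamp / block_time +
  # genesis_block_l2. Worked example (assuming @op_chain_block_time is 2
  # seconds): rel_timestamp = 2_000 and block_count = 3 yield
  # span_start = genesis_block_l2 + 1_000 and span_end = genesis_block_l2 + 1_002.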

  defp handle_v1_batch(content, frame_sequence_id, l1_timestamp, genesis_block_l2, batch_acc) do
    {rel_timestamp, content_remainder} = LEB128.decode(content)

    # skip l1_origin_num
    {_l1_origin_num, checks_and_payload} = LEB128.decode(content_remainder)

    # skip `parent_check` and `l1_origin_check` fields (20 bytes each)
    # and read the block count
    {block_count, _} =
      checks_and_payload
      |> binary_part(40, byte_size(checks_and_payload) - 40)
      |> LEB128.decode()

    # the first and last L2 blocks in the span
    span_start = div(rel_timestamp, @op_chain_block_time) + genesis_block_l2
    span_end = span_start + block_count - 1

    cond do
      rem(rel_timestamp, @op_chain_block_time) != 0 ->
        Logger.error("rel_timestamp is not divisible by #{@op_chain_block_time}. We ignore the span batch.")
        batch_acc

      block_count <= 0 ->
        Logger.error("Empty span batch found. We ignore it.")
        batch_acc

      true ->
        span_start..span_end
        |> Enum.reduce(batch_acc, fn l2_block_number, batch_acc ->
          [
            %{
              l2_block_number: l2_block_number,
              frame_sequence_id: frame_sequence_id,
              l1_timestamp: l1_timestamp
            }
            | batch_acc
          ]
        end)
    end
  end

  defp parent_hash_to_l2_block_number(batch, numbers_by_hashes) do
    if Map.has_key?(batch, :parent_hash) do
      number = Map.get(numbers_by_hashes, batch.parent_hash)

      batch
      |> Map.put(:l2_block_number, number + 1)
      |> Map.delete(:parent_hash)
    else
      batch
    end
  end
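
  # Deduplication keeps exactly one batch per L2 block number. The input is
  # sorted by block number and then by L1 timestamp in ascending order, so the
  # Map.put in the reducer lets the entry with the highest timestamp win: two
  # batches for the same block with timestamps t1 < t2 collapse to the t2
  # entry. Frame sequences left without any batch are dropped afterwards.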

  defp remove_duplicates(batches, sequences) do
    unique_batches =
      batches
      |> Enum.sort(fn b1, b2 ->
        b1.l2_block_number < b2.l2_block_number or
          (b1.l2_block_number == b2.l2_block_number and b1.l1_timestamp < b2.l1_timestamp)
      end)
      |> Enum.reduce(%{}, fn b, acc ->
        Map.put(acc, b.l2_block_number, Map.delete(b, :l1_timestamp))
      end)
      |> Map.values()

    unique_sequences =
      if Enum.empty?(sequences) do
        []
      else
        sequences
        |> Enum.reverse()
        |> Enum.filter(fn seq ->
          Enum.any?(unique_batches, fn batch -> batch.frame_sequence_id == seq.id end)
        end)
      end

    {unique_batches, unique_sequences}
  end

  defp txs_filter(transactions_params, batch_submitter, batch_inbox) do
    transactions_params
    |> Enum.filter(fn t ->
      from_address_hash = Map.get(t, :from_address_hash)
      to_address_hash = Map.get(t, :to_address_hash)

      if is_nil(from_address_hash) or is_nil(to_address_hash) do
        false
      else
        String.downcase(from_address_hash) == batch_submitter and String.downcase(to_address_hash) == batch_inbox
      end
    end)
  end
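
  # The assembled channel payload is a zlib stream. It is inflated in bounded
  # steps via :zlib.safeInflate/2 (see zlib_inflate/3 below) rather than in
  # one shot; if the stream is corrupt and decompression raises, the rescue
  # clause returns an empty binary instead of crashing the fetcher.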

  defp zlib_decompress(bytes) do
    z = :zlib.open()
    :zlib.inflateInit(z)

    uncompressed_bytes =
      try do
        zlib_inflate(z, bytes)
      rescue
        _ -> <<>>
      end

    try do
      :zlib.inflateEnd(z)
    rescue
      _ -> nil
    end

    :zlib.close(z)

    uncompressed_bytes
  end

  defp zlib_inflate_handler(z, {:continue, [uncompressed_bytes]}, acc) do
    zlib_inflate(z, [], acc <> uncompressed_bytes)
  end

  defp zlib_inflate_handler(_z, {:finished, [uncompressed_bytes]}, acc) do
    acc <> uncompressed_bytes
  end

  defp zlib_inflate_handler(_z, {:finished, []}, acc) do
    acc
  end

  defp zlib_inflate(z, compressed_bytes, acc \\ <<>>) do
    result = :zlib.safeInflate(z, compressed_bytes)
    zlib_inflate_handler(z, result, acc)
  end
end
@@ -0,0 +1,361 @@

defmodule Indexer.Fetcher.Optimism.Withdrawal do
  @moduledoc """
  Fills the op_withdrawals DB table.
  """

  use GenServer
  use Indexer.Fetcher

  require Logger

  import Ecto.Query

  import EthereumJSONRPC, only: [quantity_to_integer: 1]
  import Explorer.Helper, only: [decode_data: 2, parse_integer: 1]

  alias Explorer.{Chain, Repo}
  alias Explorer.Chain.Log
  alias Explorer.Chain.Optimism.Withdrawal, as: OptimismWithdrawal
  alias Indexer.Fetcher.Optimism
  alias Indexer.Helper

  @fetcher_name :optimism_withdrawals

  # 32-byte signature of the event MessagePassed(uint256 indexed nonce, address indexed sender, address indexed target, uint256 value, uint256 gasLimit, bytes data, bytes32 withdrawalHash)
  @message_passed_event "0x02a52367d10742d8032712c1bb8e0144ff1ec5ffda1ed7d70bb05a2744955054"

  def child_spec(start_link_arguments) do
    spec = %{
      id: __MODULE__,
      start: {__MODULE__, :start_link, start_link_arguments},
      restart: :transient,
      type: :worker
    }

    Supervisor.child_spec(spec, [])
  end

  def start_link(args, gen_server_options \\ []) do
    GenServer.start_link(__MODULE__, args, Keyword.put_new(gen_server_options, :name, __MODULE__))
  end

  @impl GenServer
  def init(args) do
    json_rpc_named_arguments = args[:json_rpc_named_arguments]
    {:ok, %{}, {:continue, json_rpc_named_arguments}}
  end

  @impl GenServer
  def handle_continue(json_rpc_named_arguments, state) do
    Logger.metadata(fetcher: @fetcher_name)

    env = Application.get_all_env(:indexer)[__MODULE__]

    with {:start_block_l2_undefined, false} <- {:start_block_l2_undefined, is_nil(env[:start_block_l2])},
         {:message_passer_valid, true} <- {:message_passer_valid, Helper.address_correct?(env[:message_passer])},
         start_block_l2 = parse_integer(env[:start_block_l2]),
         false <- is_nil(start_block_l2),
         true <- start_block_l2 > 0,
         {last_l2_block_number, last_l2_transaction_hash} <- get_last_l2_item(),
         {safe_block, safe_block_is_latest} = Optimism.get_safe_block(json_rpc_named_arguments),
         {:start_block_l2_valid, true} <-
           {:start_block_l2_valid,
            (start_block_l2 <= last_l2_block_number || last_l2_block_number == 0) && start_block_l2 <= safe_block},
         {:ok, last_l2_tx} <- Optimism.get_transaction_by_hash(last_l2_transaction_hash, json_rpc_named_arguments),
         {:l2_tx_not_found, false} <- {:l2_tx_not_found, !is_nil(last_l2_transaction_hash) && is_nil(last_l2_tx)} do
      Process.send(self(), :continue, [])

      {:noreply,
       %{
         start_block: max(start_block_l2, last_l2_block_number),
         start_block_l2: start_block_l2,
         safe_block: safe_block,
         safe_block_is_latest: safe_block_is_latest,
         message_passer: env[:message_passer],
         json_rpc_named_arguments: json_rpc_named_arguments
       }}
    else
      {:start_block_l2_undefined, true} ->
        # the process shouldn't start if the start block is not defined
        {:stop, :normal, state}

      {:message_passer_valid, false} ->
        Logger.error("L2ToL1MessagePasser contract address is invalid or not defined.")
        {:stop, :normal, state}

      {:start_block_l2_valid, false} ->
        Logger.error("Invalid L2 Start Block value. Please check the value and the op_withdrawals table.")
        {:stop, :normal, state}

      {:error, error_data} ->
        Logger.error("Cannot get last L2 transaction from RPC by its hash due to RPC error: #{inspect(error_data)}")

        {:stop, :normal, state}

      {:l2_tx_not_found, true} ->
        Logger.error(
          "Cannot find last L2 transaction from RPC by its hash. Probably, there was a reorg on the L2 chain. Please check the op_withdrawals table."
        )

        {:stop, :normal, state}

      _ ->
        Logger.error("Withdrawals L2 Start Block is invalid or zero.")
        {:stop, :normal, state}
    end
  end

  @impl GenServer
  def handle_info(
        :continue,
        %{
          start_block_l2: start_block_l2,
          message_passer: message_passer,
          json_rpc_named_arguments: json_rpc_named_arguments
        } = state
      ) do
    fill_msg_nonce_gaps(start_block_l2, message_passer, json_rpc_named_arguments)
    Process.send(self(), :find_new_events, [])
    {:noreply, state}
  end

  @impl GenServer
  def handle_info(
        :find_new_events,
        %{
          start_block: start_block,
          safe_block: safe_block,
          safe_block_is_latest: safe_block_is_latest,
          message_passer: message_passer,
          json_rpc_named_arguments: json_rpc_named_arguments
        } = state
      ) do
    # find and fill all events between start_block and the "safe" block
    # (the "safe" block is the "latest" one when safe_block_is_latest == true)
    fill_block_range(start_block, safe_block, message_passer, json_rpc_named_arguments)

    if not safe_block_is_latest do
      # find and fill all events between the "safe" and "latest" blocks (excluding the "safe" one)
      {:ok, latest_block} = Optimism.get_block_number_by_tag("latest", json_rpc_named_arguments)
      fill_block_range(safe_block + 1, latest_block, message_passer, json_rpc_named_arguments)
    end

    {:stop, :normal, state}
  end

  @impl GenServer
  def handle_info({ref, _result}, state) do
    Process.demonitor(ref, [:flush])
    {:noreply, state}
  end

  def remove(starting_block) do
    Repo.delete_all(from(w in OptimismWithdrawal, where: w.l2_block_number >= ^starting_block))
  end
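
  # event_to_withdrawal/4 below maps a MessagePassed log onto an
  # op_withdrawals row: the indexed nonce arrives as the log's second topic,
  # and the non-indexed fields (value, gasLimit, data, withdrawalHash) are
  # ABI-decoded from the data blob, of which only withdrawalHash is kept.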

  def event_to_withdrawal(second_topic, data, l2_transaction_hash, l2_block_number) do
    [_value, _gas_limit, _data, hash] = decode_data(data, [{:uint, 256}, {:uint, 256}, :bytes, {:bytes, 32}])

    msg_nonce =
      second_topic
      |> Helper.log_topic_to_string()
      |> quantity_to_integer()
      |> Decimal.new()

    %{
      msg_nonce: msg_nonce,
      hash: hash,
      l2_transaction_hash: l2_transaction_hash,
      l2_block_number: quantity_to_integer(l2_block_number)
    }
  end
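
  # The two queries below locate the boundaries of msg_nonce gaps: a "start"
  # is a row whose nonce + 1 is missing (unless it holds the maximum nonce),
  # an "end" is a row whose nonce - 1 is missing (unless it holds the
  # minimum). Zipping starts with ends yields the L2 block ranges to re-scan.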

  defp msg_nonce_gap_starts(nonce_max) do
    Repo.all(
      from(w in OptimismWithdrawal,
        select: w.l2_block_number,
        order_by: w.msg_nonce,
        where:
          fragment(
            "NOT EXISTS (SELECT msg_nonce FROM op_withdrawals WHERE msg_nonce = (? + 1)) AND msg_nonce != ?",
            w.msg_nonce,
            ^nonce_max
          )
      )
    )
  end

  defp msg_nonce_gap_ends(nonce_min) do
    Repo.all(
      from(w in OptimismWithdrawal,
        select: w.l2_block_number,
        order_by: w.msg_nonce,
        where:
          fragment(
            "NOT EXISTS (SELECT msg_nonce FROM op_withdrawals WHERE msg_nonce = (? - 1)) AND msg_nonce != ?",
            w.msg_nonce,
            ^nonce_min
          )
      )
    )
  end

  defp find_and_save_withdrawals(
         scan_db,
         message_passer,
         block_start,
         block_end,
         json_rpc_named_arguments
       ) do
    withdrawals =
      if scan_db do
        query =
          from(log in Log,
            select: {log.second_topic, log.data, log.transaction_hash, log.block_number},
            where:
              log.first_topic == ^@message_passed_event and log.address_hash == ^message_passer and
                log.block_number >= ^block_start and log.block_number <= ^block_end
          )

        query
        |> Repo.all(timeout: :infinity)
        |> Enum.map(fn {second_topic, data, l2_transaction_hash, l2_block_number} ->
          event_to_withdrawal(second_topic, data, l2_transaction_hash, l2_block_number)
        end)
      else
        {:ok, result} =
          Optimism.get_logs(
            block_start,
            block_end,
            message_passer,
            @message_passed_event,
            json_rpc_named_arguments,
            3
          )

        Enum.map(result, fn event ->
          event_to_withdrawal(
            Enum.at(event["topics"], 1),
            event["data"],
            event["transactionHash"],
            event["blockNumber"]
          )
        end)
      end

    {:ok, _} =
      Chain.import(%{
        optimism_withdrawals: %{params: withdrawals},
        timeout: :infinity
      })

    Enum.count(withdrawals)
  end

  defp fill_block_range(l2_block_start, l2_block_end, message_passer, json_rpc_named_arguments, scan_db) do
    chunks_number =
      if scan_db do
        1
      else
        ceil((l2_block_end - l2_block_start + 1) / Optimism.get_logs_range_size())
      end

    chunk_range = Range.new(0, max(chunks_number - 1, 0), 1)

    Enum.reduce(chunk_range, 0, fn current_chunk, withdrawals_count_acc ->
      chunk_start = l2_block_start + Optimism.get_logs_range_size() * current_chunk

      chunk_end =
        if scan_db do
          l2_block_end
        else
          min(chunk_start + Optimism.get_logs_range_size() - 1, l2_block_end)
        end

      Helper.log_blocks_chunk_handling(chunk_start, chunk_end, l2_block_start, l2_block_end, nil, :L2)

      withdrawals_count =
        find_and_save_withdrawals(
          scan_db,
          message_passer,
          chunk_start,
          chunk_end,
          json_rpc_named_arguments
        )

      Helper.log_blocks_chunk_handling(
        chunk_start,
        chunk_end,
        l2_block_start,
        l2_block_end,
        "#{withdrawals_count} MessagePassed event(s)",
        :L2
      )

      withdrawals_count_acc + withdrawals_count
    end)
  end

  defp fill_block_range(start_block, end_block, message_passer, json_rpc_named_arguments) do
    fill_block_range(start_block, end_block, message_passer, json_rpc_named_arguments, true)
    fill_msg_nonce_gaps(start_block, message_passer, json_rpc_named_arguments, false)
    {last_l2_block_number, _} = get_last_l2_item()
    fill_block_range(max(start_block, last_l2_block_number), end_block, message_passer, json_rpc_named_arguments, false)
  end
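
  # Gap filling below runs in two passes: the first pass (scan_db = true)
  # re-reads MessagePassed events already stored in the logs table; the
  # second pass (scan_db = false, triggered at the end of the first one)
  # re-requests the same ranges through RPC in case the local DB itself is
  # missing some logs.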

  defp fill_msg_nonce_gaps(start_block_l2, message_passer, json_rpc_named_arguments, scan_db \\ true) do
    nonce_min = Repo.aggregate(OptimismWithdrawal, :min, :msg_nonce)
    nonce_max = Repo.aggregate(OptimismWithdrawal, :max, :msg_nonce)

    with true <- !is_nil(nonce_min) and !is_nil(nonce_max),
         starts = msg_nonce_gap_starts(nonce_max),
         ends = msg_nonce_gap_ends(nonce_min),
         min_block_l2 = l2_block_number_by_msg_nonce(nonce_min),
         {new_starts, new_ends} =
           if(start_block_l2 < min_block_l2,
             do: {[start_block_l2 | starts], [min_block_l2 | ends]},
             else: {starts, ends}
           ),
         true <- Enum.count(new_starts) == Enum.count(new_ends) do
      new_starts
      |> Enum.zip(new_ends)
      |> Enum.each(fn {l2_block_start, l2_block_end} ->
        withdrawals_count =
          fill_block_range(l2_block_start, l2_block_end, message_passer, json_rpc_named_arguments, scan_db)

        if withdrawals_count > 0 do
          log_fill_msg_nonce_gaps(scan_db, l2_block_start, l2_block_end, withdrawals_count)
        end
      end)

      if scan_db do
        fill_msg_nonce_gaps(start_block_l2, message_passer, json_rpc_named_arguments, false)
      end
    end
  end

  defp get_last_l2_item do
    query =
      from(w in OptimismWithdrawal,
        select: {w.l2_block_number, w.l2_transaction_hash},
        order_by: [desc: w.msg_nonce],
        limit: 1
      )

    query
    |> Repo.one()
    |> Kernel.||({0, nil})
  end

  defp log_fill_msg_nonce_gaps(scan_db, l2_block_start, l2_block_end, withdrawals_count) do
    find_place = if scan_db, do: "in DB", else: "through RPC"

    Logger.info(
      "Filled gaps between L2 blocks #{l2_block_start} and #{l2_block_end}. #{withdrawals_count} event(s) were found #{find_place} and written to the op_withdrawals table."
    )
  end

  defp l2_block_number_by_msg_nonce(nonce) do
    Repo.one(from(w in OptimismWithdrawal, select: w.l2_block_number, where: w.msg_nonce == ^nonce))
  end
end
@@ -0,0 +1,226 @@

defmodule Indexer.Fetcher.Optimism.WithdrawalEvent do
  @moduledoc """
  Fills the op_withdrawal_events DB table.
  """

  use GenServer
  use Indexer.Fetcher

  require Logger

  import Ecto.Query

  import EthereumJSONRPC, only: [quantity_to_integer: 1]

  alias EthereumJSONRPC.Block.ByNumber
  alias EthereumJSONRPC.Blocks
  alias Explorer.{Chain, Repo}
  alias Explorer.Chain.Optimism.WithdrawalEvent
  alias Indexer.Fetcher.Optimism
  alias Indexer.Helper

  @fetcher_name :optimism_withdrawal_events

  # 32-byte signature of the event WithdrawalProven(bytes32 indexed withdrawalHash, address indexed from, address indexed to)
  @withdrawal_proven_event "0x67a6208cfcc0801d50f6cbe764733f4fddf66ac0b04442061a8a8c0cb6b63f62"

  # 32-byte signature of the event WithdrawalFinalized(bytes32 indexed withdrawalHash, bool success)
  @withdrawal_finalized_event "0xdb5c7652857aa163daadd670e116628fb42e869d8ac4251ef8971d9e5727df1b"

  def child_spec(start_link_arguments) do
    spec = %{
      id: __MODULE__,
      start: {__MODULE__, :start_link, start_link_arguments},
      restart: :transient,
      type: :worker
    }

    Supervisor.child_spec(spec, [])
  end

  def start_link(args, gen_server_options \\ []) do
    GenServer.start_link(__MODULE__, args, Keyword.put_new(gen_server_options, :name, __MODULE__))
  end

  @impl GenServer
  def init(_args) do
    {:ok, %{}, {:continue, :ok}}
  end

  @impl GenServer
  def handle_continue(:ok, _state) do
    Logger.metadata(fetcher: @fetcher_name)

    env = Application.get_all_env(:indexer)[__MODULE__]
    optimism_l1_portal = Application.get_all_env(:indexer)[Indexer.Fetcher.Optimism][:optimism_l1_portal]

    Optimism.init_continue(env, optimism_l1_portal, __MODULE__)
  end

  @impl GenServer
  def handle_info(
        :continue,
        %{
          contract_address: optimism_portal,
          block_check_interval: block_check_interval,
          start_block: start_block,
          end_block: end_block,
          json_rpc_named_arguments: json_rpc_named_arguments
        } = state
      ) do
    # credo:disable-for-next-line
    time_before = Timex.now()

    chunks_number = ceil((end_block - start_block + 1) / Optimism.get_logs_range_size())
    chunk_range = Range.new(0, max(chunks_number - 1, 0), 1)

    last_written_block =
      chunk_range
      |> Enum.reduce_while(start_block - 1, fn current_chunk, _ ->
        chunk_start = start_block + Optimism.get_logs_range_size() * current_chunk
        chunk_end = min(chunk_start + Optimism.get_logs_range_size() - 1, end_block)

        if chunk_end >= chunk_start do
          Helper.log_blocks_chunk_handling(chunk_start, chunk_end, start_block, end_block, nil, :L1)

          {:ok, result} =
            Optimism.get_logs(
              chunk_start,
              chunk_end,
              optimism_portal,
              [@withdrawal_proven_event, @withdrawal_finalized_event],
              json_rpc_named_arguments,
              Helper.infinite_retries_number()
            )

          withdrawal_events = prepare_events(result, json_rpc_named_arguments)

          {:ok, _} =
            Chain.import(%{
              optimism_withdrawal_events: %{params: withdrawal_events},
              timeout: :infinity
            })

          Helper.log_blocks_chunk_handling(
            chunk_start,
            chunk_end,
            start_block,
            end_block,
            "#{Enum.count(withdrawal_events)} WithdrawalProven/WithdrawalFinalized event(s)",
            :L1
          )
        end

        reorg_block = Optimism.reorg_block_pop(@fetcher_name)

        if !is_nil(reorg_block) && reorg_block > 0 do
          {deleted_count, _} = Repo.delete_all(from(we in WithdrawalEvent, where: we.l1_block_number >= ^reorg_block))

          log_deleted_rows_count(reorg_block, deleted_count)

          {:halt, if(reorg_block <= chunk_end, do: reorg_block - 1, else: chunk_end)}
        else
          {:cont, chunk_end}
        end
      end)

    new_start_block = last_written_block + 1

    {:ok, new_end_block} =
      Optimism.get_block_number_by_tag("latest", json_rpc_named_arguments, Helper.infinite_retries_number())

    delay =
      if new_end_block == last_written_block do
        # there is no new block, so wait for some time to let the chain issue the new block
        max(block_check_interval - Timex.diff(Timex.now(), time_before, :milliseconds), 0)
      else
        0
      end

    Process.send_after(self(), :continue, delay)

    {:noreply, %{state | start_block: new_start_block, end_block: new_end_block}}
  end

  @impl GenServer
  def handle_info({:chain_event, :optimism_reorg_block, :realtime, block_number}, state) do
    Optimism.reorg_block_push(@fetcher_name, block_number)
    {:noreply, state}
  end

  @impl GenServer
  def handle_info({ref, _result}, state) do
    Process.demonitor(ref, [:flush])
    {:noreply, state}
  end

  defp log_deleted_rows_count(reorg_block, count) do
    if count > 0 do
      Logger.warning(
        "As an L1 reorg was detected, all rows with l1_block_number >= #{reorg_block} were removed from the op_withdrawal_events table. Number of removed rows: #{count}."
      )
    end
  end

  defp prepare_events(events, json_rpc_named_arguments) do
    timestamps =
      events
      |> get_blocks_by_events(json_rpc_named_arguments, Helper.infinite_retries_number())
      |> Enum.reduce(%{}, fn block, acc ->
        block_number = quantity_to_integer(Map.get(block, "number"))
        {:ok, timestamp} = DateTime.from_unix(quantity_to_integer(Map.get(block, "timestamp")))
        Map.put(acc, block_number, timestamp)
      end)

    Enum.map(events, fn event ->
      l1_event_type =
        if Enum.at(event["topics"], 0) == @withdrawal_proven_event do
          "WithdrawalProven"
        else
          "WithdrawalFinalized"
        end

      l1_block_number = quantity_to_integer(event["blockNumber"])

      %{
        withdrawal_hash: Enum.at(event["topics"], 1),
        l1_event_type: l1_event_type,
        l1_timestamp: Map.get(timestamps, l1_block_number),
        l1_transaction_hash: event["transactionHash"],
        l1_block_number: l1_block_number
      }
    end)
  end

  def get_last_l1_item do
    query =
      from(we in WithdrawalEvent,
        select: {we.l1_block_number, we.l1_transaction_hash},
        order_by: [desc: we.l1_timestamp],
        limit: 1
      )

    query
    |> Repo.one()
    |> Kernel.||({0, nil})
  end
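
  # eth_getLogs responses carry no block timestamps, so the helper below
  # collects the distinct block numbers of the events and fetches the
  # corresponding headers with a single batched eth_getBlockByNumber request
  # (transaction bodies excluded).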

  defp get_blocks_by_events(events, json_rpc_named_arguments, retries) do
    request =
      events
      |> Enum.reduce(%{}, fn event, acc ->
        Map.put(acc, event["blockNumber"], 0)
      end)
      |> Stream.map(fn {block_number, _} -> %{number: block_number} end)
      |> Stream.with_index()
      |> Enum.into(%{}, fn {params, id} -> {id, params} end)
      |> Blocks.requests(&ByNumber.request(&1, false, false))

    error_message = &"Cannot fetch blocks with batch request. Error: #{inspect(&1)}. Request: #{inspect(request)}"

    case Optimism.repeated_request(request, error_message, json_rpc_named_arguments, retries) do
      {:ok, results} -> Enum.map(results, fn %{result: result} -> result end)
      {:error, _} -> []
    end
  end
end
@@ -0,0 +1,49 @@

defmodule Indexer.Transform.Optimism.Withdrawals do
  @moduledoc """
  Helper functions for transforming data for Optimism withdrawals.
  """

  require Logger

  alias Indexer.Fetcher.Optimism.Withdrawal, as: OptimismWithdrawal
  alias Indexer.Helper

  # 32-byte signature of the event MessagePassed(uint256 indexed nonce, address indexed sender, address indexed target, uint256 value, uint256 gasLimit, bytes data, bytes32 withdrawalHash)
  @message_passed_event "0x02a52367d10742d8032712c1bb8e0144ff1ec5ffda1ed7d70bb05a2744955054"

  @doc """
  Returns a list of withdrawals given a list of logs.
  """
  def parse(logs) do
    prev_metadata = Logger.metadata()
    Logger.metadata(fetcher: :optimism_withdrawals_realtime)

    items =
      with false <- is_nil(Application.get_env(:indexer, OptimismWithdrawal)[:start_block_l2]),
           message_passer = Application.get_env(:indexer, OptimismWithdrawal)[:message_passer],
           true <- Helper.address_correct?(message_passer) do
        message_passer = String.downcase(message_passer)

        logs
        |> Enum.filter(fn log ->
          !is_nil(log.first_topic) && String.downcase(log.first_topic) == @message_passed_event &&
            String.downcase(Helper.address_hash_to_string(log.address_hash)) == message_passer
        end)
        |> Enum.map(fn log ->
          Logger.info("Withdrawal message found, nonce: #{log.second_topic}.")
          OptimismWithdrawal.event_to_withdrawal(log.second_topic, log.data, log.transaction_hash, log.block_number)
        end)
      else
        true ->
          []

        false ->
          Logger.error("L2ToL1MessagePasser contract address is incorrect. Cannot use #{__MODULE__} for parsing logs.")
          []
      end

    Logger.reset_metadata(prev_metadata)

    items
  end
end