commit
3abe91bb9b
@ -1,5 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/cli': minor |
||||
--- |
||||
|
||||
Add CLI e2e TypeScript tests |
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': patch |
||||
--- |
||||
|
||||
Optimize HyperlaneRelayer routing config derivation |
@ -0,0 +1,6 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': minor |
||||
'@hyperlane-xyz/core': minor |
||||
--- |
||||
|
||||
Check for sufficient fees in `AbstractMessageIdAuthHook` and refund any surplus |
@ -0,0 +1,6 @@ |
||||
--- |
||||
'@hyperlane-xyz/utils': patch |
||||
'@hyperlane-xyz/sdk': patch |
||||
--- |
||||
|
||||
Dedupe internals of hook and ISM module deploy code |
@ -1,8 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/infra': minor |
||||
'@hyperlane-xyz/cli': minor |
||||
'@hyperlane-xyz/sdk': minor |
||||
'@hyperlane-xyz/core': minor |
||||
--- |
||||
|
||||
Added SDK support for stake-weighted ISM |
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': minor |
||||
--- |
||||
|
||||
Deploy to apechain, arbitrumnova, b3, fantom, gravity, harmony, kaia, morph, orderly, snaxchain, zeronetwork, zksync. Update default metadata in `HyperlaneCore` to `0x00001` to ensure empty metadata does not break on zksync. |
@ -1,5 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': minor |
||||
--- |
||||
|
||||
Enroll new validators for cyber, degenchain, kroma, lisk, lukso, merlin, metis, mint, proofofplay, real, sanko, tangle, xai, taiko |
@ -1,5 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': patch |
||||
--- |
||||
|
||||
Estimate and add 10% gas bump for ICA initialization and enrollment |
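A minimal sketch of what a 10% bump on an estimated gas value looks like; the helper name and the bigint-based rounding below are assumptions for illustration, not the SDK's actual API:

```typescript
// Hypothetical helper: bump an estimated gas amount by a percentage,
// using bigint math so the result stays an integer (rounds down).
function addGasBump(estimatedGas: bigint, bumpPercent = 10n): bigint {
  return (estimatedGas * (100n + bumpPercent)) / 100n;
}

// e.g. addGasBump(200_000n) === 220_000n
```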
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/utils': patch |
||||
--- |
||||
|
||||
Fix the median utility function and add a test |
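For reference, a median over a numeric array is typically computed by sorting a copy and averaging the two middle elements when the length is even. The sketch below is a hypothetical standalone version, not the exact `@hyperlane-xyz/utils` implementation:

```typescript
// Hypothetical median: sorts a copy of the input and handles both
// odd- and even-length arrays.
function median(values: number[]): number {
  if (values.length === 0) throw new Error('median of empty array');
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}
```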
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/core': minor |
||||
--- |
||||
|
||||
Added msg.value to preverifyMessage to commit it as part of the external hook payload |
@ -1,5 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': patch |
||||
--- |
||||
|
||||
Support DefaultFallbackRoutingIsm in metadata builder |
@ -0,0 +1,11 @@ |
||||
--- |
||||
'@hyperlane-xyz/widgets': minor |
||||
--- |
||||
|
||||
Update widgets with components from explorer and warp ui |
||||
|
||||
- Add icons: Discord, Docs, Github, History, LinkedIn, Medium, Twitter, Wallet and Web |
||||
- Add animation component: Fade component |
||||
- Add components: DatetimeField and SelectField |
||||
- New stories: IconList and Fade |
||||
- Add "Icon" suffix for icons that did not have it |
@ -1,5 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': minor |
||||
--- |
||||
|
||||
Sorted cwNative funds by denom in transfer tx |
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/core': minor |
||||
--- |
||||
|
||||
Disabled the ICARouter's ability to change its hook, since users don't expect the hook to change after they deploy their ICA account. Unlike the ISM, the hook is not part of the derivation on the destination chain and therefore cannot be custom-configured by the user. |
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/cli': minor |
||||
--- |
||||
|
||||
Enable configuration of IGP hooks in the CLI |
@ -1,6 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': minor |
||||
'@hyperlane-xyz/core': minor |
||||
--- |
||||
|
||||
ArbL2ToL1Ism handles value via the executeTransaction branch |
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': patch |
||||
--- |
||||
|
||||
Fix ICA ISM self relay |
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': minor |
||||
--- |
||||
|
||||
Introduce utils that can be reused by the CLI and Infra for fetching token prices from Coingecko and gas prices from EVM/Cosmos chains. |
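As a rough illustration of the kind of shared helper this describes, the sketch below fetches a USD price from Coingecko's public simple-price endpoint. The function name and error handling are assumptions, not the SDK's exported API:

```typescript
// Hypothetical price fetcher against the public Coingecko API
// (uses the global fetch available in Node 18+).
async function fetchTokenPriceUsd(coingeckoId: string): Promise<number> {
  const url = `https://api.coingecko.com/api/v3/simple/price?ids=${coingeckoId}&vs_currencies=usd`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Coingecko request failed with status ${res.status}`);
  const data = (await res.json()) as Record<string, { usd: number }>;
  return data[coingeckoId].usd;
}
```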
@ -0,0 +1,5 @@ |
||||
--- |
||||
'@hyperlane-xyz/utils': patch |
||||
--- |
||||
|
||||
Filter undefined/null values in invertKeysAndValues function |
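A minimal sketch of the behavior described (an assumed shape for illustration; the real utility may differ):

```typescript
// Hypothetical invertKeysAndValues: swaps keys and values, skipping entries
// whose value is undefined or null so they never become keys.
function invertKeysAndValues(
  obj: Record<string, string | null | undefined>,
): Record<string, string> {
  const inverted: Record<string, string> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value === undefined || value === null) continue;
    inverted[value] = key;
  }
  return inverted;
}

// invertKeysAndValues({ a: '1', b: undefined, c: null }) returns { '1': 'a' }
```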
@ -1,5 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/sdk': patch |
||||
--- |
||||
|
||||
Improved check for mailbox initialization |
@ -1,6 +0,0 @@ |
||||
--- |
||||
'@hyperlane-xyz/cli': minor |
||||
'@hyperlane-xyz/sdk': minor |
||||
--- |
||||
|
||||
Add Safe submit functionality to warp apply |
@ -1,5 +1,5 @@ |
||||
[codespell] |
||||
skip = .git,node_modules,yarn.lock,Cargo.lock,./typescript/helloworld,./rust/config,./rust/sealevel/environments/mainnet3/chain-config.json |
||||
skip = .git,node_modules,yarn.lock,Cargo.lock,./typescript/helloworld,./rust/main/config,./rust/sealevel/environments/mainnet3/chain-config.json |
||||
count = |
||||
quiet-level = 3 |
||||
ignore-words = ./.codespell/ignore.txt |
||||
|
@ -1,4 +1,4 @@ |
||||
typescript/sdk/src/cw-types/*.types.ts linguist-generated=true |
||||
rust/chains/hyperlane-ethereum/abis/*.abi.json linguist-generated=true |
||||
rust/main/chains/hyperlane-ethereum/abis/*.abi.json linguist-generated=true |
||||
solidity/contracts/interfaces/avs/*.sol linguist-vendored=true |
||||
solidity/contracts/avs/ECDSA*.sol linguist-vendored=true |
||||
|
@ -1,29 +1,22 @@ |
||||
# File extension owners |
||||
|
||||
*.sol @yorhodes @tkporter @aroralanuk @nbayindirli |
||||
*.ts @yorhodes @jmrossy @nbayindirli |
||||
*.rs @tkporter @daniel-savu |
||||
*.md @Skunkchain @avious00 |
||||
*.sol @yorhodes @aroralanuk @ltyu |
||||
*.ts @yorhodes @jmrossy |
||||
*.rs @tkporter @daniel-savu @ameten |
||||
|
||||
# Package owners |
||||
|
||||
## Contracts |
||||
solidity/ @yorhodes @tkporter @aroralanuk @nbayindirli |
||||
solidity/ @yorhodes @tkporter @aroralanuk @ltyu |
||||
|
||||
## Agents |
||||
rust/ @tkporter @daniel-savu |
||||
|
||||
## SDK |
||||
typescript/sdk @yorhodes @jmrossy |
||||
|
||||
## Token |
||||
typescript/token @yorhodes @jmrossy @tkporter @aroralanuk @nbayindirli |
||||
|
||||
## Hello World |
||||
typescript/helloworld @yorhodes |
||||
typescript/sdk @yorhodes @jmrossy @ltyu @paulbalaji |
||||
|
||||
## CLI |
||||
typescript/cli @jmrossy @yorhodes @aroralanuk @nbayindirli |
||||
typescript/cli @jmrossy @yorhodes @ltyu |
||||
|
||||
## Infra |
||||
typescript/infra @tkporter |
||||
typescript/infra @tkporter @paulbalaji @Mo-Hussain |
||||
|
@ -0,0 +1,37 @@ |
||||
name: 'Yarn Build with Cache' |
||||
description: 'Run yarn build using yarn cache' |
||||
|
||||
inputs: |
||||
ref: |
||||
description: 'The Git ref to checkout' |
||||
required: true |
||||
|
||||
runs: |
||||
using: "composite" |
||||
steps: |
||||
- name: Cache |
||||
uses: buildjet/cache@v4 |
||||
id: cache |
||||
with: |
||||
path: | |
||||
**/node_modules |
||||
.yarn |
||||
key: ${{ runner.os }}-yarn-cache-${{ hashFiles('./yarn.lock') }} |
||||
|
||||
# Typically, the cache will be hit, but if there's a network error when |
||||
# restoring the cache, let's run the install step ourselves. |
||||
- name: Install dependencies |
||||
if: steps.cache.outputs.cache-hit != 'true' |
||||
shell: bash |
||||
run: | |
||||
yarn install |
||||
CHANGES=$(git status -s --ignore-submodules) |
||||
if [[ ! -z $CHANGES ]]; then |
||||
echo "Changes found: $CHANGES" |
||||
git diff |
||||
exit 1 |
||||
fi |
||||
|
||||
- name: Build |
||||
shell: bash |
||||
run: yarn build |
@ -1,106 +0,0 @@ |
||||
name: test |
||||
|
||||
on: |
||||
push: |
||||
branches: [main] |
||||
paths: |
||||
- '*.md' |
||||
- '!**/*' |
||||
pull_request: |
||||
branches: |
||||
- '*' |
||||
paths: |
||||
- '*.md' |
||||
- '!**/*' |
||||
merge_group: |
||||
|
||||
concurrency: |
||||
group: e2e-${{ github.ref }} |
||||
cancel-in-progress: ${{ github.ref_name != 'main' }} |
||||
|
||||
jobs: |
||||
yarn-install: |
||||
runs-on: ubuntu-latest |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "yarn-install job passed" |
||||
|
||||
yarn-build: |
||||
runs-on: ubuntu-latest |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "yarn-build job passed" |
||||
|
||||
lint-prettier: |
||||
runs-on: ubuntu-latest |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "lint-prettier job passed" |
||||
|
||||
yarn-test: |
||||
runs-on: ubuntu-latest |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "yarn-test job passed" |
||||
|
||||
agent-configs: |
||||
runs-on: ubuntu-latest |
||||
strategy: |
||||
fail-fast: false |
||||
matrix: |
||||
environment: [mainnet3, testnet4] |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "agent-configs job passed" |
||||
|
||||
e2e-matrix: |
||||
runs-on: ubuntu-latest |
||||
if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.base_ref == 'main') || github.event_name == 'merge_group' |
||||
strategy: |
||||
matrix: |
||||
e2e-type: [cosmwasm, non-cosmwasm] |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "e2e-matrix job passed" |
||||
|
||||
e2e: |
||||
runs-on: ubuntu-latest |
||||
if: always() |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "e2e job passed" |
||||
|
||||
cli-advanced-e2e: |
||||
runs-on: ubuntu-latest |
||||
if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.base_ref == 'main') || github.event_name == 'merge_group' |
||||
strategy: |
||||
matrix: |
||||
include: |
||||
- test-type: preset_hook_enabled |
||||
- test-type: configure_hook_enabled |
||||
- test-type: pi_with_core_chain |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "cli-advanced-e2e job passed" |
||||
|
||||
env-test: |
||||
runs-on: ubuntu-latest |
||||
strategy: |
||||
fail-fast: false |
||||
matrix: |
||||
environment: [mainnet3] |
||||
chain: [ethereum, arbitrum, optimism, inevm, viction] |
||||
module: [core, igp] |
||||
include: |
||||
- environment: testnet4 |
||||
chain: sepolia |
||||
module: core |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "env-test job passed" |
||||
|
||||
coverage: |
||||
runs-on: ubuntu-latest |
||||
steps: |
||||
- name: Instant pass |
||||
run: echo "coverage job passed" |
@ -1 +1 @@ |
||||
1d43e33fc84f486d0edf20a9e573f914e53fe94c |
||||
302be4817c063629cec70c0b02322b250df71122 |
||||
|
@ -0,0 +1,6 @@ |
||||
{ |
||||
"rust-analyzer.linkedProjects": [ |
||||
"./rust/main/Cargo.toml", |
||||
"./rust/sealevel/Cargo.toml", |
||||
], |
||||
} |
@ -1,170 +0,0 @@ |
||||
use axum::{ |
||||
extract::{Query, State}, |
||||
routing, Router, |
||||
}; |
||||
use derive_new::new; |
||||
use hyperlane_core::{ChainCommunicationError, QueueOperation, H256}; |
||||
use serde::Deserialize; |
||||
use std::str::FromStr; |
||||
use tokio::sync::broadcast::Sender; |
||||
|
||||
const MESSAGE_RETRY_API_BASE: &str = "/message_retry"; |
||||
|
||||
#[derive(Clone, Debug, PartialEq, Eq)] |
||||
pub enum MessageRetryRequest { |
||||
MessageId(H256), |
||||
DestinationDomain(u32), |
||||
} |
||||
|
||||
impl PartialEq<QueueOperation> for &MessageRetryRequest { |
||||
fn eq(&self, other: &QueueOperation) -> bool { |
||||
match self { |
||||
MessageRetryRequest::MessageId(message_id) => message_id == &other.id(), |
||||
MessageRetryRequest::DestinationDomain(destination_domain) => { |
||||
destination_domain == &other.destination_domain().id() |
||||
} |
||||
} |
||||
} |
||||
} |
||||
|
||||
#[derive(new, Clone)] |
||||
pub struct MessageRetryApi { |
||||
tx: Sender<MessageRetryRequest>, |
||||
} |
||||
|
||||
#[derive(Deserialize)] |
||||
struct RawMessageRetryRequest { |
||||
message_id: Option<String>, |
||||
destination_domain: Option<u32>, |
||||
} |
||||
|
||||
impl TryFrom<RawMessageRetryRequest> for Vec<MessageRetryRequest> { |
||||
type Error = ChainCommunicationError; |
||||
|
||||
fn try_from(request: RawMessageRetryRequest) -> Result<Self, Self::Error> { |
||||
let mut retry_requests = Vec::new(); |
||||
if let Some(message_id) = request.message_id { |
||||
retry_requests.push(MessageRetryRequest::MessageId(H256::from_str(&message_id)?)); |
||||
} |
||||
if let Some(destination_domain) = request.destination_domain { |
||||
retry_requests.push(MessageRetryRequest::DestinationDomain(destination_domain)); |
||||
} |
||||
Ok(retry_requests) |
||||
} |
||||
} |
||||
|
||||
async fn retry_message( |
||||
State(tx): State<Sender<MessageRetryRequest>>, |
||||
Query(request): Query<RawMessageRetryRequest>, |
||||
) -> String { |
||||
let retry_requests: Vec<MessageRetryRequest> = match request.try_into() { |
||||
Ok(retry_requests) => retry_requests, |
||||
// Technically it's bad practice to print the error message to the user, but
|
||||
// this endpoint is for debugging purposes only.
|
||||
Err(err) => { |
||||
return format!("Failed to parse retry request: {}", err); |
||||
} |
||||
}; |
||||
|
||||
if retry_requests.is_empty() { |
||||
return "No retry requests found. Please provide either a message_id or destination_domain.".to_string(); |
||||
} |
||||
|
||||
if let Err(err) = retry_requests |
||||
.into_iter() |
||||
.map(|req| tx.send(req)) |
||||
.collect::<Result<Vec<_>, _>>() |
||||
{ |
||||
return format!("Failed to send retry request to the queue: {}", err); |
||||
} |
||||
|
||||
"Moved message(s) to the front of the queue".to_string() |
||||
} |
||||
|
||||
impl MessageRetryApi { |
||||
pub fn router(&self) -> Router { |
||||
Router::new() |
||||
.route("/", routing::get(retry_message)) |
||||
.with_state(self.tx.clone()) |
||||
} |
||||
|
||||
pub fn get_route(&self) -> (&'static str, Router) { |
||||
(MESSAGE_RETRY_API_BASE, self.router()) |
||||
} |
||||
} |
||||
|
||||
#[cfg(test)] |
||||
mod tests { |
||||
use crate::server::ENDPOINT_MESSAGES_QUEUE_SIZE; |
||||
|
||||
use super::*; |
||||
use axum::http::StatusCode; |
||||
use ethers::utils::hex::ToHex; |
||||
use std::net::SocketAddr; |
||||
use tokio::sync::broadcast::{Receiver, Sender}; |
||||
|
||||
fn setup_test_server() -> (SocketAddr, Receiver<MessageRetryRequest>) { |
||||
let broadcast_tx = Sender::<MessageRetryRequest>::new(ENDPOINT_MESSAGES_QUEUE_SIZE); |
||||
let message_retry_api = MessageRetryApi::new(broadcast_tx.clone()); |
||||
let (path, retry_router) = message_retry_api.get_route(); |
||||
let app = Router::new().nest(path, retry_router); |
||||
|
||||
// Running the app in the background using a test server
|
||||
let server = |
||||
axum::Server::bind(&"127.0.0.1:0".parse().unwrap()).serve(app.into_make_service()); |
||||
let addr = server.local_addr(); |
||||
tokio::spawn(server); |
||||
|
||||
(addr, broadcast_tx.subscribe()) |
||||
} |
||||
|
||||
#[tokio::test] |
||||
async fn test_message_id_retry() { |
||||
let (addr, mut rx) = setup_test_server(); |
||||
|
||||
// Create a random message ID
|
||||
let message_id = H256::random(); |
||||
|
||||
// Send a GET request to the server
|
||||
let response = reqwest::get(format!( |
||||
"http://{}{}?message_id={}", |
||||
addr, |
||||
MESSAGE_RETRY_API_BASE, |
||||
message_id.encode_hex::<String>() |
||||
)) |
||||
.await |
||||
.unwrap(); |
||||
|
||||
// Check that the response status code is OK
|
||||
assert_eq!(response.status(), StatusCode::OK); |
||||
|
||||
assert_eq!( |
||||
rx.try_recv().unwrap(), |
||||
MessageRetryRequest::MessageId(message_id) |
||||
); |
||||
} |
||||
|
||||
#[tokio::test] |
||||
async fn test_destination_domain_retry() { |
||||
let (addr, mut rx) = setup_test_server(); |
||||
|
||||
// Create a random destination domain
|
||||
let destination_domain = 42; |
||||
|
||||
// Send a GET request to the server
|
||||
let response = reqwest::get(format!( |
||||
"http://{}{}?destination_domain={}", |
||||
addr, MESSAGE_RETRY_API_BASE, destination_domain |
||||
)) |
||||
.await |
||||
.unwrap(); |
||||
|
||||
// Check that the response status code is OK
|
||||
assert_eq!(response.status(), StatusCode::OK); |
||||
|
||||
assert_eq!( |
||||
rx.try_recv().unwrap(), |
||||
MessageRetryRequest::DestinationDomain(destination_domain) |
||||
); |
||||
} |
||||
} |
@ -1,39 +0,0 @@ |
||||
use num_bigint::{BigInt, Sign}; |
||||
use sea_orm::prelude::BigDecimal; |
||||
|
||||
use hyperlane_core::{H256, U256}; |
||||
|
||||
// Creates a big-endian byte representation of the address (last 20 bytes if it fits in an H160, otherwise all 32)
|
||||
pub fn address_to_bytes(data: &H256) -> Vec<u8> { |
||||
if hex::is_h160(data.as_fixed_bytes()) { |
||||
// take the last 20 bytes
|
||||
data.as_fixed_bytes()[12..32].into() |
||||
} else { |
||||
h256_to_bytes(data) |
||||
} |
||||
} |
||||
|
||||
// Converts a 20- or 32-byte big-endian byte representation into an H256 address
|
||||
pub fn bytes_to_address(data: Vec<u8>) -> eyre::Result<H256> { |
||||
if (data.len() != 20) && (data.len() != 32) { |
||||
return Err(eyre::eyre!("Invalid address length")); |
||||
} |
||||
if data.len() == 20 { |
||||
let mut prefix = vec![0; 12]; |
||||
prefix.extend(data); |
||||
Ok(H256::from_slice(&prefix[..])) |
||||
} else { |
||||
Ok(H256::from_slice(&data[..])) |
||||
} |
||||
} |
||||
|
||||
// Creates a big-endian byte representation of the H256 hash
|
||||
pub fn h256_to_bytes(data: &H256) -> Vec<u8> { |
||||
data.as_fixed_bytes().as_slice().into() |
||||
} |
||||
|
||||
pub fn u256_to_decimal(v: U256) -> BigDecimal { |
||||
let mut buf = [0u8; 32]; |
||||
v.to_little_endian(&mut buf); |
||||
BigDecimal::from(BigInt::from_bytes_le(Sign::Plus, &buf as &[u8])) |
||||
} |
@ -1,314 +0,0 @@ |
||||
use std::num::NonZeroU64; |
||||
use std::sync::Arc; |
||||
use std::time::{Duration, Instant}; |
||||
use std::vec; |
||||
|
||||
use hyperlane_core::rpc_clients::call_and_retry_indefinitely; |
||||
use hyperlane_core::{ChainResult, MerkleTreeHook}; |
||||
use prometheus::IntGauge; |
||||
use tokio::time::sleep; |
||||
use tracing::{debug, error, info}; |
||||
|
||||
use hyperlane_base::{db::HyperlaneRocksDB, CheckpointSyncer, CoreMetrics}; |
||||
use hyperlane_core::{ |
||||
accumulator::incremental::IncrementalMerkle, Checkpoint, CheckpointWithMessageId, |
||||
HyperlaneChain, HyperlaneContract, HyperlaneDomain, HyperlaneSignerExt, |
||||
}; |
||||
use hyperlane_ethereum::SingletonSignerHandle; |
||||
|
||||
#[derive(Clone)] |
||||
pub(crate) struct ValidatorSubmitter { |
||||
interval: Duration, |
||||
reorg_period: Option<NonZeroU64>, |
||||
signer: SingletonSignerHandle, |
||||
merkle_tree_hook: Arc<dyn MerkleTreeHook>, |
||||
checkpoint_syncer: Arc<dyn CheckpointSyncer>, |
||||
message_db: HyperlaneRocksDB, |
||||
metrics: ValidatorSubmitterMetrics, |
||||
} |
||||
|
||||
impl ValidatorSubmitter { |
||||
pub(crate) fn new( |
||||
interval: Duration, |
||||
reorg_period: u64, |
||||
merkle_tree_hook: Arc<dyn MerkleTreeHook>, |
||||
signer: SingletonSignerHandle, |
||||
checkpoint_syncer: Arc<dyn CheckpointSyncer>, |
||||
message_db: HyperlaneRocksDB, |
||||
metrics: ValidatorSubmitterMetrics, |
||||
) -> Self { |
||||
Self { |
||||
reorg_period: NonZeroU64::new(reorg_period), |
||||
interval, |
||||
merkle_tree_hook, |
||||
signer, |
||||
checkpoint_syncer, |
||||
message_db, |
||||
metrics, |
||||
} |
||||
} |
||||
|
||||
pub(crate) fn checkpoint(&self, tree: &IncrementalMerkle) -> Checkpoint { |
||||
Checkpoint { |
||||
root: tree.root(), |
||||
index: tree.index(), |
||||
merkle_tree_hook_address: self.merkle_tree_hook.address(), |
||||
mailbox_domain: self.merkle_tree_hook.domain().id(), |
||||
} |
||||
} |
||||
|
||||
/// Submits signed checkpoints from index 0 until the target checkpoint (inclusive).
|
||||
/// Runs idly forever once the target checkpoint is reached to avoid exiting the task.
|
||||
pub(crate) async fn backfill_checkpoint_submitter(self, target_checkpoint: Checkpoint) { |
||||
let mut tree = IncrementalMerkle::default(); |
||||
self.submit_checkpoints_until_correctness_checkpoint(&mut tree, &target_checkpoint) |
||||
.await; |
||||
|
||||
info!( |
||||
?target_checkpoint, |
||||
"Backfill checkpoint submitter successfully reached target checkpoint" |
||||
); |
||||
} |
||||
|
||||
/// Submits signed checkpoints indefinitely, starting from the `tree`.
|
||||
pub(crate) async fn checkpoint_submitter(self, mut tree: IncrementalMerkle) { |
||||
// How often to log checkpoint info - once every minute
|
||||
let checkpoint_info_log_period = Duration::from_secs(60); |
||||
// The instant in which we last logged checkpoint info, if at all
|
||||
let mut latest_checkpoint_info_log: Option<Instant> = None; |
||||
// Returns whether checkpoint info should be logged based off the
|
||||
// checkpoint_info_log_period having elapsed since the last log.
|
||||
// Sets latest_checkpoint_info_log to the current instant if true.
|
||||
let mut should_log_checkpoint_info = || { |
||||
if let Some(instant) = latest_checkpoint_info_log { |
||||
if instant.elapsed() < checkpoint_info_log_period { |
||||
return false; |
||||
} |
||||
} |
||||
latest_checkpoint_info_log = Some(Instant::now()); |
||||
true |
||||
}; |
||||
|
||||
loop { |
||||
// Lag by reorg period because this is our correctness checkpoint.
|
||||
let latest_checkpoint = call_and_retry_indefinitely(|| { |
||||
let merkle_tree_hook = self.merkle_tree_hook.clone(); |
||||
Box::pin(async move { merkle_tree_hook.latest_checkpoint(self.reorg_period).await }) |
||||
}) |
||||
.await; |
||||
|
||||
self.metrics |
||||
.latest_checkpoint_observed |
||||
.set(latest_checkpoint.index as i64); |
||||
|
||||
if should_log_checkpoint_info() { |
||||
info!( |
||||
?latest_checkpoint, |
||||
tree_count = tree.count(), |
||||
"Latest checkpoint" |
||||
); |
||||
} |
||||
|
||||
// This may occur e.g. if RPC providers are unreliable and make calls against
|
||||
// inconsistent block tips.
|
||||
//
|
||||
// In this case, we just sleep a bit until we fetch a new latest checkpoint
|
||||
// that at least meets the tree.
|
||||
if tree_exceeds_checkpoint(&latest_checkpoint, &tree) { |
||||
debug!( |
||||
?latest_checkpoint, |
||||
tree_count = tree.count(), |
||||
"Latest checkpoint is behind tree, sleeping briefly" |
||||
); |
||||
sleep(self.interval).await; |
||||
continue; |
||||
} |
||||
self.submit_checkpoints_until_correctness_checkpoint(&mut tree, &latest_checkpoint) |
||||
.await; |
||||
|
||||
self.metrics |
||||
.latest_checkpoint_processed |
||||
.set(latest_checkpoint.index as i64); |
||||
|
||||
sleep(self.interval).await; |
||||
} |
||||
} |
||||
|
||||
/// Submits signed checkpoints relating to the given tree until the correctness checkpoint (inclusive).
|
||||
/// Only submits the signed checkpoints once the correctness checkpoint is reached.
|
||||
async fn submit_checkpoints_until_correctness_checkpoint( |
||||
&self, |
||||
tree: &mut IncrementalMerkle, |
||||
correctness_checkpoint: &Checkpoint, |
||||
) { |
||||
// This should never be called with a tree that is ahead of the correctness checkpoint.
|
||||
assert!( |
||||
!tree_exceeds_checkpoint(correctness_checkpoint, tree), |
||||
"tree (count: {}) is ahead of correctness checkpoint {:?}", |
||||
tree.count(), |
||||
correctness_checkpoint, |
||||
); |
||||
|
||||
// All intermediate checkpoints will be stored here and signed once the correctness
|
||||
// checkpoint is reached.
|
||||
let mut checkpoint_queue = vec![]; |
||||
|
||||
// If the correctness checkpoint is ahead of the tree, we need to ingest more messages.
|
||||
//
|
||||
// tree.index() will panic if the tree is empty, so we use tree.count() instead
|
||||
// and convert the correctness_checkpoint.index to a count by adding 1.
|
||||
while tree.count() as u32 <= correctness_checkpoint.index { |
||||
if let Some(insertion) = self |
||||
.message_db |
||||
.retrieve_merkle_tree_insertion_by_leaf_index(&(tree.count() as u32)) |
||||
.unwrap_or_else(|err| { |
||||
panic!( |
||||
"Error fetching merkle tree insertion for leaf index {}: {}", |
||||
tree.count(), |
||||
err |
||||
) |
||||
}) |
||||
{ |
||||
debug!( |
||||
index = insertion.index(), |
||||
queue_length = checkpoint_queue.len(), |
||||
"Ingesting leaf to tree" |
||||
); |
||||
let message_id = insertion.message_id(); |
||||
tree.ingest(message_id); |
||||
|
||||
let checkpoint = self.checkpoint(tree); |
||||
|
||||
checkpoint_queue.push(CheckpointWithMessageId { |
||||
checkpoint, |
||||
message_id, |
||||
}); |
||||
} else { |
||||
// If we haven't yet indexed the next merkle tree insertion but know that
|
||||
// it will soon exist (because we know the correctness checkpoint), wait a bit and
|
||||
// try again.
|
||||
sleep(Duration::from_millis(100)).await |
||||
} |
||||
} |
||||
|
||||
// At this point we know that correctness_checkpoint.index == tree.index().
|
||||
assert_eq!( |
||||
correctness_checkpoint.index, |
||||
tree.index(), |
||||
"correctness checkpoint index {} != tree index {}", |
||||
correctness_checkpoint.index, |
||||
tree.index(), |
||||
); |
||||
|
||||
let checkpoint = self.checkpoint(tree); |
||||
|
||||
// If the tree's checkpoint doesn't match the correctness checkpoint, something went wrong
|
||||
// and we bail loudly.
|
||||
if checkpoint != *correctness_checkpoint { |
||||
error!( |
||||
?checkpoint, |
||||
?correctness_checkpoint, |
||||
"Incorrect tree root, something went wrong" |
||||
); |
||||
panic!("Incorrect tree root, something went wrong"); |
||||
} |
||||
|
||||
if !checkpoint_queue.is_empty() { |
||||
info!( |
||||
index = checkpoint.index, |
||||
queue_len = checkpoint_queue.len(), |
||||
"Reached tree consistency" |
||||
); |
||||
self.sign_and_submit_checkpoints(checkpoint_queue).await; |
||||
|
||||
info!( |
||||
index = checkpoint.index, |
||||
"Signed all queued checkpoints until index" |
||||
); |
||||
} |
||||
} |
||||
|
||||
async fn sign_and_submit_checkpoint( |
||||
&self, |
||||
checkpoint: CheckpointWithMessageId, |
||||
) -> ChainResult<()> { |
||||
let existing = self |
||||
.checkpoint_syncer |
||||
.fetch_checkpoint(checkpoint.index) |
||||
.await?; |
||||
if existing.is_some() { |
||||
debug!(index = checkpoint.index, "Checkpoint already submitted"); |
||||
return Ok(()); |
||||
} |
||||
let signed_checkpoint = self.signer.sign(checkpoint).await?; |
||||
self.checkpoint_syncer |
||||
.write_checkpoint(&signed_checkpoint) |
||||
.await?; |
||||
debug!(index = checkpoint.index, "Signed and submitted checkpoint"); |
||||
|
||||
// TODO: move these into S3 implementations
|
||||
// small sleep before signing next checkpoint to avoid rate limiting
|
||||
sleep(Duration::from_millis(100)).await; |
||||
Ok(()) |
||||
} |
||||
|
||||
/// Signs and submits any previously unsubmitted checkpoints.
|
||||
async fn sign_and_submit_checkpoints(&self, checkpoints: Vec<CheckpointWithMessageId>) { |
||||
let last_checkpoint = checkpoints.as_slice()[checkpoints.len() - 1]; |
||||
// Submits checkpoints to the store in reverse order. This speeds up processing historic checkpoints (those before the validator is spun up),
|
||||
// since those are the most likely to make messages become processable.
|
||||
// A side effect is that new checkpoints will also be submitted in reverse order.
|
||||
for queued_checkpoint in checkpoints.into_iter().rev() { |
||||
// certain checkpoint stores rate limit very aggressively, so we retry indefinitely
|
||||
call_and_retry_indefinitely(|| { |
||||
let self_clone = self.clone(); |
||||
Box::pin(async move { |
||||
self_clone |
||||
.sign_and_submit_checkpoint(queued_checkpoint) |
||||
.await?; |
||||
Ok(()) |
||||
}) |
||||
}) |
||||
.await; |
||||
} |
||||
|
||||
call_and_retry_indefinitely(|| { |
||||
let self_clone = self.clone(); |
||||
Box::pin(async move { |
||||
self_clone |
||||
.checkpoint_syncer |
||||
.update_latest_index(last_checkpoint.index) |
||||
.await?; |
||||
Ok(()) |
||||
}) |
||||
}) |
||||
.await; |
||||
} |
||||
} |
||||
|
||||
/// Returns whether the tree exceeds the checkpoint.
|
||||
fn tree_exceeds_checkpoint(checkpoint: &Checkpoint, tree: &IncrementalMerkle) -> bool { |
||||
// tree.index() will panic if the tree is empty, so we use tree.count() instead
|
||||
// and convert the correctness_checkpoint.index to a count by adding 1.
|
||||
checkpoint.index + 1 < tree.count() as u32 |
||||
} |
||||
|
||||
#[derive(Clone)] |
||||
pub(crate) struct ValidatorSubmitterMetrics { |
||||
latest_checkpoint_observed: IntGauge, |
||||
latest_checkpoint_processed: IntGauge, |
||||
} |
||||
|
||||
impl ValidatorSubmitterMetrics { |
||||
pub fn new(metrics: &CoreMetrics, mailbox_chain: &HyperlaneDomain) -> Self { |
||||
let chain_name = mailbox_chain.name(); |
||||
Self { |
||||
latest_checkpoint_observed: metrics |
||||
.latest_checkpoint() |
||||
.with_label_values(&["validator_observed", chain_name]), |
||||
latest_checkpoint_processed: metrics |
||||
.latest_checkpoint() |
||||
.with_label_values(&["validator_processed", chain_name]), |
||||
} |
||||
} |
||||
} |
@ -1,57 +0,0 @@ |
||||
use cosmrs::{crypto::PublicKey, AccountId}; |
||||
use tendermint::account::Id as TendermintAccountId; |
||||
use tendermint::public_key::PublicKey as TendermintPublicKey; |
||||
|
||||
use hyperlane_core::Error::Overflow; |
||||
use hyperlane_core::{ChainCommunicationError, ChainResult, H256}; |
||||
|
||||
use crate::HyperlaneCosmosError; |
||||
|
||||
pub(crate) struct CosmosAccountId<'a> { |
||||
account_id: &'a AccountId, |
||||
} |
||||
|
||||
impl<'a> CosmosAccountId<'a> { |
||||
pub fn new(account_id: &'a AccountId) -> Self { |
||||
Self { account_id } |
||||
} |
||||
|
||||
pub fn account_id_from_pubkey(pub_key: PublicKey, prefix: &str) -> ChainResult<AccountId> { |
||||
// Get the inner type
|
||||
let tendermint_pub_key = TendermintPublicKey::from(pub_key); |
||||
// Get the RIPEMD160(SHA256(pub_key))
|
||||
let tendermint_id = TendermintAccountId::from(tendermint_pub_key); |
||||
// Bech32 encoding
|
||||
let account_id = AccountId::new(prefix, tendermint_id.as_bytes()) |
||||
.map_err(Into::<HyperlaneCosmosError>::into)?; |
||||
|
||||
Ok(account_id) |
||||
} |
||||
} |
||||
|
||||
impl TryFrom<&CosmosAccountId<'_>> for H256 { |
||||
type Error = ChainCommunicationError; |
||||
|
||||
/// Builds a H256 digest from a cosmos AccountId (Bech32 encoding)
|
||||
fn try_from(account_id: &CosmosAccountId) -> Result<Self, Self::Error> { |
||||
let bytes = account_id.account_id.to_bytes(); |
||||
let h256_len = H256::len_bytes(); |
||||
let Some(start_point) = h256_len.checked_sub(bytes.len()) else { |
||||
// input is too large to fit in a H256
|
||||
return Err(Overflow.into()); |
||||
}; |
||||
let mut empty_hash = H256::default(); |
||||
let result = empty_hash.as_bytes_mut(); |
||||
result[start_point..].copy_from_slice(bytes.as_slice()); |
||||
Ok(H256::from_slice(result)) |
||||
} |
||||
} |
||||
|
||||
impl TryFrom<CosmosAccountId<'_>> for H256 { |
||||
type Error = ChainCommunicationError; |
||||
|
||||
/// Builds a H256 digest from a cosmos AccountId (Bech32 encoding)
|
||||
fn try_from(account_id: CosmosAccountId) -> Result<Self, Self::Error> { |
||||
(&account_id).try_into() |
||||
} |
||||
} |
@ -1,287 +0,0 @@ |
||||
use async_trait::async_trait; |
||||
use cosmrs::cosmwasm::MsgExecuteContract; |
||||
use cosmrs::crypto::PublicKey; |
||||
use cosmrs::tx::{MessageExt, SequenceNumber, SignerInfo}; |
||||
use cosmrs::{AccountId, Tx}; |
||||
use itertools::Itertools; |
||||
use tendermint::hash::Algorithm; |
||||
use tendermint::Hash; |
||||
use tendermint_rpc::{client::CompatMode, Client, HttpClient}; |
||||
use time::OffsetDateTime; |
||||
|
||||
use hyperlane_core::{ |
||||
BlockInfo, ChainCommunicationError, ChainInfo, ChainResult, ContractLocator, HyperlaneChain, |
||||
HyperlaneDomain, HyperlaneProvider, TxnInfo, TxnReceiptInfo, H256, U256, |
||||
}; |
||||
|
||||
use crate::address::CosmosAddress; |
||||
use crate::grpc::WasmProvider; |
||||
use crate::libs::account::CosmosAccountId; |
||||
use crate::{ConnectionConf, CosmosAmount, HyperlaneCosmosError, Signer}; |
||||
|
||||
use self::grpc::WasmGrpcProvider; |
||||
|
||||
/// cosmos grpc provider
|
||||
pub mod grpc; |
||||
/// cosmos rpc provider
|
||||
pub mod rpc; |
||||
|
||||
/// Abstraction over a connection to a Cosmos chain
|
||||
#[derive(Debug, Clone)] |
||||
pub struct CosmosProvider { |
||||
domain: HyperlaneDomain, |
||||
connection_conf: ConnectionConf, |
||||
grpc_client: WasmGrpcProvider, |
||||
rpc_client: HttpClient, |
||||
} |
||||
|
||||
impl CosmosProvider { |
||||
/// Create a reference to a Cosmos chain
|
||||
pub fn new( |
||||
domain: HyperlaneDomain, |
||||
conf: ConnectionConf, |
||||
locator: Option<ContractLocator>, |
||||
signer: Option<Signer>, |
||||
) -> ChainResult<Self> { |
||||
let gas_price = CosmosAmount::try_from(conf.get_minimum_gas_price().clone())?; |
||||
let grpc_client = WasmGrpcProvider::new( |
||||
domain.clone(), |
||||
conf.clone(), |
||||
gas_price.clone(), |
||||
locator, |
||||
signer, |
||||
)?; |
||||
let rpc_client = HttpClient::builder( |
||||
conf.get_rpc_url() |
||||
.parse() |
||||
.map_err(Into::<HyperlaneCosmosError>::into)?, |
||||
) |
||||
// Consider supporting different compatibility modes.
|
||||
.compat_mode(CompatMode::latest()) |
||||
.build() |
||||
.map_err(Into::<HyperlaneCosmosError>::into)?; |
||||
|
||||
Ok(Self { |
||||
domain, |
||||
connection_conf: conf, |
||||
rpc_client, |
||||
grpc_client, |
||||
}) |
||||
} |
||||
|
||||
/// Get a grpc client
|
||||
pub fn grpc(&self) -> &WasmGrpcProvider { |
||||
&self.grpc_client |
||||
} |
||||
|
||||
/// Get an rpc client
|
||||
pub fn rpc(&self) -> &HttpClient { |
||||
&self.rpc_client |
||||
} |
||||
|
||||
fn search_payer_in_signer_infos( |
||||
&self, |
||||
signer_infos: &[SignerInfo], |
||||
payer: &AccountId, |
||||
) -> ChainResult<(AccountId, SequenceNumber)> { |
||||
signer_infos |
||||
.iter() |
||||
.map(|si| self.convert_signer_info_into_account_id_and_nonce(si)) |
||||
// After the following we have a single Ok entry and, possibly, many Err entries
|
||||
.filter_ok(|(a, s)| payer == a) |
||||
// If we have an Ok entry, use it since it is the payer; if not, use the first entry, which holds an error
|
||||
.find_or_first(|r| match r { |
||||
Ok((a, s)) => payer == a, |
||||
Err(e) => false, |
||||
}) |
||||
// If there was no signer info with a non-empty public key, or no signers at all for the transaction,
|
||||
// we get None here
|
||||
.unwrap_or_else(|| Err(ChainCommunicationError::from_other_str("no signer info"))) |
||||
} |
||||
|
||||
fn convert_signer_info_into_account_id_and_nonce( |
||||
&self, |
||||
signer_info: &SignerInfo, |
||||
) -> ChainResult<(AccountId, SequenceNumber)> { |
||||
let signer_public_key = signer_info.public_key.clone().ok_or_else(|| { |
||||
HyperlaneCosmosError::PublicKeyError("no public key for default signer".to_owned()) |
||||
})?; |
||||
|
||||
let public_key = PublicKey::try_from(signer_public_key)?; |
||||
|
||||
let account_id = CosmosAccountId::account_id_from_pubkey( |
||||
public_key, |
||||
&self.connection_conf.get_bech32_prefix(), |
||||
)?; |
||||
|
||||
Ok((account_id, signer_info.sequence)) |
||||
} |
||||
|
||||
/// Calculates the sender and the nonce for the transaction.
|
||||
/// We use `payer` of the fees as the sender of the transaction, and we search for `payer`
|
||||
/// signature information to find the nonce.
|
||||
/// If `payer` is not specified, we use the account which signed the transaction first, as
|
||||
/// the sender.
|
||||
fn sender_and_nonce(&self, tx: &Tx) -> ChainResult<(H256, SequenceNumber)> { |
||||
let (sender, nonce) = tx |
||||
.auth_info |
||||
.fee |
||||
.payer |
||||
.as_ref() |
||||
.map(|payer| self.search_payer_in_signer_infos(&tx.auth_info.signer_infos, payer)) |
||||
.map_or_else( |
||||
|| { |
||||
let signer_info = tx.auth_info.signer_infos.get(0).ok_or_else(|| { |
||||
HyperlaneCosmosError::SignerInfoError( |
||||
"no signer info in default signer".to_owned(), |
||||
) |
||||
})?; |
||||
self.convert_signer_info_into_account_id_and_nonce(signer_info) |
||||
}, |
||||
|p| p, |
||||
) |
||||
.map(|(a, n)| CosmosAddress::from_account_id(a).map(|a| (a.digest(), n)))??; |
||||
Ok((sender, nonce)) |
||||
} |
||||
|
||||
/// Extract contract address from transaction.
|
||||
/// Assumes that there is only one `MsgExecuteContract` message in the transaction
|
||||
fn contract(tx: &Tx) -> ChainResult<H256> { |
||||
use cosmrs::proto::cosmwasm::wasm::v1::MsgExecuteContract as ProtoMsgExecuteContract; |
||||
|
||||
let any = tx |
||||
.body |
||||
.messages |
||||
.iter() |
||||
.find(|a| a.type_url == "/cosmwasm.wasm.v1.MsgExecuteContract") |
||||
.ok_or_else(|| { |
||||
ChainCommunicationError::from_other_str("could not find contract execution message") |
||||
})?; |
||||
let proto = |
||||
ProtoMsgExecuteContract::from_any(any).map_err(Into::<HyperlaneCosmosError>::into)?; |
||||
let msg = MsgExecuteContract::try_from(proto)?; |
||||
let contract = H256::try_from(CosmosAccountId::new(&msg.contract))?; |
||||
Ok(contract) |
||||
} |
||||
} |
||||
|
||||
impl HyperlaneChain for CosmosProvider { |
||||
fn domain(&self) -> &HyperlaneDomain { |
||||
&self.domain |
||||
} |
||||
|
||||
fn provider(&self) -> Box<dyn HyperlaneProvider> { |
||||
Box::new(self.clone()) |
||||
} |
||||
} |
||||
|
||||
#[async_trait] |
||||
impl HyperlaneProvider for CosmosProvider { |
||||
async fn get_block_by_hash(&self, hash: &H256) -> ChainResult<BlockInfo> { |
||||
let tendermint_hash = Hash::from_bytes(Algorithm::Sha256, hash.as_bytes()) |
||||
.expect("block hash should be of correct size"); |
||||
|
||||
let response = self |
||||
.rpc_client |
||||
.block_by_hash(tendermint_hash) |
||||
.await |
||||
.map_err(ChainCommunicationError::from_other)?; |
||||
|
||||
let received_hash = H256::from_slice(response.block_id.hash.as_bytes()); |
||||
|
||||
if &received_hash != hash { |
||||
return Err(ChainCommunicationError::from_other_str( |
||||
&format!("received incorrect block, expected hash: {hash:?}, received hash: {received_hash:?}") |
||||
)); |
||||
} |
||||
|
||||
let block = response.block.ok_or_else(|| { |
||||
ChainCommunicationError::from_other_str(&format!( |
||||
"empty block info for block: {:?}", |
||||
hash |
||||
)) |
||||
})?; |
||||
|
||||
let time: OffsetDateTime = block.header.time.into(); |
||||
|
||||
let block_info = BlockInfo { |
||||
hash: hash.to_owned(), |
||||
timestamp: time.unix_timestamp() as u64, |
||||
number: block.header.height.value(), |
||||
}; |
||||
|
||||
Ok(block_info) |
||||
} |
||||
|
||||
async fn get_txn_by_hash(&self, hash: &H256) -> ChainResult<TxnInfo> { |
||||
let tendermint_hash = Hash::from_bytes(Algorithm::Sha256, hash.as_bytes()) |
||||
.expect("transaction hash should be of correct size"); |
||||
|
||||
let response = self |
||||
.rpc_client |
||||
.tx(tendermint_hash, false) |
||||
.await |
||||
.map_err(Into::<HyperlaneCosmosError>::into)?; |
||||
|
||||
let received_hash = H256::from_slice(response.hash.as_bytes()); |
||||
|
||||
if &received_hash != hash { |
||||
return Err(ChainCommunicationError::from_other_str(&format!( |
||||
"received incorrect transaction, expected hash: {:?}, received hash: {:?}", |
||||
hash, received_hash, |
||||
))); |
||||
} |
||||
|
||||
let tx = Tx::from_bytes(&response.tx)?; |
||||
|
||||
let contract = Self::contract(&tx)?; |
||||
let (sender, nonce) = self.sender_and_nonce(&tx)?; |
||||
|
||||
// TODO support multiple denomination for amount
|
||||
let gas_limit = U256::from(tx.auth_info.fee.gas_limit); |
||||
let fee = tx |
||||
.auth_info |
||||
.fee |
||||
.amount |
||||
.iter() |
||||
.fold(U256::zero(), |acc, a| acc + a.amount); |
||||
|
||||
let gas_price = fee / gas_limit; |
||||
|
||||
let tx_info = TxnInfo { |
||||
hash: hash.to_owned(), |
||||
gas_limit: U256::from(response.tx_result.gas_wanted), |
||||
max_priority_fee_per_gas: None, |
||||
max_fee_per_gas: None, |
||||
gas_price: Some(gas_price), |
||||
nonce, |
||||
sender, |
||||
recipient: Some(contract), |
||||
receipt: Some(TxnReceiptInfo { |
||||
gas_used: U256::from(response.tx_result.gas_used), |
||||
cumulative_gas_used: U256::from(response.tx_result.gas_used), |
||||
effective_gas_price: Some(gas_price), |
||||
}), |
||||
}; |
||||
|
||||
Ok(tx_info) |
||||
} |
||||
|
||||
async fn is_contract(&self, address: &H256) -> ChainResult<bool> { |
||||
match self.grpc_client.wasm_contract_info().await { |
||||
Ok(c) => Ok(true), |
||||
Err(e) => Ok(false), |
||||
} |
||||
} |
||||
|
||||
async fn get_balance(&self, address: String) -> ChainResult<U256> { |
||||
Ok(self |
||||
.grpc_client |
||||
.get_balance(address, self.connection_conf.get_canonical_asset()) |
||||
.await?) |
||||
} |
||||
|
||||
async fn get_chain_metrics(&self) -> ChainResult<Option<ChainInfo>> { |
||||
Ok(None) |
||||
} |
||||
} |
@ -1,54 +0,0 @@ |
||||
use hyperlane_core::{config::OperationBatchConfig, U256}; |
||||
use url::Url; |
||||
|
||||
/// Ethereum RPC connection configuration
|
||||
#[derive(Debug, Clone)] |
||||
pub enum RpcConnectionConf { |
||||
/// An HTTP-only quorum.
|
||||
HttpQuorum { |
||||
/// List of urls to connect to
|
||||
urls: Vec<Url>, |
||||
}, |
||||
/// An HTTP-only fallback set.
|
||||
HttpFallback { |
||||
/// List of urls to connect to in order of priority
|
||||
urls: Vec<Url>, |
||||
}, |
||||
/// HTTP connection details
|
||||
Http { |
||||
/// Url to connect to
|
||||
url: Url, |
||||
}, |
||||
/// Websocket connection details
|
||||
Ws { |
||||
/// Url to connect to
|
||||
url: Url, |
||||
}, |
||||
} |
||||
|
||||
/// Ethereum connection configuration
|
||||
#[derive(Debug, Clone)] |
||||
pub struct ConnectionConf { |
||||
/// RPC connection configuration
|
||||
pub rpc_connection: RpcConnectionConf, |
||||
/// Transaction overrides to use when sending transactions.
|
||||
pub transaction_overrides: TransactionOverrides, |
||||
/// Operation batching configuration
|
||||
pub operation_batch: OperationBatchConfig, |
||||
} |
||||
|
||||
/// Ethereum transaction overrides.
|
||||
#[derive(Debug, Clone, Default)] |
||||
pub struct TransactionOverrides { |
||||
/// Gas price to use for transactions, in wei.
|
||||
/// If specified, non-1559 transactions will be used with this gas price.
|
||||
pub gas_price: Option<U256>, |
||||
/// Gas limit to use for transactions.
|
||||
/// If unspecified, the gas limit will be estimated.
|
||||
/// If specified, transactions will use `max(estimated_gas, gas_limit)`
|
||||
pub gas_limit: Option<U256>, |
||||
/// Max fee per gas to use for EIP-1559 transactions.
|
||||
pub max_fee_per_gas: Option<U256>, |
||||
/// Max priority fee per gas to use for EIP-1559 transactions.
|
||||
pub max_priority_fee_per_gas: Option<U256>, |
||||
} |
@ -1,530 +0,0 @@ |
||||
{ |
||||
"types": [ |
||||
{ |
||||
"typeId": 0, |
||||
"type": "()", |
||||
"components": [], |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 1, |
||||
"type": "(_, _)", |
||||
"components": [ |
||||
{ |
||||
"name": "__tuple_element", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "__tuple_element", |
||||
"type": 20, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 2, |
||||
"type": "b256", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 3, |
||||
"type": "bool", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 4, |
||||
"type": "enum Identity", |
||||
"components": [ |
||||
{ |
||||
"name": "Address", |
||||
"type": 14, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "ContractId", |
||||
"type": 15, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 5, |
||||
"type": "enum Option", |
||||
"components": [ |
||||
{ |
||||
"name": "None", |
||||
"type": 0, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "Some", |
||||
"type": 6, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"typeParameters": [ |
||||
6 |
||||
] |
||||
}, |
||||
{ |
||||
"typeId": 6, |
||||
"type": "generic T", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 7, |
||||
"type": "raw untyped ptr", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 8, |
||||
"type": "str[12]", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 9, |
||||
"type": "str[16]", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 10, |
||||
"type": "str[6]", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 11, |
||||
"type": "str[7]", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 12, |
||||
"type": "str[8]", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 13, |
||||
"type": "str[9]", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 14, |
||||
"type": "struct Address", |
||||
"components": [ |
||||
{ |
||||
"name": "value", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 15, |
||||
"type": "struct ContractId", |
||||
"components": [ |
||||
{ |
||||
"name": "value", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 16, |
||||
"type": "struct Message", |
||||
"components": [ |
||||
{ |
||||
"name": "version", |
||||
"type": 22, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "nonce", |
||||
"type": 20, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "origin", |
||||
"type": 20, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "sender", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "destination", |
||||
"type": 20, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "recipient", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "body", |
||||
"type": 19, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 22, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
} |
||||
], |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 17, |
||||
"type": "struct OwnershipTransferredEvent", |
||||
"components": [ |
||||
{ |
||||
"name": "previous_owner", |
||||
"type": 5, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 4, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
}, |
||||
{ |
||||
"name": "new_owner", |
||||
"type": 5, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 4, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
} |
||||
], |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 18, |
||||
"type": "struct RawVec", |
||||
"components": [ |
||||
{ |
||||
"name": "ptr", |
||||
"type": 7, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "cap", |
||||
"type": 21, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"typeParameters": [ |
||||
6 |
||||
] |
||||
}, |
||||
{ |
||||
"typeId": 19, |
||||
"type": "struct Vec", |
||||
"components": [ |
||||
{ |
||||
"name": "buf", |
||||
"type": 18, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 6, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
}, |
||||
{ |
||||
"name": "len", |
||||
"type": 21, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"typeParameters": [ |
||||
6 |
||||
] |
||||
}, |
||||
{ |
||||
"typeId": 20, |
||||
"type": "u32", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 21, |
||||
"type": "u64", |
||||
"components": null, |
||||
"typeParameters": null |
||||
}, |
||||
{ |
||||
"typeId": 22, |
||||
"type": "u8", |
||||
"components": null, |
||||
"typeParameters": null |
||||
} |
||||
], |
||||
"functions": [ |
||||
{ |
||||
"inputs": [], |
||||
"name": "count", |
||||
"output": { |
||||
"name": "", |
||||
"type": 20, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [ |
||||
{ |
||||
"name": "message_id", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"name": "delivered", |
||||
"output": { |
||||
"name": "", |
||||
"type": 3, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [ |
||||
{ |
||||
"name": "destination_domain", |
||||
"type": 20, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "recipient", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
}, |
||||
{ |
||||
"name": "message_body", |
||||
"type": 19, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 22, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
} |
||||
], |
||||
"name": "dispatch", |
||||
"output": { |
||||
"name": "", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [], |
||||
"name": "get_default_ism", |
||||
"output": { |
||||
"name": "", |
||||
"type": 15, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [], |
||||
"name": "latest_checkpoint", |
||||
"output": { |
||||
"name": "", |
||||
"type": 1, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [ |
||||
{ |
||||
"name": "metadata", |
||||
"type": 19, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 22, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
}, |
||||
{ |
||||
"name": "_message", |
||||
"type": 16, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"name": "process", |
||||
"output": { |
||||
"name": "", |
||||
"type": 0, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [], |
||||
"name": "root", |
||||
"output": { |
||||
"name": "", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [ |
||||
{ |
||||
"name": "module", |
||||
"type": 15, |
||||
"typeArguments": null |
||||
} |
||||
], |
||||
"name": "set_default_ism", |
||||
"output": { |
||||
"name": "", |
||||
"type": 0, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [], |
||||
"name": "owner", |
||||
"output": { |
||||
"name": "", |
||||
"type": 5, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 4, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
} |
||||
}, |
||||
{ |
||||
"inputs": [ |
||||
{ |
||||
"name": "new_owner", |
||||
"type": 5, |
||||
"typeArguments": [ |
||||
{ |
||||
"name": "", |
||||
"type": 4, |
||||
"typeArguments": null |
||||
} |
||||
] |
||||
} |
||||
], |
||||
"name": "transfer_ownership", |
||||
"output": { |
||||
"name": "", |
||||
"type": 0, |
||||
"typeArguments": null |
||||
} |
||||
} |
||||
], |
||||
"loggedTypes": [ |
||||
{ |
||||
"logId": 0, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 8, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 1, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 9, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 2, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 12, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 3, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 8, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 4, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 13, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 5, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 11, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 6, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 2, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 7, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 10, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 8, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 10, |
||||
"typeArguments": null |
||||
} |
||||
}, |
||||
{ |
||||
"logId": 9, |
||||
"loggedType": { |
||||
"name": "", |
||||
"type": 17, |
||||
"typeArguments": [] |
||||
} |
||||
} |
||||
], |
||||
"messagesTypes": [] |
||||
} |
@ -1,164 +0,0 @@ |
||||
use std::collections::HashMap; |
||||
use std::fmt::{Debug, Formatter}; |
||||
use std::num::NonZeroU64; |
||||
use std::ops::RangeInclusive; |
||||
|
||||
use async_trait::async_trait; |
||||
use fuels::prelude::{Bech32ContractId, WalletUnlocked}; |
||||
use hyperlane_core::Indexed; |
||||
use tracing::instrument; |
||||
|
||||
use hyperlane_core::{ |
||||
utils::bytes_to_hex, ChainCommunicationError, ChainResult, ContractLocator, HyperlaneAbi, |
||||
HyperlaneChain, HyperlaneContract, HyperlaneDomain, HyperlaneMessage, HyperlaneProvider, |
||||
Indexer, LogMeta, Mailbox, TxCostEstimate, TxOutcome, H256, U256, |
||||
}; |
||||
|
||||
use crate::{ |
||||
contracts::mailbox::Mailbox as FuelMailboxInner, conversions::*, make_provider, ConnectionConf, |
||||
}; |
||||
|
||||
/// A reference to a Mailbox contract on some Fuel chain
|
||||
pub struct FuelMailbox { |
||||
contract: FuelMailboxInner, |
||||
domain: HyperlaneDomain, |
||||
} |
||||
|
||||
impl FuelMailbox { |
||||
/// Create a new fuel mailbox
|
||||
pub fn new( |
||||
conf: &ConnectionConf, |
||||
locator: ContractLocator, |
||||
mut wallet: WalletUnlocked, |
||||
) -> ChainResult<Self> { |
||||
let provider = make_provider(conf)?; |
||||
wallet.set_provider(provider); |
||||
let address = Bech32ContractId::from_h256(&locator.address); |
||||
|
||||
Ok(FuelMailbox { |
||||
contract: FuelMailboxInner::new(address, wallet), |
||||
domain: locator.domain.clone(), |
||||
}) |
||||
} |
||||
} |
||||
|
||||
impl HyperlaneContract for FuelMailbox { |
||||
fn address(&self) -> H256 { |
||||
self.contract.contract_id().into_h256() |
||||
} |
||||
} |
||||
|
||||
impl HyperlaneChain for FuelMailbox { |
||||
fn domain(&self) -> &HyperlaneDomain { |
||||
&self.domain |
||||
} |
||||
|
||||
fn provider(&self) -> Box<dyn HyperlaneProvider> { |
||||
todo!() |
||||
} |
||||
} |
||||
|
||||
impl Debug for FuelMailbox { |
||||
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result { |
||||
write!(f, "{:?}", self as &dyn HyperlaneContract) |
||||
} |
||||
} |
||||
|
||||
#[async_trait] |
||||
impl Mailbox for FuelMailbox { |
||||
#[instrument(level = "debug", err, ret, skip(self))] |
||||
async fn count(&self, lag: Option<NonZeroU64>) -> ChainResult<u32> { |
||||
assert!( |
||||
lag.is_none(), |
||||
"Fuel does not support querying point-in-time" |
||||
); |
||||
self.contract |
||||
.methods() |
||||
.count() |
||||
.simulate() |
||||
.await |
||||
.map(|r| r.value) |
||||
.map_err(ChainCommunicationError::from_other) |
||||
} |
||||
|
||||
#[instrument(level = "debug", err, ret, skip(self))] |
||||
async fn delivered(&self, id: H256) -> ChainResult<bool> { |
||||
todo!() |
||||
} |
||||
|
||||
#[instrument(err, ret, skip(self))] |
||||
async fn default_ism(&self) -> ChainResult<H256> { |
||||
todo!() |
||||
} |
||||
|
||||
#[instrument(err, ret, skip(self))] |
||||
async fn recipient_ism(&self, recipient: H256) -> ChainResult<H256> { |
||||
todo!() |
||||
} |
||||
|
||||
#[instrument(err, ret, skip(self))] |
||||
async fn process( |
||||
&self, |
||||
message: &HyperlaneMessage, |
||||
metadata: &[u8], |
||||
tx_gas_limit: Option<U256>, |
||||
) -> ChainResult<TxOutcome> { |
||||
todo!() |
||||
} |
||||
|
||||
#[instrument(err, ret, skip(self), fields(msg=%message, metadata=%bytes_to_hex(metadata)))] |
||||
async fn process_estimate_costs( |
||||
&self, |
||||
message: &HyperlaneMessage, |
||||
metadata: &[u8], |
||||
) -> ChainResult<TxCostEstimate> { |
||||
todo!() |
||||
} |
||||
|
||||
fn process_calldata(&self, message: &HyperlaneMessage, metadata: &[u8]) -> Vec<u8> { |
||||
todo!() |
||||
} |
||||
} |
||||
|
||||
/// Struct that retrieves event data for a Fuel Mailbox contract
|
||||
#[derive(Debug)] |
||||
pub struct FuelMailboxIndexer {} |
||||
|
||||
#[async_trait] |
||||
impl Indexer<HyperlaneMessage> for FuelMailboxIndexer { |
||||
async fn fetch_logs_in_range( |
||||
&self, |
||||
range: RangeInclusive<u32>, |
||||
) -> ChainResult<Vec<(Indexed<HyperlaneMessage>, LogMeta)>> { |
||||
todo!() |
||||
} |
||||
|
||||
async fn get_finalized_block_number(&self) -> ChainResult<u32> { |
||||
todo!() |
||||
} |
||||
} |
||||
|
||||
#[async_trait] |
||||
impl Indexer<H256> for FuelMailboxIndexer { |
||||
async fn fetch_logs_in_range( |
||||
&self, |
||||
range: RangeInclusive<u32>, |
||||
) -> ChainResult<Vec<(Indexed<H256>, LogMeta)>> { |
||||
todo!() |
||||
} |
||||
|
||||
async fn get_finalized_block_number(&self) -> ChainResult<u32> { |
||||
todo!() |
||||
} |
||||
} |
||||
|
||||
struct FuelMailboxAbi; |
||||
|
||||
impl HyperlaneAbi for FuelMailboxAbi { |
||||
const SELECTOR_SIZE_BYTES: usize = 8; |
||||
|
||||
fn fn_map() -> HashMap<Vec<u8>, &'static str> { |
||||
// Can't support this without Fuels exporting it in the generated code
|
||||
todo!() |
||||
} |
||||
} |
@ -1,43 +0,0 @@
use async_trait::async_trait;

use hyperlane_core::{
    BlockInfo, ChainInfo, ChainResult, HyperlaneChain, HyperlaneDomain, HyperlaneProvider, TxnInfo,
    H256, U256,
};

/// A wrapper around a fuel provider to get generic blockchain information.
#[derive(Debug)]
pub struct FuelProvider {}

impl HyperlaneChain for FuelProvider {
    fn domain(&self) -> &HyperlaneDomain {
        todo!()
    }

    fn provider(&self) -> Box<dyn HyperlaneProvider> {
        todo!()
    }
}

#[async_trait]
impl HyperlaneProvider for FuelProvider {
    async fn get_block_by_hash(&self, hash: &H256) -> ChainResult<BlockInfo> {
        todo!()
    }

    async fn get_txn_by_hash(&self, hash: &H256) -> ChainResult<TxnInfo> {
        todo!()
    }

    async fn is_contract(&self, address: &H256) -> ChainResult<bool> {
        todo!()
    }

    async fn get_balance(&self, address: String) -> ChainResult<U256> {
        todo!()
    }

    async fn get_chain_metrics(&self) -> ChainResult<Option<ChainInfo>> {
        Ok(None)
    }
}
@ -1,35 +0,0 @@
cargo-features = ["workspace-inheritance"]

[package]
name = "hyperlane-sealevel"
version = "0.1.0"
edition = "2021"

[dependencies]
anyhow.workspace = true
async-trait.workspace = true
base64.workspace = true
borsh.workspace = true
derive-new.workspace = true
jsonrpc-core.workspace = true
num-traits.workspace = true
serde.workspace = true
solana-account-decoder.workspace = true
solana-client.workspace = true
solana-sdk.workspace = true
solana-transaction-status.workspace = true
thiserror.workspace = true
tracing-futures.workspace = true
tracing.workspace = true
url.workspace = true

account-utils = { path = "../../sealevel/libraries/account-utils" }
hyperlane-core = { path = "../../hyperlane-core", features = ["solana", "async"] }
hyperlane-sealevel-interchain-security-module-interface = { path = "../../sealevel/libraries/interchain-security-module-interface" }
hyperlane-sealevel-mailbox = { path = "../../sealevel/programs/mailbox", features = ["no-entrypoint"] }
hyperlane-sealevel-igp = { path = "../../sealevel/programs/hyperlane-sealevel-igp", features = ["no-entrypoint"] }
hyperlane-sealevel-message-recipient-interface = { path = "../../sealevel/libraries/message-recipient-interface" }
hyperlane-sealevel-multisig-ism-message-id = { path = "../../sealevel/programs/ism/multisig-ism-message-id", features = ["no-entrypoint"] }
hyperlane-sealevel-validator-announce = { path = "../../sealevel/programs/validator-announce", features = ["no-entrypoint"] }
multisig-ism = { path = "../../sealevel/libraries/multisig-ism" }
serializable-account-meta = { path = "../../sealevel/libraries/serializable-account-meta" }
@ -1,29 +0,0 @@
use solana_client::nonblocking::rpc_client::RpcClient;
use solana_sdk::commitment_config::CommitmentConfig;

/// Kludge to implement Debug for RpcClient.
pub struct RpcClientWithDebug(RpcClient);

impl RpcClientWithDebug {
    pub fn new(rpc_endpoint: String) -> Self {
        Self(RpcClient::new(rpc_endpoint))
    }

    pub fn new_with_commitment(rpc_endpoint: String, commitment: CommitmentConfig) -> Self {
        Self(RpcClient::new_with_commitment(rpc_endpoint, commitment))
    }
}

impl std::fmt::Debug for RpcClientWithDebug {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str("RpcClient { ... }")
    }
}

impl std::ops::Deref for RpcClientWithDebug {
    type Target = RpcClient;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}
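Because `RpcClientWithDebug` derefs to `RpcClient`, the wrapped client's methods can be called directly while the wrapper stays usable inside `#[derive(Debug)]` structs. A minimal usage sketch follows; the `fetch_height` helper and the endpoint URL are illustrative assumptions, not part of this change.

use solana_sdk::commitment_config::CommitmentConfig;

use crate::client::RpcClientWithDebug;

// Hypothetical helper: the endpoint URL is only an example.
async fn fetch_height() -> Result<u64, Box<dyn std::error::Error>> {
    let rpc = RpcClientWithDebug::new_with_commitment(
        "https://api.mainnet-beta.solana.com".to_string(),
        CommitmentConfig::finalized(),
    );
    // Deref lets RpcClient methods be called on the wrapper directly,
    // while the wrapper itself keeps a readable Debug representation.
    let height = rpc.get_block_height().await?;
    println!("{rpc:?} is at block height {height}");
    Ok(height)
}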
@ -1,23 +0,0 @@
use hyperlane_core::ChainCommunicationError;
use solana_client::client_error::ClientError;
use solana_sdk::pubkey::ParsePubkeyError;

/// Errors specific to the hyperlane-sealevel implementation.
/// These errors can be converted into the broader error type
/// in hyperlane-core via the `From` trait impl.
#[derive(Debug, thiserror::Error)]
pub enum HyperlaneSealevelError {
    /// ParsePubkeyError error
    #[error("{0}")]
    ParsePubkeyError(#[from] ParsePubkeyError),
    /// ClientError error
    #[error("{0}")]
    ClientError(#[from] ClientError),
}

impl From<HyperlaneSealevelError> for ChainCommunicationError {
    fn from(value: HyperlaneSealevelError) -> Self {
        ChainCommunicationError::from_other(value)
    }
}
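Given the `From` impl above, callers can use `?` to widen these Sealevel-specific errors into `ChainCommunicationError`. A small sketch of that pattern; the `parse_recipient` helper is hypothetical and not part of this diff.

use std::str::FromStr;

use hyperlane_core::ChainResult;
use solana_sdk::pubkey::Pubkey;

use crate::error::HyperlaneSealevelError;

// Hypothetical helper: a parse failure becomes HyperlaneSealevelError::ParsePubkeyError,
// and `?` converts it into ChainCommunicationError via the From impl above.
fn parse_recipient(address: &str) -> ChainResult<Pubkey> {
    let pubkey = Pubkey::from_str(address).map_err(HyperlaneSealevelError::from)?;
    Ok(pubkey)
}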
@ -1,84 +0,0 @@
use std::{str::FromStr, sync::Arc};

use async_trait::async_trait;

use hyperlane_core::{
    BlockInfo, ChainInfo, ChainResult, HyperlaneChain, HyperlaneDomain, HyperlaneProvider, TxnInfo,
    H256, U256,
};
use solana_sdk::{commitment_config::CommitmentConfig, pubkey::Pubkey};

use crate::{client::RpcClientWithDebug, error::HyperlaneSealevelError, ConnectionConf};

/// A wrapper around a Sealevel provider to get generic blockchain information.
#[derive(Debug)]
pub struct SealevelProvider {
    domain: HyperlaneDomain,
    rpc_client: Arc<RpcClientWithDebug>,
}

impl SealevelProvider {
    /// Create a new Sealevel provider.
    pub fn new(domain: HyperlaneDomain, conf: &ConnectionConf) -> Self {
        // Set the `processed` commitment at rpc level
        let rpc_client = Arc::new(RpcClientWithDebug::new_with_commitment(
            conf.url.to_string(),
            CommitmentConfig::processed(),
        ));

        SealevelProvider { domain, rpc_client }
    }

    /// Get an rpc client
    pub fn rpc(&self) -> &RpcClientWithDebug {
        &self.rpc_client
    }

    /// Get the balance of an address
    pub async fn get_balance(&self, address: String) -> ChainResult<U256> {
        let pubkey = Pubkey::from_str(&address).map_err(Into::<HyperlaneSealevelError>::into)?;
        let balance = self
            .rpc_client
            .get_balance(&pubkey)
            .await
            .map_err(Into::<HyperlaneSealevelError>::into)?;
        Ok(balance.into())
    }
}

impl HyperlaneChain for SealevelProvider {
    fn domain(&self) -> &HyperlaneDomain {
        &self.domain
    }

    fn provider(&self) -> Box<dyn HyperlaneProvider> {
        Box::new(SealevelProvider {
            domain: self.domain.clone(),
            rpc_client: self.rpc_client.clone(),
        })
    }
}

#[async_trait]
impl HyperlaneProvider for SealevelProvider {
    async fn get_block_by_hash(&self, _hash: &H256) -> ChainResult<BlockInfo> {
        todo!() // FIXME
    }

    async fn get_txn_by_hash(&self, _hash: &H256) -> ChainResult<TxnInfo> {
        todo!() // FIXME
    }

    async fn is_contract(&self, _address: &H256) -> ChainResult<bool> {
        // FIXME
        Ok(true)
    }

    async fn get_balance(&self, address: String) -> ChainResult<U256> {
        self.get_balance(address).await
    }

    async fn get_chain_metrics(&self) -> ChainResult<Option<ChainInfo>> {
        Ok(None)
    }
}
@ -1,93 +0,0 @@
use base64::Engine;
use borsh::{BorshDeserialize, BorshSerialize};
use hyperlane_core::{ChainCommunicationError, ChainResult};

use serializable_account_meta::{SerializableAccountMeta, SimulationReturnData};
use solana_client::nonblocking::rpc_client::RpcClient;
use solana_sdk::{
    commitment_config::CommitmentConfig,
    instruction::{AccountMeta, Instruction},
    message::Message,
    signature::{Keypair, Signer},
    transaction::Transaction,
};
use solana_transaction_status::UiReturnDataEncoding;

use crate::client::RpcClientWithDebug;

/// Simulates an instruction and attempts to deserialize its return data into a `T`.
/// If no return data at all was returned, returns Ok(None).
/// If some return data was returned but deserialization was unsuccessful,
/// an Err is returned.
pub async fn simulate_instruction<T: BorshDeserialize + BorshSerialize>(
    rpc_client: &RpcClient,
    payer: &Keypair,
    instruction: Instruction,
) -> ChainResult<Option<T>> {
    let commitment = CommitmentConfig::finalized();
    let (recent_blockhash, _) = rpc_client
        .get_latest_blockhash_with_commitment(commitment)
        .await
        .map_err(ChainCommunicationError::from_other)?;
    let return_data = rpc_client
        .simulate_transaction(&Transaction::new_unsigned(Message::new_with_blockhash(
            &[instruction],
            Some(&payer.pubkey()),
            &recent_blockhash,
        )))
        .await
        .map_err(ChainCommunicationError::from_other)?
        .value
        .return_data;

    if let Some(return_data) = return_data {
        let bytes = match return_data.data.1 {
            UiReturnDataEncoding::Base64 => base64::engine::general_purpose::STANDARD
                .decode(return_data.data.0)
                .map_err(ChainCommunicationError::from_other)?,
        };

        let decoded_data =
            T::try_from_slice(bytes.as_slice()).map_err(ChainCommunicationError::from_other)?;

        return Ok(Some(decoded_data));
    }

    Ok(None)
}

/// Simulates an Instruction that will return a list of AccountMetas.
pub async fn get_account_metas(
    rpc_client: &RpcClient,
    payer: &Keypair,
    instruction: Instruction,
) -> ChainResult<Vec<AccountMeta>> {
    // If there's no data at all, default to an empty vec.
    let account_metas = simulate_instruction::<SimulationReturnData<Vec<SerializableAccountMeta>>>(
        rpc_client,
        payer,
        instruction,
    )
    .await?
    .map(|serializable_account_metas| {
        serializable_account_metas
            .return_data
            .into_iter()
            .map(|serializable_account_meta| serializable_account_meta.into())
            .collect()
    })
    .unwrap_or_else(Vec::new);

    Ok(account_metas)
}

pub async fn get_finalized_block_number(rpc_client: &RpcClientWithDebug) -> ChainResult<u32> {
    let height = rpc_client
        .get_block_height()
        .await
        .map_err(ChainCommunicationError::from_other)?
        .try_into()
        // FIXME solana block height is u64...
        .expect("sealevel block height exceeds u32::MAX");
    Ok(height)
}
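As the doc comments above describe, these helpers simulate an instruction and decode its Borsh-encoded return data. A sketch of how a caller might use `get_account_metas`; the `derive_account_metas` helper, the program id, the instruction data, and the empty account list are placeholders, and the `crate::utils` module path is an assumption.

use hyperlane_core::ChainResult;
use solana_client::nonblocking::rpc_client::RpcClient;
use solana_sdk::{instruction::Instruction, pubkey::Pubkey, signature::Keypair};

use crate::utils::get_account_metas;

// Hypothetical: ask a program which accounts a later call will need by simulating
// an instruction whose return data is a list of SerializableAccountMetas.
async fn derive_account_metas(
    rpc_client: &RpcClient,
    payer: &Keypair,
    program_id: Pubkey,
    instruction_data: Vec<u8>,
) -> ChainResult<usize> {
    // No accounts are passed in; the simulated program reports what it needs.
    let instruction = Instruction::new_with_bytes(program_id, &instruction_data, vec![]);
    let metas = get_account_metas(rpc_client, payer, instruction).await?;
    Ok(metas.len())
}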
File diff suppressed because it is too large
@ -1,2 +0,0 @@
pub use rocks::*;
mod rocks;
File diff suppressed because it is too large
@ -0,0 +1,329 @@
[workspace]
members = [
    "agents/relayer",
    "agents/scraper",
    "agents/validator",
    "chains/hyperlane-cosmos",
    "chains/hyperlane-ethereum",
    "chains/hyperlane-fuel",
    "chains/hyperlane-sealevel",
    "ethers-prometheus",
    "hyperlane-base",
    "hyperlane-core",
    "hyperlane-test",
    "utils/abigen",
    "utils/backtrace-oneline",
    "utils/crypto",
    "utils/hex",
    "utils/run-locally",
]

[workspace.package]
documentation = "https://docs.hyperlane.xyz"
edition = "2021"
homepage = "https://hyperlane.xyz"
license-file = "../LICENSE.md"
publish = false
version = "0.1.0"

[workspace.dependencies]
Inflector = "0.11.4"
anyhow = "1.0"
async-trait = "0.1"
async-rwlock = "1.3"
auto_impl = "1.0"
axum = "0.6.1"
backtrace = "0.3"
base64 = "0.21.2"
bigdecimal = "0.4.2"
bincode = "1.3"
borsh = "0.9"
bs58 = "0.5.0"
bytes = "1"
clap = "4"
chrono = "*"
color-eyre = "0.6"
config = "0.13.3"
console-subscriber = "0.2.0"
convert_case = "0.6"
cosmrs = { version = "0.14", default-features = false, features = [
    "cosmwasm",
    "rpc",
    "tokio",
    "grpc",
] }
cosmwasm-std = "*"
crunchy = "0.2"
ctrlc = "3.2"
curve25519-dalek = { version = "~3.2", features = ["serde"] }
derive-new = "0.5"
derive_builder = "0.12"
derive_more = "0.99"
dhat = "0.3.3"
ed25519-dalek = "~1.0"
eyre = "=0.6.8"
fixed-hash = "0.8.0"
fuels = "0.65.0"
fuels-code-gen = "0.65.0"
futures = "0.3"
futures-util = "0.3"
generic-array = { version = "0.14", features = ["serde", "more_lengths"] }
# Required for WASM support https://docs.rs/getrandom/latest/getrandom/#webassembly-support
bech32 = "0.9.1"
elliptic-curve = "0.13.8"
getrandom = { version = "0.2", features = ["js"] }
hex = "0.4.3"
http = "0.2.12"
hyper = "0.14"
hyper-tls = "0.5.0"
hyperlane-cosmwasm-interface = "=0.0.6-rc6"
injective-protobuf = "0.2.2"
injective-std = "=0.1.5"
itertools = "*"
jobserver = "=0.1.26"
jsonrpc-core = "18.0"
k256 = { version = "0.13.4", features = ["arithmetic", "std", "ecdsa"] }
log = "0.4"
macro_rules_attribute = "0.2"
maplit = "1.0"
mockall = "0.11"
nix = { version = "0.26", default-features = false }
num = "0.4"
num-bigint = "0.4"
num-derive = "0.4.0"
num-traits = "0.2"
once_cell = "1.18.0"
parking_lot = "0.12"
paste = "1.0"
pretty_env_logger = "0.5.0"
primitive-types = "=0.12.1"
prometheus = "0.13"
protobuf = "*"
rand = "0.8.5"
regex = "1.5"
reqwest = "0.11"
ripemd = "0.1.3"
rlp = "=0.5.2"
rocksdb = "0.21.0"
sea-orm = { version = "0.11.1", features = [
    "sqlx-postgres",
    "runtime-tokio-native-tls",
    "with-bigdecimal",
    "with-time",
    "macros",
] }
sea-orm-migration = { version = "0.11.1", features = [
    "sqlx-postgres",
    "runtime-tokio-native-tls",
] }
semver = "1.0"
serde = { version = "1.0", features = ["derive"] }
serde_bytes = "0.11"
serde_derive = "1.0"
serde_json = "1.0"
sha2 = { version = "0.10.6", default-features = false }
sha256 = "1.1.4"
sha3 = "0.10"
solana-account-decoder = "=1.14.13"
solana-banks-client = "=1.14.13"
solana-banks-interface = "=1.14.13"
solana-banks-server = "=1.14.13"
solana-clap-utils = "=1.14.13"
solana-cli-config = "=1.14.13"
solana-client = "=1.14.13"
solana-program = "=1.14.13"
solana-program-test = "=1.14.13"
solana-sdk = "=1.14.13"
solana-transaction-status = "=1.14.13"
solana-zk-token-sdk = "=1.14.13"
spl-associated-token-account = { version = "=1.1.2", features = [
    "no-entrypoint",
] }
spl-noop = { version = "=0.1.3", features = ["no-entrypoint"] }
spl-token = { version = "=3.5.0", features = ["no-entrypoint"] }
spl-token-2022 = { version = "=0.5.0", features = ["no-entrypoint"] }
spl-type-length-value = "=0.1.0"
static_assertions = "1.1"
strum = "0.26.2"
strum_macros = "0.26.2"
tempfile = "3.3"
tendermint = "0.32.2"
tendermint-rpc = { version = "0.32.0", features = ["http-client", "tokio"] }
thiserror = "1.0"
time = "0.3"
tiny-keccak = "2.0.2"
tokio = { version = "1.4", features = ["parking_lot", "tracing"] }
tokio-metrics = { version = "0.3.1", default-features = false }
tokio-test = "0.4"
toml_edit = "0.19.14"
tonic = "0.9.2"
tracing = { version = "0.1" }
tracing-error = "0.2"
tracing-futures = "0.2"
tracing-subscriber = { version = "0.3", default-features = false }
tracing-test = "0.2.2"
typetag = "0.2"
uint = "0.9.5"
ureq = { version = "2.4", default-features = false }
url = "2.3"
walkdir = "2"
warp = "0.3"
which = "4.3"
ya-gcp = { version = "0.11.3", features = ["storage"] }

## TODO: remove this
cosmwasm-schema = "1.2.7"

[profile.release.package.access-control]
overflow-checks = true

[profile.release.package.account-utils]
overflow-checks = true

[profile.release.package.ecdsa-signature]
overflow-checks = true

[profile.release.package.hyperlane-sealevel-interchain-security-module-interface]
overflow-checks = true

[profile.release.package.hyperlane-sealevel-message-recipient-interface]
overflow-checks = true

[profile.release.package.multisig-ism]
overflow-checks = true

[profile.release.package.serializable-account-meta]
overflow-checks = true

[profile.release.package.hyperlane-sealevel-mailbox]
overflow-checks = true

[profile.release.package.hyperlane-sealevel-igp]
overflow-checks = true

[profile.release.package.hyperlane-sealevel-multisig-ism-message-id]
overflow-checks = true

[profile.release.package.hyperlane-sealevel-validator-announce]
overflow-checks = true

[workspace.dependencies.ethers]
features = []
git = "https://github.com/hyperlane-xyz/ethers-rs"
tag = "2024-04-25"

[workspace.dependencies.ethers-contract]
features = ["legacy"]
git = "https://github.com/hyperlane-xyz/ethers-rs"
tag = "2024-04-25"

[workspace.dependencies.ethers-core]
features = []
git = "https://github.com/hyperlane-xyz/ethers-rs"
tag = "2024-04-25"

[workspace.dependencies.ethers-providers]
features = []
git = "https://github.com/hyperlane-xyz/ethers-rs"
tag = "2024-04-25"

[workspace.dependencies.ethers-signers]
features = ["aws"]
git = "https://github.com/hyperlane-xyz/ethers-rs"
tag = "2024-04-25"

[patch.crates-io.curve25519-dalek]
branch = "v3.2.2-relax-zeroize"
git = "https://github.com/Eclipse-Laboratories-Inc/curve25519-dalek"
version = "3.2.2"

[patch.crates-io.ed25519-dalek]
branch = "main"
git = "https://github.com/Eclipse-Laboratories-Inc/ed25519-dalek"
version = "1.0.1"

[patch.crates-io.primitive-types]
branch = "hyperlane"
git = "https://github.com/hyperlane-xyz/parity-common.git"
version = "=0.12.1"

[patch.crates-io.rlp]
branch = "hyperlane"
git = "https://github.com/hyperlane-xyz/parity-common.git"
version = "=0.5.2"

[patch.crates-io.solana-account-decoder]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.solana-clap-utils]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.solana-cli-config]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.solana-client]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.solana-program]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.solana-sdk]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.solana-transaction-status]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.solana-zk-token-sdk]
git = "https://github.com/hyperlane-xyz/solana.git"
tag = "hyperlane-1.14.13-2023-07-04"
version = "=1.14.13"

[patch.crates-io.spl-associated-token-account]
branch = "hyperlane"
git = "https://github.com/hyperlane-xyz/solana-program-library.git"
version = "=1.1.2"

[patch.crates-io.spl-noop]
branch = "hyperlane"
git = "https://github.com/hyperlane-xyz/solana-program-library.git"
version = "=0.1.3"

[patch.crates-io.spl-token]
branch = "hyperlane"
git = "https://github.com/hyperlane-xyz/solana-program-library.git"
version = "=3.5.0"

[patch.crates-io.spl-token-2022]
branch = "hyperlane"
git = "https://github.com/hyperlane-xyz/solana-program-library.git"
version = "=0.5.0"

[patch.crates-io.spl-type-length-value]
version = "=0.1.0"
git = "https://github.com/hyperlane-xyz/solana-program-library.git"
branch = "hyperlane"

[patch.crates-io.tendermint]
branch = "trevor/0.32.2-fork"
git = "https://github.com/hyperlane-xyz/tendermint-rs.git"
version = "=0.32.2"

[patch.crates-io.tendermint-rpc]
branch = "trevor/0.32.2-fork"
git = "https://github.com/hyperlane-xyz/tendermint-rs.git"
version = "=0.32.2"
@ -1,3 +1,5 @@
#![allow(clippy::blocks_in_conditions)] // TODO: `rustc` 1.80.1 clippy issue

use async_trait::async_trait;
use derive_more::Deref;
use derive_new::new;
@ -1,3 +1,5 @@
#![allow(clippy::doc_lazy_continuation)] // TODO: `rustc` 1.80.1 clippy issue

//! Processor scans DB for new messages and wraps relevant messages as a
//! `PendingOperation` and then sends it over a channel to a submitter for
//! delivery.
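The module doc above describes the processor's scan-wrap-send flow. A heavily simplified sketch of that shape follows; `QueuedMessage`, `PendingOp`, and the `process` function are stand-ins, not the relayer's real types.

use tokio::sync::mpsc;

// Stand-in types; the real relayer uses its own message and operation types.
#[derive(Debug)]
struct QueuedMessage {
    id: u64,
}

#[derive(Debug)]
struct PendingOp {
    message: QueuedMessage,
}

// Scan incoming messages, wrap each one, and hand it to the submitter over a channel.
async fn process(mut incoming: mpsc::Receiver<QueuedMessage>, submitter: mpsc::Sender<PendingOp>) {
    while let Some(message) = incoming.recv().await {
        if submitter.send(PendingOp { message }).await.is_err() {
            // Submitter has shut down; stop processing.
            break;
        }
    }
}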
Some files were not shown because too many files have changed in this diff