Existing deployment support in Router tooling, small ISM factory fixes (#2392)
### Description

To accommodate warp routes that exist on EVM chains *and* other execution environments, we need the ability to enroll remote routers on EVM-based routers that weren't deployed by the router deployer. This adds a new optional `existingDeployment` property to the `RouterConfig`: when set for a chain, no deployment is attempted to that chain, but the address is enrolled as a remote router on all the contracts that *are* deployed. (A config sketch is included at the end of this description.)

The general flow looks something like:

* Deploy the warp route contracts on all EVM chains, enrolling each one as a router on the others
* Deploy the warp route contracts on the alt execution environment chains, also configuring the existing EVM contract deployments as `existingDeployment`s so that they are enrolled as routers on the alt execution environment contracts
* Now equipped with *all* the router addresses, configure any new non-EVM router addresses as `existingDeployment`s, which enrolls those routers on the EVM chains

Note there's still a lot left to be desired even with this change -- e.g. much of the tooling expects the MultiProvider to include metadata (and even providers) for each and every chain. Because the MultiProvider is of course EVM-only, things get a bit weird. For now, I've been able to deploy warp routes with configs relating to a Sealevel chain successfully by just running a local Anvil instance and setting the RPC URL of the Sealevel config to that Anvil instance. We'll want to readdress this at some point, but for now I'd prefer to have a flow that just works, and we can be more intentional about bigger deploy / tooling changes to support alt execution environments in the future.

This also includes some small changes to the ISM factory: because we weren't actually waiting for the `enrollValidators` tx to succeed before trying `setThreshold`, the gas estimation of `setThreshold` would fail because the threshold of 1 exceeded the current on-chain validator set size of 0. It also sometimes resulted in nonce contention. (A sketch of the fix is included below as well.)

I was able to get everything in hyperlane-deploy working locally by having its dependencies be local file paths.

The order of operations, as I understand it:

1. Merge this
2. Ship a new SDK version
3. Update hyperlane-token to use the new SDK version
4. Ship a new hyperlane-token version
5. Update hyperlane-deploy to use the new SDK version and the new hyperlane-token version

### Drive-by changes

None

### Related issues

- Partially addresses https://github.com/hyperlane-xyz/hyperlane-monorepo/issues/2366

### Backward compatibility

_Are these changes backward compatible?_

Yes

_Are there any infrastructure implications, e.g. changes that would prohibit deploying older commits using this infra tooling?_

None

### Testing

_What kind of testing have these changes undergone?_

Unit tests; ran the deploy tooling
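For illustration, here is a minimal sketch of how a warp route config might use the new property. Only `existingDeployment` is the field added by this PR; the other field names, chain names, and addresses are placeholders, not the SDK's exact types:

```ts
// Sketch of the new config shape. Only `existingDeployment` comes from
// this PR; the other fields and the addresses below are illustrative.
interface RouterConfig {
  owner: string;
  mailbox: string;
  // New: skip deployment to this chain, but enroll this address as a
  // remote router on every chain that *is* deployed to.
  existingDeployment?: string;
}

const warpRouteConfig: Record<string, RouterConfig> = {
  // EVM chain: contracts are deployed by the tooling as usual.
  ethereum: {
    owner: '0x0000000000000000000000000000000000000001',
    mailbox: '0x0000000000000000000000000000000000000002',
  },
  // Sealevel chain: the router was deployed out-of-band, so it is only
  // enrolled as a remote router; no deployment is attempted here.
  solana: {
    owner: '0x0000000000000000000000000000000000000001',
    mailbox: '0x0000000000000000000000000000000000000002',
    existingDeployment:
      '0x0000000000000000000000000000000000000000000000000000000000001234',
  },
};
```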
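And a sketch of the ISM factory ordering fix, assuming ethers-style contract bindings; the contract instance and argument shapes are simplified for illustration:

```ts
import { ethers } from 'ethers';

// Apply a multisig ISM config in the order the fix enforces. `ism` is
// assumed to be an ethers Contract exposing enrollValidators and
// setThreshold, as named in the description above.
async function configureMultisigIsm(
  ism: ethers.Contract,
  validators: string[],
  threshold: number,
): Promise<void> {
  const enrollTx = await ism.enrollValidators(validators);
  // Previously missing: without waiting for the enrollment tx to be
  // mined, gas estimation for setThreshold ran against an empty
  // validator set (threshold 1 > set size 0) and failed, and sending
  // both txs back-to-back could also contend for the same nonce.
  await enrollTx.wait();
  const thresholdTx = await ism.setThreshold(threshold);
  await thresholdTx.wait();
}
```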