[Docs] docs migration from Notion (#663)

* failure cases doc

* added watcher collusion case

* added xapp developing doc

* added (outdated) token bridge docs

* added faq

* added optics root doc

* added versioning doc

* added failure cases doc

* added architecture doc

* added governance doc

* added agent operations doc

* added contributing doc

* more edits and reorg of content
Conner Swann 3 years ago committed by GitHub
parent 31c7c16dcb
commit 88ef164908
1. docs/agents/agent-operations.md (+205)
2. docs/architecture.md (+84)
3. docs/contributing.md (+56)
4. docs/failure-cases.md (+97)
5. docs/faq.md (+111)
6. docs/governance/cross-chain-governance.md (+170)
7. docs/optics.md (+43)
8. docs/versioning.md (+12)
9. docs/xapps/developing.md (+83)
10. docs/xapps/token-bridge.md (+84)
11. rust/README.md (+145)

@ -0,0 +1,205 @@
## Deployment Environments
There will exist several logical deployments of Optics to enable us to test new code/logic before releasing it to Mainnet. Each environment encompasses the various Home/Replica contracts deployed to many blockchains, as well as the agent deployments and their associated account secrets.
The environments have various purposes and can be described as follows:
### Development
Purpose: Allows us to test changes to contracts and agents. *Bugs should be found here.*
- Deployed against testnets
- Agent Accounts: HexKeys stored in a secret manager for ease of rotation/access
- Agent Infrastructure: The Optics core team will operate agent infrastructure for this environment.
- Node Infrastructure: Forno/Infura
- Agent Deployments: Automatic, continuous deployment
- Contract Deployments: Automatic, with human intervention required for updating the **upgradeBeacon**.
### Staging
Purpose: Allows us to test the full-stack integration, specifically surrounding the KMS access control and federated secret management. *Issues with process should be found here.*
- Deployed against testnets, mirrors Mainnet deployment.
- Agent Accounts: KMS-provisioned keys
- Agent Infrastructure: Agent operations will be decentralized
- Node Infrastructure: Node infrastructure will be decentralized
- Agent Deployments: Determined by whoever is running the agents
- Contract Deployments: Automatic, with human intervention required for updating the **upgradeBeacon**.
### Production
Purpose: Where the magic happens, **things should not break here.**
- Deployed against Mainnets
- Agent Accounts: KMS-provisioned keys
- Agent Infrastructure: Agent operations will be decentralized
- Node Infrastructure: Node infrastructure will be decentralized
- Agent Deployments: Determined by whoever is running the agents
- Contract Deployments: ***Manual*** - Existing tooling can be used, but deploys will be gated and require approval as contract deployments are expensive on Mainnet.
## Key Material
Keys for Staging and Production environments will be stored in AWS KMS, which is a highly flexible solution in terms of granting access. It guarantees nobody will ever have access to the key material itself, while still allowing granular permissions over access to remote signing.
At the outset, the Optics team will have full control over agent keys, and any contracted party will simply be granted access through existing IAM tooling/roles.
### Provision KMS Keys
There exists a script in this repository (`rust/provision_kms_keys.py`) that facilitates KMS key provisioning for agent roles.
The script will produce a single set of keys per "environment," where an __environment__ is a logical set of smart contract deployments. By default, two environments are configured, `staging` and `production`, where `staging` corresponds to testnet deployments of the contracts and `production` corresponds to mainnet deployments.
The current strategy, in order to reduce complexity, is to use the same keys for transaction signing on both the Celo and Ethereum networks. Should you desire, the key names to be provisioned can be modified so that the script creates unique keys per network. For example:
```
# Agent Keys
required_keys = [
    "watcher-signer-alfajores",
    "watcher-attestation-alfajores",
    "watcher-signer-kovan",
    "watcher-attestation-kovan",
    "updater-signer-alfajores",
    "updater-attestation-alfajores",
    "updater-signer-kovan",
    "updater-attestation-kovan",
    "processor-signer-alfajores",
    "processor-signer-kovan",
    "relayer-signer-alfajores",
    "relayer-signer-kovan",
]
```
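The role/network matrix above can also be generated rather than hand-written. A minimal sketch (the role and network lists mirror the example above; everything else is an assumption, not part of the repo's script):

```python
# Sketch: derive per-network KMS key names from agent roles and networks.
# The role/network lists mirror the example above; adjust for your deployment.

ATTESTING_ROLES = ["watcher", "updater"]      # these roles also need an attestation key
SIGNER_ONLY_ROLES = ["processor", "relayer"]
NETWORKS = ["alfajores", "kovan"]

def required_key_names(networks=NETWORKS):
    """Return the full list of key names to provision: one signer key per
    role/network pair, plus attestation keys for watcher and updater."""
    names = []
    for role in ATTESTING_ROLES:
        for net in networks:
            names.append(f"{role}-signer-{net}")
            names.append(f"{role}-attestation-{net}")
    for role in SIGNER_ONLY_ROLES:
        for net in networks:
            names.append(f"{role}-signer-{net}")
    return names
```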
#### Run the Key Provisioning Script
```
AWS_ACCESS_KEY_ID=accesskey AWS_SECRET_ACCESS_KEY=secretkey python3 provision_kms_keys.py
```
If the required keys are not present, the script will generate them. If the keys _are_ present, their information will be fetched and displayed non-destructively.
Upon successful operation, the script will output a table of the required keys, their ARNs, ETH addresses (for funding the accounts), and their regions.
#### Provision IAM Policies and Users
This is an opinionated setup that works for most general agent-operations use-cases. The same permission boundaries can be achieved through different means, such as using only [Key Policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html).
Background Reading/Documentation:
- KMS Policy Conditions: https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html
- KMS Policy Examples: https://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html
- CMK Alias Authorization: https://docs.aws.amazon.com/kms/latest/developerguide/alias-authorization.html
The following sequence describes how to set up IAM policies and users for staging and production deployments.
- Create three users
- optics-signer-staging
- optics-signer-production
- kms-admin
- Save IAM credential CSV
- Create staging signer policies
- staging-processor-signer
- staging-relayer-signer
- staging-updater-signer
- staging-watcher-signer
- With the following policy, modified appropriately:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OpticsStagingPolicy",
      "Effect": "Allow",
      "Action": [
        "kms:GetPublicKey",
        "kms:Sign"
      ],
      "Resource": "arn:aws:kms:*:11111111111:key/*",
      "Condition": {
        "ForAnyValue:StringLike": {
          "kms:ResourceAliases": "alias/staging-processor*"
        }
      }
    }
  ]
}
```
- production-processor-signer
- production-relayer-signer
- production-updater-signer
- production-watcher-signer
- With the following policy, modified appropriately:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OpticsProductionPolicy",
      "Effect": "Allow",
      "Action": [
        "kms:GetPublicKey",
        "kms:Sign"
      ],
      "Resource": "arn:aws:kms:*:11111111111:key/*",
      "Condition": {
        "ForAnyValue:StringLike": {
          "kms:ResourceAliases": "alias/production-processor*"
        }
      }
    }
  ]
}
```
- Create kms-admin policy
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "KMSAdminPolicy",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeCustomKeyStores",
        "kms:ListKeys",
        "kms:DeleteCustomKeyStore",
        "kms:GenerateRandom",
        "kms:UpdateCustomKeyStore",
        "kms:ListAliases",
        "kms:DisconnectCustomKeyStore",
        "kms:CreateKey",
        "kms:ConnectCustomKeyStore",
        "kms:CreateCustomKeyStore"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "kms:*",
      "Resource": [
        "arn:aws:kms:*:756467427867:alias/*",
        "arn:aws:kms:*:756467427867:key/*"
      ]
    }
  ]
}
```
- Create IAM groups
- staging-signer
- production-signer
- kms-admin
- Add previously created users to the corresponding groups
- optics-signer-staging -> staging-signer
- optics-signer-production -> production-signer
- kms-admin -> kms-admin
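The per-role signer policies above differ only in the Sid, the account ID, and the alias prefix, so they can be rendered from a template. A hedged sketch (`signer_policy` is a hypothetical helper, not part of this repo):

```python
import json

def signer_policy(environment: str, role: str, account_id: str) -> dict:
    """Render a KMS signer policy for one agent role in one environment.

    Mirrors the JSON shown above: the alias condition restricts signing to
    keys whose alias starts with e.g. "staging-processor".
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": f"Optics{environment.capitalize()}Policy",
                "Effect": "Allow",
                "Action": ["kms:GetPublicKey", "kms:Sign"],
                "Resource": f"arn:aws:kms:*:{account_id}:key/*",
                "Condition": {
                    "ForAnyValue:StringLike": {
                        "kms:ResourceAliases": f"alias/{environment}-{role}*"
                    }
                },
            }
        ],
    }

# e.g. print(json.dumps(signer_policy("staging", "processor", "11111111111"), indent=2))
```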
## Funding Addresses
Each agent should be configured with a unique wallet for signing transactions and paying gas. This section describes the process of funding these signer wallets.
Note: It is currently inadvisable to run multiple agent setups with the same set of transaction signers.
### Steps
1. Generate KMS keys using instructions from the previous section.
2. Enumerate Signer Addresses via the table included as part of the output of `provision_kms_keys.py`, or via whatever method you used to generate keys.
3. Send individual funding transactions to each address
- Note: 500 ETH should be sufficient for testnet addresses.
4. Edit deployment config to match new signers
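The steps above can be sketched as a small helper that turns the signer table into a funding plan. This is hypothetical tooling, assuming the table from step 2 is available as `(key_name, address)` pairs:

```python
# Hypothetical helper: build a funding plan from the signer table emitted by
# provision_kms_keys.py, modelled here as a list of (key_name, eth_address) pairs.

TESTNET_TOPUP_WEI = 500 * 10**18  # 500 (testnet) ETH, per the note above

def funding_plan(signers, amount_wei=TESTNET_TOPUP_WEI):
    """Map each unique signer address to the amount it should receive.

    One top-up per address, even if a key name appears more than once.
    """
    plan = {}
    for _name, address in signers:
        plan[address] = amount_wei
    return plan
```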

@ -0,0 +1,84 @@
# Optics Architecture
## Components
![https://s3-us-west-2.amazonaws.com/secure.notion-static.com/94390acd-d363-4abe-a3f6-c979f9e25a11/Optics-Architecture.svg](https://s3-us-west-2.amazonaws.com/secure.notion-static.com/94390acd-d363-4abe-a3f6-c979f9e25a11/Optics-Architecture.svg)
Optics has several logical components:
- Home - The on-chain contract responsible for producing the message tree
- Replica - The on-chain contract responsible for replicating the message root on some other chain
- Updater - The off-chain participant responsible for submitting updates to the home chain
- Watcher - The off-chain participant responsible for observing a replica, and submitting fraud proofs to the home chain
- Relayer - The off-chain participant responsible for submitting updates to a replica
- Processor - The off-chain participant responsible for causing messages to be processed
### On-chain (Contracts)
#### Home
The home contract is responsible for managing production of the message tree and holding custody of the updater bond. It performs the following functions:
1. Expose a "send message" API to other contracts on the home chain
2. Enforce the message format
3. Commit messages to the message tree
4. Maintain a queue of tree roots
5. Hold the updater bond
6. Slash on double-update proofs
7. Slash on improper update proofs
8. Future: manage updater rotation
#### Replica
The replica contract is responsible for managing optimistic replication and dispatching messages to end recipients. It performs the following functions:
1. Maintain a queue of pending updates
2. Finalize updates as their timeouts elapse
3. Accept double-update proofs
4. Validate message proofs
5. Enforce the message format
6. Ensure messages are processed in order
7. Dispatch messages to their destination
8. Expose a "disconnect" feature
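The pending-update queue and timeout behavior (items 1 and 2 above) can be modelled in a few lines. This is an illustrative toy in Python, not the Solidity contract logic:

```python
import collections

class ReplicaSketch:
    """Toy model of a Replica's pending-update queue.

    Updates wait in a FIFO queue for `optimistic_seconds` before the new
    root becomes the replica's current root.
    """

    def __init__(self, initial_root, optimistic_seconds):
        self.current_root = initial_root
        self.optimistic_seconds = optimistic_seconds
        self.pending = collections.deque()  # (new_root, queued_at)

    def enqueue_update(self, old_root, new_root, now):
        """Accept an update only if it builds on the latest known root."""
        expected = self.pending[-1][0] if self.pending else self.current_root
        if old_root != expected:
            raise ValueError("update does not build on latest root")
        self.pending.append((new_root, now))

    def confirm(self, now):
        """Finalize every queued update whose timeout has elapsed."""
        while self.pending and now - self.pending[0][1] >= self.optimistic_seconds:
            self.current_root, _ = self.pending.popleft()
        return self.current_root
```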
### Off-chain (Agents)
#### Updater
The updater is responsible for signing attestations of new roots. It is an off-chain actor that does the following:
1. Observe the home chain contract
2. Sign attestations to new roots
3. Publish the signed attestation to the home chain
#### Watcher
The watcher observes the Updater's interactions with the Home contract (by watching the Home contract) and reacts to malicious or faulty attestations. It also observes any number of replicas to ensure the Updater does not bypass the Home and go straight to a replica. It is an off-chain actor that does the following:
1. Observe the home
2. Observe 1 or more replicas
3. Maintain a DB of seen updates
4. Submit double-update proofs
5. Submit invalid update proofs
6. If configured, issue an emergency halt transaction
#### Relayer
The relayer forwards updates from the home to one or more replicas. It is an off-chain actor that does the following:
1. Observe the home
2. Observe 1 or more replicas
3. Poll the home for new signed updates (since the replica's current root) and submit them to the replica
4. Poll the replica for confirmable updates (that have passed their optimistic time window) and confirm them when available (updating the replica's current root)
#### Processor
The processor proves the validity of pending messages and sends them to end recipients. It is an off-chain actor that does the following:
1. Observe the home
2. Generate and submit merkle proofs for pending (unproven) messages
3. Maintain local merkle tree with all leaves
4. Observe 1 or more replicas
5. Maintain list of messages corresponding to each leaf
6. Dispatch proven messages to end recipients
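The local merkle tree and proof generation (items 2 and 3 above) can be sketched with a fixed-depth, append-only tree. Illustrative only: the real contracts use keccak256 and a depth-32 tree; this toy uses SHA-256 and a small depth.

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    # stand-in pairwise hash; the real contracts use keccak256
    return hashlib.sha256(a + b).digest()

ZERO = b"\x00" * 32

class MerkleSketch:
    """Minimal append-only Merkle tree, as a processor might keep locally."""

    def __init__(self, depth=4):
        self.depth = depth
        self.leaves = []

    def _base_level(self):
        return self.leaves + [ZERO] * (2**self.depth - len(self.leaves))

    def insert(self, leaf: bytes):
        self.leaves.append(leaf)

    def root(self) -> bytes:
        level = self._base_level()
        for _ in range(self.depth):
            level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def proof(self, index: int):
        """Sibling hashes from leaf `index` up to the root."""
        level, path, i = self._base_level(), [], index
        for _ in range(self.depth):
            path.append(level[i ^ 1])  # sibling at this level
            level = [h(level[j], level[j + 1]) for j in range(0, len(level), 2)]
            i //= 2
        return path

def verify(leaf, index, path, root):
    """Recompute the root from a leaf and its sibling path."""
    node, i = leaf, index
    for sib in path:
        node = h(node, sib) if i % 2 == 0 else h(sib, node)
        i //= 2
    return node == root
```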

@ -0,0 +1,56 @@
### Signing Commits
💡 Please set up commit signing: [See docs here](https://docs.github.com/en/github/authenticating-to-github/managing-commit-signature-verification)
Sign them!
### Set pulls to fast forward only
💡 Please read [this article](https://blog.dnsimple.com/2019/01/two-years-of-squash-merge/) about squash merging
- `git config pull.ff only`
- Consider setting this globally because it's way better this way
### **Naming Branches**
We want to know who is working on a branch, and what it does. Please use this format:
- `name/short-description`
- Examples:
- `prestwich/refactor-home`
- `erinhales/merkle-tests`
- `pranay/relayer-config`
### **Commit messages**
We want to know what a commit will do and be able to skim the commit list for specific things. Please add a short one-word tag to the front, and a short sentence that fills in the blank "If applied, this commit will __________".
- Examples:
- `docs: improve rustdoc on the Relay run function`
- `feature: add gas escalator configuration to optics-base`
- `test: add test vector JSON files for the merkle trees`
For large commits, please add a commit body with a longer description, and bullet points describing key changes.
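A hypothetical pre-commit check for this convention might look like the following. The allowed tag list is a guess; the text above only asks for a short one-word tag:

```python
import re

# Tag list is an assumption extrapolated from the examples above.
COMMIT_RE = re.compile(r"^(docs|feature|fix|test|chore|refactor): \S.*$")

def valid_commit_subject(subject: str) -> bool:
    """True if the first line of a commit message follows `tag: description`."""
    return bool(COMMIT_RE.match(subject))
```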
### **PRs**
Please name the PR with a short sentence that fills in the blank "If applied, this PR will _________". A PR must pass CI before it can be merged into `main`.
Please use the [Github Draft PR](https://github.blog/2019-02-14-introducing-draft-pull-requests/) feature for WIP PRs. When ready for review, assign at least one reviewer from the core team. PRs should be reviewed by at least 1 other person.
PRs should **ALWAYS** be [merged by squashing the branch](https://blog.carbonfive.com/always-squash-and-rebase-your-git-commits/#:~:text=It's%20simple%20%E2%80%93%20before%20you%20merge,Here's%20a%20breakdown.&text=Make%20changes%20as%20needed%20with%20as%20many%20commits%20that%20you%20need%20to.).
### Merging PRs
PRs can be merged once the author says it's ready and one core team-member has signed off on the changes.
Before approving and merging please do the following:
1. Ensure that you feel comfortable with the changes being made
2. If an existing `Request Changes` review exists, ask the reviewer to re-review
3. Pull the branch locally
4. Run the pre-commit script
5. Ensure that the build and tests pass
6. Give an approval
7. Ensure that any issues the PR addresses are properly linked
8. If any changes are needed to local environments (e.g. re-installing the build script, or installing new tooling) please record it in the documentation folder.

@ -0,0 +1,97 @@
## Optics Failure Cases
Optics is a robust system, resistant to all sorts of problems. However, there is a set of failure cases that require human intervention, and they need to be enumerated.
### Agent State/Config
#### Updater
- *Two `updater`s deployed with the same config*
- (See Double Update)
- *Extended updater downtime*
- **Effect:**
- Updates stop being sent for a period of time
- **Mitigation:**
- `Updater` Rotation (not implemented)
- *Fraudulent `updater`*
- **Effect:**
- Invalid or fraudulent update is sent
- **Mitigation:**
- `Watcher` detects fraud, submits fraud proof (see Improper Update)
#### Relayer
- *`relayer` "relays" the same update more than once*
- **Effect:**
- Only the first one works
- Subsequent transactions are rejected by the replicas
- **Mitigation:**
- Mempool scanning
- "Is there a tx in the mempool already that does what I want to do?" If so, do nothing and pick another message to process.
- __If minimizing gas use:__ Increase polling interval (check less often)
#### Processor
- *`processor` "processes" the same message more than once*
- **Effect:**
- Only the first one works
- Subsequent transactions are rejected by the smart contracts
#### Watcher
- *Watcher and Fraudulent Updater Collude*
- **Effect:**
- Fraud is possible
- **Mitigation:**
- Distribute watcher operations to disparate entities. Anyone can run a watcher.
#### General
- *Transaction Wallets Empty*
- **Effect:**
- Transactions cease to be sent
- **Mitigation:**
- Monitor and top-up wallets on a regular basis
### Contract State
- *Double Update*
- Happens if `Updater` (single key), submits two updates building off the "old root" with different "new root"
- If two `updater`s were polling often but message volume was low, would likely result in the "same update"
- If two `updater`s were polling often but message volume was high, would likely result in a "double update"
- Doesn't necessarily need to be the __two updaters__, edge case could occur where the updater is submitting a transaction, crashes, and then reboots and submits a double update
- **Effect:**
- Home and Replicas go into a **Failed** state (stops working)
- **Mitigation:**
- Agent code has the ability to check its Database for a signed update, check whether it is going to submit a double update, and prevent itself from doing so
- Need to improve things there
- Updater wait time
- `Updater` doesn't want to double-update, so it creates an update and sits on it for some interval. If still valid after the interval, submit. __(Reorg mitigation)__
- __"Just don't run multiple updaters with the same config"__
- *Improper Update*
- Should only occur if the chain has a "deep reorg" that is longer than the `Updater`'s __pause period__ OR if the `Updater` is actively committing fraud.
- **Effect:**
- `Home` goes into a **FAILED** state (stops working)
- No plan for dealing with this currently
- `Updater` gets slashed
- (not implemented currently)
- **Mitigation:**
- `Watcher`(s) unenroll `xapps`
- Humans look at the situation, determine if the `Updater` was committing fraud or just the victim of poor consensus environment.
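The double-update condition described above is mechanical and easy to check, which is why agents can guard against it locally. A toy sketch:

```python
from typing import NamedTuple

class Update(NamedTuple):
    old_root: str
    new_root: str
    signature: str  # updater's signature over (old_root, new_root)

def is_double_update(a: Update, b: Update) -> bool:
    """Two signed updates building off the same old root with different
    new roots constitute a slashable double update; identical updates or
    updates from different old roots do not."""
    return a.old_root == b.old_root and a.new_root != b.new_root
```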
### Network Environment
- *Network Partition*
- When multiple nodes split off on a fork and break consensus
- Especially bad if the `updater` is off on the least-power chain (results in __Improper Update__)
- **Effect:**
- Manifests as a double-update
- Manifests as an improper update
- Messages simply stop
- **Mitigation:**
- Pay attention and be on the right fork
- **Stop signing updates when this occurs!**
- Have a reliable mechanism for determining this is happening and pull the kill-switch.
- *PoW Chain Reorg (See Network Partition)*
- What happens when a __network partition__ ends
- **Mitigation:**
- *PoS Chain Reorg (See Network Partition)*
- Safety failure (BPs producing conflicting blocks)
- Liveness Failure (no new blocks, chain stops finalizing new blocks)
- **Effect:**
- Slows down finality
- Blocks stop being produced
- How would this manifest in Celo?
- Celo would stop producing blocks.
- Agents would __pause__ and sit there
- When agents see new blocks, they continue normal operations.

@ -0,0 +1,111 @@
# Frequently Asked Questions
**Q: What is the point of the Updater’s attestations?? Why are they necessary / helpful?**
I am confused, because it seems like the Updater has very little discretion as to which messages should or should not be enqueued. It has to sign a message that's been added to the Home contract's queue, and it can't sign the most recent messages without also signing every message before them, so if it detects some kind of "invalid" or malicious message, the only option it has is to stop attesting to messages altogether, thereby halting the system entirely, right?
The updates to the state root are already calculated on-chain each time a message is dispatched. Why can't the Home contract just update the current state each time a message is dispatched, without needing to bother with enqueuing intermediate state roots and waiting for an update to attest to the current state?
**A:**
The Updater should have very little discretion. Their job is to be a notary and to rubber-stamp something that has already objectively occurred. The Updater should sign roots regardless of the message content; there is no concept of malicious or fraudulent messages on the Home chain. If someone calls `enqueue` on the Home contract, they want that message to be dispatched, and the Updater should have little power to prevent that. The Updater's role isn't to filter messages or worry about them at all. It's to produce an `Update` that is cheap for all replicas and off-chain observers to verify, and to post a bond that can be slashed. The Replicas **cannot** read the Home's `current`. They **require** the signed update from a known party in order to know that fraudulent `Update`s are expensive.
The reason for the Root Queue is to prevent the Updater from being slashed due to timing errors outside her control. What if the root changes while the signed Update transaction is in the transaction pool? If the Home stored only the latest root, it might change after the signature is made but before the transaction is processed, resulting in the slashing of an honest Updater.
- The updater does not need a copy of the tree
- The updater polls the `suggestUpdate` mechanism on the Home to get a valid update
- The updater does not update a local tree and attest the match
- The updater (and everyone else) can assume the Home calculated the tree root correctly
So, the purpose of the Updater is simply to provide a signature that the Replica chains can verify. The Replica chains know that if they can verify the Updater's signature on a new root, it is the truth of what happened on the Home chain, or else the Updater will lose their bonded stake on the Home chain.
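The root-queue argument can be made concrete with a toy model: because the Home keeps every root produced since the last update, an update signed against a root that has since been superseded still lands. Illustrative Python, not the contract code:

```python
import collections

class HomeQueueSketch:
    """Toy model of the Home's root queue: an update is valid if it commits
    to any root still in the queue, so an honest Updater cannot be slashed
    merely because new messages arrived while its transaction was pending."""

    def __init__(self, committed_root):
        self.committed = committed_root
        self.queue = collections.deque()  # roots produced since last update

    def dispatch_message(self, new_root):
        # each dispatched message yields a new root, appended to the queue
        self.queue.append(new_root)

    def update(self, old_root, new_root):
        if old_root != self.committed or new_root not in self.queue:
            return False  # invalid update (candidate for slashing)
        while True:  # dequeue everything up to and including new_root
            root = self.queue.popleft()
            if root == new_root:
                break
        self.committed = new_root
        return True
```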
**Q: Does the Updater need to:**
- maintain an off-chain copy of the sparse merkle tree
- No, it relies on the contract
- parse transactions submitted to the Home contract and Dispatch events emitted on-chain to learn about new messages
- No. It only needs to poll `suggestUpdate`
- update its local copy of the merkle tree with these new messages
- No, it relies on the contract
- calculate the tree root of its local copy of the merkle tree
- No, it relies on the contract
- compare its locally calculated tree root to the one generated on-chain
- No, it relies on the contract
- attest that the two match?
- No, it relies on the contract
I am definitely missing some insight into what function the Updater serves in the system
- It simply attests to the tree that the contract produced
**Q: How does a message get processed?**
**A:**
1. Enqueue on [Home](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-core/contracts/Home.sol)
2. Updater attests
3. Relayer relays the attestation
4. Processor submits the proof and the message
5. Replica [dispatches message](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-core/contracts/Replica.sol#L202-L239) to the recipient contract
**Q: What happens if the Updater commits fraud?**
**A:**
1. The watcher is able to detect the fraud
2. The watcher notifies the Home contract, which halts (and slashes the Updater's bond)
3. The watcher warns the XApps of the fraudulent update by calling `unenrollReplica`
- [link](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-core/contracts/XAppConnectionManager.sol#L38-L42)
4. When the fraudulent message is received, the XApp rejects it, because the Replica has been unenrolled
**Q: What happens if the Updater equivocates (signs 2 conflicting updates)?**
**A:**
1. The watcher is able to detect this
2. The watcher notifies the Home contract, which halts (and slashes the Updater's bond)
3. The watcher notifies each Replica contract, which halts
**Q: Why would an Updater submit a double update?**
**A:** The Updater is trying to send different (fraudulent) inbound messages to two different chains, e.g. the Updater wants to move the same $100 to two different chains.
**Q: If all double updates are fraud, why do we handle them separately?**
**A:** Because unlike regular fraud, a double update can be detected by all chains. So we can *certainly* defend against it everywhere. Since we have a strong defense against this type of fraud, we just ban it outright.
**Q: What do each of the agents do?**
**A:** See [Optics Architecture](./architecture.md)
**Q: How do XApps know what channels to listen to? And which Home to send messages to?**
**A:** xApps "know" both of these things by querying the xAppConnectionManager.
The xAppConnectionManager is a contract that stores a registry of the address of the Home contract and the Replica contracts on the local chain. xApps query this information from the xAppConnectionManager.
When an external contract attempts to trigger a xApp, it queries the xAppConnectionManager to ensure that the contract is a verified Replica - this way, the xApp "knows" that the message is coming from Optics (rather than a malicious smart contract fabricating Optics messages).
When a xApp needs to send a message via Optics, it queries the xAppConnectionManager to get the address of the Home contract, where it will send the message.
The xAppConnectionManager's registry is maintained by permissioned agents in the Optics system. Enrolling new Replicas or changing the Home address must be done by Optics Governance; Un-enrolling Replicas is performed by either Governance or a permissioned role called a Watcher.
Watchers are permissioned for specific chains, and responsible for "watching" for fraud on the Home contract of that chain. If fraud occurs on the Home, the Watcher must sign a message attesting to the fraud, which can be submitted to the xAppConnectionManager to un-enroll the Replica. In this way, xApps no longer accept messages from Replicas whose Home contracts have been proven fraudulent.
In the future, Watchers will be bonded on the Home chain that they watch, to incentivize timely and consistent submission of their signature if fraud occurs, thereby reducing the trust that must be placed in the Watcher.
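The registry behavior described above can be sketched as a toy object. Method names loosely follow the real contract's `enrollReplica`/`unenrollReplica`/`isReplica`, but this is an illustration, not the Solidity interface:

```python
class XAppConnectionManagerSketch:
    """Toy registry: a home address plus a set of enrolled replicas, with
    governance-controlled enrollment and watcher-triggered un-enrollment."""

    def __init__(self, home, governance):
        self.home = home
        self.governance = governance
        self.replicas = {}        # replica address -> origin domain
        self.watchers = set()     # addresses permissioned to un-enroll

    def enroll_replica(self, caller, replica, domain):
        assert caller == self.governance, "only governance may enroll"
        self.replicas[replica] = domain

    def unenroll_replica(self, caller, replica):
        # governance or a permissioned watcher may un-enroll
        assert caller == self.governance or caller in self.watchers
        self.replicas.pop(replica, None)

    def is_replica(self, address):
        """xApps call this to verify an inbound message's sender."""
        return address in self.replicas
```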
**Q: What is a domain? Why do we use domain numbers everywhere?**
**A:** The domain is an identifier for a chain (or other execution environment). It is a number, and eventually we'll have a registry for these. We tag each message with an origin domain so that the recipient knows where it came from. And a destination domain so that the relayer and processor know where to deliver it to (and so the Replica knows that it ought to deliver it). This also lets contracts permission specific people or contracts from other chains.
**Q: Why do we use 32-byte addresses instead of Ethereum-style 20-byte addresses?**
**A:** This is future-proofing. We want to go to non-EVM chains in the future. And 20-bytes won't be enough there.
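Converting an EVM address to the 32-byte form is just left-padding with zeros. A small sketch (`to_bytes32` is an illustrative name):

```python
def to_bytes32(evm_address: str) -> bytes:
    """Left-pad a 20-byte EVM address to the 32-byte form used in messages."""
    raw = bytes.fromhex(evm_address.removeprefix("0x"))
    assert len(raw) == 20, "expected a 20-byte EVM address"
    return raw.rjust(32, b"\x00")
```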
**Q: Why put all messages in the same Home? Why not use a different Home per destination chain?**
**A:** We do this so that there's no action on the Home chain when we set up a new Replica. There's no action needed in the Home or on the home chain to make a new channel. The Home can be totally naive of the channels it's sending over.

@ -0,0 +1,170 @@
# Cross-Chain Governance
## Pre-Requisite Reading
- [Optics: OPTimistic Interchain Communication](../optics.md)
## Summary
#### Purpose
This document describes **a governable system for executing permissioned actions across chains**.
We aim to clearly describe
- **what** contracts comprise the system for calling permissioned functions across chains
- **which** functions will be delegated to this system at launch, and
- (directionally) **who** will have permission to call these functions at launch and in the future
#### Out of Scope
This document does NOT describe a system for **how** governance actions will be proposed, voted on, and/or approved before being executed.
It does not describe how contract upgrades will be written, reviewed, or verified.
#### Overview
We define a role, `governor`, with the power to perform permissioned actions across chains. In order to empower the `governor`, we deploy a cross-chain application comprised of a `GovernanceRouter` contract on each chain.
Each `GovernanceRouter` can be delegated control over an arbitrary set of permissioned functions on its local chain. The only way to access the permissioned functionality is to call the function via the `GovernanceRouter` contract.
Each `GovernanceRouter` is programmed to accept messages ***only*** from the `governor`, which is deployed on only one chain. The `governor` may call the contract locally (if it is deployed on the same chain), or it may send it messages remotely via Optics. Because of its exclusive power over the `GovernanceRouter` contracts, the `governor` has exclusive rights to perform **all** of the permissioned roles that are delegated to the `GovernanceRouter` on each chain.
The system receives orders from the `governor` and carries out their effects across chains; it is agnostic to how the `governor` chooses to operate. This maintains flexibility to design the governance proposal process in the future.
At launch, the core functionality that will be delegated to the `GovernanceRouter` on each chain is the power to upgrade the implementation of the `Home` and `Replica` contracts. This way, the `governor` will have the power to conduct upgrades of the Optics system on every chain. More details on the upgradability system can be found [here](https://www.notion.so/Upgrade-Setup-6853fbbd99c54e2981b9e5b767045231).
At launch, the `governor` will be a multisig of trusted team and community members. In the near future, the `governor` role will most likely be transferred to a more fully-featured set of contracts capable of accepting proposals, tallying votes, and executing successful proposals.
## Message Flow Diagram
![https://s3-us-west-2.amazonaws.com/secure.notion-static.com/c6d0c8c1-0ebd-4a79-a5d9-00c5f8007421/Governance_xApp.jpg](https://s3-us-west-2.amazonaws.com/secure.notion-static.com/c6d0c8c1-0ebd-4a79-a5d9-00c5f8007421/Governance_xApp.jpg)
1. `governor` sends message to its local `GovernanceRouter`
2. `GovernanceRouter` dispatches the message...
1. if the recipient is local, to the recipient directly (→ process finished)
2. if the recipient is remote, via Optics to the local Home contract (→ continue to 3)
3. Message is relayed from local `Home` to remote `Replica` via Optics
4. `Replica` dispatches message to the remote `GovernanceRouter`
5. `GovernanceRouter` dispatches the message directly to the local recipient
**Note on message recipient:**
- the recipient may be a `Replica` or `Home` contract
- it may be an `UpgradeBeacon` that controls the implementation of `Replica` or `Home`
- it may be any other app
For simplicity & clarity to show the message flow, this diagram represents the recipient as a generic "App"
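Steps 1 and 2 of the flow above (local call vs. dispatch via Optics) can be sketched as follows. The class and method names are illustrative, not the actual contract interface:

```python
class StubHome:
    """Stand-in for the local Home contract's message queue."""
    def __init__(self):
        self.sent = []
    def enqueue(self, dest_domain, payload):
        self.sent.append((dest_domain, payload))

class GovernanceRouterSketch:
    """Toy dispatch logic: the governor's local router calls local
    recipients directly and routes remote calls through the Home."""

    def __init__(self, local_domain, governor, home):
        self.local_domain = local_domain
        self.governor = governor   # governor address on this chain
        self.home = home

    def execute(self, caller, dest_domain, payload):
        assert caller == self.governor, "only the governor may call"
        if dest_domain == self.local_domain:
            return ("local-call", payload)        # step 2.1: call directly
        self.home.enqueue(dest_domain, payload)   # step 2.2: send via Optics
        return ("enqueued", dest_domain)
```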
## Specification
#### Glossary of Terms
- **xApp** - Cross-Chain Application
- **role**
- an address stored in a smart contract's state that specifies an entity with special permissions on the contract
- permission to call certain functions is usually implemented using a function modifier that requires that the caller of the function is one of the roles with permission to call it; all contract calls sent from callers that do not have valid permission will revert
- *example*: `owner` is the **role** set on all [Ownable](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/access/Ownable.sol) contracts upon deployment; the `owner` **role** has exclusive permission to call functions with the `onlyOwner` modifier
- **permissioned function**
- any smart contract function that restricts callers of the function to a certain role or roles
- *example*: functions using the `onlyOwner` modifier on [Ownable](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/access/Ownable.sol) contracts
- **permissioned call** — a call to a **permissioned function**
- **governor chain**
- the chain on which the `governor` is deployed
- the chain whose `GovernanceRouter` is also the special `GovernorRouter`, which can *send* messages; the `GovernanceRouter`s on all other chains can only *receive* governance messages
### On-Chain Components
#### `GovernanceRouter`
- xApp designed to perform permissioned roles on core Optics contracts on all chains
- State Variables
- `governor` state variable
- if the `governor` is local, `governor` will be set to the EVM address of the `governor`
- if the `governor` is remote, `governor` will be `address(0)`
- **`governorDomain`** state variable
- the Optics domain of the **governor chain**
- stored as a state variable on all `GovernanceRouters`; should be the same on all `GovernanceRouters`; always non-zero
- if the `governor` is local, `governorDomain` is equal to the `originDomain` of the local `Home` contract
- if the `governor` is remote, `governorDomain` is equal to the `originDomain` of the `Home` contract on the remote **governor chain**
- in both cases, this is the `originDomain` of the `Home` contract on the chain of the `GovernorRouter`
- used by all `GovernanceRouters` to determine whether an incoming Optics message was sent from the `GovernorRouter`
- if the message is from the `GovernorRouter`, the `GovernanceRouter` will handle the incoming message
- if not, it will revert
- **`routers`** state variable
- a mapping of domain → address of the remote `GovernanceRouter` on every other chain
- **`domains`** state variable
- an array of all domains that are registered in `routers`
- used to loop through and message all other chains when taking governance actions
- there is the possibility that some domains in the array are null (if a chain has been de-registered)
- **`GovernorRouter`**
- the special `GovernanceRouter` that has *permission to send* governance messages to all other `GovernanceRouters`
- the `GovernanceRouter` on the **governor chain**
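The origin check performed by every `GovernanceRouter` on incoming messages can be sketched as follows. This is an illustrative Python sketch with hypothetical values; the real contract is Solidity and reverts on failure:

```python
GOVERNOR_DOMAIN = 1000           # illustrative domain of the governor chain
GOVERNOR_ROUTER = "0xGovRouter"  # illustrative address of the GovernorRouter

def handle(origin_domain, sender, message):
    """Only messages sent by the GovernorRouter on the governor chain are handled."""
    if origin_domain != GOVERNOR_DOMAIN or sender != GOVERNOR_ROUTER:
        raise ValueError("!governorRouter")  # the contract reverts here
    return message  # placeholder for dispatching the decoded calls
```

Any message arriving from the wrong domain, or from any sender other than the registered `GovernorRouter`, is rejected outright.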
#### `governor`
- the **role** with permission to send messages to the **`GovernorRouter`**
- the **`GovernorRouter`** has exclusive permission to send messages via Optics to all other **`GovernanceRouters`**
- the **`GovernanceRouters`** can have arbitrary permissions delegated to them by any contract on their local chain
- therefore, the **`governor`** is the entity with the power to call any **permissioned function** delegated to any **`GovernanceRouter`** on any chain
- via the `GovernanceRouter` system, it has the unique ability to call permissioned functions on **any contract** on **any chain** that transfers permission to the local `GovernanceRouter`
- there is only one **`governor`** throughout the Optics system; it can be deployed on any chain
- the **`governor`** role can always be transferred to another contract, on the same chain **or** on a remote chain
- stored as a state variable on `GovernanceRouters`; set to zero on all `GovernanceRouters` except the one on the **governor chain**
- **Any contract** on **any chain** that wishes to delegate a set of its functions to this governance system can create a role and a function modifier giving that role exclusive permission to call those functions (a similar pattern to Ownable). The contract must then assign the local `GovernanceRouter` to the permissioned role, which, by extension, gives the `governor` exclusive permission to call those functions (regardless of whether the `governor` is remote or local)
### Failure States
If there is fraud on the Optics `Home` contract on the **governor chain**, this is currently a "catastrophic failure state" — no further governance actions can be rolled out to remote chains; we must create a plan to recover the system in this case (See [#128](https://github.com/celo-org/optics-monorepo/issues/128) for more details.)
---
## Message Types
### Executing (Arbitrary) Calls
1. **for each chain**, the `governor` constructs the array of `(to, data)` calls to the permissioned functions on the contracts that will perform the upgrades on that chain
2. the `governor` sends a transaction to the `GovernanceRouter.callRemote` function on its local chain, passing in the `domain` of the remote chain and the array of `(to, data)` calls to execute on that chain
3. the local `GovernanceRouter` constructs an Optics-compatible message from the array of calls, addresses the message to the remote `GovernanceRouter`, and sends the message to the local `Home` contract
4. the message is relayed from the local `Home` to the remote `Replica` contract on the specified `domain`
5. the `Replica` dispatches the message to the specified recipient, which is the local `GovernanceRouter`
6. the `GovernanceRouter` parses the message to decode the array of `(to, data)` calls
7. the `GovernanceRouter` uses a low-level call to execute each of the transactions in the array on the local chain
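The encode/dispatch/execute round trip (steps 2, 3, 6, and 7) can be sketched in Python. This is illustrative only: the real `GovernanceRouter` uses a tightly packed bytes encoding and Solidity low-level calls, not JSON.

```python
import json

def encode_calls(calls):
    """Sketch of message construction: serialize an array of (to, data) calls.
    The real GovernanceRouter packs these as raw bytes, not JSON."""
    return json.dumps([{"to": to, "data": data} for to, data in calls]).encode()

def decode_and_execute(message, low_level_call):
    """Sketch of steps 6-7: decode the array of calls and execute each one."""
    for call in json.loads(message.decode()):
        low_level_call(call["to"], call["data"])

# usage: record the calls the receiving router would execute
executed = []
message = encode_calls([("0xA", "upgrade()"), ("0xB", "setImpl()")])
decode_and_execute(message, lambda to, data: executed.append((to, data)))
```

The key property is that the same array of calls constructed on the governor chain is executed, in order, on the remote chain.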
### **Transferring Governor**
#### **Possible State Transitions**
1. called by the local `governor` to transfer governorship to another local `governor` (`governorDomain` does not change; `governor` changes to a new non-zero address)
2. called by the local `governor` to transfer governorship to a remote `governor` (`governorDomain` changes to the remote domain; `governor` changes from a non-zero address to `address(0)`)
3. called by a remote `governor` to transfer governorship to a local `governor` (`governorDomain` changes to the local domain; `governor` changes from `address(0)` to a non-zero address)
4. called by a remote `governor` to transfer governorship to another remote `governor` (`governorDomain` changes to the new remote domain; `governor` remains `address(0)`)
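A minimal sketch of how these transitions update the state a single router stores (illustrative Python; the real logic lives in the Solidity `GovernanceRouter`):

```python
LOCAL_DOMAIN = 1000              # illustrative domain of this chain
ZERO_ADDRESS = "0x" + "0" * 40

def transfer_governor(new_domain, new_governor):
    """Returns the (governorDomain, governor) pair this router would store.
    A remote governor is always stored locally as the zero address."""
    if new_domain == LOCAL_DOMAIN:
        return new_domain, new_governor   # transitions to a local governor
    return new_domain, ZERO_ADDRESS       # transitions to a remote governor
```

Whichever transition occurs, every router ends up agreeing on `governorDomain`, while only the router on the governor chain stores a non-zero `governor`.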
### Enrolling a Router
- used when a new chain is added to Optics after we've already set up the system and transferred governorship
- add a new domain → address mapping to the `routers` mapping on every other `GovernanceRouter`
---
## Functionality at Launch
### Permissioned Roles
At launch, the `GovernanceRouter` system **will have the following permissions**:
1. upgrade the implementation of `Home` (via `UpgradeBeacon` pattern)
2. upgrade the implementation of all `Replicas` (via 1-to-N `UpgradeBeacon` pattern)
3. upgrade the implementation of itself (via `UpgradeBeacon` pattern)
The `GovernanceRouter` **will NOT have permission** to
- un-enroll a `Replica` from the `UsingOptics` contract, which will require a specialized role that can act quickly
### Governor
The flexibility of this system will support a move to progressive decentralization.
Initially, the `governor` will most likely be a multisig controlled by trusted team and community members.
Later, the `governor` role will most likely be transferred to a decentralized governance contract.

# Optics: OPTimistic Interchain Communication
## What is Optics?
Optics is a new design for radically cheaper cross-chain communication *without header verification.* We expect operating Optics to cut 90% of costs compared to a traditional header relay.
Optics will form the base layer of a cross-chain communication network that provides fast, cheap communication for all smart contract chains, rollups, etc. It relies only on widely-available cryptographic primitives (unlike header relays), has latency around 2-3 hours (unlike an ORU message passing layer), and imposes only about 120,000 gas overhead on message senders.
Optics has been designed for ease of implementation in any blockchain that supports user-defined computations. We will provide initial Solidity implementations of the on-chain contracts, and Rust implementations of the off-chain system agents.
## How does it work?
Optics is patterned after optimistic systems. It sees an attestation of some data, and accepts it as valid after a timer elapses. While the timer is running, honest participants have a chance to respond to the data and/or submit fraud proofs.
Unlike most optimistic systems, Optics must work on multiple chains. This means that certain types of fraud can't be objectively proven on the receiving chain. For example, the receiving chain can't know which messages the home chain intended to send, and therefore can't check message validity.
However, such fraud can be proven on the home chain, which means participants can be bonded and fraudulent messages can always result in slashing. In addition, all off-chain observers can be immediately convinced of fraud (as they can check the home chain). This means that the validity of a message sent by Optics is not 100% guaranteed. Instead, Optics guarantees the following:
1) Fraud is costly
2) All users can learn about fraud
3) All users can respond to the fraudulent message before it is accepted
In other words, rather than using a globally verifiable fraud proof, Optics relies on local verification by participants. This tradeoff allows Optics to save 90% on gas fees compared to pessimistic relays, while still maintaining a high degree of security.
## Building Intuition
Optics works something like a notary service. The home chain produces a document (the message tree) that needs notarization. A notary (the updater) is contracted to sign it. The notary can produce a fraudulent copy, but they will be punished by having their bond and license publicly revoked. When this happens, everyone relying on the notary learns that the notary is malicious. All the notary's customers can immediately block the notary and prevent any malicious access to their accounts.
## Technical description
Optics creates an authenticated data structure on a home chain, and replays updates to that data structure on any number of replicas. As a result, the home chain and all replicas will agree on the state of the data structure. By embedding data ("messages") in this data structure we can propagate it between chains with a high degree of confidence.
The home chain enforces rules on the creation of this data structure. In the current design, this data structure is a sparse merkle tree based on the design used in the eth2 deposit contract. This tree commits to the vector of all previous messages. The home chain enforces an addressing and message scheme for messages and calculates the tree root. This root will be propagated to the replicas. The home chain maintains a queue of roots (one for each message).
The home chain elects an "updater" that must attest to the state of the message tree. The updater places a bond on the home chain and is required to periodically sign attestations (updates or `U`). Each attestation contains the root from the previous attestation (`U_prev`), and a new root (`U_new)`.
The home chain slashes when it sees two conflicting updates (`U_i` and `U_i'` where `U_i_prev == U_i'_prev && U_i_new != U_i'_new`) or a single update where `U_new` is not an element of the queue. In other words, the new root MUST be a member of the queue. For example, a list of updates `U_1...U_i` should follow the form `[(A, B), (B, C), (C, D)...]`.
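The two slashing conditions can be sketched as a simple predicate (illustrative Python with toy root values; the real check runs in the Home contract):

```python
def is_slashable(update_a, update_b, queue):
    """Sketch of the home chain's two slashing conditions. An update is a
    (prev_root, new_root) pair; `queue` is the list of enqueued roots."""
    # double update: same previous root, conflicting new roots
    double_update = update_a[0] == update_b[0] and update_a[1] != update_b[1]
    # improper update: the attested new root was never enqueued
    improper_update = update_a[1] not in queue
    return double_update or improper_update
```

Either condition alone is sufficient to slash the updater's bond.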
Semantically, updates represent a batch commitment to the messages between the two roots. Updates contain one or more messages that ought to be propagated to the replica chain. Updates may occur at any frequency, as often as once per message. Because updates are chain-independent, any home chain update may be presented to any replica. And any replica update may be presented to the home chain. In other words, data availability of signed updates is guaranteed by each chain.
Before accepting an update, a replica places it into a queue of pending updates. Each update must wait for some time parameter before being accepted. While a replica can't know that an update is certainly valid, the waiting system guarantees that fraud is publicly visible **on the home chain** before being accepted by the replica. In other words, the security guarantee of the system is that all frauds may be published by any participant, all published frauds may be slashed, and all participants have a window to react to any fraud. Therefore updates that are **not** blacklisted by participants are sufficiently trustworthy for the replica to accept.
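The pending-queue rule can be sketched as follows (an illustrative Python sketch; names and the shape of the pending set are assumptions, not the replica contract's actual storage layout):

```python
def accept_pending(pending, now, optimistic_seconds, blacklist):
    """Sketch of optimistic acceptance on a replica: a pending (root, queued_at)
    update is confirmed only after the timeout elapses, and only if no watcher
    has flagged its root as fraudulent in the meantime."""
    accepted = []
    for root, queued_at in pending:
        if root in blacklist:
            continue  # fraud was published on the home chain; never accept
        if now - queued_at >= optimistic_seconds:
            accepted.append(root)
    return accepted
```

The window between enqueueing and acceptance is exactly the reaction time the security argument above relies on.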

# Code Versioning
Due to the dependency structure in the codebase, it is advantageous to version the contracts and then pin everything else in the monorepo to the same version.
**Versioning Scheme:**
- Monotonically increasing Integer Versions corresponding to implementation contract deployments
- ex. 1, 2, 3, etc.
- Monorepo is tagged with integer version upon major release
- The commit a release is associated with will contain agent/deployment code that is compatible with it
- Agents/build artifacts are versioned using global repo version

# Developing Cross-Chain Applications
## Summary
Optics sends messages from one chain to another in the form of raw bytes. A cross-chain application that wishes to *use* Optics will need to define the rules for sending and receiving messages for its use case.
Each cross-chain application must implement its own messaging protocol. By convention, we call the contracts that implement this protocol the application's **Router contracts.** These Router contracts must:
- ***maintain a permissioned set*** of the contract(s) on remote chains from which it will accept messages via Optics — this could be a single owner of the application on one chain; it could be a registry of other applications implementing the same rules on various chains
- ***encode messages in a standardized format***, so they can be decoded by the Router contract on the destination chain
- ***handle messages*** from remote Router contracts
- ***dispatch messages*** to remote Router contracts
By implementing these pieces of functionality within a Router contract and deploying it across multiple chains, we create a working cross-chain application using a common language and set of rules. Applications of this kind may use Optics as the cross-chain courier for sending and receiving messages to each other.
## Example Code
This repository has several examples one can use to build understanding around Cross-Chain Applications.
### xApp Template
[This is a template](https://github.com/celo-org/optics-monorepo/tree/main/solidity/optics-xapps/contracts/xapp-template) provided by the Optics team that shows the high-level components of an xApp, ready for one to fill in their own application logic and utilize an Optics channel for cross-chain communication.
To implement an xApp, define the actions you would like to execute across chains.
For each type of action,
- in the [xApp Router](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-xapps/contracts/xapp-template/RouterTemplate.sol)
- implement a function like `doTypeA` to initiate the action from one domain to another (add your own parameters and logic)
- implement a corresponding `_handle` function to receive, parse, and execute this type of message on the remote domain
- add logic to the `handle` function to route incoming messages to the appropriate `_handle` function
- in the [Message library](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-xapps/contracts/xapp-template/MessageTemplate.sol),
- implement functions to *format* the message to send to the other chain (encodes all necessary information for the action)
- implement functions to *parse* the message once it is received on the other chain (decode all necessary information for the action)
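The format/parse pair can be sketched in Python. This is a hedged sketch: a hypothetical `TypeA` message carrying a single `uint32`, where the real Message library is a Solidity bytes-packing library:

```python
import struct

TYPE_A = 1  # hypothetical 1-byte message type tag

def format_type_a(number):
    """Sketch of a Message library formatter: a type tag followed by a
    big-endian uint32 payload (the real library packs bytes in Solidity)."""
    return struct.pack(">BI", TYPE_A, number)

def parse_type_a(message):
    """Sketch of the matching parser used by the remote Router's handler."""
    msg_type, number = struct.unpack(">BI", message)
    assert msg_type == TYPE_A, "not a TypeA message"
    return number
```

Because both Routers share the same format/parse rules, a message formatted on one chain can be decoded unambiguously on the other.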
### Ping Pong xApp
[The PingPong xApp](https://github.com/celo-org/optics-monorepo/tree/main/solidity/optics-xapps/contracts/ping-pong) is capable of initiating PingPong "matches" between two chains. A match consists of "volleys" sent back-and-forth between the two chains via Optics.
The first volley in a match is always a Ping volley.
- When a Router receives a Ping volley, it returns a Pong.
- When a Router receives a Pong volley, it returns a Ping.
The Routers keep track of the number of volleys in a given match, and emit events for each Sent and Received volley so that spectators can watch.
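The volley logic can be sketched as follows (illustrative Python; the real Routers are Solidity contracts that also emit Sent/Received events):

```python
PING, PONG = 0, 1

def handle_volley(volley_type, count):
    """Sketch of a PingPong Router's handler: receiving a Ping returns a Pong
    (and vice versa), incrementing the match's volley counter."""
    reply = PONG if volley_type == PING else PING
    return reply, count + 1
```

Each chain's Router simply replies with the opposite volley type, so a match alternates Ping/Pong indefinitely.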
### Token Bridge xApp
See the full-length [Token Bridge Documentation](#TODO) for in-depth details on Token Bridge operation and construction.
[Link to Contracts](https://github.com/celo-org/optics-monorepo/tree/main/solidity/optics-xapps/contracts/bridge)
### Cross-Chain Governance xApp
See the full-length [Optics Governance Documentation](#TODO) for in-depth details on Governance xApp operation and construction.
[Link to Contracts](https://github.com/celo-org/optics-monorepo/tree/main/solidity/optics-core/contracts/governance)
## Useful Links
- [xApp Developers Workshop @ EthCC 2021 by Anna Carroll](https://www.youtube.com/watch?v=E_zhTRsxWtw)
## Glossary of Terms
- **Local vs Remote**: in the context of a particular contract, the chain on which it is deployed is the local chain. Contracts and assets on that chain are "local"; contracts and assets on another chain are "remote"
- e.g. Uniswap is deployed on Ethereum. Ethereum is the local chain. Celo is a remote chain
- e.g. there is a token deployed on Celo. There is a local `Home` contract (on Celo), and a local `Replica` contract (on Celo) receiving messages from Ethereum. There is a remote `Home` (on Ethereum) sending messages. There is a remote `Replica` (on Ethereum) receiving messages from Celo. There is a remote `Router` (on Ethereum) communicating with the local `Router` on Celo.
- **"Locally Originating" vs "Remotely Originating"**: in the context of a token or asset in a specific contract, these terms denote whether the original canonical contract is deployed on the local chain or on a remote chain
- e.g. cUSD originates on Celo. in the context of the Celo blockchain, cUSD is "of local origin" or "locally originating"; in the context of the Ethereum blockchain, cUSD is "of remote origin"
- e.g. Ether and WETH originate on Ethereum. When used in a Celo contract, they are "remotely originating" or "of remote origin"
- e.g. a `Router` receives a Transfer message for a remotely-originating asset. It finds the local contract that represents that asset. When it receives a message for a locally originating asset, it knows that it can find the original asset contract locally
- **Router Contract**: a contract that implements a cross-chain application by specifying the:
- **message format** - the bytes-encoded format of messages for the application
- **registry of remote Router contracts** that implement the same application on remote chains
- **rules & behavior for handling messages** sent via Optics by a registered Router contract on a remote chain
- **rules & behavior for dispatching messages** via Optics to a registered Router contract on a remote chain
- **Message**: bytes transferred via Optics that encode some application-specific instructions via a standardized set of rules
- **Instructions**: set of application-specific actions (e.g. "send 5 token X to 0x123...456 on chain Z" in the case of a Token Bridge); calls to functions on the Router contract
- **Handling Messages from Optics Channels**:
- receive bytes-encoded message from Optics (sent from a remote chain)
- enact or dispatch the instructions on the local chain
- local handler decodes the message into application-specific instructions
- **Dispatching Message to Optics Channels:**
- receive instructions on the local chain (via local users and contracts calling functions on the contract)
- encode the instructions into bytes using standardized message format
- dispatch the bytes-encoded message to Optics (to be sent to a remote chain)

# Token Bridge xApp
## Summary
The Token Bridge xApp implements a bridge that is capable of sending tokens across blockchains.
Features:
- Ensures that circulating token supply remains constant across all chains.
## Protocol
### Handling Messages
- the BridgeRouter contract only accepts messages from other remote BridgeRouter contracts, which are registered by each BridgeRouter
- therefore, every message received follows the same "rules" that the local BridgeRouter expects
- for example, any tokens sent in a message are ensured to be valid, because the remote BridgeRouter sending the message should locally enforce that a user has custody before sending the message to the remote chain
- the messages from remote BridgeRouter contracts must be sent via Optics and dispatched by a local Replica contract; valid Replicas are registered with the UsingOptics contract
- thus, the BridgeRouter depends on the UsingOptics contract for a valid registry of local Replicas
- if another chain has sent a token that's "native" to this chain, we send that token from the Router contract's escrow to the recipient on this chain
- if we're receiving a token that's not "native" to this chain,
- we check whether a representation token contract has already been deployed by the Router contract on this chain; if not, we deploy that representation token contract and add its address to the token registry
- we mint representation tokens on this chain and send them to the recipient
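The receive-side rules above can be sketched as follows. All names here are illustrative assumptions, not the real BridgeRouter/TokenRegistry API:

```python
def handle_transfer(token_origin, local_domain, token_id, recipient,
                    amount, escrow, registry, deploy_representation):
    """Sketch: release escrowed tokens for native assets; otherwise ensure a
    representation contract exists, then mint to the recipient."""
    if token_origin == local_domain:
        # token is native here: release from the Router's escrow
        escrow[token_id] -= amount
        return ("release", recipient, amount)
    # remote-origin token: deploy a representation on first sight, then mint
    if token_id not in registry:
        registry[token_id] = deploy_representation(token_id)
    return ("mint", registry[token_id], recipient, amount)
```

The branch on the token's origin domain is the whole trick: escrowed supply on the native chain always equals minted representation supply elsewhere.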
### Dispatching Messages
- **TODO**: describe rules. A user must first approve tokens to the Router on the local chain (if it's a native token), proving they have custody of that token before it can be sent to a remote chain
- sending tokens
- the user uses ERC-20 `approve` to grant allowance for the tokens being sent to the local BridgeRouter contract
- the user calls `send` on the local BridgeRouter to transfer the tokens to a remote chain
- if the token being sent is "native" to the BridgeRouter's chain, the BridgeRouter contract holds the token in escrow
- if the token being sent is not "native" to the chain, then the local token is a representation token contract deployed by the BridgeRouter in the first place; the BridgeRouter contract burns the tokens before sending them to another chain
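The send-side rules can be sketched similarly (illustrative names, not the real `BridgeRouter.send` signature):

```python
def send(token_origin, local_domain, token_id, amount, sender_balance, escrow):
    """Sketch: escrow native tokens; burn representation tokens.
    Returns the action taken and the sender's remaining balance."""
    assert sender_balance >= amount, "insufficient balance/allowance"
    if token_origin == local_domain:
        # native token: hold it in the Router's escrow until it returns
        escrow[token_id] = escrow.get(token_id, 0) + amount
        action = "escrowed"
    else:
        # representation token: the Router deployed it, so it may burn it
        action = "burned"
    return action, sender_balance - amount
```

This mirrors the receive side: what one Router escrows or burns, the remote Router releases or mints.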
### Message Format
- **TODO**: specify how messages are encoded for this application
## Architecture
**BridgeRouter ([code](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-xapps/contracts/bridge/BridgeRouter.sol))**
- Receives incoming messages from local `Replica` contracts sending tokens from another chain
- Dispatches outgoing messages to local `Home` contract in order to send tokens to other chains
- Manages a registry of representation ERC-20 token contracts that it deploys on its local chain
- Maintains a registry of remote `BridgeRouter` contracts to
- authenticate that incoming messages come from a remote `BridgeRouter` contract
- properly address outgoing messages to remote `BridgeRouter` contracts
**TokenRegistry ([code](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-xapps/contracts/bridge/TokenRegistry.sol))**
- Responsible for deploying and keeping track of representation ERC-20 token contracts on this chain
- When a new token is transferred, deploys a new representation token contract on this chain, and stores a two-way mapping between the information of the original token contract & the address of the representation on this chain
- Inherited by the `BridgeRouter`, who uses this to make sure a representation of the token exists on this chain before minting/burning
**BridgeMessage library ([code](https://github.com/celo-org/optics-monorepo/blob/main/solidity/optics-xapps/contracts/bridge/BridgeMessage.sol))**
- Library for handling all the nitty gritty of encoding / decoding messages in a standardized way so they can be sent via Optics
## Message Flow
The logical steps and flow of information involved in sending tokens from one chain to another.
- **Chain A**
- User wants to send their tokens to Chain B
- If it's a native token, the user must first `approve` tokens to the local `BridgeRouter-A`
- User calls `send` on the local `BridgeRouter-A`
- If it's a native token, tokens are pulled from the User's wallet to `BridgeRouter-A` and held in escrow
- If it's a non-native token, tokens are burned from User's wallet by `BridgeRouter-A`
- *Note:* `BridgeRouter-A` can burn non-native tokens because the representative contract for the token on its non-native chain was originally deployed by `BridgeRouter-A` when it received a message sending the token from another chain. The router has administrative rights on representations
- `BridgeRouter-A` constructs a message to `BridgeRouter-B`
- `BridgeRouter-A` keeps a mapping of `BridgeRouter` contracts on other chains so it knows where to send the message on Chain B
- `BridgeRouter-A` calls `enqueue` on `Home-A` contract to send the message to Chain B
- **Off-Chain**
- Standard Optics behavior. Updater → Relayer → Processor
- Relayers see message on `Home-A`
- Relayers pass message to `Replica-A` on Chain B
- **Chain B**
- After waiting for the acceptance timeout, `Replica-A` processes the message and dispatches it to `BridgeRouter-B`
- `BridgeRouter-B` keeps a mapping of `Replica` contracts that it trusts on the local chain. It uses this to authenticate that the incoming message came from Chain A
- `BridgeRouter-B` keeps a mapping of `BridgeRouter` contracts on other chains, so it can authenticate that this message came from `BridgeRouter-A`
- `BridgeRouter-B` looks for the corresponding ERC-20 token contract in its registry, and deploys a new representative one if it doesn't already exist
- `BridgeRouter-B` sends the token to the recipient
- If it's a native token, `BridgeRouter-B` sends the tokens from the pool it's holding in escrow
- If it's a non-native token, `BridgeRouter-B` mints the token to the recipient
- *Note:* `BridgeRouter-B` can mint non-native tokens because the representative contract for the token on its non-native chain is deployed by `BridgeRouter-B` when it received a message sending the token from another chain. The router has administrative rights on representations.

We use the tokio async runtime environment. Please see the docs
- make a `config` folder and a toml file
- Make sure to include your own settings from above
# Provisioning KMS Keys
There exists a script in this repository (`provision_kms_keys.py`) that facilitates KMS key provisioning for agent roles.
The script will produce a single set of keys per "environment," where an __environment__ is a logical set of smart contract deployments. By default there are two environments configured, `staging` and `production`, where `staging` corresponds to testnet deployments of the contracts and `production` corresponds to mainnet deployments.
The current strategy, in order to reduce complexity, is to use the same keys for transaction signing on both the Celo and Ethereum networks. Should you desire, the key names to be provisioned can be modified so that the script creates unique keys per network. For example:
```python
# Agent Keys
required_keys = [
"watcher-signer-alfajores",
"watcher-attestation-alfajores",
"watcher-signer-kovan",
"watcher-attestation-kovan",
"updater-signer-alfajores",
"updater-attestation-alfajores",
"updater-signer-kovan",
"updater-attestation-kovan",
"processor-signer-alfajores",
"processor-signer-kovan",
"relayer-signer-alfajores",
"relayer-signer-kovan"
]
```
## Run the Key Provisioner Script
```shell
AWS_ACCESS_KEY_ID=accesskey AWS_SECRET_ACCESS_KEY=secretkey python3 provision_kms_keys.py
```
If the required keys are not present, the script will generate them. If the keys _are_ present, their information will be fetched and displayed non-destructively.
Upon successful operation, the script will output a table of the required keys, their ARNs, ETH addresses (for funding the accounts), and their regions.
## Provision IAM Policies and Users
This is an opinionated setup that works for most general agent operations use-cases. The same permissions boundaries can be achieved through different means, such as using only [Key Policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html).
Background Reading/Documentation:
- KMS Policy Conditions: https://docs.aws.amazon.com/kms/latest/developerguide/policy-conditions.html
- KMS Policy Examples: https://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html
- CMK Alias Authorization: https://docs.aws.amazon.com/kms/latest/developerguide/alias-authorization.html
The following sequence describes how to set up IAM policies for staging and production deployments.
- Create three users
- optics-signer-staging
- optics-signer-production
- kms-admin
- Save IAM credential CSV
- Create staging signer policies
- staging-processor-signer
- staging-relayer-signer
- staging-updater-signer
- staging-watcher-signer
- With the following policy, modified appropriately:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "OpticsStagingPolicy",
"Effect": "Allow",
"Action": [
"kms:GetPublicKey",
"kms:Sign"
],
"Resource": "arn:aws:kms:*:11111111111:key/*",
"Condition": {
"ForAnyValue:StringLike": {
"kms:ResourceAliases": "alias/staging-processor*"
}
}
}
]
}
```
- production-processor-signer
- production-relayer-signer
- production-updater-signer
- production-watcher-signer
- With the following policy, modified appropriately:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "OpticsProductionPolicy",
"Effect": "Allow",
"Action": [
"kms:GetPublicKey",
"kms:Sign"
],
"Resource": "arn:aws:kms:*:11111111111:key/*",
"Condition": {
"ForAnyValue:StringLike": {
"kms:ResourceAliases": "alias/production-processor*"
}
}
}
]
}
```
- Create kms-admin policy
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "KMSAdminPolicy",
"Effect": "Allow",
"Action": [
"kms:DescribeCustomKeyStores",
"kms:ListKeys",
"kms:DeleteCustomKeyStore",
"kms:GenerateRandom",
"kms:UpdateCustomKeyStore",
"kms:ListAliases",
"kms:DisconnectCustomKeyStore",
"kms:CreateKey",
"kms:ConnectCustomKeyStore",
"kms:CreateCustomKeyStore"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "kms:*",
"Resource": [
"arn:aws:kms:*:756467427867:alias/*",
"arn:aws:kms:*:756467427867:key/*"
]
}
]
}
```
- Create IAM groups
- staging-signer
- production-signer
- kms-admin
- Add previously created users to the corresponding groups
- optics-signer-staging -> staging-signer
- optics-signer-production -> production-signer
- kms-admin -> kms-admin
