# BlockScout

[![CircleCI](https://circleci.com/gh/poanetwork/blockscout.svg?style=svg&circle-token=f8823a3d0090407c11f87028c73015a331dbf604)](https://circleci.com/gh/poanetwork/blockscout) [![Coverage Status](https://coveralls.io/repos/github/poanetwork/blockscout/badge.svg?branch=master)](https://coveralls.io/github/poanetwork/blockscout?branch=master) [![Join the chat at https://gitter.im/poanetwork/blockscout](https://badges.gitter.im/poanetwork/blockscout.svg)](https://gitter.im/poanetwork/blockscout?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

BlockScout provides a comprehensive, easy-to-use interface for users to view, confirm, and inspect transactions on **all EVM** (Ethereum Virtual Machine) blockchains. This includes the Ethereum main and test networks as well as **Ethereum forks and sidechains**.

## Features

- **Open source development**: The code is community driven and available for anyone to use, explore, and improve.

- **Real time transaction tracking**: Transactions are updated in real time - no page refresh required. Infinite scrolling is also enabled.

- **Smart contract interaction**: Users can read and verify Solidity smart contracts and access pre-existing contracts to fast-track development. Support for Vyper, LLL, and WebAssembly contracts is in progress.

- **Token support**: ERC20 and ERC721 tokens are supported. Future releases will support additional token types including ERC223 and ERC1155.

- **User customization**: Users can easily deploy on a network and customize the Bootstrap interface.

- **Ethereum sidechain networks**: BlockScout supports the Ethereum mainnet, Ethereum testnets, the POA network, forks like Ethereum Classic, xDai, additional sidechains, and private EVM networks.
<!-- _sidebar.md -->

- About BlockScout

  - [About](about.md)
  - [Umbrella Project Organization](umbrella.md)

- Installation & Configuration

  - [Requirements](requirements.md)
  - [Ansible Deployment](ansible-deployment.md)
  - [Manual Deployment](manual-deployment.md)
  - [ENV Variables](env-variables.md)
  - [Configuration Options](dev-env.md)
  - [Chain Configuration](chain-configs.md)
  - [Automating Restarts](restarts.md)
  - [Front End](front-end.md)
  - [CircleCI Configs](circleci.md)
  - [Testing](testing.md)
  - [Internationalization](internationalization.md)
  - [Metrics](metrics.md)
  - [Tracing](tracing.md)
  - [Memory Usage](memory-usage.md)
  - [API Docs](api.md)
  - [Upgrading](upgrading.md)

- User Guide

  - [Search Terminology](terminology.md)
  - [Smart Contract Verification](smart-contract.md)
  - [FAQs](faqs.md)

- Resources

  - [POA BlockScout Forum & FAQs](https://forum.poa.network/c/blockscout)
  - [Gitter Channel](https://gitter.im/poanetwork/blockscout)
  - [Twitter](https://twitter.com/_blockscout/)
  - [GitHub Repo](https://github.com/poanetwork/blockscout)

<!-- about.md -->

## About BlockScout

BlockScout is an Elixir application that allows users to search transactions, view accounts and balances, and verify smart contracts on the entire Ethereum network, including all forks and sidechains.

Currently available block explorers (e.g. Etherscan and Etherchain) are closed systems which are not independently verifiable. As Ethereum sidechains continue to proliferate in both private and public settings, transparent tools are needed to analyze and validate transactions.

Information on the latest release and version history is available [on our forum](https://forum.poa.network/c/blockscout/releases).

## Visual Interface

![POA BlockScout](_media/screenshot_06_2019.png)

Interface for the POA network: v2.0 _updated 06/2019_

## Acknowledgements

We would like to thank the [EthPrize foundation](http://ethprize.io/) for their funding support.

## Contributing

See [CONTRIBUTING.md](https://github.com/poanetwork/blockscout/blob/master/CONTRIBUTING.md) for contribution and pull request protocol. We expect contributors to follow our [code of conduct](https://github.com/poanetwork/blockscout/blob/master/CODE_OF_CONDUCT.md) when submitting code or comments.

## License

[![License: GPL v3.0](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)

This project is licensed under the GNU General Public License v3.0. See the [LICENSE](https://github.com/poanetwork/blockscout/blob/master/LICENSE) file for details.

<!--ansible-deployment.md -->

# Playbook Overview

We use [Ansible](https://docs.ansible.com/ansible/latest/index.html) & [Terraform](https://www.terraform.io/intro/getting-started/install.html) to build the correct infrastructure to run BlockScout.

The playbook repository is located at [https://github.com/poanetwork/blockscout-terraform](https://github.com/poanetwork/blockscout-terraform). Currently it only supports [AWS](#AWS-permissions) as a cloud provider.

In the root folder you will find Ansible playbooks to create all the infrastructure needed to deploy BlockScout. The `lambda` folder also contains a set of scripts that may be useful in your BlockScout infrastructure.

1. [Deploying the Infrastructure](#deploying-the-infrastructure). This section describes all the steps to deploy the virtual hardware required for a production instance of BlockScout. Skip this section if you already have the infrastructure and simply want to install or update BlockScout.
2. [Deploying BlockScout](#deploying-blockscout). Follow this section to install or update BlockScout.
3. [Destroying Provisioned Infrastructure](#destroying-provisioned-infrastructure). Refer to this section if you want to destroy your BlockScout installation.

# Prerequisites

The playbooks rely on Terraform, a stateful infrastructure-as-code tool. It allows you to modify and recreate single or multiple resources depending on your needs.

## Prerequisites for deploying infrastructure

| Dependency name                        | Installation method                                          |
| -------------------------------------- | ------------------------------------------------------------ |
| Ansible >= 2.6                         | [Installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) |
| Terraform >= 0.11.11                   | [Installation guide](https://learn.hashicorp.com/terraform/getting-started/install.html) |
| Python >= 2.6.0                        | `apt install python`                                         |
| Python-pip                             | `apt install python-pip`                                     |
| boto & boto3 & botocore python modules | `pip install boto boto3 botocore`                            |

## Prerequisites for deploying BlockScout

| Dependency name                        | Installation method                                          |
| -------------------------------------- | ------------------------------------------------------------ |
| Ansible >= 2.7.3                       | [Installation guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) |
| Terraform >= 0.11.11                   | [Installation guide](https://learn.hashicorp.com/terraform/getting-started/install.html) |
| Python >= 2.6.0                        | `apt install python`                                         |
| Python-pip                             | `apt install python-pip`                                     |
| boto & boto3 & botocore python modules | `pip install boto boto3 botocore`                            |
| AWS CLI                                | `pip install awscli`                                         |
| All BlockScout prerequisites           | [Check here](requirements.md)                                |

# AWS permissions

See our forum for a detailed [AWS settings and setup tutorial](https://forum.poa.network/t/aws-settings-for-blockscout-terraform-deployment/1962).

During deployment you will provide credentials to your AWS account. The deployment process requires a wide set of permissions, so it works best if you specify administrator account credentials.

However, if you want to restrict the permissions, here is the list of resources created during the deployment process:

- An S3 bucket to keep Terraform state files;
- A DynamoDB table to manage Terraform state file leases;
- An SSH keypair (or you can choose to use one which was already created); this is used with any EC2 hosts;
- A VPC containing all of the resources provisioned;
- A public subnet for the app servers, and a private subnet for the database (and Redis for now);
- An internet gateway to provide internet access for the VPC;
- An ALB which exposes the app server HTTPS endpoints to the world;
- A security group to lock down ingress to the app servers to 80/443 + SSH;
- A security group to allow the ALB to talk to the app servers;
- A security group to allow the app servers access to the database;
- An internal DNS zone;
- A DNS record for the database;
- An autoscaling group and launch configuration for each chain;
- A CodeDeploy application and deployment group targeting the corresponding autoscaling groups.

Each configured chain receives its own ASG (autoscaling group) and deployment group. When application updates are pushed to CodeDeploy, all autoscaling groups deploy the new version using a blue/green strategy. Currently, there is only one EC2 host to run, and the ASG is configured to allow scaling up, but no triggers are set up to actually perform the scaling yet. This is something that may come in the future.

When deployment begins, Ansible creates the S3 bucket and DynamoDB table required for Terraform state management. This ensures that the Terraform state is stored in a centralized location, allowing multiple people to use Terraform on the same infra without interfering with one another. Terraform prevents interference by holding locks (via DynamoDB) against the state data (stored in S3).
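Purely as an illustration of the state setup described above - the playbooks manage this automatically, and the bucket and table names below are made up - the remote state arrangement corresponds to a Terraform backend block like:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-blockscout-state"   # created by Ansible; see the `bucket` variable
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "my-terraform-locks"    # see the `dynamodb_table` variable; holds state locks
  }
}
```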

# Configuration

The single point of configuration in this script is the `group_vars/all.yml` file. First, copy it from the `group_vars/all.yml.example` template by executing `cp group_vars/all.yml.example group_vars/all.yml`, and then modify it with any text editor you want (vim example - `vim group_vars/all.yml`). The subsections below describe the variables you may want to adjust.

# Variables

## Common variables

- `aws_access_key` and `aws_secret_key` is a credentials pair that provides access to AWS for the deployer.

- `backend` defines whether the deployer should keep state files remotely or locally. Set `backend` to `true` if you want to save the state file to the remote S3 bucket.

- `upload_config_to_s3` - set to `true` if you want to upload the `all.yml` config file to the S3 bucket automatically after the deployment. Will not work if `backend` is set to `false`.

- `upload_debug_info_to_s3` - set to `true` if you want to upload the full log output to the S3 bucket automatically after the deployment. Will not work if `backend` is set to `false`.

>[!DANGER]
>Locally, logs are stored at `log.txt`, which is not cleaned automatically. Please do not forget to clean it manually or by using the `clean.yml` playbook.

- `bucket` represents a globally unique name of the bucket where your configs and state will be stored. It will be created automatically during the deployment.

- `prefix` - a unique tag to use for provisioned resources (5 alphanumeric chars or less).

- `chains` - maps chains to the URLs of HTTP RPC endpoints; an ordinary blockchain node can be used.

- The `region` should be left at `us-east-1`, as some of the other regions fail for different reasons.

>[!WARNING]
>A chain name SHOULD NOT be more than 5 characters. Otherwise, it will throw an error because the AWS load balancer name must not be greater than 32 characters.
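For illustration only (the chain names and endpoint URLs below are placeholders), a `chains` mapping in `group_vars/all.yml` might look like:

```yaml
chains:
  core: "http://10.0.0.10:8545"
  sokol: "https://sokol.poa.network"
```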

## Infrastructure related variables

- `dynamodb_table` represents the name of the table that will be used for Terraform state lock management.

- If the `ec2_ssh_key_content` variable is not empty, Terraform will try to create an EC2 SSH key with the `ec2_ssh_key_name` name. Otherwise, the existing key with the `ec2_ssh_key_name` name will be used.

- `instance_type` defines the size of the BlockScout instance that will be launched during the deployment process.

- `vpc_cidr`, `public_subnet_cidr`, and `db_subnet_cidr` represent the network configuration for the deployment. Usually you will leave these as is. However, if you want to modify them, understand that `db_subnet_cidr` represents not a single network, but a group of networks starting with a defined CIDR block increased by 8 bits.

> [!TIP|label: Example]
> Number of networks: 2 <br />
> `db_subnet_cidr`: "10.0.1.0/16"<br />
> Real networks: 10.0.1.0/24 and 10.0.2.0/24

- An internal DNS zone with the `dns_zone_name` name will be created to take care of BlockScout internal communications.

- `ec2_ssh_key_name` - the name of an IAM key pair to use for EC2 instances. If you provide a name which already exists, it will be used; otherwise a new key pair will be generated for you.

- If `use_ssl` is set to `false`, SSL will be disabled on BlockScout. To configure SSL, use the `alb_ssl_policy` and `alb_certificate_arn` variables.

- `root_block_size` is the amount of storage on your EC2 instance. This value can be adjusted according to how frequently logs are rotated. Logs are located in `/opt/app/logs` on your EC2 instance.

- `pool_size` defines the number of connections allowed by the RDS instance.

- `secret_key_base` is a random password used by BlockScout internally. It is highly recommended to generate your own `secret_key_base` before the deployment. For instance, you can do it via the `openssl rand -base64 64 | tr -d '\n'` command.

- `new_relic_app_name` and `new_relic_license_key` should usually stay empty unless you want, and know how, to configure New Relic integration.

- `elixir_version` - the Elixir version used in the BlockScout release.

- `chain_trace_endpoint` - maps chains to the URLs of HTTP RPC endpoints representing a node where state pruning is disabled (an archive node) and tracing is enabled. If you don't have a trace endpoint, you can simply copy values from the `chains` variable.

- `chain_ws_endpoint` - maps chains to the URLs of RPC endpoints that support websockets. This is required to get real-time updates. Can be the same as `chains` if websockets are enabled there (but make sure to use the `ws(s)` protocol instead of `http(s)`).
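Continuing the illustration with placeholder hosts, the trace and websocket maps mirror the `chains` map:

```yaml
chain_trace_endpoint:
  core: "http://10.0.0.11:8545"   # archive node with tracing enabled
chain_ws_endpoint:
  core: "ws://10.0.0.10:8546"     # note the ws:// scheme
```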

- `chain_jsonrpc_variant` - the client used to connect to the network. Can be `parity`, `geth`, etc.

- `chain_logo` - maps chains to their logos. Place your own logo at `apps/block_scout_web/assets/static` and specify a relative path in the `chain_logo` variable.

- `chain_coin` - the name of the coin used in each particular chain.

- `chain_network` - usually the name of the organization keeping the group of networks, but it can represent the name of any logical network grouping you want.

- `chain_subnetwork` - the name of the network to be shown in BlockScout.

- `chain_network_path` - a relative URL path to be used as an endpoint for the defined chain. For example, if we have our BlockScout at the `blockscout.com` domain and place the `core` network at `/poa/core`, then the resulting endpoint for this network will be `blockscout.com/poa/core`.

- `chain_network_icon` - maps the chain name to a network navigation icon at `apps/block_scout_web/lib/block_scout_web/templates/icons` without the `.eex` extension.

- `chain_graphiql_transaction` - maps each chain to a random transaction hash on that chain. This hash will be used to provide a sample query in the GraphiQL Playground.

- `chain_block_transformer` - `clique` for clique networks like Rinkeby and Goerli, and `base` for the rest.

- `chain_heart_beat_timeout`, `chain_heart_command` - configs for the integrated heartbeat. The first describes a timeout after which the command in the second variable will be executed.

- Each of the `chain_db_*` variables configures the database for each chain. Each chain will have a separate RDS instance.

- `chain_blockscout_version` - the text at the footer of the BlockScout instance. Usually represents the current BlockScout version.

## Blockscout related variables

- `blockscout_repo` - a direct link to the BlockScout repo.

- `chain_branch` - maps a branch at `blockscout_repo` to each chain.

- Specify the `chain_merge_commit` variable if you want to merge any of the specified `chains` with a commit from another branch. Usually used to update production branches with releases from the master branch.

- `skip_fetch` - if this variable is set to `true`, the BlockScout repo will not be cloned and the process will start from building the dependencies. Use this variable to prevent the playbooks from overriding manual changes in the cloned repo.

- `ps_*` variables represent connection details for the test Postgres database. This one will not be installed automatically, so make sure the `ps_*` credentials are valid before starting the deployment.

- `chain_custom_environment` - a map of variables that should be overridden when deploying a new version of BlockScout. Can be omitted.

>[!NOTE]
> `chain_custom_environment` variables will not be propagated to the Parameter Store on production servers and need to be set there manually.

# Database Storage Required

The configuration variable `db_storage` can be used to define the amount of storage allocated to your RDS instance. The chart below shows an estimated amount of storage required to index individual chains. `db_storage` can only be adjusted once in a 24-hour period on AWS.

| Chain            | Storage (GiB) |
| ---------------- | ------------- |
| POA Core         | 200           |
| POA Sokol        | 400           |
| Ethereum Classic | 1000          |
| Ethereum Mainnet | 4000          |
| Kovan Testnet    | 800           |
| Ropsten Testnet  | 1500          |

# Deploying the Infrastructure

1. Ensure all the [infrastructure prerequisites](#Prerequisites-for-deploying-infrastructure) are installed and have the right version numbers;

2. Create an AWS access key and secret access key for a user with [sufficient permissions](#AWS-permissions);

3. Merge the `infrastructure` and `all` config template files into a single config file:

```bash
cat group_vars/infrastructure.yml.example group_vars/all.yml.example > group_vars/all.yml
```

4. Set the variables in the `group_vars/all.yml` config file as described in the [configuration section](#Configuration);

5. Run `ansible-playbook deploy_infra.yml`;

- During the deployment, the ["diffs didn't match"](#error-applying-plan-diffs-didnt-match) error may occur. If it does, it will be ignored automatically. If the Ansible play recap shows 0 failed plays, then the deployment was successful despite the error.

- Optionally, you may want to check the variables uploaded to the [Parameter Store](https://console.aws.amazon.com/systems-manager/parameters) in your AWS Console.

# Deploying BlockScout

1. Ensure all the [BlockScout prerequisites](#Prerequisites-for-deploying-blockscout) are installed and have the right version numbers.

2. Merge the `blockscout` and `all` config template files into a single config file:

```bash
cat group_vars/blockscout.yml.example group_vars/all.yml.example > group_vars/all.yml
```

> [!NOTE]
> All three configuration files are compatible with one another, so you can simply `cat group_vars/blockscout.yml.example >> group_vars/all.yml` if you already have the `all.yml` file after deploying the infrastructure.

3. Set the variables in the `group_vars/all.yml` config file as described in the [configuration section](#Configuration).

> [!NOTE]
> Use `chain_custom_environment` to update the variables in each deployment. Map each deployed chain with variables as they should appear in the Parameter Store. Check the example in the `group_vars/blockscout.yml.example` config file. `chain_*` variables will be ignored during BlockScout software deployment.

4. This step is for macOS users only. Please skip it if you are not using this OS.

   To avoid the following Python crash error:

```
TASK [main_software : Fetch environment variables] ************************************
objc[12816]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called.
objc[12816]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
```

   - Open your shell profile: `nano .bash_profile`;
   - Add the following line to the end of the file: `export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES`;
   - Save, exit, close the terminal, and re-open the terminal. Check that the environment variable is now set: `env`.

   (Source: https://stackoverflow.com/questions/50168647/multiprocessing-causes-python-to-crash-and-gives-an-error-may-have-been-in-progr)

5. Run `ansible-playbook deploy_software.yml`.

6. When the prompt appears, check that the server is running and there are no visual artifacts. The server will be launched on port 4000 on the same machine where you run the Ansible playbooks. If you face any errors, you can either fix them or cancel the deployment by pressing **Ctrl+C** and then pressing **A** when additionally prompted.

7. When the server is ready to be deployed, simply press Enter and the deployer will upload BlockScout to the appropriate S3 bucket.

8. Two other prompts will appear to confirm updating the Parameter Store variables and deploying BlockScout through CodeDeploy. Both **yes** and **true** will be interpreted as confirmation.

9. Monitor and manage your deployment on the [CodeDeploy](https://console.aws.amazon.com/codesuite/codedeploy/applications) service page in the AWS Console.

# Destroying Provisioned Infrastructure

First, manually remove the autoscaling groups (ASGs) deployed via CodeDeploy, since Terraform doesn't track them and will miss them during the automatic destroy process. Once the ASGs are deleted, you can use the `ansible-playbook destroy.yml` playbook to remove the rest of the generated infrastructure. Make sure to check the playbook output, since in some cases it may not delete everything. Check the error description for details.

> [!WARNING]
> While Terraform is stateful, Ansible is stateless, so if you modify the `bucket` or `dynamodb_table` variables and run the `destroy.yml` or `deploy_infra.yml` playbooks, it will not alter the names of the current S3/DynamoDB resources, but will create new resources instead. Moreover, altering the `bucket` variable will make Terraform forget about the existing infrastructure and, as a consequence, redeploy it. If it is absolutely necessary for you to alter the S3 or DynamoDB names, perform this operation manually and then change the appropriate variable accordingly.

> [!NOTE]
> Changing the `backend` variable will force Terraform to forget about the created infrastructure, since it will start searching for the current state files locally instead of remotely.

# Useful information

## Cleaning Deployment cache

Despite the fact that the Terraform cache is automatically cleared before each deployment, you may also want to force the cleaning process manually. To clear the Terraform cache, run the `ansible-playbook clean.yml` command.

## Migrating deployer to another machine

You can easily manage your deployment from any machine with sufficient prerequisites. If the `upload_debug_info_to_s3` variable is set to `true`, the deployer will automatically upload your `all.yml` file to the S3 bucket, so you can download it to any other machine. Simply download this file to your `group_vars` folder and the new deployer will pick up the current deployment instead of creating a new one.

## Attaching the existing RDS instance to the current deployment

Rather than create a new database, you may want to add an existing instance to use with the deployment. To do this, configure all the proper values at `group_vars/all.yml`, including your DB ID and name, and execute the `ansible-playbook attach_existing_rds.yml` command. This will add the current DB instance into the Terraform-managed resource group. After that, run `ansible-playbook deploy_infra.yml` as you normally would.

> [!NOTE|label: Note 1]
> While executing `ansible-playbook attach_existing_rds.yml`, the S3 and DynamoDB instances will be automatically created (if the `backend` variable is set to `true`) to store Terraform state files.

> [!NOTE|label: Note 2]
> The actual name of your resource must include the prefix you are using with this deployment.<br />
>
> Example:<br />
>
> Real resource: tf-poa<br />
>
> `prefix` variable: tf<br />
>
> `chain_db_id` variable: poa<br />

> [!NOTE|label: Note 3]
> Make sure MultiAZ is disabled on your database.

> [!NOTE|label: Note 4]
> Make sure that all the variables at `group_vars/all.yml` are exactly the same as in your existing DB.

## Using AWS CodeDeploy to Monitor and manage a BlockScout deployment

A BlockScout deployment can be managed through the AWS console. [A brief tutorial is available on our forum](https://forum.poa.network/t/monitor-and-manage-a-blockscout-deployment-using-codedeploy-in-your-aws-console/2499).

# Common Errors and Questions

## S3: 403 error during provisioning

This usually appears if the S3 bucket already exists. Remember, S3 bucket names are globally unique. Log in to your AWS console and create an S3 bucket with the same name you specified in the `bucket` variable to ensure they match.

## Error Applying Plan (diffs didn't match)

If you see something similar to the following:

```bash
Error: Error applying plan:

1 error(s) occurred:

* module.stack.aws_autoscaling_group.explorer: aws_autoscaling_group.explorer: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue.

Please include the following information in your report:

    Terraform Version: 0.11.11
    Resource ID: aws_autoscaling_group.explorer
    Mismatch reason: attribute mismatch: availability_zones.1252502072
```

This is due to a bug in Terraform. The fix is to run `ansible-playbook deploy_infra.yml` again, and Terraform will pick up where it left off. This does not always happen, but this is the current workaround if needed.

## Server doesn't start during deployment

Even if the server is configured correctly, sometimes it may not bind the appropriate port 4000 for unknown reasons. If so, simply go to the appropriate nested blockscout folder, kill the server, and rerun it. For example, you can use the following command: `pkill beam.smp && pkill node && sleep 10 && mix phx.server`.

<!--api.md -->

## BlockScout Internal Documentation

To view Modules and API Reference documentation:

1. Generate documentation.
   `mix docs`
2. View the generated docs.
   `open doc/index.html`

## BlockScout API Usage

API calls can be accessed from the BlockScout UI menu. BlockScout supports several methods:

1. [GraphiQL](https://github.com/graphql/graphiql): An IDE for exploring GraphQL.
2. RPC: an API provided for developers transitioning their applications from Etherscan to BlockScout. It supports GET and POST requests.
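As a sketch of the RPC style (the host and network path below are placeholders, and the route follows the Etherscan-compatible convention - verify against your own instance), a balance lookup has the form:

```text
GET https://blockscout.example.com/poa/core/api?module=account&action=balance&address={addressHash}
```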

### GraphiQL

Send queries to quickly get information. Use the Docs button to quickly find the arguments accepted by the schema. More information is available in our [BlockScout GraphQL tutorial](https://forum.poa.network/t/graphql-in-blockscout/1971).

![Graphiql](_media/graphiql_screenshot.png)

#### GraphiQL RootQueryType Fields

* address(hash: AddressHash!): Address<br />
  Gets an address by hash.
  <br /><br />
* addresses(hashes: [AddressHash!]!): [Address]<br />
  Gets addresses by address hash.
  <br /><br />
* block(number: Int!): Block<br />
  Gets a block by number.
  <br /><br />
* node(id: ID!): Node<br />
  Fetches an object given its ID.
  <br /><br />
* tokenTransfers(<br />
  after: String<br />
  before: String<br />
  count: Int<br />
  first: Int<br />
  last: Int<br />
  tokenContractAddressHash: AddressHash!<br />
  ): TokenTransferConnection<br />
  Gets token transfers by token contract address hash.
  <br /><br />
* transaction(hash: FullHash!): Transaction<br />
  Gets a transaction by hash.

#### Example Queries

BlockScout's GraphQL API provides 4 queries and 1 subscription. You can view them in the GraphiQL interface under the `Schema` tab. Short query examples:

| Query | Description | Example |
|-----------------------------------------------|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------|
| address(hash: AddressHash!): Address | Gets an address by hash | {address(hash: "0x1fddEc96688e0538A316C64dcFd211c491ECf0d8") {hash, contractCode} } |
| addresses(hashes: [AddressHash!]): [Address] | Gets addresses by hashes | {addresses(hashes: ["0x1fddEc96688e0538A316C64dcFd211c491ECf0d8", "0x3948c17c0f45017064858b8352580267a85a762c"]) {hash, contractCode} } |
| block(number: Int!): Block | Gets a block by number | {block(number: 1) {parentHash, size, nonce}} |
| transaction(hash: FullHash!): Transaction | Gets a transaction by hash | {transaction(hash: "0xc391da8f433b3bea0b3eb45da40fdd194c7a0e07d1b5ad656bf98940f80a6cf6") {input, gasUsed}} |
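Since GraphQL allows several top-level fields per request, the short examples above can be combined into one query (the fields here are taken from the table; check the Schema tab for anything beyond them):

```graphql
{
  block(number: 1) {
    parentHash
    size
    nonce
  }
  address(hash: "0x1fddEc96688e0538A316C64dcFd211c491ECf0d8") {
    hash
    contractCode
  }
}
```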
||||
|
||||
|
||||
[Example GraphQL Query to retrieve transactions for a specific address](https://forum.poa.network/t/faq-graphql-query-to-retrieve-transactions-for-a-specific-address/1937) |
||||
|
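The same queries can also be sent over plain HTTP. A minimal sketch — the instance URL and the `/graphql` endpoint path are assumptions (GraphiQL itself is the browser UI); adjust both for your deployment:

```shell
# Build a GraphQL request body and POST it with curl (URL is illustrative).
GRAPHQL_URL="https://blockscout.com/poa/core/graphql"
QUERY='{"query": "{block(number: 1) {parentHash, size, nonce}}"}'
echo "$QUERY"
# Uncomment to run against a live instance:
# curl -s -X POST "$GRAPHQL_URL" -H "Content-Type: application/json" --data "$QUERY"
```

Queries containing string arguments (e.g. an address hash) need their inner double quotes escaped with `\"` inside the JSON body.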
<!--chain-configs.md -->

## Configuring EVM Chains

* **CSS:** Update the import instruction in `apps/block_scout_web/assets/css/theme/_variables.scss` to select a preset CSS file. This is reflected in the `production-${chain}` branch for each instance. For example, in the `production-xdai` branch, comment out `@import "neutral_variables"` and uncomment `@import "dai_variables"`.

* **ENV:** Update the [environment variables](env-variables.md) to match the chain specs.

### Current CSS presets

```scss
@import "theme/base_variables";
@import "neutral_variables";
// @import "dai_variables";
// @import "ethereum_classic_variables";
// @import "ethereum_variables";
// @import "ether1_variables";
// @import "expanse_variables";
// @import "gochain_variables";
// @import "goerli_variables";
// @import "kovan_variables";
// @import "lukso_variables";
// @import "musicoin_variables";
// @import "pirl_variables";
// @import "poa_variables";
// @import "posdao_variables";
// @import "rinkeby_variables";
// @import "ropsten_variables";
// @import "social_variables";
// @import "sokol_variables";
// @import "tobalaba_variables";
// @import "tomochain_variables";
// @import "rsk_variables";
```
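The comment/uncomment step can be scripted. This sketch runs against a scratch copy of the import list so it is safe to execute anywhere — the real file is `apps/block_scout_web/assets/css/theme/_variables.scss`, and GNU `sed -i` syntax is assumed:

```shell
# Create a scratch copy of the two relevant imports.
cat > _variables_demo.scss <<'EOF'
@import "neutral_variables";
// @import "dai_variables";
EOF

# Comment out the neutral preset and uncomment the dai preset.
sed -i -e 's|^@import "neutral_variables";|// @import "neutral_variables";|' \
       -e 's|^// @import "dai_variables";|@import "dai_variables";|' _variables_demo.scss

cat _variables_demo.scss
```

On macOS, `sed -i` requires an explicit backup suffix (e.g. `sed -i ''`).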
<!--circleci.md -->

## CircleCI Updates

To monitor build status, configure your local [CCMenu](http://ccmenu.org/) with the following URL: [`https://circleci.com/gh/poanetwork/blockscout.cc.xml?circle-token=f8823a3d0090407c11f87028c73015a331dbf604`](https://circleci.com/gh/poanetwork/blockscout.cc.xml?circle-token=f8823a3d0090407c11f87028c73015a331dbf604)
<!--dev-env.md -->

# Configuration Options

- [Chain Configuration](chain-configs.md)
- [Automating Restarts](restarts.md)
- [Front End](front-end.md)
- [CircleCI Configs](circleci.md)
- [Testing](testing.md)
- [Internationalization](internationalization.md)
- [Metrics](metrics.md)
- [Tracing](tracing.md)
- [Memory Usage](memory-usage.md)
- [API Docs](api.md)
# BlockScout Env Variables

Below is a table outlining the environment variables utilized by BlockScout.

**Notes:**

- This table is horizontally scrollable; version information is located in the last column.
- Settings related to the `ETHEREUM_JSONRPC_VARIANT` variable, and client-related settings for running a full archive node with Geth or Parity, are located in [this forum post](https://forum.poa.network/t/faq-what-settings-are-required-on-a-parity-or-geth-client/1805).
- Additional information related to certain variables is available on the [ansible deployment](ansible-deployment.md) page.
- To set variables using the CLI, use the `export` command. For example:

```bash
$ export ETHEREUM_JSONRPC_VARIANT=parity
$ export COIN=POA
$ export NETWORK=POA
```

| Variable | Required | Description | Default | Version | Need recompile | Deprecated in Version |
| --- | --- | --- | --- | --- | --- | --- |
| `NETWORK` | :white_check_mark: | Environment variable for the main EVM network, such as Ethereum Network or POA Network | POA Network | all | | |
| `SUBNETWORK` | :white_check_mark: | Environment variable for the subnetwork, such as Core or Sokol Network | Sokol Testnet | all | | |
| `NETWORK_ICON` | :white_check_mark: | Environment variable for the main network icon or testnet icon. Two options are `_test_network_icon.html` and `_network_icon.html` | `_test_network_icon.html` | all | | |
| `LOGO` | :white_check_mark: | Environment variable for the logo image location. The logo file names for different chains can be found [here](https://github.com/poanetwork/blockscout/tree/master/apps/block_scout_web/assets/static/images) | /images/blockscout_logo.svg | all | | |
| `ETHEREUM_JSONRPC_VARIANT` | :white_check_mark: | Tells the application which RPC client the node is using (i.e. Geth, Parity, or Ganache) | parity | all | | |
| `ETHEREUM_JSONRPC_HTTP_URL` | :white_check_mark: | The RPC endpoint used to fetch blocks, transactions, receipts, and tokens | localhost:8545 | all | | |
| `ETHEREUM_JSONRPC_TRACE_URL` | | The RPC endpoint specifically for the Geth/Parity client, used by `trace_block` and `trace_replayTransaction`. This can be used to designate a tracing node | localhost:8545 | all | | |
| `ETHEREUM_JSONRPC_WS_URL` | :white_check_mark: | The WebSockets RPC endpoint used to subscribe to the `newHeads` subscription, alerting the indexer to fetch new blocks | ws://localhost:8546 | all | | |
| `NETWORK_PATH` | | Used to set a network path other than what is displayed in the root directory. An example would be to add `/eth/mainnet/` to the root directory | (empty) | all | | |
| `SECRET_KEY_BASE` | :white_check_mark: | Use `mix phx.gen.secret` to generate a new Secret Key Base string to protect production assets | (empty) | all | | |
| `CHECK_ORIGIN` | | Used to check the origin of requests when the origin header is present. Defaults to `false`; when `true`, requests are checked against the host value | false | all | | |
| `PORT` | :white_check_mark: | The port the application runs on | 4000 | all | | |
| `COIN` | :white_check_mark: | The coin checked via the CoinGecko API to obtain USD prices on graphs and other areas of the UI | POA | all | | |
| `METADATA_CONTRACT` | | Used specifically by POA Network to obtain validator information to display in the UI | (empty) | all | | |
| `VALIDATORS_CONTRACT` | | Used specifically by POA Network to obtain the Emission Fund contract | (empty) | all | | |
| `SUPPLY_MODULE` | | Used by the xDai Chain to tell the application how to calculate the total supply of the chain | false | all | | |
| `SOURCE_MODULE` | | Used to calculate the exchange rate; specific to the xDai Chain | false | all | | |
| `DATABASE_URL` | | Production environment variable to define the database endpoint | (empty) | all | | |
| `POOL_SIZE` | | Production environment variable to define the number of database connections allowed | 20 | all | | |
| `ECTO_USE_SSL` | | Production environment variable to use SSL on Ecto queries | true | all | | |
| `DATADOG_HOST` | | Host configuration setting for [Datadog integration](https://docs.datadoghq.com/integrations/) | (empty) | all | | |
| `DATADOG_PORT` | | Port configuration setting for [Datadog integration](https://docs.datadoghq.com/integrations/) | (empty) | all | | |
| `SPANDEX_BATCH_SIZE` | | [Spandex](https://github.com/spandex-project/spandex) and Datadog configuration setting | (empty) | all | | |
| `SPANDEX_SYNC_THRESHOLD` | | [Spandex](https://github.com/spandex-project/spandex) and Datadog configuration setting | (empty) | all | | |
| `HEART_BEAT_TIMEOUT` | | Production environment variable to restart the application in the event of a crash | 30 | all | | |
| `HEART_COMMAND` | | Production environment variable to restart the application in the event of a crash | systemctl restart explorer.service | all | | |
| `BLOCKSCOUT_VERSION` | | Added to the footer to signify the current BlockScout version | (empty) | v1.3.4+ | | |
| `RELEASE_LINK` | | The link to BlockScout release notes in the footer | `https://github.com/poanetwork/blockscout/releases/tag/${BLOCKSCOUT_VERSION}` | v1.3.5+ | | |
| `ELIXIR_VERSION` | | Elixir version to install on the node before BlockScout deploy | (empty) | all | | |
| `BLOCK_TRANSFORMER` | | Transformer for blocks: `base` or `clique` | base | v1.3.4+ | | |
| `GRAPHIQL_TRANSACTION` | | Default transaction in query to GraphiQL | (empty) | v1.2.0+ | :white_check_mark: | |
| `FIRST_BLOCK` | | The block number at which indexing begins | 0 | v1.3.8+ | | |
| `LAST_BLOCK` | | The block number at which indexing stops | (empty) | v2.0.3+ | | |
| `TXS_COUNT_CACHE_PERIOD` | | Interval in seconds to restart the task that calculates the total txs count | 60 * 60 * 2 | v1.3.9+ | | |
| `ADDRESS_WITH_BALANCES`<br />`_UPDATE_INTERVAL` | | Interval in seconds to restart the task that calculates addresses with balances | 30 * 60 | v1.3.9+ | | |
| `LINK_TO_OTHER_EXPLORERS` | | true/false. If `true`, links to other explorers are added in the footer | (empty) | v1.3.0+ | | |
| `COINMARKETCAP_PAGES` | | The number of CoinMarketCap pages to search through to find a token's price | 10 | v1.3.10+ | | v2.0.4 |
| `SUPPORTED_CHAINS` | | Array of supported chains displayed in the footer and in the chains dropdown. Introduced in PR [#1900](https://github.com/poanetwork/blockscout/pull/1900); formatted as an array of JSON objects | (empty) | v2.0.0+ | | |
| `BLOCK_COUNT_CACHE_PERIOD` | | Time-to-live of the block count cache, in seconds. Introduced in [#1876](https://github.com/poanetwork/blockscout/pull/1876) | 600 | v2.0.0+ | | |
| `ALLOWED_EVM_VERSIONS` | | The comma-separated list of allowed EVM versions for contract verification. Introduced in [#1964](https://github.com/poanetwork/blockscout/pull/1964) | "homestead, tangerineWhistle, spuriousDragon, byzantium, constantinople, petersburg" | v2.0.0+ | | |
| `UNCLES_IN_AVERAGE_BLOCK_TIME` | | Include or exclude nonconsensus blocks in the average block time calculation. Excluded if `false` | false | v2.0.1+ | | |
| `AVERAGE_BLOCK_CACHE_PERIOD` | | Update interval of the average block cache, in seconds | 30 minutes | v2.0.2+ | | |
| `MARKET_HISTORY_CACHE_PERIOD` | | Update interval of the market history cache, in seconds | 6 hours | v2.0.2+ | | |
| `DISABLE_WEBAPP` | | If `true`, endpoints to the webapp are hidden (compile-time) | `false` | v2.0.3+ | :white_check_mark: | |
| `DISABLE_READ_API` | | If `true`, read-only API endpoints are hidden (compile-time) | `false` | v2.0.3+ | :white_check_mark: | |
| `DISABLE_WRITE_API` | | If `true`, write API endpoints are hidden (compile-time) | `false` | v2.0.3+ | :white_check_mark: | |
| `DISABLE_INDEXER` | | If `true`, the indexer application doesn't run | `false` | v2.0.3+ | :white_check_mark: | |
| `WEBAPP_URL` | | Link to the web application instance, e.g. `http://host/path` | (empty) | v2.0.3+ | | |
| `API_URL` | | Link to the API instance, e.g. `http://host/path` | (empty) | v2.0.3+ | | |
| `CHAIN_SPEC_PATH` | | Chain specification path (absolute file system path or URL) from which to import block emission reward ranges and genesis account balances | (empty) | v2.0.4+ | | |
| `COIN_GECKO_ID` | | CoinGecko coin id required for fetching an exchange rate | poa-network | v2.0.4+ | | master |
| `EMISSION_FORMAT` | | Should be set to `POA` if you have block emission identical to POA Network. Only used if `CHAIN_SPEC_PATH` is set | `STANDARD` | v2.0.4+ | | |
| `REWARDS_CONTRACT_ADDRESS` | | Emission rewards contract address. Only used if `EMISSION_FORMAT` is set to `POA` | `0xeca443e8e1ab29971a45a9c57a6a9875701698a5` | v2.0.4+ | | |
| `INTERNAL_TRANSACTIONS_FOR_TOKEN_TRANSFERS` | | Not applicable for Parity, since all internal transactions are fetched for it. If `true`, fetches internal transactions for simple token-transfer transactions. Disabled by default to increase internal transaction indexing speed | `false` | master | | |
| `BLOCKSCOUT_PROTOCOL` | | URL scheme for BlockScout | `https` in prod env, `http` in dev env | master | | |
| `MAX_SKIPPING_DISTANCE` | | The maximum distance the indexer is allowed to wait when notified of a block number that does not follow the last known one | 4 | master | | |
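For example, `SUPPORTED_CHAINS` takes an array of JSON objects. The keys shown here (`title`, `url`) are illustrative — check PR #1900 for the exact schema used by your version:

```shell
# Illustrative value; adjust chain titles and URLs for your deployment.
export SUPPORTED_CHAINS='[{"title":"POA Core","url":"https://blockscout.com/poa/core"},{"title":"POA Sokol","url":"https://blockscout.com/poa/sokol"}]'
echo "$SUPPORTED_CHAINS"
```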
!> **Important** notice with `inline code` and additional placeholder text used to force the content to wrap and span multiple lines.

> [!NOTE]
> An alert of type 'note' using global style 'callout'.

> [!NOTE|style:flat]
> An alert of type 'note' using alert-specific style 'flat', which overrides global style 'callout'.
<!-- faq.md -->

FAQs are located in the [BlockScout forum](https://forum.poa.network/c/blockscout/wiki).
<!--front-end.md -->

## Front-end

### JavaScript

All JavaScript files are located in [apps/block_scout_web/assets/js](https://github.com/poanetwork/blockscout/tree/master/apps/block_scout_web/assets/js). The main file is [app.js](https://github.com/poanetwork/blockscout/blob/master/apps/block_scout_web/assets/js/app.js), which imports all JavaScript used in the application. If you want to create a new JS file, consider creating it in [/js/pages](https://github.com/poanetwork/blockscout/tree/master/apps/block_scout_web/assets/js/pages) or [/js/lib](https://github.com/poanetwork/blockscout/tree/master/apps/block_scout_web/assets/js/lib), as follows:

#### js/lib
This folder contains scripts usable on any page or as helpers for components.

#### js/pages
This folder contains page-specific scripts.

#### Redux
This project uses Redux to control state on some pages. Pages with real-time events use Phoenix channels, e.g. the Address page; their state changes often, depending on which events they are listening to. Redux is also used to load some content asynchronously; see [async_listing_load.js](https://github.com/poanetwork/blockscout/blob/master/apps/block_scout_web/assets/js/lib/async_listing_load.js).

To understand how to build new pages that require Redux, see the [redux_helpers.js](https://github.com/poanetwork/blockscout/blob/master/apps/block_scout_web/assets/js/lib/redux_helpers.js) file.
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>BlockScout Docs</title>
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
  <meta name="description" content="Description">
  <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
  <link rel="stylesheet" href="//unpkg.com/docsify/themes/buble.css">
  <style>
    nav.app-nav li ul {
      min-width: 100px;
    }
  </style>
</head>
<body>
  <div id="app"></div>
  <script>
    window.$docsify = {
      loadSidebar: true,
      logo: 'https://blockscout.com/eth/mainnet/android-chrome-192x192.png',
      name: 'BlockScout',
      repo: 'https://github.com/poanetwork/blockscout',
      auto2top: true,
      maxLevel: 3,
      subMaxLevel: 1,
      search: 'auto'
    }
  </script>
  <script src="//unpkg.com/docsify/lib/docsify.min.js"></script>
  <script src="//unpkg.com/docsify/lib/plugins/search.min.js"></script>
  <script src="https://unpkg.com/docsify-plugin-flexible-alerts"></script>
  <script src="//unpkg.com/prismjs/components/prism-bash.min.js"></script>
</body>
</html>
<!--internationalization.md -->

## Internationalization

The app is currently internationalized but only localized to U.S. English. To translate new strings:

1. Set up the translation file:
   `cd apps/block_scout_web; mix gettext.extract --merge; cd -`
2. Edit the new strings in `apps/block_scout_web/priv/gettext/en/LC_MESSAGES/default.po`.
<!-- manual-deployment.md -->

# Manual Deployment

Below is the procedure for manual deployment of BlockScout. For automated deployment, see [ansible deployment](ansible-deployment.md).

BlockScout currently requires a full archive node in order to import every state change for every address on the target network. For client-specific settings related to a node running Parity or Geth, please see [this forum post](https://forum.poa.network/t/faq-what-settings-are-required-on-a-parity-or-geth-client/1805).

## Deployment Steps

1. `git clone https://github.com/poanetwork/blockscout`

2. `cd blockscout`

3. Set up default configurations:

   `cp apps/explorer/config/dev.secret.exs.example apps/explorer/config/dev.secret.exs`

   `cp apps/block_scout_web/config/dev.secret.exs.example apps/block_scout_web/config/dev.secret.exs`

4. Update `apps/explorer/config/dev.secret.exs`

   **Linux:** Update the database username and password configuration.

   **Mac:** Remove the `username` and `password` fields.

   **Optional:** Set up a default configuration for testing: `cp apps/explorer/config/test.secret.exs.example apps/explorer/config/test.secret.exs`. Example usage: changing the default Postgres port from localhost:15432 if [Boxen](https://github.com/boxen/boxen) is installed.

5. If you have deployed previously, delete the `apps/block_scout_web/priv/static` folder. This removes static assets from the previous build.

6. Install dependencies: `mix do deps.get, local.rebar --force, deps.compile, compile`

7. If not already running, start Postgres: `pg_ctl -D /usr/local/var/postgres start`

   > [!TIP]
   > To check [postgres status](https://www.postgresql.org/docs/9.6/app-pg-isready.html): `pg_isready`

8. Create and migrate the database: `mix do ecto.create, ecto.migrate`

   > [!NOTE]
   > If you have run previously, drop the previous database: `mix do ecto.drop, ecto.create, ecto.migrate`

9. Install Node.js dependencies:

   - `cd apps/block_scout_web/assets; npm install && node_modules/webpack/bin/webpack.js --mode production; cd -`
   - `cd apps/explorer && npm install; cd -`

10. Enable HTTPS in development. The Phoenix server only runs with HTTPS.

    * `cd apps/block_scout_web`
    * `mix phx.gen.cert blockscout blockscout.local; cd -`
    * Add blockscout and blockscout.local to your `/etc/hosts`:

    ```bash
    127.0.0.1       localhost blockscout blockscout.local
    255.255.255.255 broadcasthost
    ::1             localhost blockscout blockscout.local
    ```

    > [!NOTE]
    > If using Chrome, enable `chrome://flags/#allow-insecure-localhost`

11. Set your [environment variables](env-variables.md) as needed.

    CLI example:
    ```bash
    export COIN=DAI
    export NETWORK_ICON=_network_icon.html
    export ...
    ```

    > [!NOTE]
    > The `ETHEREUM_JSONRPC_VARIANT` will vary depending on your client (parity, geth, etc.). [See this forum post](https://forum.poa.network/t/faq-what-settings-are-required-on-a-parity-or-geth-client/1805) for more information on client settings.

12. Return to the root directory and start the Phoenix server: `mix phx.server`

## Check your instance

13. Check that there are no visual artifacts, all assets exist, and there are no database errors.

14. If there are no errors, stop BlockScout (`ctrl+c`).

15. Build static assets for deployment: `mix phx.digest`

16. Delete build artifacts:

    a. Script: `./rel/commands/clear_build.sh`

    b. Manually:
    - delete the `_build` and `deps` directories
    - delete the node modules located at `apps/block_scout_web/assets/node_modules` and `apps/explorer/node_modules`
    - delete the `logs/dev` directory
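The manual cleanup in step 16b can be sketched as a short script (paths relative to the repo root; `rm -rf` ignores directories that don't exist, so it is safe to re-run):

```shell
# Remove build artifacts left over from a previous deployment.
rm -rf _build deps
rm -rf apps/block_scout_web/assets/node_modules
rm -rf apps/explorer/node_modules
rm -rf logs/dev
echo "build artifacts removed"
```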
<!--memory-usage.md -->

## Memory Usage

The work queues for building the index of all blocks, balances (coin and token), and internal transactions can grow quite large. By default, the soft limit is 1 GiB, which can be changed in `apps/indexer/config/config.exs` (`<<<` is the Elixir `Bitwise` left-shift operator, so `1 <<< 30` is 2^30 bytes, i.e. 1 GiB):

```elixir
config :indexer, memory_limit: 1 <<< 30
```

Memory usage is checked once per minute. If the soft limit is reached, the shrinkable work queues will shed half their load. The shed load will be restored from the database, the same as when a restart of the server occurs, so rebuilding the work queue will be slower, but use less memory.

If all queues are at their minimum size, then no more memory can be reclaimed and an error will be logged.
<!--metrics.md -->

## Metrics

### Wobserver

[Wobserver](https://github.com/shinyscorpion/wobserver) is configured to display data from the `/metrics` endpoint in a web interface. To view, go to `/wobserver` for the chain you would like to view, for example `https://blockscout.com/eth/mainnet/wobserver`.

### Prometheus

BlockScout is set up to export [Prometheus](https://prometheus.io/) metrics at `/metrics`.

1. Install Prometheus: `brew install prometheus`
2. Start the web server: `iex -S mix phx.server`
3. Start Prometheus: `prometheus --config.file=prometheus.yml`
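A minimal `prometheus.yml` for step 3 can be written from a shell heredoc. The BlockScout port (4000) and the 10s scrape interval are assumptions matching defaults used elsewhere in these docs:

```shell
# Write a minimal Prometheus config scraping BlockScout's /metrics endpoint.
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: blockscout
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:4000']
EOF
cat prometheus.yml
# Then: prometheus --config.file=prometheus.yml
```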
### Grafana

The Grafana dashboard may also be used for metrics display.

1. Install Grafana: `brew install grafana`
2. Install the Pie Chart panel plugin: `grafana-cli plugins install grafana-piechart-panel`
3. Start Grafana: `brew services start grafana`
4. Add Prometheus as a data source:
   1. `open http://localhost:3000/datasources`
   2. Click "+ Add data source"
   3. Put "Prometheus" for "Name"
   4. Change "Type" to "Prometheus"
   5. Set "URL" to "http://localhost:9090"
   6. Set "Scrape Interval" to "10s"
5. Add the dashboards from https://github.com/deadtrickster/beam-dashboards. For each `*.json` file in the repo:
   1. `open http://localhost:3000/dashboard/import`
   2. Copy the contents of the JSON file into the "Or paste JSON" entry
   3. Click "Load"
6. View the dashboards. (You will need to click around and use BlockScout for the web-related metrics to show up.)
<!-- requirements.md -->

## Requirements

| Dependency | Mac | Linux |
|-------------|-----|-------|
| [Erlang/OTP 21.0.4](https://github.com/erlang/otp) | `brew install erlang` | [Erlang Install Example](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L134) |
| [Elixir 1.9.0](https://elixir-lang.org/) | :point_up: | [Elixir Install Example](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L138) |
| [Postgres 10.3](https://www.postgresql.org/) | `brew install postgresql` | [Postgres Install Example](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L187) |
| [Node.js 10.x.x](https://nodejs.org/en/) | `brew install node` | [Node.js Install Example](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L66) |
| [Automake](https://www.gnu.org/software/automake/) | `brew install automake` | [Automake Install Example](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L72) |
| [Libtool](https://www.gnu.org/software/libtool/) | `brew install libtool` | [Libtool Install Example](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L62) |
| [Inotify-tools](https://github.com/rvoicilas/inotify-tools/wiki) | Not Required | Ubuntu - `apt-get install inotify-tools` |
| [GCC Compiler](https://gcc.gnu.org/) | `brew install gcc` | [GCC Compiler Example](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L70) |
| [GMP](https://gmplib.org/) | `brew install gmp` | [Install GMP Devel](https://github.com/poanetwork/blockscout-terraform/blob/33f68e816e36dc2fb055911fa0372531f0e956e7/modules/stack/libexec/init.sh#L74) |
<!--restarts.md -->

## Automating Restarts

By default, BlockScout does not restart if it crashes. To enable automated restarts, set the [environment variable](env-variables.md) `HEART_COMMAND` to whatever command you run to start BlockScout. Configure the heart beat timeout (`HEART_BEAT_TIMEOUT`) to change how long it waits before considering the application unresponsive.

At that point, it will kill the current BlockScout instance and execute the `HEART_COMMAND`. By default a crash dump is not written unless you set `ERL_CRASH_DUMP_SECONDS` to a positive or negative integer. See the [heart](http://erlang.org/doc/man/heart.html) documentation for more information.
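For example, on a systemd-managed deployment (the service name `explorer.service` and the 30-second timeout are the defaults from the env-variables table; adjust for your setup):

```shell
# Restart BlockScout via systemd when heart detects an unresponsive node.
export HEART_COMMAND="systemctl restart explorer.service"
export HEART_BEAT_TIMEOUT=30
echo "heart will run: $HEART_COMMAND (timeout: ${HEART_BEAT_TIMEOUT}s)"
```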
<!--sharelocks.md -->

## ShareLocks

ShareLock is the row-level locking mechanism used internally by PostgreSQL.

### Deadlocks and prevention

When several DB transactions are acting on multiple rows of the same table, it's possible to run into a deadlock, and thus an error. This can be prevented by enforcing the same consistent order of lock acquisition in *all* the transactions performing `INSERT`, `UPDATE`, or `DELETE` on a given table.

On top of this, when multiple DB transactions act on multiple tables, a deadlock can occur even if each follows the per-table order described above, if they acquire locks on those tables in a different order. This can also be prevented by using a consistent order of lock acquisition *between* different tables.

### Imposing the lock acquisition order on a table with Ecto

When `INSERT`ing a list of rows, Postgres will respect the order in which they appear in the query, so the reordering can happen beforehand.

For example, this will work:
```elixir
entries = [...]

ordered_entries = Enum.sort_by(entries, & &1.id)

Repo.insert_all(__MODULE__, ordered_entries)
```

Performing `UPDATE`s is trickier because there is no `ORDER BY` clause. The solution is to `JOIN` on a subquery that `SELECT`s with the option `FOR UPDATE`.

Using Ecto this can be done, for example, like this:
```elixir
query =
  from(
    entry in Entry,
    where: not is_nil(entry.value),
    order_by: entry.id,
    lock: "FOR UPDATE"
  )

Repo.update_all(
  from(e in Entry, join: s in subquery(query), on: e.id == s.id),
  [set: [value: nil]],
  timeout: timeout
)
```

`DELETE` has the same quirks as `UPDATE` and is solved in the same way.

For example:
```elixir
query =
  from(
    entry in Entry,
    where: is_nil(entry.value),
    order_by: entry.id,
    lock: "FOR UPDATE"
  )

Repo.delete_all(from(e in Entry, join: s in subquery(query), on: e.id == s.id))
```

### Imposing the lock acquisition order between tables with Ecto

When using an `Ecto.Multi` to perform `INSERT`, `UPDATE`, or `DELETE` on multiple tables, the order to keep is between the different operations. For example, supposing `EntryA` was established to be modified before `EntryB`, this is not correct:
```elixir
Multi.new()
|> Multi.run(:update_b, fn repo, _ ->
  # operations with ordered locks on `EntryB`
end)
|> Multi.run(:update_a, fn repo, _ ->
  # operations with ordered locks on `EntryA`
end)
|> Repo.transaction()
```

When possible, the simple solution is to move `:update_a` to be before `:update_b`. When not possible, for instance if `:update_a` depends on the result of `:update_b`, this can be solved by acquiring the locks in a separate operation.

For example:
```elixir
Multi.new()
|> Multi.run(:acquire_a, fn repo, _ ->
  # acquire locks in order on `EntryA`
end)
|> Multi.run(:update_b, fn repo, _ ->
  # operations with ordered locks on `EntryB`
end)
|> Multi.run(:update_a, fn repo, %{acquire_a: values} ->
  # operations (no need to enforce order again) on `EntryA`
end)
|> Repo.transaction()
```

Note also that, for the same reasons, multiple operations on the same table in the same transaction are not safe to perform if they each acquire locks in order, because locks are not released until the transaction is committed.

### Order used for Explorer's tables

This is a complete list of the ordering currently in use on each table. It also specifies the order between tables in the same transaction: locks for a table on top need to be acquired before those for a table on the bottom.

Note that this should always be enforced, because as long as there is one DB transaction performing in a different order there is the possibility of a deadlock.

| schema module | table name | ordered by |
|---------------|------------|------------|
| Explorer.Chain.Address | addresses | asc: :hash |
| Explorer.Chain.Address.Name | address_names | [asc: :address_hash, asc: :name] |
| Explorer.Chain.Address.CoinBalance | address_coin_balances | [asc: :address_hash, asc: :block_number] |
| Explorer.Chain.Block | blocks | asc: :hash |
| Explorer.Chain.Block.SecondDegreeRelation | block_second_degree_relations | [asc: :nephew_hash, asc: :uncle_hash] |
| Explorer.Chain.Block.Reward | block_rewards | [asc: :address_hash, asc: :address_type, asc: :block_hash] |
| Explorer.Chain.Block.EmissionReward | emission_rewards | asc: :block_range |
| Explorer.Chain.Transaction | transactions | asc: :hash |
| Explorer.Chain.Transaction.Fork | transaction_forks | [asc: :uncle_hash, asc: :index] |
| Explorer.Chain.Log | logs | [asc: :transaction_hash, asc: :index] |
| Explorer.Chain.InternalTransaction | internal_transactions | [asc: :transaction_hash, asc: :index] |
| Explorer.Chain.Token | tokens | asc: :contract_address_hash |
| Explorer.Chain.TokenTransfer | token_transfers | [asc: :transaction_hash, asc: :log_index] |
| Explorer.Chain.Address.TokenBalance | address_token_balances | [asc: :address_hash, asc: :token_contract_address_hash, asc: :block_number] |
| Explorer.Chain.Address.CurrentTokenBalance | address_current_token_balances | [asc: :address_hash, asc: :token_contract_address_hash] |
| Explorer.Chain.StakingPool | staking_pools | :staking_address_hash |
| Explorer.Chain.StakingPoolsDelegator | staking_pools_delegators | [asc: :delegator_address_hash, asc: :pool_address_hash] |
| Explorer.Chain.ContractMethod | contract_methods | [asc: :identifier, asc: :abi] |
| Explorer.Market.MarketHistory | market_history | asc: :date |
<!-- smart-contract.md -->

# Verifying a smart contract in BlockScout

Once verified, a smart contract or token contract's source code becomes publicly available and verifiable. This creates transparency and trust. Plus, it's easy to do!

1. Go to [blockscout.com](https://blockscout.com/), verify you are on the chain where the contract was deployed, and type the contract's address into the search bar. Your contract details should come up.

2. Select the `Code` tab to view the bytecode.

![BlockScout_1|690x391](_media/sc1.jpeg)

3. In the code tab view, click the `Verify & Publish` button.

![Blockscout_2|690x195](_media/sc2.jpeg)

4. On the following screen, enter your contract details:
   1. **Contract Address:** The `0x` address supplied on contract creation.
   2. **Contract Name:** Name of the class whose constructor was called in the .sol file. For example, in `contract MyContract {..` **MyContract** is the contract name.
   3. **Compiler:** Derived from the first line in the contract, `pragma solidity X.X.X`. Use the corresponding compiler version rather than the nightly build.
   4. **EVM Version:** [See EVM version](#evm-version)
   5. **Optimization:** If you enabled optimization during compilation, check yes.
   6. **Enter the Solidity Contract Code:** You may need to flatten your Solidity code if it utilizes a library or inherits dependencies from another contract. We recommend the [POA solidity flattener](https://github.com/poanetwork/solidity-flattener) or the [truffle flattener](https://www.npmjs.com/package/truffle-flattener).
   7. **Constructor Arguments:** [See this post for more info](https://forum.poa.network/t/smart-contract-verification-abi-encoded-constructor-arguments/2331)
   8. **Libraries:** Enter the name and 0x address for any required libraries called in the .sol file.
   9. Click the `Verify and Publish` button.

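For the flattening step, a typical invocation of the truffle flattener might look like the following (the contract path and output filename are examples, not part of BlockScout — adjust them to your project):

```shell
# Flatten MyContract.sol and everything it imports into a single file,
# then paste the result into the contract code field.
npx truffle-flattener contracts/MyContract.sol > FlattenedMyContract.sol
```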
5. If all goes well, you will see a green checkmark next to the code, and an additional tab where you can read the contract. In addition, the contract name will appear in BlockScout with any transactions related to your contract.

## Troubleshooting

If you receive the dreaded `There was an error compiling your contract` message, it means the bytecode doesn't match the supplied source code. Unfortunately, there are many reasons this may be the case. Here are a few things to try:

1. Double check that the compiler version is correct.

2. Check that an extra space has not been added to the end of the contract. When pasting in, an extra space may be added. Delete this and attempt to recompile.

3. Copy, paste, and verify your source code in Remix. You may find some exceptions here.

# EVM Version

During the verification process you are asked to provide the EVM version the contract uses. If the bytecode does not match that version, we try to verify using the latest EVM version.

For more information, see the [Solidity docs on specifying the EVM version when compiling a contract](https://solidity.readthedocs.io/en/v0.5.3/using-the-compiler.html). Note that backward compatibility is not guaranteed between versions.

| # | Name | Date | Mainnet Block # | Relevant changes / opcode specs | EIP details |
| --- | --- | --- | --- | --- | --- |
| 1 | Homestead | 2016-03-14 | 1,150,000 | Oldest version | http://eips.ethereum.org/EIPS/eip-606 |
| 2 | Tangerine Whistle | 2016-10-18 | 2,463,000 | Gas cost to access other accounts increased, impacts gas estimation and optimization. <br /><br />All gas sent by default for external calls; previously a certain amount had to be retained. | http://eips.ethereum.org/EIPS/eip-608 |
| 3 | Spurious Dragon | 2016-11-22 | 2,675,000 | Gas cost for the `exp` opcode increased, impacts gas estimation and optimization. | http://eips.ethereum.org/EIPS/eip-607 |
| 4 | Byzantium | 2017-10-16 | 4,370,000 | Opcodes `returndatacopy`, `returndatasize` and `staticcall` available in assembly.<br /><br />`staticcall` opcode used when calling non-library view or pure functions, which prevents the functions from modifying state at the EVM level; this even applies to invalid type conversions.<br /><br />Ability to access dynamic data returned from function calls.<br /><br />`revert` opcode introduced; `revert()` will not waste gas. | http://eips.ethereum.org/EIPS/eip-609 |
| 5 | Constantinople | 2019-02-28 | 7,280,000 | Opcodes `create2`, `extcodehash`, `shl`, `shr` and `sar` are available in assembly.<br /><br />Bitwise shifting operators use shifting opcodes (`shl`, `shr`, `sar`), requiring less gas. | http://eips.ethereum.org/EIPS/eip-1013 |
| 6 | Petersburg | 2019-02-28 | 7,280,000 | No changes related to contract compiling (removes EIP 1283) | http://eips.ethereum.org/EIPS/eip-1716 |

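If you compile locally with `solc`, the target EVM version from the table above can be pinned explicitly via the `--evm-version` flag (available since solc 0.4.21); the contract path here is an example:

```shell
# Compile for a specific EVM version so the deployed bytecode
# matches what the verifier expects.
solc --evm-version byzantium --bin contracts/MyContract.sol
```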
# ABI-Encoded Constructor Arguments

If constructor arguments are required by the contract, add them to the Constructor Arguments field in [ABI hex encoded form](https://solidity.readthedocs.io/en/develop/abi-spec.html). Constructor arguments are appended to the END of the contract source bytecode when compiled by Solidity.

An easy way to find these arguments is to compare the ‘raw input’ code in the transaction details to the contract creation code in the code section of the contract.

1. Access the contract creation TX in BlockScout. This is the transaction that created the contract, not the address of the actual contract. You should see a link to it in your wallet history.

![nifty_wallet_history|294x500,75%](_media/abi1.jpeg)

2. Go to the transaction details page for the contract creation TX. Within the details, you will see the Raw input. Copy this input in Hex format and paste it into a txt file or spreadsheet where you will compare it against a second ABI code.

![copy_raw_input|548x500](_media/abi2.jpeg)

3. Go to the contract creation address. You can access it through the transaction details at the top:

![contract_address|548x500](_media/abi3.jpeg)

4. In Contract Address Details, click on the Code tab.

![code_tab|690x417](_media/abi4.jpeg)

5. Copy the contract creation code.

![copy_contract_creation_code|690x407](_media/abi5.jpeg)

6. Paste it into a document next to the original raw input ABI. This will allow you to compare the two. Anything that appears at the **END** of the Raw input code that does not exist at the end of the Contract Code is the ABI code for the constructor arguments.

![contract_compare|690x177](_media/abi6.jpeg)

7. The code may differ in other ways, but the constructor arguments will appear at the end. Copy this extra code and paste it into the constructor arguments field along with the other information needed to verify your contract.

![smart_contract_paste|620x500](_media/abi7.jpeg)
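The suffix comparison in steps 6–7 can be sketched with plain shell string handling. The hex strings below are short hypothetical stand-ins; real ones are usually thousands of characters long:

```shell
# Raw input of the contract creation transaction (creation code + args).
raw_input="0x6080604052348015600f57600080fd5b5060aa0000000000000000000000000000000000000000000000000000000000000005"
# Contract creation code shown on the contract's Code tab.
creation_code="0x6080604052348015600f57600080fd5b5060aa"

# Whatever trails the creation code is the ABI-encoded constructor arguments.
constructor_args="${raw_input#"$creation_code"}"
echo "$constructor_args"
```

Here a single `uint256` argument with value `5` is ABI-encoded as one 32-byte word; it is this trailing segment (without the `0x` prefix) that goes into the Constructor Arguments field.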
<!-- terminology.md -->

_Coming Soon_
<!--testing.md -->

## Testing

### Requirements

* PhantomJS (for Wallaby)

### Running tests

1. Build assets.
   `cd apps/block_scout_web/assets && npm run build; cd -`

2. Format Elixir code.
   `mix format`

3. Run the test suite with coverage for the whole umbrella project. This step can be run with the different configurations outlined below.
   `mix coveralls.html --umbrella`

4. Lint Elixir code.
   `mix credo --strict`

5. Run the dialyzer.
   `mix dialyzer --halt-exit-status`

6. Check the Elixir code for vulnerabilities.
   `cd apps/explorer && mix sobelow --config; cd -`
   `cd apps/block_scout_web && mix sobelow --config; cd -`

7. Lint JavaScript code.
   `cd apps/block_scout_web/assets && npm run eslint; cd -`

8. Test JavaScript code.
   `cd apps/block_scout_web/assets && npm run test; cd -`

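The steps above can be chained into a single script run from the repository root; this is just the same commands with `set -e` so the run stops at the first failing check:

```shell
#!/bin/bash
set -e  # abort on the first failing check

(cd apps/block_scout_web/assets && npm run build)
mix format
mix coveralls.html --umbrella
mix credo --strict
mix dialyzer --halt-exit-status
(cd apps/explorer && mix sobelow --config)
(cd apps/block_scout_web && mix sobelow --config)
(cd apps/block_scout_web/assets && npm run eslint)
(cd apps/block_scout_web/assets && npm run test)
```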
#### Parity

##### Mox

**This is the default setup. `mix coveralls.html --umbrella` will work on its own, but to be explicit, use the following setup**:

```shell
export ETHEREUM_JSONRPC_CASE=EthereumJSONRPC.Case.Parity.Mox
export ETHEREUM_JSONRPC_WEB_SOCKET_CASE=EthereumJSONRPC.WebSocket.Case.Mox
mix coveralls.html --umbrella --exclude no_parity
```

##### HTTP / WebSocket

```shell
export ETHEREUM_JSONRPC_CASE=EthereumJSONRPC.Case.Parity.HTTPWebSocket
export ETHEREUM_JSONRPC_WEB_SOCKET_CASE=EthereumJSONRPC.WebSocket.Case.Parity
mix coveralls.html --umbrella --exclude no_parity
```

| Protocol  | URL                     |
|:----------|:------------------------|
| HTTP      | `http://localhost:8545` |
| WebSocket | `ws://localhost:8546`   |

#### Geth

##### Mox

```shell
export ETHEREUM_JSONRPC_CASE=EthereumJSONRPC.Case.Geth.Mox
export ETHEREUM_JSONRPC_WEB_SOCKET_CASE=EthereumJSONRPC.WebSocket.Case.Mox
mix coveralls.html --umbrella --exclude no_geth
```

##### HTTP / WebSocket

```shell
export ETHEREUM_JSONRPC_CASE=EthereumJSONRPC.Case.Geth.HTTPWebSocket
export ETHEREUM_JSONRPC_WEB_SOCKET_CASE=EthereumJSONRPC.WebSocket.Case.Geth
mix coveralls.html --umbrella --exclude no_geth
```

| Protocol  | URL                                               |
|:----------|:--------------------------------------------------|
| HTTP      | `https://mainnet.infura.io/8lTvJTKmHPCHazkneJsY`  |
| WebSocket | `wss://mainnet.infura.io/ws/8lTvJTKmHPCHazkneJsY` |
<!--tracing.md -->

## Tracing

BlockScout supports tracing via [Spandex](https://github.com/spandex-project/spandex). Each application has its own internally configured tracer.

To enable tracing, visit each application's `config/<env>.exs` and change `disabled?: true` to `disabled?: false`. Do this for each application you'd like included in your trace data.

Currently, only [Datadog](https://www.datadoghq.com/) is supported as a tracing backend, but more will be added soon.

### Datadog

If you would like to use Datadog, after enabling `Spandex`, set the `DATADOG_HOST` and `DATADOG_PORT` environment variables to the host/port that your Datadog agent is running on. For more information on Datadog and the Datadog agent, see the [documentation](https://docs.datadoghq.com/).

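For example, with the agent on the same host and its default APM port (8126 is the Datadog trace agent's default; adjust both values to your deployment):

```shell
export DATADOG_HOST=localhost
export DATADOG_PORT=8126
```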
### Other

If you want to use a different backend, remove the `SpandexDatadog.ApiServer` `Supervisor.child_spec` from `Explorer.Application` and follow any instructions provided in `Spandex` for setting up that backend.
<!-- umbrella.md -->

## Umbrella Project Organization

BlockScout is an Elixir [umbrella project](https://elixir-lang.org/getting-started/mix-otp/dependencies-and-umbrella-projects.html). Each directory under `apps/` is a separate [Mix](https://hexdocs.pm/mix/Mix.html) project and [OTP application](https://hexdocs.pm/elixir/Application.html), but the projects can use each other as dependencies in their `mix.exs`.

Each OTP application has a restricted domain.

| Directory | OTP Application | Namespace | Purpose |
|:------------------------|:--------------------|:------------------|:---------|
| `apps/ethereum_jsonrpc` | `:ethereum_jsonrpc` | `EthereumJSONRPC` | Ethereum JSONRPC client. It is allowed to know `Explorer`'s param format, but it cannot directly depend on `:explorer`. |
| `apps/explorer` | `:explorer` | `Explorer` | Storage for the indexed chain. Can read and write to the backing storage. MUST be able to boot in a read-only mode when run independently from `:indexer`, so it cannot depend on `:indexer`, as that would start `:indexer` indexing. |
| `apps/block_scout_web` | `:block_scout_web` | `BlockScoutWeb` | Phoenix interface to `:explorer`. The minimum interface to allow web access should go in `:block_scout_web`. Any business rules or interface not tied directly to `Phoenix` or `Plug` should go in `:explorer`. MUST be able to boot in a read-only mode when run independently from `:indexer`, so it cannot depend on `:indexer`, as that would start `:indexer` indexing. |
| `apps/indexer` | `:indexer` | `Indexer` | Uses `:ethereum_jsonrpc` to index the chain and batch-import data into `:explorer`. Any process, `Task`, or `GenServer` that automatically reads from the chain and writes to `:explorer` should be in `:indexer`. This restricts automatic writes to `:indexer`, and read-only mode can be achieved by not running `:indexer`. |
## Upgrading Guide

**Upgrade instructions are in progress. If you need assistance with an upgrade, please contact us through the [forum](https://forum.poa.network/c/blockscout) or [gitter](https://gitter.im/poanetwork/blockscout) channel.**