edit kubernetes tutorials (#932)

* edit kubernetes tutorials

Signed-off-by: Alexandra Tran <alexandra.tran@consensys.net>

* apply reviewer feedback

Signed-off-by: Alexandra Tran <alexandra.tran@consensys.net>
pull/936/head 22.1.0-RC3
Alexandra Tran 3 years ago committed by GitHub
parent 05ab40c3ae
commit e2ab9abf55
  1. docs/Tutorials/Kubernetes/Create-Cluster.md (57 changes)
  2. docs/Tutorials/Kubernetes/Deploy-Charts.md (74 changes)
  3. docs/Tutorials/Kubernetes/Maintenance.md (20 changes)
  4. docs/Tutorials/Kubernetes/Overview.md (4 changes)
  5. docs/Tutorials/Kubernetes/Production.md (75 changes)

@ -5,7 +5,8 @@ description: Create a cluster for deployment
# Create a cluster
Create a cluster before you deploy the network, there are options locally and in cloud.
You can create a [local](#local-clusters) or [cloud](#cloud-clusters) cluster to deploy a Besu network using
Kubernetes.
## Prerequisites
@ -18,8 +19,7 @@ Create a cluster before you deploy the network, there are options locally and in
## Local Clusters
Use one of several options to create a local cluster. Select one listed below, or another that you
are comfortable with.
Use one of several options to create a local cluster. Select one listed below, or another that you're comfortable with.
### Minikube
@ -49,42 +49,42 @@ kind create cluster
### Rancher
[Rancher](https://github.com/rancher-sandbox/rancher-desktop/) is a light-weight open source desktop application
for Mac, Windows and Linux. It provides Kubernetes and container management, and allows you to choose the
[Rancher](https://github.com/rancher-sandbox/rancher-desktop/) is a lightweight open source desktop application
for Mac, Windows, and Linux. It provides Kubernetes and container management, and allows you to choose the
version of Kubernetes to run.
It can build, push, pull and run container images. Built container images can be run without needing a registry.
It can build, push, pull, and run container images. Built container images can be run without needing a registry.
!!!note
Rancher Desktop doesn't support the official Docker CLI; instead, it uses [nerdctl](https://github.com/containerd/nerdctl), a
Docker CLI-compatible tool for containerd that's installed automatically with Rancher Desktop.
!!!note
For Windows, you need to [install Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install)
For Windows, you must [install Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install)
to install Rancher Desktop.
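On current versions of Windows, you can install WSL from an elevated PowerShell or Command Prompt. A minimal sketch (this is a standard Windows command, not part of Rancher Desktop):
```bash
# Install WSL with the default Linux distribution, then reboot before installing Rancher Desktop.
wsl --install
```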
Refer to the [official documentation](https://github.com/rancher-sandbox/docs.rancherdesktop.io/blob/main/docs/installation.md)
for system requirements and installation instructions.
## Cloud Clusters
## Cloud clusters
### AWS EKS
[AWS Elastic Kubernetes Service (AWS EKS)](https://aws.amazon.com/eks/) is one of the most popular platforms
to deploy Hyperledger Besu.
To create a cluster in AWS, you need to install the [AWS CLI](https://aws.amazon.com/cli/) and
To create a cluster in AWS, you must install the [AWS CLI](https://aws.amazon.com/cli/) and
[`eksctl`](https://eksctl.io/).
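As a quick check that both tools are installed and on your path before provisioning anything (a minimal sketch; the reported versions will differ):
```bash
# Confirm the AWS CLI and eksctl are available.
aws --version
eksctl version
```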
The [template](https://github.com/ConsenSys/quorum-kubernetes/tree/master/aws) comprises the base
infrastructure used to build the cluster and other resources in AWS. We also use AWS native
services and features after the cluster is created. These include:
* [Pod identities](https://github.com/aws/amazon-eks-pod-identity-webhook)
* [Secrets Store CSI drivers](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html)
* [Pod identities](https://github.com/aws/amazon-eks-pod-identity-webhook).
* [Secrets Store CSI drivers](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html).
* Dynamic storage classes backed by AWS EBS. The
[volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) are fixed
sizes and can be updated as you grow via helm updates, and will not need to re-provision the underlying storage
sizes that can be updated as you grow via Helm updates, without needing to re-provision the underlying storage
class.
* [CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) networking mode for EKS. By default,
EKS clusters use `kubenet` to create a virtual network and subnet. Nodes get an IP
@ -92,9 +92,9 @@ services and features after the cluster is created. These include:
receive an IP address "hidden" behind the node IP.
!!! note
This approach reduces the number of IP addresses that you need
to reserve in your network space for pods, but places constraints on what can connect to the nodes from
outside the cluster (for example on premises nodes or those on another cloud provider).
This approach reduces the number of IP addresses that you must reserve in your network space for pods, but
constrains what can connect to the nodes from
outside the cluster (for example, on-premises nodes or those on another cloud provider).
AWS Container Network Interface (CNI) provides each pod with an IP address from the subnet, and each pod can be accessed
directly. The IP addresses must be unique across your network space, and must be planned in advance. Each node has
@ -119,7 +119,7 @@ your VPC details.
```
1. Optionally, deploy the
[kubernetes dashboard](https://github.com/ConsenSys/quorum-kubernetes/tree/master/aws/templates/k8s-dashboard)
[Kubernetes dashboard](https://github.com/ConsenSys/quorum-kubernetes/tree/master/aws/templates/k8s-dashboard).
1. Provision the drivers. After the deployment completes, provision the secrets manager, identity, and
CSI drivers. Use `besu` for `EKS_NAMESPACE` and update `AWS_REGION` and `EKS_CLUSTER_NAME` in the
@ -144,30 +144,31 @@ commands below to match your settings from step 2.
1. You can now use your cluster and you can deploy [Helm charts](./Deploy-Charts.md) to it.
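Before deploying charts, it's worth confirming that `kubectl` points at the new cluster (a minimal sketch; it assumes your kubeconfig was updated by `eksctl` or `aws eks update-kubeconfig`):
```bash
# Verify the active context and that the worker nodes are Ready.
kubectl config current-context
kubectl get nodes
```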
### [Azure AKS](https://azure.microsoft.com/en-au/services/kubernetes-service/)
### Azure Kubernetes Service
Azure Kubernetes Service is also a popular cloud platform that you can use to deploy Besu. To create a cluster in
Azure, you need to install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) and you
must have admin rights on your Azure subscription to enable some preview features on AKS.
[Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) is another popular cloud
platform that you can use to deploy Besu. To create a cluster in
Azure, you must install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) and have admin
rights on your Azure subscription to enable some preview features on AKS.
The [template](https://github.com/ConsenSys/quorum-kubernetes/tree/master/azure) comprises the base
infrastructure used to build the cluster and other resources in Azure. We also use Azure native
services and features after the cluster is created. These include:
* [AAD pod identities](https://docs.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity).
* [Secrets Store CSI drivers](https://docs.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes)
* [Secrets Store CSI drivers](https://docs.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes).
* Dynamic storage classes backed by Azure Files. The
[volume claims](https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv) are fixed sizes and can be updated
as you grow via helm updates, and will not need to re-provision the underlying storage class.
as you grow via Helm updates, without needing to re-provision the underlying storage class.
* [CNI](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni) networking mode for AKS. By default, AKS
clusters use `kubenet` to create a virtual network and subnet. Nodes get an IP address
from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive
an IP address "hidden" behind the node IP.
!!! note
This approach reduces the number of IP addresses that you need to reserve
in your network space for pods to use, however places constraints on what can connect to the nodes from outside the
cluster (for example on prem nodes or other cloud providers)
This approach reduces the number of IP addresses you must reserve
in your network space for pods to use, but constrains what can connect to the nodes from outside the
cluster (for example, on-premises nodes or other cloud providers).
AKS Container Network Interface (CNI) provides each pod with an IP address from the subnet, and each pod can be accessed
directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has
@ -177,15 +178,15 @@ exhaustion as your application demands grow, however makes it easier for externa
!!!warning
Please do not create more than one AKS cluster in the same subnet. AKS clusters may not use 169.254.0.0/16,
172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range.
Please do not create more than one AKS cluster in the same subnet. AKS clusters may not use `169.254.0.0/16`,
`172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range.
To provision the cluster:
1. Enable the preview features that allow you to use AKS with CNI, and a managed identity to authenticate and
run cluster operations with other services. We also enable
[AAD pod identities](https://docs.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity) which use the managed
identity. This is in preview so you need to enable this feature by registering the `EnablePodIdentityPreview` feature:
identity. This is in preview, so you must enable this feature by registering the `EnablePodIdentityPreview` feature:
```bash
az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService

@ -21,7 +21,7 @@ to the `dev` directory for the rest of this tutorial.
cd dev/helm
```
If you are running the cluster on AWS or Azure, update the `values.yml` with `provider: aws` or
If you're running the cluster on AWS or Azure, update the `values.yml` with `provider: aws` or
`provider: azure` as well.
!!! note
@ -51,7 +51,7 @@ This tutorial isolates groups of resources (for example, StatefulSets and Servic
!!! note
The rest of this tutorial uses `besu` as the namespace,
but you are free to pick any name when deploying, but it must be consistent across the
but you're free to pick any name when deploying, as long as it's consistent across the
[infrastructure scripts](./Create-Cluster.md) and charts.
Run the following in a terminal window:
@ -124,8 +124,8 @@ deploy the chart using:
helm install genesis ./charts/besu-genesis --namespace besu --create-namespace --values ./values/genesis-besu.yml
```
Once completed, view the genesis and enodes (the list of static nodes) config maps that every Besu node uses. and
validator and bootnodes node keys saves as secrets.
Once completed, view the genesis and enodes (the list of static nodes) config maps that every Besu node uses, and
the validator and bootnode keys saved as secrets.
![k8s-genesis-configmaps](../../images/kubernetes-genesis-configmaps.png)
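If you prefer the command line to the dashboard, you can list the same artifacts with `kubectl` (a minimal sketch, assuming the `besu` namespace used in this tutorial):
```bash
# List the genesis and enodes config maps, and the validator and bootnode key secrets.
kubectl get configmaps --namespace besu
kubectl get secrets --namespace besu
```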
@ -133,8 +133,8 @@ validator and bootnodes node keys saves as secrets.
### 5. Deploy the bootnodes
The Dev charts uses two bootnodes to replicate best practices for a production network. Each Besu node has flags
that tell the StatefulSet what to deploy and how to cleanup.
The Dev charts use two bootnodes to replicate best practices for a production network. Each Besu node has flags
that tell the StatefulSet what to deploy and how to clean up.
The default `values.yml` for the StatefulSet defines the following flags, which are present in all the
override values files.
@ -146,12 +146,12 @@ nodeFlags:
removeKeysOnDeletion: false
```
We do not generate keys for the bootnodes and initial validator pool. To create a Tessera pod paired to Besu
We don't generate keys for the bootnodes and initial validator pool. To create a Tessera pod paired to Besu
for private transactions, set the `privacy` flag to `true`. Optionally remove the secrets for the node if you
delete the StatefulSet (for example removing a member node) by setting the `removeKeysOnDeletion` flag to `true`.
For the bootnodes set `bootnode: true` flag to indicate they are bootnodes. All the other nodes
(for example, validators and members) wait for the bootnodes to be up before proceeding, and have this flag set to `false`.
For the bootnodes, set the `bootnode` flag to `true` to indicate they are bootnodes. All the other nodes
(for example, validators and members) wait for the bootnodes to be up before proceeding, and have this flag set to `false`.
```bash
helm install bootnode-1 ./charts/besu-node --namespace besu --values ./values/bootnode.yml
@ -160,11 +160,11 @@ helm install bootnode-2 ./charts/besu-node --namespace besu --values ./values/bo
!!! warning
It is important to keep the release names of the bootnodes the same as it is tied to the keys that the genesis chart
creates. So we use `bootnode-1` and `bootnode-2` in the command above.
It's important to keep the bootnode release names as shown, because they're tied to the keys that the genesis chart
creates. That's why the previous command uses `bootnode-1` and `bootnode-2`.
Once complete, you will see two StatefulSets, and the two bootnodes will discover themselves and peer. However because
there are no validators present yet, there will be no blocks created as can be seen in the logs below.
Once complete, you see two StatefulSets, and the two bootnodes discover themselves and peer.
Because there are no validators present yet, there are no blocks created, as seen in the following logs.
![k8s-bootnode-logs](../../images/kubernetes-bootnode-logs.png)
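To follow the bootnode logs directly rather than through the dashboard, use something like the following (a sketch; the pod name is a placeholder, so list the pods first to find the actual StatefulSet pod names):
```bash
# Find the bootnode pods created by the StatefulSets, then follow the logs of one of them.
kubectl get pods --namespace besu
kubectl logs --namespace besu --follow <BOOTNODE_POD_NAME>
```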
@ -184,11 +184,11 @@ helm install validator-4 ./charts/besu-node --namespace besu --values ./values/v
!!! warning
As with the bootnodes, it is important to keep the release names of the initial validators the same as it is tied
to the keys that the genesis chart creates. So we use `validator-1`, `validator-2` and so on in the command above.
As with the bootnodes, it's important to keep the release names of the initial validators as shown, because they're tied
to the keys that the genesis chart creates. That's why the previous command uses `validator-1`, `validator-2`, and so on.
Once completed, you may need to give the validators a few minutes to peer and for round changes, depending when the first
validator was spun up, before the logs display blocks being created.
Once completed, the validators may need a few minutes to peer and complete round changes, depending on when the
first validator was spun up, before the logs display blocks being created.
![k8s-validator-logs](../../images/kubernetes-validator-logs.png)
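One way to confirm blocks are being produced before setting up ingress is to port-forward to a node's RPC port and query it (a rough sketch; the service name is a placeholder for whatever the chart created, and it assumes the JSON-RPC service listens on the default port 8545):
```bash
# Forward the node's RPC port locally, then request the latest block number.
kubectl port-forward --namespace besu svc/<NODE_SERVICE_NAME> 8545:8545 &
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://localhost:8545
```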
@ -197,9 +197,9 @@ To add a validator into the network, deploy a normal RPC node (step 7) and then
### 7. Deploy RPC or transaction nodes
These nodes need their own node keys, so we set the `generateKeys: true` for a standard RPC node. For a Transaction node
(Besu paired with Tessera for private transactions) we also set the `privacy: true` flag and deploy in the same manner
as before.
These nodes need their own node keys, so set the `generateKeys` flag to `true` for a standard RPC node.
For a transaction node (Besu paired with Tessera for private transactions), set the `privacy` flag to `true` and
deploy in the same manner as before.
For an RPC node with the release name `rpc-1`:
@ -213,11 +213,11 @@ For a transaction node release name `tx-1`:
helm install tx-1 ./charts/besu-node --namespace besu --values ./values/txnode.yml
```
Logs for `tx-1` would resemble the following for Tessera:
Logs for `tx-1` resemble the following for Tessera:
![`k8s-tx-tessera-logs`](../../images/kubernetes-tx-tessera-logs.png)
Logs for Besu resembles the following:
Logs for Besu resemble the following:
![`k8s-tx-Besu-logs`](../../images/kubernetes-tx-Besu-logs.png)
@ -250,30 +250,32 @@ that match your deployments, deploy the rules as follows:
kubectl apply -f ../../ingress/ingress-rules-besu.yml
```
Once complete, view the IP address under the `Ingress` section if you are using the Kubernetes Dashboard
Once complete, view the IP address under the `Ingress` section if you're using the Kubernetes Dashboard
or the equivalent `kubectl` command.
![`k8s-ingress`](../../images/kubernetes-ingress-ip.png)
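The equivalent `kubectl` command looks something like the following (a sketch; the ingress resources may live in a different namespace depending on how you deployed the ingress controller and rules):
```bash
# Show the ingress resources and their external IP addresses.
kubectl get ingress --all-namespaces
```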
The Grafana dashboard can be viewed by going to:
You can view the Grafana dashboard by going to:
```bash
# For Besu's grafana address:
http://<INGRESS_IP>/d/XE4V0WGZz/besu-overview?orgId=1&refresh=10s
```
Or for RPC calls:
The following is an example RPC call, which confirms that the node running the JSON-RPC service is syncing:
```bash
curl -v -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://<INGRESS_IP>/rpc
```
=== "curl HTTP request"
The call returns the following to confirm that the node running the JSON-RPC service is syncing:
```bash
curl -v -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://<INGRESS_IP>/rpc
```
```json
{
"jsonrpc" : "2.0",
"id" : 1,
"result" : "0x4e9"
}
```
=== "JSON result"
```json
{
"jsonrpc" : "2.0",
"id" : 1,
"result" : "0x4e9"
}
```
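You can call other JSON-RPC methods through the same endpoint. For example, a quick peering check (a sketch; it assumes the `NET` API is enabled on the node, which is Besu's default):
```bash
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' http://<INGRESS_IP>/rpc
```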

@ -10,17 +10,17 @@ description: Maintenance for Besu on a Kubernetes cluster
* Install [Kubectl](https://kubernetes.io/docs/tasks/tools/)
* Install [Helm3](https://helm.sh/docs/intro/install/)
## Update a persistent volume claim (PVC) size
## Update a persistent volume claim size
As the chain grows so does the amount of space used by the PVC. As of Kubernetes v1.11,
[certain types of Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion)
allow volume resizing. Our production charts for Azure use Azure Files and for AWS use EBS Block Store which allow for
volume expansion.
Over time, as the chain grows, so does the amount of space used by the persistent volume claim (PVC).
As of Kubernetes v1.11, [certain types of Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion)
allow volume resizing.
The production charts use Azure Files on Azure and EBS block storage on AWS, both of which allow volume expansion.
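You can check whether your storage class actually permits expansion before attempting a resize (a minimal sketch):
```bash
# allowVolumeExpansion must be true for the resize to take effect.
kubectl get storageclass
kubectl get storageclass <STORAGE_CLASS_NAME> -o jsonpath='{.allowVolumeExpansion}'
```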
To update the volume size, add the following to the override values file. For example to increase the size on the
transaction nodes volumes, add the following snippet to the
To update the volume size, you must update the override values file.
For example, to increase the size of the transaction node volumes, add the following snippet to the
[`txnode values.yml`](https://github.com/ConsenSys/quorum-kubernetes/blob/master/dev/helm/values/txnode.yml) file, with
appropriate size (for example 50Gi below).
the new size limit (the following example uses 50Gi).
```bash
storage:
@ -40,10 +40,10 @@ helm upgrade tx-1 ./charts/besu-node --namespace besu --values ./values/txnode.y
When updating Besu nodes across a cluster, perform the updates as a rolling update and not all at once,
especially for the validator pool. If all the validators are taken offline, the
chain will halt, and you'll have to wait for round changes to expire before blocks are created again.
chain halts, and you must wait for round changes to expire before blocks are created again.
Updates for Besu can be done via Helm in exactly the same manner as other applications. Alternatively, this can be done
via `kubectl`. In this example, we'll update a node called `besu-validator-3`:
via `kubectl`. This example updates a node called `besu-validator-3`:
1. Set the update policy to use rolling updates (if not done already):

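The exact commands for this step are truncated in the diff view above. As a rough illustration only (not necessarily the tutorial's commands), setting a rolling update strategy and bumping the image on a StatefulSet could look like the following, assuming a StatefulSet named `besu-validator-3` in the `besu` namespace with a container named `besu`:
```bash
# Ensure the StatefulSet uses a rolling update strategy rather than OnDelete.
kubectl patch statefulset besu-validator-3 --namespace besu --patch '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'

# Update the Besu container image; the pod is recreated with the new version.
kubectl set image statefulset/besu-validator-3 besu=hyperledger/besu:<NEW_VERSION> --namespace besu
```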
@ -8,8 +8,8 @@ description: Deploying Hyperledger Besu with Kubernetes
Use the [reference implementations](https://github.com/ConsenSys/besu-kubernetes) to install
private networks using Kubernetes (K8s). Reference implementations are available using:
* [Helm](https://github.com/ConsenSys/quorum-kubernetes/tree/master/dev)
* [Helmfile](https://github.com/roboll/helmfile)
* [Helm](https://github.com/ConsenSys/quorum-kubernetes/tree/master/dev).
* [Helmfile](https://github.com/roboll/helmfile).
* [`kubectl`](https://github.com/ConsenSys/besu-kubernetes/tree/master/playground/kubectl).
Familiarize yourself with the reference implementations and customize them for your requirements.

@ -12,13 +12,13 @@ description: Deploying Besu Helm Charts for production on a Kubernetes cluster
## Overview
The charts in the `prod` folder are similar to those of the `dev` folder but use cloud native services for
**identities** (IAM on AWS and a Managed Identity on Azure) and **secrets** (Secrets Manager on AWS or Key Vault on
Azure). Any keys or secrets are created directly in Secrets Manager or Key Vault and the Identity is given permission to
retrieve those secrets at runtime. Please note that no kubernetes secrets objects are created.
The charts in the `prod` folder are similar to those in the `dev` folder but use cloud native services for
**identities** (IAM on AWS and a Managed Identity on Azure) and **secrets** (Secrets Manager on AWS and Key Vault on
Azure). Any keys or secrets are created directly in Secrets Manager or Key Vault, and the Identity is given permission to
retrieve those secrets at runtime. No Kubernetes secrets objects are created.
Access to these secrets follows the principle of least privilege, and access to them is denied for
users. If any admins need access to them, you will need to update the IAM policy.
users. If any admins need access to them, they must update the IAM policy.
!!!warning
@ -26,15 +26,15 @@ users. If any admins need access to them, you will need to update the IAM policy
!!!warning
You are encouraged to use AWS RDS or Azure PostgreSQL in High Availability mode for any Tessera nodes that you use.
The templates do not include that functionality. They can be provisioned with CloudFormation or Azure Resource Manager,
We recommend using AWS RDS or Azure PostgreSQL in High Availability mode for any Tessera nodes that you use.
The templates don't include that functionality. They can be provisioned with CloudFormation or Azure Resource Manager,
respectively. Once created, specify the connection details in the `values.yml` file.
## Deploy
### Check that you can connect to the cluster with `kubectl`
Once you have a [cluster running](./Create-Cluster.md), verify kubectl is connected to cluster with:
Once you have a [cluster running](./Create-Cluster.md), verify `kubectl` is connected to the cluster with:
```bash
kubectl version
@ -55,7 +55,7 @@ cd prod/helm
!!!attention
Please update all the [values files](https://github.com/ConsenSys/quorum-kubernetes/tree/master/prod/helm/values)
with your choice of cloud provider, that is AWS or Azure and set `provider: aws` or `provider: azure` as required.
with your choice of cloud provider (AWS or Azure) and set `provider: aws` or `provider: azure` as required.
Depending on the provider, you may also need to update the `azure:` or `aws:` dictionaries with specifics of your
cluster and account.
@ -63,38 +63,45 @@ Follow the steps outlined in the [deploy charts](./Deploy-Charts.md) tutorial to
## Best practices
The most important thing is to plan your network out on paper first and then test it out in a Dev cluster to make sure
connectivity works with your applications and you get the required throughput in transactions per second (TPS). In
addition to this, we also recommend you test the entire process from provisioning infrastructure to updating nodes on a
Dev cluster prior to launching your production network.
The most important thing is to plan your network out on paper first and then test it in a Dev cluster to make sure
connectivity works with your applications and you get the required throughput in transactions per second (TPS).
We also recommend you test the entire process, from provisioning infrastructure to updating nodes on a
Dev cluster, prior to launching your production network.
By default, the Kubernetes clusters in cloud should take care of availability and do multi zones within a region. The
scheduler will also ensure that deployments are spread out across zones. Where possible we recommend you use multiple
bootnodes and static nodes to speed up peering.
By default, managed Kubernetes clusters in the cloud take care of availability and span multiple zones within a region.
The scheduler also ensures that deployments are spread out across zones.
Where possible, we recommend you use multiple bootnodes and static nodes to speed up peering.
If you need to connect to APIs and services outside the cluster this should work as normal, however connectivity into
your network (such as adding an on-premise node to the network) may require more configuration. Please check the
[limitations](./Overview.md#limitations) and use CNI where possible. To connect an external node to your cluster, the
easiest way is to use a VPN as seen in the [multi-cluster](#multi-cluster-support) setup below.
You can connect to APIs and services outside the cluster normally, but connecting into your network (such as
adding an on-premises node to the network) might require more configuration.
Please check the [limitations](./Overview.md#limitations) and use CNI where possible.
To connect an external node to your cluster, the easiest way is to use a VPN as seen in the
following [multi-cluster](#multi-cluster-support) setup.
The last thing we recommend is to setup monitoring and alerting right from the beginning so you can get early warnings
of issues rather than after failure. We have a monitoring chart which uses Grafana and you can use it in conjunction with
Alertmanager to create alerts or alternatively alert via Cloudwatch or Azure Monitoring.
Finally, we recommend setting up monitoring and alerting from the beginning so you can get early warnings of issues
rather than after failure.
We provide a monitoring chart that uses Grafana; you can use it with Alertmanager to create alerts, or
alternatively alert via CloudWatch or Azure Monitor.
## Multi-cluster support
When CNI is used, multi-cluster support is simple enough but you have to cater for cross-cluster DNS names. Ideally,
what you are looking to do is to create two separate VPCs (or VNets) and make sure they have different base CIDR blocks
so that IPs will not conflict. Once done, peer the VPCs together and update the subnet route table, so they are
effectively a giant single network.
When CNI is used, multi-cluster support is simple, but you have to cater for cross-cluster DNS names.
Ideally, you want to create two separate VPCs (or VNets) and make sure they have different base CIDR blocks so that IPs
don't conflict.
Once done, peer the VPCs together and update the subnet route table, so they are effectively a giant single network.
![multi-cluster](../../images/kubernetes-3.png)
When you [spin up clusters](./Create-Cluster.md), use [CNI](./Overview.md#limitations) and CIDR blocks to match the
subnet's CIDR settings. Then deploy the genesis chart on one cluster and copy across the genesis file and static nodes
config maps. Depending on your DNS settings they may be fine as is or they make need to be actual IPs - that is you can
provision cluster B only after cluster A has Besu nodes up and running. Deploy the network on cluster A, and then on
cluster B. Besu nodes on cluster A should work as expected, and Besu nodes on cluster B should use the list of peers
provided to communicate with the nodes on cluster A. Keeping the list of peers on the clusters live and up to date can
be quite challenging so we recommend using the cloud service provider's DNS service such as Route 53 or Azure DNS and
adapting the charts to create entries for each node when it comes up.
subnet's CIDR settings.
Then deploy the genesis chart on one cluster and copy across the genesis file and static nodes config maps.
Depending on your DNS settings, they might be fine as is or they might need to be actual IPs.
That is, you can provision cluster B only after cluster A has Besu nodes up and running.
Deploy the network on cluster A, and then on cluster B.
Besu nodes on cluster A should work as expected, and Besu nodes on cluster B should use the list of peers provided to
communicate with the nodes on cluster A.
Keeping the list of peers on the clusters live and up to date can be challenging, so we recommend using the cloud
service provider's DNS service such as Route 53 or Azure DNS and adapting the charts to create entries for each node
when it comes up.
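As a rough sketch of copying the shared artifacts between clusters with `kubectl` (the config map names and kubeconfig context names are placeholders; use whatever your genesis chart created and your contexts are called):
```bash
# Export the genesis and static-nodes config maps from cluster A.
kubectl --context <CLUSTER_A_CONTEXT> --namespace besu get configmap <GENESIS_CONFIGMAP> -o yaml > genesis-configmap.yaml
kubectl --context <CLUSTER_A_CONTEXT> --namespace besu get configmap <STATIC_NODES_CONFIGMAP> -o yaml > static-nodes-configmap.yaml

# Recreate them on cluster B before deploying the Besu charts there.
kubectl --context <CLUSTER_B_CONTEXT> --namespace besu apply -f genesis-configmap.yaml
kubectl --context <CLUSTER_B_CONTEXT> --namespace besu apply -f static-nodes-configmap.yaml
```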
