1. After the deployment completes, provision the EBS drivers for the volumes. While it is possible to use the
in-tree `aws-ebs` driver that's natively supported by Kubernetes, it is no longer being updated and does not support
newer EBS features such as the [cheaper and better gp3 volumes](https://stackoverflow.com/questions/68359043/whats-the-difference-between-ebs-csi-aws-com-vs-kubernetes-io-aws-ebs-for-provi).
The `cluster.yml` file (from the steps above) that is included in this folder automatically deploys the
cluster with the EBS IAM policies, but you need to install the EBS CSI drivers. This can be done through
the AWS Management Console for simplicity, or via a CLI command as below. Replace `CLUSTER_NAME`,
`AWS_REGION` and `AWS_ACCOUNT` with details that are specific to your deployment.
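The original command is environment-specific; as a sketch, an `eksctl` invocation (assuming `eksctl` manages your cluster, and that the IAM role name below matches the one created by your `cluster.yml`) looks like:

```bash
# Installs the EBS CSI driver as an EKS managed add-on.
# CLUSTER_NAME, AWS_REGION, and AWS_ACCOUNT must match your deployment;
# the role name 'AmazonEKS_EBS_CSI_DriverRole' is an assumption, not a chart value.
eksctl create addon --name aws-ebs-csi-driver \
  --cluster CLUSTER_NAME \
  --region AWS_REGION \
  --service-account-role-arn arn:aws:iam::AWS_ACCOUNT:role/AmazonEKS_EBS_CSI_DriverRole \
  --force
```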
## Provision with Helm charts
Helm allows you to package a collection of objects into a chart which can be deployed to the cluster. For the
rest of this tutorial we use the [**Helm charts**](https://github.com/ConsenSys/quorum-kubernetes/tree/master/helm).
After you have cloned the [Quorum-Kubernetes](https://github.com/ConsenSys/quorum-kubernetes) repository, change
to the `helm` directory:
```bash
cd helm
```
Each Helm chart has the following key-value maps, which you need to set depending on your needs. The
`cluster.provider` value is used as a key for the various cloud features enabled. Specify only one cloud
provider, not both. At present, the charts have full support for cloud native services on AWS and Azure. If
you use GCP, IBM, or another provider, set `cluster.provider: local` and `cluster.cloudNativeServices: false`.
Update the `aws` or `azure` map as shown below if you deploy to either cloud provider.
```yaml
cluster:
  provider: local # choose from: local | aws | azure
  cloudNativeServices: false # set to true to use Cloud Native Services (Secrets Manager and IAM for AWS; Key Vault & Managed Identities for Azure)
  reclaimPolicy: Delete # set to either Retain or Delete; note that PVCs and PVs will still exist after a 'helm delete'. Setting to Retain will keep volumes even if PVCs/PVs are deleted in kubernetes. Setting to Delete will remove volumes from EC2 EBS when the PVC is deleted

quorumFlags:
  privacy: false
  removeKeysOnDelete: false

aws:
  # the aws cli commands use the name 'quorum-node-secrets-sa', so only change this if you altered the name
  serviceAccountName: quorum-node-secrets-sa
  # the region you are deploying to
  region: ap-southeast-2

azure:
  # the script/bootstrap.sh uses the name 'quorum-pod-identity', so only change this if you altered the name
  identityName: quorum-pod-identity
  # the clientId of the user-assigned managed identity created in the template
  identityClientId: azure-clientId
  keyvaultName: azure-keyvault
  # the tenant ID of the key vault
  tenantId: azure-tenantId
  # the subscription ID to use - this needs to be set explicitly when using multi-tenancy
  subscriptionId: azure-subscriptionId
```
Setting `cluster.cloudNativeServices: true` will:

* Store keys in Azure Key Vault or AWS Secrets Manager.
* Use Azure Managed Identities or AWS IAM roles for pod identity access.
### 1. Check that you can connect to the cluster with `kubectl`

Verify that `kubectl` (use the latest version) is connected to the cluster.
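For example, the standard version check confirms connectivity when it reports a server version:

```bash
# Prints client and server versions; a server version in the output
# confirms kubectl can reach the cluster.
kubectl version
```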
### 2. Create the namespace

This tutorial uses a namespace to isolate groups of resources (for example, StatefulSets and Services) within a single cluster.
Run the following in a terminal window:

```bash
kubectl create namespace besu
```
### 3. Deploy the monitoring chart

This chart deploys Prometheus and Grafana to monitor the cluster, nodes, and state of the network.
Each Besu pod has [`annotations`](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
which specify the port and path Prometheus uses to scrape metrics from the pod. For example:
```yaml
template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9545"
      prometheus.io/path: "/metrics"
```
Update the admin `username` and `password` in the [monitoring values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/monitoring.yml).
Configure alerts to the receiver of your choice (for example, email or Slack), then deploy the chart.
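The exact release name and chart version may differ in your copy of the repository; a representative invocation looks like:

```bash
# Add the community repo that hosts kube-prometheus-stack (Prometheus + Grafana).
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# 'monitoring' as the release name and 'besu' as the namespace are assumptions for this sketch.
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace besu --values ./values/monitoring.yml
```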
Once complete, view the IP address listed under the `Ingress` section if you're using the Kubernetes Dashboard,
or on the command line with `kubectl -n quorum get services quorum-monitoring-ingress-ingress-nginx-controller`.
!!! note
    We refer to this ingress as `external-nginx` because it deals with monitoring endpoints specifically. We
    also deploy a second ingress called `network-ingress`, which is for the blockchain nodes only, in
    [step 8](#8-connecting-to-the-node-from-your-local-machine-via-an-ingress).
### 5. Deploy the bootnodes
This is an optional but recommended step. For a production setup we recommend two or more bootnodes
as best practice. Each Besu node has a map that tells the StatefulSet what to deploy and how to clean up.
The default `values.yml` for the StatefulSet defines the following flags, which are present in all the
override values files:
```yaml
quorumFlags:
  privacy: false
  removeKeysOnDelete: true
  isBootnode: true    # set this to true if this node is a bootnode
  usesBootnodes: true # set this to true if the network you are connecting to uses bootnodes deployed in the cluster

cluster:
  provider: local # choose from: local | aws | azure
  cloudNativeServices: false
  reclaimPolicy: Delete # set to either Retain or Delete; note that PVCs and PVs will still exist after a 'helm delete'. Setting to Retain will keep volumes even if PVCs/PVs are deleted in kubernetes. Setting to Delete will remove volumes from EC2 EBS when the PVC is deleted

aws:
  # the aws cli commands use the name 'quorum-node-secrets-sa', so only change this if you altered the name
  serviceAccountName: quorum-node-secrets-sa
  # the region you are deploying to
  region: ap-southeast-2

azure:
  # the script/bootstrap.sh uses the name 'quorum-pod-identity', so only change this if you altered the name
  identityName: quorum-pod-identity
  # the clientId of the user-assigned managed identity created in the template
  identityClientId: azure-clientId
  keyvaultName: azure-keyvault
  # the tenant ID of the key vault
  tenantId: azure-tenantId
  # the subscription ID to use - this needs to be set explicitly when using multi-tenancy
  subscriptionId: azure-subscriptionId

node:
  besu:
    metrics:
      serviceMonitorEnabled: true
    resources:
      cpuLimit: 1
      cpuRequest: 0.1
      memLimit: "2G"
      memRequest: "1G"
```
Set the `aws`, `azure`, and `cluster` keys as described in the [Provisioning](#provision-with-helm-charts)
step. `quorumFlags.removeKeysOnDelete: true` tells the chart to delete the node's keys when the chart is
deleted. If you wish to retain the keys on deletion, set that value to `false`.
For the bootnodes only, set `quorumFlags.isBootnode: true`. When using bootnodes, you must also set
`quorumFlags.usesBootnodes: true` to indicate that all nodes on the network will use these bootnodes.
!!! note
    If you use bootnodes, you must set `quorumFlags.usesBootnodes: true` in the override values file for
    every other node type, that is, in `validators.yaml`, `txnode.yaml`, and `reader.yaml`.
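With the flags set, deploying a bootnode is a standard `helm install`. As a sketch (the release name, chart path, and values file name are assumptions based on the repository layout):

```bash
# 'bootnode-1', './charts/besu-node', and './values/bootnode.yml' are assumptions for this sketch.
helm install bootnode-1 ./charts/besu-node --namespace besu --values ./values/bootnode.yml
```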
We use separate data volumes to store the blockchain data. This is similar to using separate volumes
to store data when running Docker containers natively or with docker-compose. This is done for
a few reasons:

* Containers are mortal and we do not want to store data on them.
* Kubernetes host nodes can fail and we want the chain data to persist.

When using PVCs, ensure you set `allowVolumeExpansion` to `true` to keep costs low and enable growing
the volume over time, rather than creating new volumes and copying data across.
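As an illustration of that setting, here is a minimal StorageClass sketch, assuming the AWS EBS CSI driver installed earlier; the class name and parameters are placeholders, not values from the charts:

```yaml
# Hypothetical StorageClass; 'besu-storage' and the gp3 parameters are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: besu-storage
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true # lets PVCs grow in place instead of recreating volumes
parameters:
  type: gp3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```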
Ensure that you provide enough data storage capacity for all nodes on the cluster. Select the appropriate
type of [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) based on your cloud
provider. In the templates, the size of the [volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
is set to 20GB by default; you can change this depending on your needs. If you use a different storage
account or class than the one in the charts, edit those values to match your setup.
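For reference, the volume claim a node's StatefulSet generates has roughly the following shape (illustrative only; the charts template this for you, and the names here are placeholders):

```yaml
# Illustrative PVC; name, class, and size are placeholders, not chart defaults.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-besu-node-0
  namespace: besu
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: besu-storage # hypothetical class from the sketch above
  resources:
    requests:
      storage: 20Gi
```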
## Overview
The charts in the `prod` folder are similar to those in the `dev` folder but use cloud native services for
**identities** (IAM on AWS and a Managed Identity on Azure) and **secrets** (Secrets Manager on AWS and Key Vault on
Azure). Any keys or secrets are created directly in Secrets Manager or Key Vault, and the Identity is given permission to
retrieve those secrets at runtime. No Kubernetes secrets objects are created.
Access to these secrets follows a least-privilege policy and is denied to ordinary users. Any admins who
need access to them must update the IAM policy.

To get things production-ready, we use the same charts and set a few of the values in the `cluster` map,
as shown in the [Deploy the network](#deploy-the-network) section.
### Deploy the network
For the rest of this tutorial we use the [**Helm charts**](https://github.com/ConsenSys/quorum-kubernetes/tree/master/helm).
After you have cloned the [Quorum-Kubernetes](https://github.com/ConsenSys/quorum-kubernetes) repository,
change to the `helm` directory:
```bash
cd helm
```
!!! attention

    Each Helm chart has the following keys that must be set. Specify either `aws` or `azure` for
    `cluster.provider`. Additionally, set `cloudNativeServices: true` and `reclaimPolicy: Retain` so that
    the map looks like the following for AWS:
```yaml
cluster:
  provider: aws # choose from: aws | azure
  cloudNativeServices: true # set to true to use Cloud Native Services (Secrets Manager and IAM for AWS; Key Vault & Managed Identities for Azure)
  reclaimPolicy: Retain # set to either Retain or Delete; note that PVCs and PVs will still exist after a 'helm delete'. Setting to Retain will keep volumes even if PVCs/PVs are deleted in kubernetes. Setting to Delete will remove volumes from EC2 EBS when the PVC is deleted
```
Follow the steps outlined in the [deploy charts](./Deploy-Charts.md) tutorial to deploy the network.
## Quorum Explorer

We use the [Quorum-Explorer](https://github.com/ConsenSys/quorum-explorer) as a lightweight
blockchain explorer. The Quorum Explorer is **not** recommended for use in production and is intended for
demonstration or development purposes only.
The explorer provides an overview of the whole network, such as block information, lets you vote
validators in or out of the network, and demonstrates using the `SimpleStorage` smart contract with privacy
enabled and sending transactions between wallets, all in one interface.
To use the explorer, update the [Quorum-Explorer values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/explorer-besu.yaml)
with your node details and endpoints, and then [deploy](./Deploy-Charts.md).
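A representative deployment command (the release name and chart path are assumptions; check the repository for the exact invocation):

```bash
# 'quorum-explorer' and './charts/explorer' are assumptions for this sketch;
# the values file path matches the one linked above.
helm install quorum-explorer ./charts/explorer --namespace besu --values ./values/explorer-besu.yaml
```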
## Nodes
The **Nodes** page provides an overview of the nodes on the network. Select the node you would like to
interact with from the drop-down on the top right, and you'll get details of the node, block height, peers,
and queued transactions.