updating doco to match the new kubernetes repo charts (#1030)

* updating doco to match the new kubernetes repo charts

Signed-off-by: Joshua Fernandes <joshua.fernandes@consensys.net>

* Initial edits.

Signed-off-by: bgravenorst <byron.gravenorst@consensys.net>

* Edit new page.

Signed-off-by: bgravenorst <byron.gravenorst@consensys.net>

* Fix link.

Signed-off-by: bgravenorst <byron.gravenorst@consensys.net>

Co-authored-by: bgravenorst <byron.gravenorst@consensys.net>
pull/1032/head
Joshua Fernandes 3 years ago committed by GitHub
parent 38b6b9c39b
commit 0f955ff227
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
  1. CI/vale/vale_styles/Vocab/Besu/accept.txt (2)
  2. docs/Tutorials/Kubernetes/Create-Cluster.md (64)
  3. docs/Tutorials/Kubernetes/Deploy-Charts.md (433)
  4. docs/Tutorials/Kubernetes/Overview.md (112)
  5. docs/Tutorials/Kubernetes/Playground.md (4)
  6. docs/Tutorials/Kubernetes/Production.md (29)
  7. docs/Tutorials/Kubernetes/Quorum-Explorer.md (83)
  8. docs/images/kubernetes-elastic.png (BIN)
  9. docs/images/kubernetes-explorer-contracts-1.png (BIN)
  10. docs/images/kubernetes-explorer-contracts-set.png (BIN)
  11. docs/images/kubernetes-explorer-explorer.png (BIN)
  12. docs/images/kubernetes-explorer-validators.png (BIN)
  13. docs/images/kubernetes-explorer-wallet.png (BIN)
  14. docs/images/kubernetes-explorer.png (BIN)
  15. docs/images/kubernetes-grafana.png (BIN)
  16. docs/images/kubernetes-ingress-ip.png (BIN)
  17. mkdocs.yml (1)

@ -79,6 +79,7 @@ Keycloak
[kK]eytool(s)?
Kibana
Kotti
[kK]ube
[kK]ubenet
[kK]ubectl
[kK]ubernetes
@ -143,6 +144,7 @@ Slack
[sS]lashable
Splunk
statefully
[sS]tatefulset
[sS]ubcommand(s)?
[sS]ubnet(s)?
[sS]uborganization(s)?

@ -77,10 +77,10 @@ To create a cluster in AWS, you must install the [AWS CLI](https://aws.amazon.co
[`eksctl`](https://eksctl.io/).
The [template](https://github.com/ConsenSys/quorum-kubernetes/tree/master/aws) comprises the base
infrastructure used to build the cluster and other resources in AWS. We also use AWS native
services and features after the cluster is created. These include:
infrastructure used to build the cluster and other resources in AWS. We also use some native
services with the cluster for performance and best practices. These include:
* [Pod identities](https://github.com/aws/amazon-eks-pod-identity-webhook).
* [Secrets Store CSI drivers](https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html).
* Dynamic storage classes backed by AWS EBS. The
[volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) are fixed
@ -104,13 +104,12 @@ exhaustion as your application demands grow, however makes it easier for externa
!!!warning
EKS clusters may not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes
EKS clusters must not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes
service address range.
To provision the cluster:
1. Update [cluster.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/aws/templates/cluster.yml) with
your VPC details.
1. Update [cluster.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/aws/templates/cluster.yml)
1. Deploy the template:
@ -118,19 +117,38 @@ your VPC details.
eksctl create cluster -f ./templates/cluster.yml
```
1. Optionally, deploy the
[kubernetes dashboard](https://github.com/ConsenSys/quorum-kubernetes/tree/master/aws/templates/k8s-dashboard).
1. Your `.kube/config` should be connected to the cluster automatically, but if not, run the commands below
and replace `AWS_REGION` and `CLUSTER_NAME` with details that are specific to your deployment.
```bash
aws sts get-caller-identity
aws eks --region AWS_REGION update-kubeconfig --name CLUSTER_NAME
```
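To confirm the kubeconfig now points at the new cluster, a quick optional check is to list the worker nodes:
```bash
# optional check - the EKS worker nodes should be listed and Ready
kubectl get nodes
```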
1. After the deployment completes, provision the EBS drivers for the volumes. While it is possible to use the
in-tree `aws-ebs` driver that's natively supported by Kubernetes, it is no longer being updated and does not support
newer EBS features such as the [cheaper and better gp3 volumes](https://stackoverflow.com/questions/68359043/whats-the-difference-between-ebs-csi-aws-com-vs-kubernetes-io-aws-ebs-for-provi).
The `cluster.yml` file from the steps above automatically deploys the cluster with the EBS IAM policies, but you
still need to install the EBS CSI drivers. You can do this through the AWS Management Console for simplicity, or via
the CLI commands below. Replace `CLUSTER_NAME`, `AWS_REGION`, and `AWS_ACCOUNT` with details specific to your deployment.
```bash
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster CLUSTER_NAME --region AWS_REGION --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --role-only --role-name AmazonEKS_EBS_CSI_DriverRole

eksctl create addon --name aws-ebs-csi-driver --cluster CLUSTER_NAME --region AWS_REGION --service-account-role-arn arn:aws:iam::AWS_ACCOUNT:role/AmazonEKS_EBS_CSI_DriverRole --force
```
1. Once the deployment completes, provision the Secrets Manager IAM policy and CSI driver.
Use `besu` (or equivalent) for `NAMESPACE` and replace `CLUSTER_NAME`, `AWS_REGION`, and `AWS_ACCOUNT` with details
that are specific to your deployment.
```bash
helm repo add secrets-store-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/secrets-store-csi-driver/master/charts
helm install --namespace besu --create-namespace csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
kubectl apply --namespace besu -f templates/secrets-manager/aws-provider-installer.yml
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install --namespace kube-system --create-namespace csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
POLICY_ARN=$(aws --region AWS_REGION --query Policy.Arn --output text iam create-policy --policy-name besu-node-secrets-mgr-policy --policy-document '{
POLICY_ARN=$(aws --region AWS_REGION --query Policy.Arn --output text iam create-policy --policy-name quorum-node-secrets-mgr-policy --policy-document '{
"Version": "2012-10-17",
"Statement": [ {
"Effect": "Allow",
@ -139,9 +157,23 @@ commands below to match your settings from step 2.
} ]
}')
eksctl create iamserviceaccount --name besu-node-secrets-sa --namespace EKS_NAMESPACE --region=AWS_REGION --cluster EKS_CLUSTER_NAME --attach-policy-arn "$POLICY_ARN" --approve --override-existing-serviceaccounts
# If you have deployed the above policy before, you can acquire its ARN:
POLICY_ARN=$(aws iam list-policies --scope Local \
--query 'Policies[?PolicyName==`quorum-node-secrets-mgr-policy`].Arn' \
--output text)
eksctl create iamserviceaccount --name quorum-node-secrets-sa --namespace NAMESPACE --region=AWS_REGION --cluster CLUSTER_NAME --attach-policy-arn "$POLICY_ARN" --approve --override-existing-serviceaccounts
```
!!!warning
The `eksctl create iamserviceaccount` command above creates a service account called `quorum-node-secrets-sa`,
which is preconfigured in the Helm charts' override `values.yml` files for ease of use.
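If you want to confirm the service account exists before moving on, a generic check (using the namespace you chose above) is:
```bash
# optional check - confirm the service account created by eksctl exists in your namespace
kubectl --namespace NAMESPACE get serviceaccount quorum-node-secrets-sa
```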
1. Optionally, deploy the
[kubernetes dashboard](https://github.com/ConsenSys/quorum-kubernetes/tree/master/aws/templates/k8s-dashboard).
1. You can now use your cluster and deploy [Helm charts](./Deploy-Charts.md) to it.
### Azure Kubernetes Service

@ -12,17 +12,53 @@ description: Deploying Besu Helm Charts for a Kubernetes cluster
## Provision with Helm charts
Helm allows you to package a collection of objects into a chart which can be deployed to the cluster. For the
rest of this tutorial we use the [**Helm charts**](https://github.com/ConsenSys/quorum-kubernetes/tree/master/helm).
After cloning the [Quorum-Kubernetes](https://github.com/ConsenSys/quorum-kubernetes) repository, change
to the `dev` directory for the rest of this tutorial.
Helm packages a collection of objects into a chart, which you can then deploy to the cluster.
After you have cloned the [Quorum-Kubernetes](https://github.com/ConsenSys/quorum-kubernetes) repository, change
directories to `helm` for the rest of this tutorial.
```bash
cd helm
```
If you're running the cluster on AWS or Azure, update the `values.yml` with `provider: aws` or
`provider: azure` as well.
Each Helm chart has the following key-value maps, which you need to set depending on your requirements. The
`cluster.provider` key enables the various cloud features. Specify only one cloud provider, not both. At present, the
charts fully support cloud-native services in both AWS and Azure. If you use another provider such as GCP or IBM,
set `cluster.provider: local` and `cluster.cloudNativeServices: false`.
Update the `aws` or `azure` map as shown below if you deploy to either cloud provider.
```bash
cluster:
  provider: local # choose from: local | aws | azure
  cloudNativeServices: false # set to true to use Cloud Native Services (SecretsManager and IAM for AWS; KeyVault & Managed Identities for Azure)
  reclaimPolicy: Delete # set to either Retain or Delete; note that PVCs and PVs will still exist after a 'helm delete'. Setting to Retain will keep volumes even if PVCs/PVs are deleted in kubernetes. Setting to Delete will remove volumes from EC2 EBS when PVC is deleted

quorumFlags:
  privacy: false
  removeKeysOnDelete: false

aws:
  # the aws cli commands use the name 'quorum-node-secrets-sa' so only change this if you altered the name
  serviceAccountName: quorum-node-secrets-sa
  # the region you are deploying to
  region: ap-southeast-2

azure:
  # the script/bootstrap.sh uses the name 'quorum-pod-identity' so only change this if you altered the name
  identityName: quorum-pod-identity
  # the clientId of the user assigned managed identity created in the template
  identityClientId: azure-clientId
  keyvaultName: azure-keyvault
  # the tenant ID of the key vault
  tenantId: azure-tenantId
  # the subscription ID to use - this needs to be set explicitly when using multi tenancy
  subscriptionId: azure-subscriptionId
```
Setting `cluster.cloudNativeServices: true` will (see the install-time override example after this list):
* Store keys in Azure Key Vault or AWS Secrets Manager.
* Use Azure Managed Identities or AWS IAM for pod identity access.
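As a sketch only (the chart paths and values files here come from the later deployment steps), you can also override these keys on the command line with `--set` instead of editing the values file:
```bash
# example only - override the cluster map at install time for an AWS deployment
helm install validator-1 ./charts/besu-node --namespace besu \
  --values ./values/validator.yml \
  --set cluster.provider=aws \
  --set cluster.cloudNativeServices=true
```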
!!! note
@ -31,7 +67,7 @@ If you're running the cluster on AWS or Azure, update the `values.yml` with `pro
### 1. Check that you can connect to the cluster with `kubectl`
Verify kubectl is connected to cluster using:
Verify kubectl is connected to the cluster (use the latest version of kubectl):
```bash
kubectl version
@ -44,7 +80,7 @@ Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCom
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
```
### 2. Deploy the network
### 2. Create the namespace
This tutorial isolates groups of resources (for example, StatefulSets and Services) within a single cluster.
@ -60,11 +96,24 @@ Run the following in a terminal window:
kubectl create namespace besu
```
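To confirm the namespace exists before deploying anything into it, you can (for example) run:
```bash
# optional check - the new besu namespace should appear in the list
kubectl get namespaces
```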
### 3. Deploy the metrics chart
### 3. Deploy the monitoring chart
This chart deploys Prometheus and Grafana to monitor the metrics of the cluster, nodes and state of the network.
This chart deploys Prometheus and Grafana to monitor the cluster, nodes, and state of the network.
Each Besu pod has [`annotations`](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
which allow Prometheus to scrape metrics from the pod at a specified port and path. For example:
Update the admin `username` and `password` in the [monitoring values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/monitoring.yml). Configure alerts to the receiver of your choice (for example, email or Slack), then deploy the chart using:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --version 34.10.0 --namespace=besu --values ./values/monitoring.yml --wait
kubectl --namespace besu apply -f ./values/monitoring/
```
Metrics are collected via a
[ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md)
that scrapes each Besu pod, using the
[`annotations`](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) that specify the
port and path to use. For example:
```bash
template:
@ -75,68 +124,153 @@ which allow Prometheus to scrape metrics from the pod at a specified port and pa
prometheus.io/path: "/metrics"
```
Update the admin `username` and `password` in the [monitoring values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/monitoring.yml).
Configure alerts to the receiver of your choice (for example, email or Slack), then deploy the chart using:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --version 34.6.0 --namespace=quorum --create-namespace --values ./values/monitoring.yml --wait
kubectl --namespace quorum apply -f ./values/monitoring/
```
!!! warning
For production use cases, configure Grafana with one of the supported [native auth mechanisms](https://grafana.com/docs/grafana/latest/auth/).
![k8s-metrics](../../images/kubernetes-grafana.png)
Optionally you can also deploy the [Elastic Stack](https://www.elastic.co/elastic-stack/) to view logs (and metrics).
```bash
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch --version 7.16.3 elastic/elasticsearch --namespace quorum --create-namespace --values ./values/elasticsearch.yml
helm install kibana --version 7.16.3 elastic/kibana --namespace quorum --values ./values/kibana.yml
helm install filebeat elastic/filebeat --namespace quorum --values ./values/filebeat.yml
# to get metrics, please install metricbeat with config that is similar to filebeat and once complete create a `metricbeat` index in kibana
# if on cloud
helm install elasticsearch --version 7.17.1 elastic/elasticsearch --namespace quorum --values ./values/elasticsearch.yml
# if local - set the replicas to 1
helm install elasticsearch --version 7.17.1 elastic/elasticsearch --namespace quorum --values ./values/elasticsearch.yml --set replicas=1 --set minimumMasterNodes=1
helm install kibana --version 7.17.1 elastic/kibana --namespace quorum --values ./values/kibana.yml
helm install filebeat --version 7.17.1 elastic/filebeat --namespace quorum --values ./values/filebeat.yml
```
If you install `filebeat`, please create a `filebeat-*` index pattern in `kibana`. All the logs from the nodes are sent to the `filebeat` index.
If you use the Elastic Stack for logs and metrics, please deploy `metricbeat` in a similar manner to `filebeat` (see
the example below) and create a matching index pattern in Kibana.
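A sketch of the `metricbeat` install, assuming you maintain a `./values/metricbeat.yml` override similar to the filebeat one (that file name is illustrative, not part of the repository layout described here):
```bash
# example only - deploy metricbeat alongside filebeat; the values file name is hypothetical
helm install metricbeat --version 7.17.1 elastic/metricbeat --namespace quorum --values ./values/metricbeat.yml
```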
You can optionally deploy BlockScout to aid with monitoring the network. To do this, update the
[BlockScout values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/blockscout-besu.yml)
and set the `database` and `secret_key_base` values.
![k8s-elastic](../../images/kubernetes-elastic.png)
!!! important
To connect to Kibana or Grafana, we also need to deploy an ingress so you can access your monitoring endpoints
publicly. We use Nginx as our ingress here, and you are free to configure any ingress per your requirements.
Changes to the database require changes to both the `database` and the `blockscout` dictionaries.
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install quorum-monitoring-ingress ingress-nginx/ingress-nginx \
--namespace quorum \
--set controller.ingressClassResource.name="monitoring-nginx" \
--set controller.ingressClassResource.controllerValue="k8s.io/monitoring-ingress-nginx" \
--set controller.replicaCount=1 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.externalTrafficPolicy=Local
kubectl apply -f ../ingress/ingress-rules-monitoring.yml
```
Once complete, view the IP address listed under the `Ingress` section of the Kubernetes Dashboard, or on the command
line with `kubectl -n quorum get services quorum-monitoring-ingress-ingress-nginx-controller`.
!!! note
We refer to this ingress as `external-nginx` because it deals with monitoring endpoints specifically. We
also deploy a second ingress called `network-ingress`, which is for the blockchain nodes only, in [step 8](#8-connecting-to-the-node-from-your-local-machine-via-an-ingress).
![`k8s-ingress-external`](../../images/kubernetes-ingress-ip.png)
You can view the Besu dashboard by going to:
```bash
http://<INGRESS_IP>/d/XE4V0WGZz/besu-overview?orgId=1&refresh=10s
```
You can view the Kibana dashboard (if deployed) by going to:
```bash
http://<INGRESS_IP>/kibana
```
### 4. Deploy the genesis chart
The genesis chart creates the genesis file and keys for the validators and bootnodes.
The genesis chart creates the genesis file and keys for the validators.
!!! warning
It's important to keep the release names of the bootnodes and validators as per this tutorial, that is `bootnode-n` and
`validator-n` (for the initial validator pool), where `n` is the node number. Any validators created after the initial
pool can be named to anything you like.
It's important to keep the release names of the initial validator pool as per this tutorial, that is
`validator-n`, where `n` is the node number. Any validators created after the initial pool can be named
anything you like.
Update the number of validators, accounts, chain ID, and any parameters for the genesis file in the
[`genesis-besu` values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/genesis-besu.yml), then
deploy the chart using:
The override [values.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/genesis-besu.yml)
looks like the following:
```bash
---
quorumFlags:
  removeGenesisOnDelete: true

cluster:
  provider: local # choose from: local | aws | azure
  cloudNativeServices: false

aws:
  # the aws cli commands use the name 'quorum-node-secrets-sa' so only change this if you altered the name
  serviceAccountName: quorum-node-secrets-sa
  # the region you are deploying to
  region: ap-southeast-2

azure:
  # the script/bootstrap.sh uses the name 'quorum-pod-identity' so only change this if you altered the name
  identityName: quorum-pod-identity
  # the clientId of the user assigned managed identity created in the template
  identityClientId: azure-clientId
  keyvaultName: azure-keyvault
  # the tenant ID of the key vault
  tenantId: azure-tenantId
  # the subscription ID to use - this needs to be set explicitly when using multi tenancy
  subscriptionId: azure-subscriptionId

# the raw Genesis config
# rawGenesisConfig.blockchain.nodes sets the number of validators/signers
rawGenesisConfig:
  genesis:
    config:
      chainId: 1337
      algorithm:
        consensus: qbft # choose from: ibft | qbft | raft | clique
        blockperiodseconds: 10
        epochlength: 30000
        requesttimeoutseconds: 20
    gasLimit: '0x47b760'
    difficulty: '0x1'
    coinbase: '0x0000000000000000000000000000000000000000'
  blockchain:
    nodes:
      generate: true
      count: 4
    accountPassword: 'password'
```
Please set the `aws`, `azure`, and `cluster` keys as per the [Provisioning](#provision-with-helm-charts) step.
`quorumFlags.removeGenesisOnDelete: true` tells the chart to delete the genesis file when the chart is deleted.
If you wish to retain the genesis on deletion, set that value to `false`.
The last config item is `rawGenesisConfig`, which has details of the chain you are creating. Edit any of the
parameters in there to match your requirements. To set the number of initial validators, set
`rawGenesisConfig.blockchain.nodes.count` to the number you'd like. We recommend using the Byzantine fault tolerance
formula `N=3F+1` when setting the number of validators, where `N` validators tolerate `F` faulty ones (for example,
`N=4` tolerates `F=1`).
Note that when `cluster.cloudNativeServices: true` is set, the genesis job does not add the
[Quickstart](../../Tutorials/Developer-Quickstart.md) test accounts to the genesis file.
When you are ready, deploy the chart with:
```bash
cd helm
helm install genesis ./charts/besu-genesis --namespace besu --create-namespace --values ./values/genesis-besu.yml
```
Once completed, view the genesis and enodes (the list of static nodes) configuration maps that every Besu node uses,
and the validator and bootnode node keys as secrets.
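One generic way to list these (assuming the `besu` namespace used throughout this tutorial):
```bash
# optional check - the genesis/enodes configmaps and the node key secrets created by the genesis job
kubectl --namespace besu get configmaps
kubectl --namespace besu get secrets
```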
![k8s-genesis-configmaps](../../images/kubernetes-genesis-configmaps.png)
@ -144,36 +278,69 @@ the validator and bootnode node keys as secrets.
### 5. Deploy the bootnodes
The Dev charts use two bootnodes to replicate best practices for a production network. Each Besu node has flags
that tell the StatefulSet what to deploy and how to clean up.
This is an optional but recommended step. In a production setup we recommend the use of two or more bootnodes
as a best practice. Each Besu node has a map that tells the StatefulSet what to deploy and how to clean up.
The default `values.yml` for the StatefulSet defines the following flags, which are present in all the
override values files.
```bash
---
quorumFlags:
  privacy: false
  removeKeysOnDelete: true
  isBootnode: true # set this to true if this node is a bootnode
  usesBootnodes: true # set this to true if the network you are connecting to uses bootnodes deployed in the cluster

cluster:
  provider: local # choose from: local | aws | azure
  cloudNativeServices: false
  reclaimPolicy: Delete # set to either Retain or Delete; note that PVCs and PVs will still exist after a 'helm delete'. Setting to Retain will keep volumes even if PVCs/PVs are deleted in kubernetes. Setting to Delete will remove volumes from EC2 EBS when PVC is deleted

aws:
  # the aws cli commands use the name 'quorum-node-secrets-sa' so only change this if you altered the name
  serviceAccountName: quorum-node-secrets-sa
  # the region you are deploying to
  region: ap-southeast-2

azure:
  # the script/bootstrap.sh uses the name 'quorum-pod-identity' so only change this if you altered the name
  identityName: quorum-pod-identity
  # the clientId of the user assigned managed identity created in the template
  identityClientId: azure-clientId
  keyvaultName: azure-keyvault
  # the tenant ID of the key vault
  tenantId: azure-tenantId
  # the subscription ID to use - this needs to be set explicitly when using multi tenancy
  subscriptionId: azure-subscriptionId

node:
  besu:
    metrics:
      serviceMonitorEnabled: true
    resources:
      cpuLimit: 1
      cpuRequest: 0.1
      memLimit: "2G"
      memRequest: "1G"
```
We don't generate keys for the bootnodes and initial validator pool. To create a Tessera pod paired to Besu
for private transactions, set the `privacy` flag to `true`. Optionally remove the secrets for the node if you
delete the StatefulSet (for example removing a member node) by setting the `removeKeysOnDeletion` flag to `true`.
Please set the `aws`, `azure`, and `cluster` keys as per the [Provisioning](#provision-with-helm-charts) step.
`quorumFlags.removeKeysOnDelete: true` tells the chart to delete the node's keys when the chart is deleted.
If you wish to retain the keys on deletion, set that value to `false`.
For the bootnodes, set the `bootnode` flag to `true` to indicate they are bootnodes. All the other nodes
(for example, validators, and members) wait for the bootnodes to be up before proceeding, and have this flag set to `false`.
For the bootnodes only, set `quorumFlags.isBootnode: true`. When using bootnodes, you must also set
`quorumFlags.usesBootnodes: true` to indicate that all nodes on the network will use these bootnodes.
!!! note
If you use bootnodes, you must set `quorumFlags.usesBootnodes: true` in the override values.yaml for
every other node type, that is, validators.yaml, txnode.yaml, and reader.yaml.
```bash
helm install bootnode-1 ./charts/besu-node --namespace besu --values ./values/bootnode.yml
helm install bootnode-2 ./charts/besu-node --namespace besu --values ./values/bootnode.yml
```
!!! warning
It's important to keep the release names of the bootnodes the same, as they are tied to the keys that the genesis
chart creates. So we use `bootnode-1` and `bootnode-2` in the previous command.
Once complete, you see two StatefulSets, and the two bootnodes discover each other and peer.
Because there are no validators present yet, there are no blocks created, as seen in the following logs.
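If you want to check this yourself, a couple of generic commands (exact pod names depend on the chart's StatefulSet naming) are:
```bash
# optional check - list the StatefulSets and follow a bootnode's logs; replace the placeholder with a real pod name
kubectl --namespace besu get statefulsets,pods
kubectl --namespace besu logs --follow <BOOTNODE_POD_NAME>
```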
@ -184,6 +351,33 @@ Because there are no validators present yet, there are no blocks created, as see
The validators peer with the bootnodes and each other, and when a majority of the validators have peered, blocks
are proposed and created on the chain.
Validators are the next set of nodes that we deploy. The charts use four validators by default to replicate best
practices for a network. The override
[values.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/validator.yml) for the
StatefulSet looks like the following:
```bash
---
quorumFlags:
  privacy: false
  removeKeysOnDelete: false
  isBootnode: false # set this to true if this node is a bootnode
  usesBootnodes: true # set this to true if the network you are connecting to uses bootnodes deployed in the cluster
```
Please set the `aws`, `azure`, and `cluster` keys as per the [Provisioning](#provision-with-helm-charts) step.
`quorumFlags.removeKeysOnDelete: true` tells the chart to delete the node's keys when the chart is deleted.
If you wish to retain the keys on deletion, set that value to `false`.
!!! warning
Please note that if you delete a majority of the validators, the network will halt. Additionally, if the
validator keys are deleted, you may not be able to recover, as you need a majority of the validators online to
vote new validators into the pool.
When using bootnodes (if deployed in the previous step), you must also set `quorumFlags.usesBootnodes: true`
to indicate that all nodes on the network will use these bootnodes.
For the initial validator pool we set all the node flags to `false` and then deploy.
```bash
@ -195,36 +389,53 @@ helm install validator-4 ./charts/besu-node --namespace besu --values ./values/v
!!! warning
As with the bootnodes, it's important to keep the release names of the initial validators the same as it is tied
to the keys that the genesis chart creates. So we use `validator-1`, `validator-2`, etc. in the previous command.
It's important to keep the release names of the validators the same, as they are tied to the keys that the genesis
chart creates. So we use `validator-1`, `validator-2`, etc. in the following command.
Once completed, you may need to give the validators a few minutes to peer and for round changes, depending on when the
first validator was spun up, before the logs display blocks being created.
![k8s-validator-logs](../../images/kubernetes-validator-logs.png)
**To add a validator into the network**, deploy a normal RPC node (step 7) and then
[vote](../../HowTo/Configure/Consensus-Protocols/IBFT.md#add-and-remove-validators) it into the validator pool.
### 7. Add/Remove additional validators to the validator pool
To add (or remove) more validators to the initial validator pool, you need to deploy a node such as an RPC node (step 8)
and then [vote](../../HowTo/Configure/Consensus-Protocols/IBFT.md#add-and-remove-validators) that node in. The vote API
call must be made on a majority of the existing pool and the new node will then become a validator.
Please refer to the [Ingress Section](#8-connecting-to-the-node-from-your-local-machine-via-an-ingress) for details on
making the API calls from your local machine or equivalent.
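As a sketch only (assuming the QBFT consensus configured in the genesis above, and that the `QBFT` RPC API is enabled on the nodes), a single vote against one existing validator's RPC endpoint looks something like the following; repeat it against a majority of the pool:
```bash
# sketch only - propose adding a validator; <VALIDATOR_RPC> and <NEW_VALIDATOR_ADDRESS> are placeholders
curl -X POST --data '{"jsonrpc":"2.0","method":"qbft_proposeValidatorVote","params":["<NEW_VALIDATOR_ADDRESS>", true],"id":1}' <VALIDATOR_RPC>
```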
### 8. Deploy RPC or Transaction nodes
### 8. Deploy RPC or transaction nodes
An RPC node is simply a node that can be used to make public transactions or perform read-heavy operations, such
as when connected to a chain explorer like [BlockScout](https://github.com/blockscout/blockscout).
These nodes need their own node keys, so set the `generateKeys` flag to `true` for a standard RPC node.
For a transaction node (Besu paired with Tessera for private transactions), set the `privacy` flag to `true` and
deploy in the same manner as before.
The RPC override
[values.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/reader.yml) for the
StatefulSet looks identical to that of the validators above, and will create its own node keys before the node starts.
For an RPC node with the release name `rpc-1`:
To deploy an RPC node:
```bash
helm install rpc-1 ./charts/besu-node --namespace besu --values ./values/reader.yml
```
For a transaction node release name `tx-1`:
A Transaction or Member node, in turn, is one that has an accompanying Private Transaction Manager such as Tessera,
which allows you to make private transactions between nodes.
The Transaction override
[values.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/txnode.yml) for the
StatefulSet looks identical to that of the validators above and only has `quorumFlags.privacy: true` to indicate that
it is deploying a pair of Besu and Tessera nodes.
To deploy a Transaction or Member node:
```bash
helm install tx-1 ./charts/besu-node --namespace besu --values ./values/txnode.yml
helm install member-1 ./charts/besu-node --namespace besu --values ./values/txnode.yml
```
Logs for `tx-1` resemble the following for Tessera:
Logs for `member-1` resemble the following for Tessera:
![`k8s-tx-tessera-logs`](../../images/kubernetes-tx-tessera-logs.png)
@ -232,47 +443,47 @@ Logs for Besu resemble the following:
![`k8s-tx-Besu-logs`](../../images/kubernetes-tx-Besu-logs.png)
### 8. Connect to the node from your local machine via an ingress
!!! note
In the examples above we use `member-1` and `rpc-1` as release names for the deployments. You can pick any release
name that you'd like to use in place of those as per your requirements.
### 9. Connect to the node from your local machine via an ingress
To view Grafana dashboards or connect to the nodes to make transactions from your local machine, you can
In order to view the Grafana dashboards or connect to the nodes to make transactions from your local machine you can
deploy an ingress controller with rules. We use the `ingress-nginx` ingress controller which can be deployed as follows:
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install besu-ingress ingress-nginx/ingress-nginx \
--namespace besu \
helm install quorum-network-ingress ingress-nginx/ingress-nginx \
--namespace quorum \
--set controller.ingressClassResource.name="network-nginx" \
--set controller.ingressClassResource.controllerValue="k8s.io/network-ingress-nginx" \
--set controller.replicaCount=1 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.service.externalTrafficPolicy=Local
```
Use [pre-defined rules](https://github.com/ConsenSys/quorum-kubernetes/blob/master/ingress/ingress-rules-besu.yml)
to test functionality, and alter to suit your requirements (for example, to connect to multiple nodes via different paths).
to test functionality, and alter to suit your requirements (for example, restrict access for API calls to trusted CIDR blocks).
Edit the [rules](https://github.com/ConsenSys/quorum-kubernetes/blob/master/ingress/ingress-rules-besu.yml) file so that the
service names match your release name. In the example, we deployed a transaction node with the release name `member-1`
so the corresponding service is called `besu-node-member-1` for the `rpc` and `ws` path prefixes. Once you have settings
so the corresponding service is called `besu-node-member-1`. Once you have settings
that match your deployments, deploy the rules as follows:
```bash
kubectl apply -f ../../ingress/ingress-rules-besu.yml
kubectl apply -f ../ingress/ingress-rules-besu.yml
```
Once complete, view the IP address under the `Ingress` section if you're using the Kubernetes Dashboard
or equivalent `kubectl` command.
Once complete, view the IP address listed under the `Ingress` section of the Kubernetes Dashboard, or on the command
line with `kubectl -n quorum get services quorum-network-ingress-ingress-nginx-controller`.
![`k8s-ingress`](../../images/kubernetes-ingress-ip.png)
You can view the Grafana dashboard by going to:
```bash
# For Besu's grafana address:
http://<INGRESS_IP>/d/XE4V0WGZz/besu-overview?orgId=1&refresh=10s
```
The following is an example RPC call, which confirms that the node running the JSON-RPC service is syncing:
=== "curl HTTP request"
@ -290,3 +501,45 @@ The following is an example RPC call, which confirms that the node running the J
"result" : "0x4e9"
}
```
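A minimal sketch of such a call, assuming the ingress rules above route the `rpc` path prefix to the node's JSON-RPC service and that the example uses `eth_blockNumber`:
```bash
# example only - query the current block number through the ingress
curl -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://<INGRESS_IP>/rpc
```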
### 10. Blockchain explorer
You can deploy [BlockScout](https://github.com/blockscout/blockscout) to aid with monitoring the blockchain.
To do this, update the [BlockScout values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/blockscout-besu.yml)
and set the `database` and `secret_key_base` values.
!!! important
Changes to the database require changes to both the `database` and the `blockscout` dictionaries.
Once completed, deploy the chart using:
```bash
helm dependency update ./charts/blockscout
helm install blockscout ./charts/blockscout --namespace besu --values ./values/blockscout-besu.yml
```
You can optionally deploy the [Quorum-Explorer](https://github.com/ConsenSys/quorum-explorer) as a lightweight
blockchain explorer. The Quorum Explorer is not recommended for use in production and is intended for
demonstration or development purposes only. The Explorer gives an overview of the whole network: it can
query each node on the network for node or block information, vote validators into or out of the
network, demonstrate a SimpleStorage smart contract with privacy enabled, and send transactions between
wallets as you would in MetaMask. Please see the [Explorer](./Quorum-Explorer.md) page for details on how
to use the application.
!!! warning
The accounts listed in the file below are for test purposes only and should not be used on a production network.
To deploy the application, update the
[Explorer values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/explorer-besu.yaml)
with details of your nodes and endpoints and then deploy.
```bash
helm install quorum-explorer ./charts/explorer --namespace besu --values ./values/explorer-besu.yaml
```
You also need to deploy the ingress (if not already done in [Monitoring](#3-deploy-the-monitoring-chart)) to
access the endpoint at `http://<INGRESS_IP>/explorer`.
![`k8s-explorer`](../../images/kubernetes-explorer.png)

@ -25,23 +25,29 @@ Helm charts that you can customize and deploy on a local cluster or in the cloud
directory and working through the example setups before moving to the
[`Helm charts`](https://github.com/ConsenSys/quorum-kubernetes/tree/master/helm/) directory.
The Helm charts follow best practices to manage identity (Managed Identities in Azure and IAM in AWS),
vaults (KeyVault in Azure and Secrets Manager in AWS), and CSI drivers.
The `helm` directory contains charts for the various components, and each chart has a `cluster` map with features that
you can toggle.
Provided Helm charts use monitoring, and we recommend deploying the monitoring manifests or charts
to get an overview of the network, nodes, and volumes, and you can create alerts accordingly.
```bash
cluster:
  provider: local # choose from: local | aws | azure
  cloudNativeServices: false # set to true to use Cloud Native Services (SecretsManager and IAM for AWS; KeyVault & Managed Identities for Azure)
```
An example configuration is available for ingress and routes that you can customize to suit your requirements.
Setting `cluster.cloudNativeServices: true` stores keys in AWS Secrets Manager or Azure Key Vault instead of Kubernetes
Secrets, and will also make use of AWS IAM or Azure Managed Identities for the pods.
### Cloud support
The charts support on premise AWS EKS and Azure AKS cloud providers natively. You can configure the provider in
the [values.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/genesis-goquorum.yml)
file by setting `provider` to `local`, `aws`, or `azure`.
You can also pass in extra configuration such as a KeyVault name (Azure only).
The repository's `helm` charts support on-premise deployments and cloud providers such as AWS, Azure, GCP, and IBM. You can
configure the provider in the
[values.yml](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/genesis-besu.yml) file of
the respective charts by setting `cluster.provider` to `local`, `aws`, or `azure`. If you use
GCP, IBM, or another provider, set `cluster.provider: local` and `cluster.cloudNativeServices: false`.
The repository also contains [Azure ARM templates](https://github.com/ConsenSys/quorum-kubernetes/tree/master/azure)
and [AWS `eksctl` templates](https://github.com/ConsenSys/quorum-kubernetes/tree/master/aws) to deploy the
required base infrastructure.
## Limitations
@ -82,21 +88,85 @@ cloud or on premise.
## Concepts
### Namespaces
In Kubernetes, [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) provide a
mechanism for isolating groups of resources within a single cluster.
Both namespaces and resources (for example, StatefulSets or Services) within a namespace must be unique, but resources
across namespaces don't need to be.
!!! note
Namespace-based scoping is not applicable for cluster-wide objects (for example, StorageClass or PersistentVolumes).
### Nodes
Consider using StatefulSets instead of Deployments for Besu. The term 'client node' refers to bootnode, validator
and member/RPC nodes. For Besu nodes, we only use CLI arguments to keep things consistent.
### Role Based Access Controls
We encourage using RBACs for access to the private key of each node, that is, only a specific pod or statefulset is
allowed to access a specific secret.
If you need to specify a Kube configuration file for each pod, use the KUBE_CONFIG_PATH variable.
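A minimal sketch of such a policy, with purely illustrative names, pairs a Role that can read a single node's key secret with a RoleBinding to that node's service account:
```bash
# sketch only - all names below are illustrative, not the charts' actual resource names
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: validator-1-key-reader
  namespace: besu
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["besu-node-validator-1-keys"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: validator-1-key-reader-binding
  namespace: besu
subjects:
  - kind: ServiceAccount
    name: besu-node-validator-1-sa
    namespace: besu
roleRef:
  kind: Role
  name: validator-1-key-reader
  apiGroup: rbac.authorization.k8s.io
```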
### Storage
We recommend you use [storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) and
[persistent volume claims (PVCs)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims).
We use separate data volumes to store the blockchain data. This is similar to using separate volumes
to store data when using docker containers natively or docker-compose. This is done for
a few reasons:
* Containers are mortal and we do not want to store data on them.
* Kubernetes host nodes can fail and we want the chain data to persist.
### Namespaces
Ensure that you provide enough data storage capacity for all nodes on the cluster.
Select the appropriate type of [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) based
on your cloud provider. In the templates, the size of the [volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
is set to 20Gb by default; you can change this depending on your needs. If you have a different storage
account than the one in the charts, you may edit those
[storageClasses](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/charts/besu-node/templates/node-storage.yaml).
When using PVCs, set `allowVolumeExpansion` to `true` (see the example below). This helps keep costs low and enables
growing the volume over time rather than creating new volumes and copying data across.
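For example, a sketch of a storage class with expansion enabled, using the AWS EBS CSI provisioner (the name and parameters are illustrative):
```bash
# sketch only - an example storage class with volume expansion enabled
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: besu-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```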
### Monitoring
We recommend deploying metrics to get an overview of the network, nodes, and volumes. You can also create alerts.
[Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) provide
a mechanism for isolating groups of resources within a single cluster. Both namespaces and
resources (for example, StatefulSets and services) within a namespace need to be unique, but
resources across namespaces do not.
Besu publishes metrics to Prometheus, and you can configure metrics using the kubernetes scraper configuration. We
also have custom Grafana dashboards to monitor the blockchain.
!!! note
Namespace-based scoping is not applicable for cluster-wide objects (for example, storage classes and persistent volumes).
Refer to `values/monitoring.yml` to configure the alerts per your requirements (for example, Slack or email).
```bash
cd helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack --version 34.10.0 --namespace=besu --create-namespace --values ./values/monitoring.yml --wait
kubectl --namespace besu apply -f ./values/monitoring/
```
You can configure Besu to suit your environment. For example, use the Elastic charts to log to a file
that you can parse using Logstash into an ELK cluster.
```bash
cd helm
helm repo add elastic https://helm.elastic.co
helm repo update
# if on cloud
helm install elasticsearch --version 7.17.1 elastic/elasticsearch --namespace besu --create-namespace --values ./values/elasticsearch.yml
# if local - set the replicas to 1
helm install elasticsearch --version 7.17.1 elastic/elasticsearch --namespace besu --create-namespace --values ./values/elasticsearch.yml --set replicas=1 --set minimumMasterNodes=1
helm install kibana --version 7.17.1 elastic/kibana --namespace besu --values ./values/kibana.yml
helm install filebeat --version 7.17.1 elastic/filebeat --namespace besu --values ./values/filebeat.yml
```
### Ingress Controllers
If you require the ingress controllers for the RPC calls or the monitoring dashboards, we have provided example
[rules](https://github.com/ConsenSys/quorum-kubernetes/blob/master/ingress/ingress-rules-besu.yml) that
are pre-configured for common use cases. Use these as a reference and develop solutions to match your network
topology and requirements.

@ -28,10 +28,10 @@ Consider the following when deploying and developing with the playground:
* The playground is created specifically for developers and operators to become familiar with the deployment of Besu in
a Kubernetes environment in preparation for going into a cloud or on-premise environment.
Thus, it should **not** be deployed into a production environment.
* The playground is not a complete reflection of the `dev` and `prod` charts as it does not use `Helm`, but rather
* The playground is not a complete reflection of the `helm` charts as it does not use `Helm`, but rather
static or non-templated code that is deployed through `kubectl apply -f`.
This means that without `Helm` there's a significant amount of repeated code.
This is fine for development but not ideal for a production environment.
* The playground uses static/hard-coded keys.
Automatic key generation is only supported in `dev` and `prod` charts.
Automatic key generation is only supported in `helm` charts.
* As the playground is for local development, no cloud integration or lifecycle support is offered.

@ -12,13 +12,8 @@ description: Deploying Besu Helm Charts for production on a Kubernetes cluster
## Overview
The charts in the `prod` folder are similar to those in the `dev` folder but use cloud native services for
**identities** (IAM on AWS and a Managed Identity on Azure) and **secrets** (Secrets Manager on AWS and Key Vault on
Azure). Any keys or secrets are created directly in Secrets Manager or Key Vault, and the Identity is given permission to
retrieve those secrets at runtime. No Kubernetes secrets objects are created.
Access to these secrets is granted under a least-privilege policy, and access to them is denied for
users. If any admins need access to them, they must update the IAM policy.
To get things production-ready, we'll use the same charts, and set a few of
the values in the `cluster` map as in the [Deploy](#deploy-the-network) section.
!!!warning
@ -44,19 +39,25 @@ Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCom
### Deploy the network
For the rest of this tutorial we use the [**Helm charts**](https://github.com/ConsenSys/quorum-kubernetes/tree/master/helm). After you have cloned the [Quorum-Kubernetes](https://github.com/ConsenSys/quorum-kubernetes) repository,
change the directory to `prod` for the rest of this tutorial.
For the rest of this tutorial we use Helm charts. After you have cloned the
[Quorum-Kubernetes](https://github.com/ConsenSys/quorum-kubernetes) repository, change the directory to `helm` for
the rest of this tutorial.
```bash
cd helm
```
!!!attention
Each helm chart has the following keys that must be set.
Specify either `aws` or `azure` for the `cluster.provider`. Additionally, set `cloudNativeServices: true` and
`reclaimPolicy: Retain` so that it looks like the following for AWS:
Please update all the [values files](https://github.com/ConsenSys/quorum-kubernetes/tree/master/helm/values)
with your choice of cloud provider (AWS or Azure) and set `provider: aws` or `provider: azure` as required.
Depending on the provider, you may also need to update the `azure:` or `aws:` dictionaries with specifics of your
cluster and account.
```bash
cluster:
  provider: aws # choose from: aws | azure
  cloudNativeServices: true # set to true to use Cloud Native Services (SecretsManager and IAM for AWS; KeyVault & Managed Identities for Azure)
  reclaimPolicy: Retain # set to either Retain or Delete; note that PVCs and PVs will still exist after a 'helm delete'. Setting to Retain will keep volumes even if PVCs/PVs are deleted in kubernetes. Setting to Delete will remove volumes from EC2 EBS when PVC is deleted
```
Follow the steps outlined in the [deploy charts](./Deploy-Charts.md) tutorial to deploy the network.

@ -0,0 +1,83 @@
---
title: Besu Kubernetes - Quorum Explorer
description: Using the Quorum Explorer on a Kubernetes cluster
---
## Prerequisites
* Clone the [Quorum-Kubernetes](https://github.com/ConsenSys/quorum-kubernetes) repository
* A [running Kubernetes cluster](./Create-Cluster.md)
* [Kubectl](https://kubernetes.io/docs/tasks/tools/)
* [Helm3](https://helm.sh/docs/intro/install/)
* [Existing network](./Deploy-Charts.md)
## Deploying the Quorum Explorer helm chart
You can deploy the [Quorum-Explorer](https://github.com/ConsenSys/quorum-explorer) as a lightweight
blockchain explorer. The Quorum Explorer is **not** recommended for use in production and is intended for
demonstration or development purposes only.
The explorer provides an overview of the whole network, such as block information, lets you vote validators into or
remove them from the network, demonstrates the `SimpleStorage` smart contract with privacy enabled, and lets you send
transactions between wallets, all in one interface.
To use the explorer, update the [Quorum-Explorer values file](https://github.com/ConsenSys/quorum-kubernetes/blob/master/helm/values/explorer-besu.yaml)
with your node details and endpoints, and then [deploy](./Deploy-Charts.md).
## Nodes
The **Nodes** page provides an overview of the nodes on the network. Select the node you would like to interact
with from the drop-down on the top right, and you'll get details of the node, block height, peers, queued
transactions, and so on.
![`k8s-explorer`](../../images/kubernetes-explorer.png)
## Validators
The **Validators** page simulates a production environment or consortium where each node individually
runs API calls to vote to add a validator or remove an existing validator.
When you use the buttons to propose a validator, remove a validator, or discard a pending validator, the app sends an
API request to the node selected in the drop-down only. To add or remove a validator, you need to select a
majority of the existing validator pool individually and perform the vote API call from each by clicking the button.
Each node can call a discard on the voting process during or after the validator has been added.
The vote calls made from non-validator nodes have no effect on overall consensus.
![`k8s-explorer-validators`](../../images/kubernetes-explorer-validators.png)
## Explorer
The **Explorer** page gives you the latest blocks from the chain and the latest transactions as they
occur on the network. In addition, you can search by block number or transaction hash using the respective
search bar.
![`k8s-explorer-explorer`](../../images/kubernetes-explorer-explorer.png)
## Contracts
Use the **Contracts** page to compile and deploy a smart contract. Currently, the only contract available
for deployment through the app is the `SimpleStorage` contract. However, in time, we plan
to add more contracts to that view.
In this example, we deploy from `member-1` and select `member-1` and `member-3` in
the **Private For** multi-select. Then click `Compile` and `Deploy`.
![`k8s-explorer-contracts-1`](../../images/kubernetes-explorer-contracts-1.png)
Once deployed, you can interact with the contract. As this is a new transaction, select `member-1`
and `member-3` in the **Interact** multi-select, and then click the appropriate method call to `get`
or `set` the value at the deployed contract address.
![`k8s-explorer-contracts-set`](../../images/kubernetes-explorer-contracts-set.png)
To test the private transaction functionality, select `member-2` from the drop-down on
the top right. You'll notice that you are unable to interact with the contract because `member-2` was not part
of the transaction. Only `member-1` and `member-3` respond correctly.
## Wallet
The **Wallet** page gives you the functionality to send simple ETH transactions between accounts by providing
the account's private key, the recipient's address, and the transfer amount in Wei.
![`k8s-explorer-wallet`](../../images/kubernetes-explorer-wallet.png)

Binary image files changed (not shown): eight new images added under `docs/images/` (57–252 KiB each) and `docs/images/kubernetes-ingress-ip.png` updated (43 KiB → 237 KiB).

@ -181,6 +181,7 @@ nav:
- Local playground: Tutorials/Kubernetes/Playground.md
- Create a cluster: Tutorials/Kubernetes/Create-Cluster.md
- Deploy charts: Tutorials/Kubernetes/Deploy-Charts.md
- Quorum Explorer: Tutorials/Kubernetes/Quorum-Explorer.md
- Maintenance: Tutorials/Kubernetes/Maintenance.md
- Production: Tutorials/Kubernetes/Production.md
- Configure Kubernetes mode in NAT Manager : Tutorials/Kubernetes/Nat-Manager-Kubernetes.md
