The home for Hyperlane core contracts, SDK packages, and other infrastructure
hyperlane-monorepo/rust/main/helm/hyperlane-agent/templates/external-secret.yaml

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ include "agent-common.fullname" . }}-external-secret
  labels:
    {{- include "agent-common.labels" . | nindent 4 }}
  annotations:
    update-on-redeploy: "{{ now }}"
spec:
  secretStoreRef:
    name: {{ include "agent-common.secret-store.name" . }}
    kind: {{ .Values.externalSecrets.storeType }}
  refreshInterval: "1h"
  # The secret that will be created
  target:
    name: {{ include "agent-common.fullname" . }}-secret
    template:
      type: Opaque
      metadata:
        labels:
          {{- include "agent-common.labels" . | nindent 10 }}
      data:
        {{- /*
           * For each network, create an environment variable with the RPC endpoint.
           * The templating of external-secrets will use the data section below to know how
           * to replace the correct value in the created secret.
           */}}
        {{- range .Values.hyperlane.chains }}
        HYP_CHAINS_{{ .name | upper }}_CUSTOMRPCURLS: {{ printf "'{{ .%s_rpcs | mustFromJson | join \",\" }}'" .name }}
        {{- if eq .protocol "cosmos" }}
        HYP_CHAINS_{{ .name | upper }}_CUSTOMGRPCURLS: {{ printf "'{{ .%s_grpcs | mustFromJson | join \",\" }}'" .name }}
        {{- end }}
        {{- end }}
  data:
    {{- /*
       * For each network, load the secret from GCP Secret Manager with a name of the
       * form environment-rpc-endpoints-network, and associate it with the secret key
       * networkname_rpcs.
       */}}
    {{- range .Values.hyperlane.chains }}
    - secretKey: {{ printf "%s_rpcs" .name }}
      remoteRef:
        key: {{ printf "%s-rpc-endpoints-%s" $.Values.hyperlane.runEnv .name }}
    {{- if eq .protocol "cosmos" }}
    - secretKey: {{ printf "%s_grpcs" .name }}
      remoteRef:
        key: {{ printf "%s-grpc-endpoints-%s" $.Values.hyperlane.runEnv .name }}
    {{- end }}
    {{- end }}
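
For reference, a minimal sketch of the values this template reads (.Values.externalSecrets.storeType, .Values.hyperlane.runEnv, and .Values.hyperlane.chains entries with name and protocol fields). The store type, run environment, and chain names below are illustrative assumptions, not values from the chart:

# Hypothetical values.yaml excerpt; field names mirror the template above,
# concrete values are made up for illustration.
externalSecrets:
  storeType: ClusterSecretStore
hyperlane:
  runEnv: mainnet3
  chains:
    - name: ethereum
      protocol: ethereum
    - name: neutron
      protocol: cosmos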
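With values like those, and assuming each GCP Secret Manager entry named per the pattern above (e.g. a hypothetical mainnet3-rpc-endpoints-ethereum) stores a JSON array of endpoint URLs, external-secrets would fetch it under the ethereum_rpcs key and the mustFromJson | join "," pipeline would render the target secret roughly as:

# Sketch of the generated secret's data (before base64 encoding), assuming the
# remote secret holds ["https://rpc-a.example.com", "https://rpc-b.example.com"].
HYP_CHAINS_ETHEREUM_CUSTOMRPCURLS: 'https://rpc-a.example.com,https://rpc-b.example.com'
# Cosmos chains additionally get a gRPC list from environment-grpc-endpoints-network.
HYP_CHAINS_NEUTRON_CUSTOMGRPCURLS: 'https://grpc-a.example.com,https://grpc-b.example.com'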