bitnamicharts/etcd

etcd is a distributed key-value store designed to securely store data across a cluster. etcd is widely used in production on account of its reliability, fault-tolerance and ease of use.
Overview of Etcd
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/etcd
```
These are hardened, minimal-CVE images built and maintained by Bitnami. Bitnami Secure Images (BSI) are based on Photon Linux, a cloud-optimized, security-hardened enterprise OS. Why choose BSI images?
Each image comes with valuable security metadata. You can view the metadata in our public catalog here. Note: Some data is only available with commercial subscriptions to BSI.
If you are looking for our previous generation of images based on Debian Linux, please see the Bitnami Legacy registry.
This chart bootstraps an etcd deployment on a Kubernetes cluster using the Helm package manager.
To install the chart with the release name my-release:
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd
```
Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
These commands deploy etcd on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
Tip: List all releases using `helm list`.
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured using the `resources` value (check the parameters table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the `resourcesPreset` value, which automatically sets the `resources` section according to different presets. Check these presets in the bitnami/common chart. However, using `resourcesPreset` in production workloads is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
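For example, to disable the preset and set explicit requests and limits, you could use values like the following (a sketch; the figures are illustrative and should be adapted to your workload):

```yaml
resourcesPreset: "none"
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```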
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
This chart can be integrated with Prometheus by setting `metrics.enabled` to `true`. This will expose the etcd native Prometheus port in the container and service (on a separate endpoint if `metrics.useSeparateEndpoint=true`). It will also have the necessary annotations to be automatically scraped by Prometheus.
It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.
The chart can deploy PodMonitor objects for integration with Prometheus Operator installations. To do so, set the value `*.metrics.podMonitor.enabled=true`. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:
```text
no matches for kind "PodMonitor" in version "monitoring.coreos.com/v1"
```
Install the Bitnami Kube Prometheus helm chart to have the necessary CRDs and the Prometheus Operator.
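For instance, using the same registry placeholders as above, the operator and its CRDs could be installed with:

```console
helm install kube-prometheus oci://REGISTRY_NAME/REPOSITORY_NAME/kube-prometheus
```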
Bitnami charts configure credentials at first boot. Any further change in the secrets or credentials requires manual intervention. Follow these instructions:
```shell
kubectl create secret generic SECRET_NAME --from-literal=etcd-root-password=PASSWORD --dry-run=client -o yaml | kubectl apply -f -
```
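If the release reads its credentials from an existing secret, pointing it at the updated secret may look like this (a sketch; the release name is an assumption, and you should verify the `auth.rbac.existingSecret` parameter against the chart's parameters table):

```console
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd \
  --set auth.rbac.existingSecret=SECRET_NAME
```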
The Bitnami etcd chart can be used to bootstrap an etcd cluster that is easy to scale and that offers features to implement disaster recovery. It uses static discovery configured via environment variables to bootstrap the etcd cluster. Based on the number of initial replicas, and using the A records added to the DNS configuration by the headless service, the chart can calculate every advertised peer URL.
The chart makes use of some extra elements offered by Kubernetes to ensure the bootstrapping is successful: a "Parallel" Pod Management Policy, so all replicas are started simultaneously, and DNS records for "not ready" pods, so members can discover each other while bootstrapping.
Learn more about etcd discovery, Pod Management Policies and recording "not ready" pods.
Here is an example of the environment configuration bootstrapping an etcd cluster with 3 replicas:
| Member | Variable | Value |
|---|---|---|
| 0 | ETCD_NAME | etcd-0 |
| 0 | ETCD_INITIAL_ADVERTISE_PEER_URLS | <[***]> |
| 1 | ETCD_NAME | etcd-1 |
| 1 | ETCD_INITIAL_ADVERTISE_PEER_URLS | <[***]> |
| 2 | ETCD_NAME | etcd-2 |
| 2 | ETCD_INITIAL_ADVERTISE_PEER_URLS | <[***]> |
| * | ETCD_INITIAL_CLUSTER_TOKEN | etcd-cluster-k8s |
| * | ETCD_INITIAL_CLUSTER | etcd-0=<[]>,etcd-1=<[]>,etcd-2=<[***]> |
The probes (readiness & liveness) are delayed 60 seconds by default, to give the etcd replicas time to start and find each other. After that period, the `etcdctl endpoint health` command is used to periodically perform health checks on every replica.
The Bitnami etcd chart uses etcd reconfiguration operations to add/remove members of the cluster during scaling.
When scaling down, a "pre-stop" lifecycle hook is used to ensure that the etcdctl member remove command is executed. The hook stores the output of this command in the persistent volume attached to the etcd pod. This hook is also executed when the pod is manually removed using the kubectl delete pod command or rescheduled by Kubernetes for any reason. This implies that the cluster can be scaled up/down without human intervention.
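For example, scaling the cluster is a matter of changing the replica count (a sketch; the release name and registry placeholders follow the earlier install examples, and `replicaCount` is the chart parameter controlling the number of members):

```console
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd \
  --set replicaCount=5
```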
Here is an example to explain how this works: when scaling down from three replicas to two, the departing pod's "pre-stop" hook runs `etcdctl member remove` against the cluster and stores the command output in its persistent volume, and the two remaining members carry on with the updated membership.
If, for whatever reason, the "pre-stop" hook fails at removing the member, the initialization logic is able to detect that something went wrong by checking the `etcdctl member remove` command output that was stored in the persistent volume. It then uses the `etcdctl member update` command to add back the member. In this case, the cluster isn't automatically scaled down/up while the pod is recovered. Therefore, when other members attempt to connect to the pod, it may cause warnings or errors like the one below:
```text
E | rafthttp: failed to dial XXXXXXXX on stream Message (peer XXXXXXXX failed to find local node YYYYYYYYY)
I | rafthttp: peer XXXXXXXX became inactive (message send to peer failed)
W | rafthttp: health check for peer XXXXXXXX could not connect: dial tcp A.B.C.D:2380: i/o timeout
```
Learn more about etcd runtime configuration and how to safely drain a Kubernetes node.
When updating the etcd StatefulSet (such as when upgrading the chart version via the helm upgrade command), every pod must be replaced following the StatefulSet update strategy.
The chart uses a "RollingUpdate" strategy by default and with default Kubernetes values. In other words, it updates each Pod, one at a time, in the same order as Pod termination (from the largest ordinal to the smallest). It will wait until an updated Pod is "Running" and "Ready" prior to updating its predecessor.
Learn more about StatefulSet update strategies.
If, for whatever reason, more than (N-1)/2 members of the cluster fail and the "pre-stop" hooks also fail at removing them from the cluster, the cluster disastrously fails, irrevocably losing quorum. Once quorum is lost, the cluster cannot reach consensus and therefore cannot continue accepting updates. Under this circumstance, the only possible solution is usually to restore the cluster from a snapshot.
IMPORTANT: All members should restore using the same snapshot.
The Bitnami etcd chart solves this problem by optionally offering a Kubernetes cron job that periodically snapshots the keyspace and stores it in a RWX volume. In case the cluster disastrously fails, the pods will automatically try to restore it using the last available snapshot.
Learn how to enable this disaster recovery feature.
The chart also sets a "soft" pod anti-affinity by default to reduce the risk of the cluster failing disastrously.
Learn more about etcd recovery, Kubernetes cron jobs, and pod affinity and anti-affinity.
The etcd chart can be configured with Role-based access control and TLS encryption to improve its security.
In order to enable Role-Based Access Control for etcd, set the following parameters:
```text
auth.rbac.create=true
auth.rbac.rootPassword=ETCD_ROOT_PASSWORD
```
These parameters create a root user with an associated root role that has access to everything. The remaining users will use the guest role and won't have permissions to do anything.
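For instance, these parameters can be passed at install time (a sketch; the release name and registry placeholders follow the earlier install examples):

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd \
  --set auth.rbac.create=true \
  --set auth.rbac.rootPassword=ETCD_ROOT_PASSWORD
```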
In order to enable secure transport between peer nodes deploy the helm chart with these options:
```text
auth.peer.secureTransport=true
auth.peer.useAutoTLS=true
```
In order to enable secure transport between client and server, create a secret containing the certificate and key files and the CA used to sign the client certificates. In this case, create the secret and then deploy the chart with these options:
```text
auth.client.secureTransport=true
auth.client.enableAuthentication=true
auth.client.existingSecret=etcd-client-certs
```
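The secret referenced by `auth.client.existingSecret` could be created along these lines (a sketch; the file names `ca.crt`, `cert.pem` and `key.pem` are assumptions — check the chart's `auth.client.*` filename parameters for the names it expects):

```console
# File names inside the secret are assumptions; adjust to the chart's defaults.
kubectl create secret generic etcd-client-certs \
  --from-file=ca.crt=./ca.crt \
  --from-file=cert.pem=./client.crt \
  --from-file=key.pem=./client.key
```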
Learn more about the etcd security model and how to generate self-signed certificates for etcd.
The Bitnami etcd Helm chart supports automatic disaster recovery by periodically snapshotting the keyspace. If the cluster permanently loses more than (N-1)/2 members, it tries to recover the cluster from a previous snapshot.
Enable this feature with the following parameters:
```text
persistence.enabled=true
disasterRecovery.enabled=true
disasterRecovery.pvc.size=2Gi
disasterRecovery.pvc.storageClassName=nfs
```
If the `startFromSnapshot.*` parameters are used at the same time as the `disasterRecovery.*` parameters, the PVC provided via the `startFromSnapshot.existingClaim` parameter will be used to store the periodical snapshots.
NOTE: The disaster recovery feature requires volumes with ReadWriteMany access mode.
Two different approaches are available to back up and restore this Helm chart:

- Back up the data from the source deployment and restore it in a new deployment using etcd's built-in snapshot tooling.
- Back up the persistent volumes from the source deployment and attach them to a new deployment using Velero.
The first method uses etcd's built-in tooling and involves the following steps: create a snapshot of the keyspace in the source cluster with `etcdctl snapshot save`, make the snapshot available to the destination cluster, and create a new deployment there that restores from it (for example via the `startFromSnapshot.*` parameters).
NOTE: Under this approach, it is important to create the new deployment on the destination cluster using the same credentials as the original deployment on the source cluster.
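A minimal sketch of taking a manual snapshot from one member follows (the release name `my-release` is an assumption, and any authentication flags your configuration requires must be added):

```console
# Assumes release name "my-release"; add auth/TLS flags if RBAC or client TLS is enabled.
kubectl exec my-release-etcd-0 -- etcdctl snapshot save /tmp/snapshot.db
kubectl cp my-release-etcd-0:/tmp/snapshot.db ./snapshot.db
```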
This method involves copying the persistent data volumes for the etcd nodes and reusing them in a new deployment with Velero, an open source Kubernetes backup/restore tool. This method is only suitable when:

- The Kubernetes provider is supported by Velero.
- Both clusters run on the same Kubernetes provider, as this is a requirement of Velero's native support for migrating persistent volumes.
- The restored deployment on the destination cluster will have the same name, namespace and credentials as the original deployment on the source cluster.
This method involves the following steps:

- Install Velero on the source and destination clusters.
- Use Velero to back up the persistent volumes on the source cluster.
- Use Velero to restore the backed-up persistent volumes on the destination cluster.
- Create a new deployment on the destination cluster with the same chart, deployment name, credentials and other parameters as the original. This new deployment will use the restored persistent volumes and hence the original data.
The metrics exposed by etcd can be scraped by Prometheus. Metrics can be scraped from within the cluster using any of the following approaches: deploying a PodMonitor as described above, or adding the required annotations for Prometheus to discover the metrics endpoints, as in the example below:
```yaml
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics/cluster"
  prometheus.io/port: "9000"
```
If metrics are to be scraped from outside the cluster, the Kubernetes API proxy can be utilized to access the endpoint.
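For example, using the API server's pod proxy subresource (a sketch; the namespace, pod name and port are assumptions for a default deployment serving metrics on the client port):

```console
# Start a local proxy to the Kubernetes API server
kubectl proxy &
# Fetch metrics through the API server's pod proxy subresource
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-release-etcd-0:2379/proxy/metrics
```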
In order to use custom configuration parameters, two options are available:
- Environment variables: etcd allows configuration via environment variables. Set them using the `extraEnvVars` property. Alternatively, you can use a ConfigMap or a Secret with the environment variables using the `extraEnvVarsCM` or the `extraEnvVarsSecret` properties:

```yaml
extraEnvVars:
  - name: ETCD_AUTO_COMPACTION_RETENTION
    value: "0"
  - name: ETCD_HEARTBEAT_INTERVAL
    value: "150"
```
- `etcd.conf.yml`: The etcd chart allows mounting a custom `etcd.conf.yml` file as a ConfigMap. In order to do so, you can use the `configuration` property. Alternatively, you can use an existing ConfigMap with the `existingConfigmap` parameter.

Since etcd keeps an exact history of its keyspace, this history should be periodically compacted to avoid performance degradation and eventual storage space exhaustion. Compacting the keyspace history drops all information about keys superseded prior to a given keyspace revision. The space used by these keys then becomes available for additional writes to the keyspace.
Two parameters control auto compaction:

- `autoCompactionMode`: the compaction mode, `periodic` by default. Valid values: `periodic`, `revision`.
- `autoCompactionRetention`: auto compaction retention for the mvcc key-value store, in hours; `0` by default, which means disabled.

You can enable auto compaction by using the following parameters:
```console
autoCompactionMode=periodic
autoCompactionRetention=10m
```
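These map onto a standard install command, for example (a sketch; release name and registry placeholders follow the earlier examples):

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/etcd \
  --set autoCompactionMode=periodic \
  --set autoCompactionRetention=10m
```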
If you have a need for additional containers to run within the same pod as the etcd app (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.
```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
Similarly, you can add extra init containers using the `initContainers` parameter.
```yaml
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
There are cases where you may want to deploy extra objects, such as a ConfigMap containing your app's configuration, or an extra deployment with a microservice used by your app. To cover this case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter.
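A minimal sketch of shipping an extra ConfigMap alongside the release (the ConfigMap name and contents are illustrative):

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-config
    data:
      app.properties: |
        key=value
```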
This chart allows you to set your custom affinity using the `affinity` parameter. Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the `podAffinityPreset`, `podAntiAffinityPreset`, or `nodeAffinityPreset` parameters.
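For example, to require that no two etcd replicas are ever scheduled on the same node (a sketch; `hard` and `soft` are the values these presets commonly accept — check the bitnami/common chart):

```yaml
podAntiAffinityPreset: hard
```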
The Bitnami etcd image stores the etcd data at the `/bitnami/etcd` path of the container. Persistent Volume Claims are used to keep the data across pod restarts and rescheduling.
The chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning by default. An existing PersistentVolumeClaim can also be defined for this purpose.
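A sketch of pointing the chart at a pre-existing claim (the `persistence.existingClaim` parameter name is an assumption — verify it against the chart's parameters table):

```yaml
persistence:
  enabled: true
  # Parameter name is an assumption; check the chart's parameters table.
  existingClaim: my-etcd-data
```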
If you encounter errors when working with persistent volumes, refer to our [troubleshooting guide for persistent volumes]([***]).
_Note: the README for this chart is longer than the DockerHub length limit of 25,000 characters, so it has been trimmed. The full README can be found at [***]_