bitnamicharts/kafka

Apache Kafka is a distributed streaming platform designed to build real-time pipelines. It can be used as a message broker or as a replacement for a log aggregation solution for big data applications.
Overview of Apache Kafka
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/kafka
```
Tip: Did you know that this app is also available as a Kubernetes App on the Azure Marketplace? Kubernetes Apps are the easiest way to deploy Bitnami on AKS. Click here to see the listing on Azure Marketplace.
These are hardened, minimal-CVE images built and maintained by Bitnami. Bitnami Secure Images are based on Photon Linux, a cloud-optimized, security-hardened enterprise OS. Why choose BSI images?
Each image comes with valuable security metadata. You can view the metadata in our public catalog here. Note: Some data is only available with commercial subscriptions to BSI.
If you are looking for our previous generation of images based on Debian Linux, please see the Bitnami Legacy registry.
This chart bootstraps a Kafka deployment on a Kubernetes cluster using the Helm package manager.
To install the chart with the release name my-release:
```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kafka
```
Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
These commands deploy Kafka on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
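Parameters can also be overridden at install time. As a hedged sketch, the command below enables the bundled JMX metrics exporter via the `metrics.jmx.enabled` parameter (described later in this README); substitute the registry placeholders as explained above:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kafka \
  --set metrics.jmx.enabled=true
```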
Tip: List all releases using `helm list`.
This chart allows you to automatically configure Kafka with 4 listeners:

- A client listener, for communications with clients within the Kubernetes cluster.
- A controller listener, for Kafka controller (KRaft) communications.
- An inter-broker listener, for communications between Kafka brokers.
- An optional external listener, for communications with clients outside the Kubernetes cluster.
For more complex configurations, set the listeners, advertisedListeners and listenerSecurityProtocolMap parameters as needed.
You can configure different authentication protocols for each listener you configure in Kafka. For instance, you can use sasl_tls authentication for client communications, while using tls for controller and inter-broker communications. This table shows the available protocols and the security they provide:
| Method | Authentication | Encryption via TLS |
|---|---|---|
| plaintext | None | No |
| tls | None | Yes |
| mtls | Yes (two-way authentication) | Yes |
| sasl | Yes (via SASL) | No |
| sasl_tls | Yes (via SASL) | Yes |
Configure the authentication protocols for client, controller and inter-broker communications by setting the listeners.client.protocol, listeners.controller.protocol and listeners.interbroker.protocol parameters to the desired ones, respectively.
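As an illustration, a minimal values sketch for the example above (SASL over TLS for clients, TLS elsewhere; the protocol names here are the upper-case forms of the methods listed in the table):

```yaml
listeners:
  client:
    protocol: SASL_TLS
  controller:
    protocol: TLS
  interbroker:
    protocol: TLS
```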
If you enabled SASL authentication on any listener, you can set the SASL credentials using the parameters below:
- `sasl.client.users`/`sasl.client.passwords`: when enabling SASL authentication for communications with clients.
- `sasl.interbroker.user`/`sasl.interbroker.password`: when enabling SASL authentication for inter-broker communications.
- `sasl.controller.user`/`sasl.controller.password`: when enabling SASL authentication for controller communications.
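For instance, a hedged values sketch setting client and inter-broker SASL credentials (all user and password values below are placeholders):

```yaml
sasl:
  client:
    users:
      - brokerUser
    passwords:
      - brokerPassword
  interbroker:
    user: interbrokerUser
    password: interbrokerPassword
```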
In order to configure TLS authentication/encryption, you can create a secret per Kafka node in the cluster containing the Java Key Stores (JKS) files: the truststore (kafka.truststore.jks) and the keystore (kafka.keystore.jks). Then, you need to pass the secret names with the `tls.existingSecret` parameter when deploying the chart.

Note: If the JKS files are password protected (recommended), you will need to provide the password to get access to the keystores. To do so, use the `tls.keystorePassword` and `tls.truststorePassword` parameters to provide your passwords.
For instance, to configure TLS authentication on a Kafka cluster with 2 Kafka nodes use the commands below to create the secrets:
```console
kubectl create secret generic kafka-jks-0 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-0.keystore.jks
kubectl create secret generic kafka-jks-1 --from-file=kafka.truststore.jks=./kafka.truststore.jks --from-file=kafka.keystore.jks=./kafka-1.keystore.jks
```
Note: the commands above assume you already created the truststore and keystore files. This script can help you with generating the JKS files.
If, for some reason (like using CertManager), you cannot use the default JKS secret scheme, you can use the following additional parameters:
- `tls.jksTruststoreSecret`: to define an additional secret where the kafka.truststore.jks file is kept. The truststore password must be the same as in `tls.truststorePassword`.
- `tls.jksTruststoreKey`: to overwrite the default value of the truststore key (kafka.truststore.jks).

Note: If you are using CertManager, particularly when an ACME issuer is used, the ca.crt field is not put in the Secret that CertManager creates. To handle this, the `tls.pemChainIncluded` property can be set to true, and the initContainer created by this chart will attempt to extract the intermediate certs from the tls.crt field of the secret (which is a PEM chain).

Note: The truststore/keystore from above must be protected with the same passwords set in the `tls.keystorePassword` and `tls.truststorePassword` parameters.
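As an illustration, a hedged values sketch for the alternative truststore secret (the secret name and key below are placeholders):

```yaml
tls:
  jksTruststoreSecret: kafka-truststore   # placeholder: secret holding the truststore
  jksTruststoreKey: truststore.jks        # overrides the default key name (kafka.truststore.jks)
  truststorePassword: jksPassword         # must match the truststore's actual password
```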
You can deploy the chart with authentication using the following parameters:
```console
replicaCount=2
listeners.client.protocol=SASL
listeners.interbroker.protocol=TLS
tls.existingSecret=kafka-jks
tls.keystorePassword=jksPassword
tls.truststorePassword=jksPassword
sasl.client.users[0]=brokerUser
sasl.client.passwords[0]=brokerPassword
```
By setting the parameters listeners.client.protocol=SSL and listeners.client.sslClientAuth=required, Kafka will require clients to authenticate to the Kafka brokers via certificates.
As a result, events with the client certificate's Subject will appear in kafka-authorizer.log: [...] Principal = User:CN=kafka,OU=...,O=...,L=...,C=..,ST=... is [...].
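In values-file form, a hedged equivalent of the two parameters above would be:

```yaml
listeners:
  client:
    protocol: SSL
    sslClientAuth: required
```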
The Bitnami Kafka chart, when upgrading, reuses the secret previously rendered by the chart or the one specified in sasl.existingSecret. To update credentials, use one of the following:
- helm upgrade specifying new credentials in the sasl section, as explained in the authentication section.
- helm upgrade specifying a new secret in sasl.existingSecret.
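For example, a hedged sketch of the first approach, rotating the client password in place (credential values are placeholders; substitute the registry placeholders as explained above):

```console
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kafka \
  --set sasl.client.users[0]=brokerUser \
  --set sasl.client.passwords[0]=newBrokerPassword
```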
In order to access Kafka brokers from outside the cluster, an additional listener and advertised listener must be configured. Additionally, a specific service per Kafka pod will be created.

There are three ways of configuring external access: using LoadBalancer services, using NodePort services, or using ClusterIP services.
You have two alternatives when using LoadBalancer services.

Option A) Use an initContainer that discovers the load balancer IPs automatically:
```console
externalAccess.enabled=true
externalAccess.broker.service.type=LoadBalancer
externalAccess.controller.service.type=LoadBalancer
externalAccess.broker.service.ports.external=9094
externalAccess.controller.service.ports.external=9094
defaultInitContainers.autoDiscovery.enabled=true
serviceAccount.create=true
broker.automountServiceAccountToken=true
controller.automountServiceAccountToken=true
rbac.create=true
```
Note: This option requires creating RBAC rules on clusters where RBAC policies are enabled.
Option B) Manually specify the load balancer IPs:

```console
externalAccess.enabled=true
externalAccess.controller.service.type=LoadBalancer
externalAccess.controller.service.containerPorts.external=9094
externalAccess.controller.service.loadBalancerIPs[0]='external-ip-1'
externalAccess.controller.service.loadBalancerIPs[1]='external-ip-2'
externalAccess.broker.service.type=LoadBalancer
externalAccess.broker.service.ports.external=9094
externalAccess.broker.service.loadBalancerIPs[0]='external-ip-3'
externalAccess.broker.service.loadBalancerIPs[1]='external-ip-4'
```
Note: You need to know the load balancer IPs in advance so that each Kafka broker's advertised listener is configured with them.
Following the aforementioned steps will also allow you to connect to the brokers from outside the cluster using the cluster's default service (when service.type is LoadBalancer or NodePort). Use the property service.externalPort to specify the port used for external connections.
You have three alternatives when using NodePort services.
Option A) Use random node ports, using an initContainer that discovers them automatically:
```console
externalAccess.enabled=true
externalAccess.controller.service.type=NodePort
externalAccess.broker.service.type=NodePort
defaultInitContainers.autoDiscovery.enabled=true
serviceAccount.create=true
rbac.create=true
```
Note: This option requires creating RBAC rules on clusters where RBAC policies are enabled.
Option B) Manually specify the node ports:
```console
externalAccess.enabled=true
externalAccess.controller.service.type=NodePort
externalAccess.controller.service.nodePorts[0]='node-port-1'
externalAccess.controller.service.nodePorts[1]='node-port-2'
```
Note: You need to know in advance the node ports that will be exposed so that each Kafka broker's advertised listener is configured with them.
The pod will try to get the external IP of the node using curl -s [***] unless externalAccess.<controller|broker>.service.domain or externalAccess.<controller|broker>.service.useHostIPs is provided.
Option C) Manually specify distinct external IPs (using controller+broker nodes)
```console
externalAccess.enabled=true
externalAccess.controller.service.type=NodePort
externalAccess.controller.service.externalIPs[0]='172.16.0.20'
externalAccess.controller.service.externalIPs[1]='172.16.0.21'
externalAccess.controller.service.externalIPs[2]='172.16.0.22'
```
Note: You need to know in advance the available IPs of your cluster that will be exposed so that each Kafka broker's advertised listener is configured with them.
Lastly, you can use ClusterIP services. Note: this option requires that an ingress is deployed within your cluster:
```console
externalAccess.enabled=true
externalAccess.controller.service.type=ClusterIP
externalAccess.controller.service.ports.external=9094
externalAccess.controller.service.domain='ingress-ip'
externalAccess.broker.service.type=ClusterIP
externalAccess.broker.service.ports.external=9094
externalAccess.broker.service.domain='ingress-ip'
```
Note: the deployed ingress must contain the following block:
```console
tcp:
  9094: "{{ include "common.names.namespace" . }}/{{ include "common.names.fullname" . }}-0-external:9094"
  9095: "{{ include "common.names.namespace" . }}/{{ include "common.names.fullname" . }}-1-external:9094"
  9096: "{{ include "common.names.namespace" . }}/{{ include "common.names.fullname" . }}-2-external:9094"
```
You can use the following values to generate External-DNS annotations, which automatically create DNS records for each ReplicaSet pod:
```yaml
externalAccess:
  controller:
    service:
      annotations:
        external-dns.alpha.kubernetes.io/hostname: "{{ .targetPod }}.example.com"
```
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured through the resources values (check the parameters table). Setting requests is essential for production workloads, and these should be adapted to your specific use case.
To make this process easier, the chart contains the resourcesPreset values, which automatically set the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset in production workloads is discouraged, as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
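As a hedged example, explicit requests and limits for the controller pods might look like the sketch below (whether resources nests under a controller or broker section depends on the chart version, and the figures are placeholders to adapt):

```yaml
controller:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 4Gi
```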
This chart can be integrated with Prometheus by setting metrics.jmx.enabled to true. This will deploy a sidecar container with jmx_exporter in all pods and a metrics service, which can be configured under the metrics.jmx.service section. This service will have the necessary annotations to be automatically scraped by Prometheus.
It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.
The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:
```text
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
```
Install the Bitnami Kube Prometheus helm chart to have the necessary CRDs and the Prometheus Operator.
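Putting both pieces together, a values sketch enabling the JMX exporter sidecar and the ServiceMonitor (using the two parameters referenced above):

```yaml
metrics:
  jmx:
    enabled: true
  serviceMonitor:
    enabled: true
```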
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
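For instance, a hedged sketch pinning the Kafka container to a fixed tag (the tag below is a placeholder; look up a current immutable tag in the registry):

```yaml
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 4.0.0-debian-12-r0   # placeholder: replace with a concrete immutable tag
```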
If you need additional containers to run within the same pod as Kafka (e.g. an additional metrics or logging exporter), you can do so via the sidecars parameter. Simply define your container according to the Kubernetes container spec.
```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
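For example, a hedged sketch forcing controller pods onto different nodes with the hard pod anti-affinity preset (whether the preset nests under a controller section depends on the chart version):

```yaml
controller:
  podAntiAffinityPreset: hard
```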
There are cases where you may want to deploy extra objects, such as Kafka Connect. For covering this case, the chart allows adding the full specification of other objects using the extraDeploy parameter. The following example would create a deployment including a Kafka Connect deployment so you can connect Kafka with MongoDB®:
```yaml
extraDeploy:
  - |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "common.names.fullname" . }}-connect
      labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
        app.kubernetes.io/component: connector
    spec:
      replicas: 1
      selector:
        matchLabels: {{- include "common.labels.matchLabels" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 6 }}
          app.kubernetes.io/component: connector
      template:
        metadata:
          labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 8 }}
            app.kubernetes.io/component: connector
        spec:
          containers:
            - name: connect
              image: KAFKA-CONNECT-IMAGE
              imagePullPolicy: IfNotPresent
              ports:
                - name: connector
                  containerPort: 8083
              volumeMounts:
                - name: configuration
                  mountPath: /bitnami/kafka/config
          volumes:
            - name: configuration
              configMap:
                name: {{ include "common.names.fullname" . }}-connect
  - |
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ include "common.names.fullname" . }}-connect
      labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
        app.kubernetes.io/component: connector
    data:
      connect-standalone.properties: |-
        bootstrap.servers = {{ include "common.names.fullname" . }}-controller-0.{{ include "common.names.fullname" . }}-controller-headless.{{ include "common.names.namespace" . }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.ports.client }}
        ...
      mongodb.properties: |-
        connection.uri=mongodb://root:password@mongodb-hostname:27017
        ...
  - |
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ include "common.names.fullname" . }}-connect
      labels: {{- include "common.labels.standard" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
        app.kubernetes.io/component: connector
    spec:
      ports:
        - protocol: TCP
          port: 8083
          targetPort: connector
      selector: {{- include "common.labels.matchLabels" ( dict "customLabels" .Values.commonLabels "context" $ ) | nindent 4 }}
        app.kubernetes.io/component: connector
```
You can create the Kafka Connect image using the Dockerfile below:
```Dockerfile
FROM bitnami/kafka:latest
# Download MongoDB® Connector for Apache Kafka [***]
RUN mkdir -p /opt/bitnami/kafka/plugins && \
    cd /opt/bitnami/kafka/plugins && \
    curl --remote-name --location --silent [***]
CMD /opt/bitnami/kafka/bin/connect-standalone.sh /bitnami/kafka/config/connect-standalone.properties /bitnami/kafka/config/mongo.properties
```
The Bitnami Kafka image stores the Kafka data at the /bitnami/kafka path of the container. Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data to it.
By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.
You can enable this initContainer by setting volumePermissions.enabled to true.
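A hedged values sketch combining persistence settings with the ownership-fixing initContainer (whether persistence nests under a controller or broker section depends on the chart version; the size is a placeholder):

```yaml
volumePermissions:
  enabled: true
controller:
  persistence:
    enabled: true
    size: 16Gi
```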
To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
| Name | Description | Value |
|---|---|---|
| global.imageRegistry | Global Docker image registry | `""` |
| global.imagePullSecrets | Global Docker registry secret names as an array | `[]` |
| global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | `""` |
Note: the README for this chart is longer than the DockerHub length limit of 25,000 characters, so it has been trimmed. The full README can be found at [***]