# bitnami/kafka

Apache Kafka is a distributed streaming platform designed to build real-time pipelines and can be used as a message broker or as a replacement for a log aggregation solution for big data applications.
## Overview of Apache Kafka

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
```console
docker run --name kafka bitnami/kafka:latest
```
These are hardened images with minimal CVEs, built and maintained by Bitnami. Bitnami Secure Images are based on Photon OS, a cloud-optimized, security-hardened enterprise Linux distribution. Why choose BSI images?
Each image comes with valuable security metadata, which you can view in our public catalog. Note: some data is only available with commercial subscriptions to BSI.
If you are looking for our previous generation of images based on Debian Linux, please see the Bitnami Legacy registry.
Deploying Bitnami applications as Helm Charts is the easiest way to get started with our applications on Kubernetes. Read more about the installation in the Bitnami Apache Kafka Chart GitHub repository.
Non-root container images add an extra layer of security and are generally recommended for production environments. However, because they run as a non-root user, privileged tasks are typically off-limits. Learn more about non-root containers in our docs.
## Dockerfile links

Learn more about the Bitnami tagging policy and the difference between rolling tags and immutable tags in our documentation page.
You can see the equivalence between the different tags by taking a look at the `tags-info.yaml` file present in the branch folder, i.e. `bitnami/ASSET/BRANCH/DISTRO/tags-info.yaml`.
Subscribe to project updates by watching the bitnami/containers GitHub repo.
The recommended way to get the Bitnami Apache Kafka Docker Image is to pull the prebuilt image from the Docker Hub Registry.
```console
docker pull bitnami/kafka:latest
```
To use a specific version, you can pull a versioned tag. You can view the list of available versions in the Docker Hub Registry.
```console
docker pull bitnami/kafka:[TAG]
```
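For example, to track a release branch rather than `latest` (the tag shown is illustrative; check the registry for the tags currently published):

```console
docker pull bitnami/kafka:3.9
```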
If you wish, you can also build the image yourself by cloning the repository, changing to the directory containing the Dockerfile and executing the docker build command. Remember to replace the APP, VERSION and OPERATING-SYSTEM path placeholders in the example command below with the correct values.
```console
git clone [***]
cd bitnami/APP/VERSION/OPERATING-SYSTEM
docker build -t bitnami/APP:latest .
```
If you remove the container, all your data and configurations will be lost, and the next time you run the image the broker will be reinitialized. To avoid this loss of data, you should mount a volume that will persist even after the container is removed.
Note: If you have already started using your deployment, follow the steps on backing up and restoring to pull the data from your running container down to your host.
The image exposes a volume at /bitnami/kafka for the Apache Kafka data. For persistence you can mount a directory at this location from your host. If the mounted directory is empty, it will be initialized on the first run.
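For example, with plain `docker run` (a sketch; replace the host path with a directory of your choice):

```console
docker run -d --name kafka \
    -v /path/to/kafka-persistence:/bitnami/kafka \
    bitnami/kafka:latest
```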
Using Docker Compose:
This requires a minor change to the docker-compose.yml file present in this repository:
```yaml
kafka:
  ...
  volumes:
    - /path/to/kafka-persistence:/bitnami/kafka
  ...
```
NOTE: As this is a non-root container, the mounted files and directories must have the proper permissions for UID 1001.
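For instance, on the host you could grant ownership of the persistence directory to that UID (a sketch, assuming the host path used above):

```console
sudo chown -R 1001:1001 /path/to/kafka-persistence
```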
Using Docker container networking, an Apache Kafka server running inside a container can easily be accessed by your application containers.
Containers attached to the same network can communicate with each other using the container name as the hostname.
In this example, we will create an Apache Kafka client instance that will connect to the server instance that is running on the same docker network as the client.
```console
docker network create app-tier --driver bridge
```
Use the --network app-tier argument to the docker run command to attach the Apache Kafka container to the app-tier network.
```console
docker run -d --name kafka-server --hostname kafka-server \
    --network app-tier \
    -e KAFKA_CFG_NODE_ID=0 \
    -e KAFKA_CFG_PROCESS_ROLES=controller,broker \
    -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
    -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
    -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
    bitnami/kafka:latest
```
Finally, we create a new container instance to launch the Apache Kafka client and connect to the server created in the previous step:
```console
docker run -it --rm \
    --network app-tier \
    bitnami/kafka:latest kafka-topics.sh --list --bootstrap-server kafka-server:9092
```
When not specified, Docker Compose automatically sets up a new network and attaches all deployed services to that network. However, we will explicitly define a new bridge network named app-tier. In this example we assume that you want to connect to the Apache Kafka server from your own custom application image which is identified in the following snippet by the service name myapp.
```yaml
version: '2'

networks:
  app-tier:
    driver: bridge

services:
  kafka:
    image: bitnami/kafka:latest
    networks:
      - app-tier
    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
  myapp:
    image: YOUR_APPLICATION_IMAGE
    networks:
      - app-tier
```
IMPORTANT:

- Please update the `YOUR_APPLICATION_IMAGE` placeholder in the above snippet with your application image.
- In your application container, use the hostname `kafka` to connect to the Apache Kafka server.
Launch the containers using:
```console
docker-compose up -d
```
Customizable environment variables:

| Name | Description | Default Value |
|------|-------------|---------------|
| `KAFKA_MOUNTED_CONF_DIR` | Kafka directory for mounted configuration files. | `${KAFKA_VOLUME_DIR}/config` |
| `KAFKA_CLUSTER_ID` | Kafka cluster ID. | `nil` |
| `KAFKA_CFG_CONTROLLER_QUORUM_BOOTSTRAP_SERVERS` | List of endpoints to use for bootstrapping the cluster metadata. | `localhost:9093` |
| `KAFKA_INITIAL_CONTROLLERS` | List of Kafka cluster initial controllers. | `nil` |
| `KAFKA_SKIP_KRAFT_STORAGE_INIT` | If set to true, skip KRaft storage initialization when `process.roles` are configured. | `false` |
| `KAFKA_CFG_SASL_ENABLED_MECHANISMS` | Kafka `sasl.enabled.mechanisms` configuration override. | `PLAIN,SCRAM-SHA-256,SCRAM-SHA-512` |
| `KAFKA_CLIENT_LISTENER_NAME` | Name of the listener intended to be used by clients. If set, configures the producer/consumer accordingly. | `nil` |
| `KAFKA_OPTS` | Kafka deployment options. | `nil` |
| `KAFKA_ZOOKEEPER_PROTOCOL` | Authentication protocol for ZooKeeper connections. Allowed protocols: `PLAINTEXT`, `SASL`, `SSL`, and `SASL_SSL`. | `PLAINTEXT` |
| `KAFKA_ZOOKEEPER_PASSWORD` | Kafka ZooKeeper user password for SASL authentication. | `nil` |
| `KAFKA_ZOOKEEPER_USER` | Kafka ZooKeeper user for SASL authentication. | `nil` |
| `KAFKA_ZOOKEEPER_TLS_TYPE` | Choose the TLS certificate format to use. Allowed values: `JKS`, `PEM`. | `JKS` |
| `KAFKA_ZOOKEEPER_TLS_TRUSTSTORE_FILE` | Kafka ZooKeeper truststore file location. | `nil` |
| `KAFKA_ZOOKEEPER_TLS_KEYSTORE_PASSWORD` | Kafka ZooKeeper keystore file password and key password. | `nil` |
| `KAFKA_ZOOKEEPER_TLS_TRUSTSTORE_PASSWORD` | Kafka ZooKeeper truststore file password. | `nil` |
| `KAFKA_ZOOKEEPER_TLS_VERIFY_HOSTNAME` | Verify ZooKeeper hostname on TLS certificates. | `true` |
| `KAFKA_INTER_BROKER_USER` | Kafka inter-broker communication user. | `user` |
| `KAFKA_INTER_BROKER_PASSWORD` | Kafka inter-broker communication password. | `bitnami` |
| `KAFKA_CONTROLLER_USER` | Kafka control plane communication user. | `controller_user` |
| `KAFKA_CONTROLLER_PASSWORD` | Kafka control plane communication password. | `bitnami` |
| `KAFKA_CERTIFICATE_PASSWORD` | Password for certificates. | `nil` |
| `KAFKA_TLS_TRUSTSTORE_FILE` | Kafka truststore file location. | `nil` |
| `KAFKA_TLS_TYPE` | Choose the TLS certificate format to use. | `JKS` |
| `KAFKA_TLS_CLIENT_AUTH` | Configures the Kafka broker to request client authentication. | `required` |
| `KAFKA_CLIENT_USERS` | List of users that will be created when using `SASL_SCRAM` for client communications. Separated by commas, semicolons or whitespaces. | `user` |
| `KAFKA_CLIENT_PASSWORDS` | Passwords for the users specified in `KAFKA_CLIENT_USERS`. Separated by commas, semicolons or whitespaces. | `bitnami` |
| `KAFKA_HEAP_OPTS` | Kafka heap options for Java. | `-Xmx1024m -Xms1024m` |
| `JAVA_TOOL_OPTIONS` | Java tool options. | `nil` |
Read-only environment variables:

| Name | Description | Value |
|------|-------------|-------|
| `KAFKA_BASE_DIR` | Kafka installation directory. | `${BITNAMI_ROOT_DIR}/kafka` |
| `KAFKA_VOLUME_DIR` | Kafka persistence directory. | `/bitnami/kafka` |
| `KAFKA_DATA_DIR` | Kafka directory where data is stored. | `${KAFKA_VOLUME_DIR}/data` |
| `KAFKA_CONF_DIR` | Kafka configuration directory. | `${KAFKA_BASE_DIR}/config` |
| `KAFKA_CONF_FILE` | Kafka configuration file. | `${KAFKA_CONF_DIR}/server.properties` |
| `KAFKA_CERTS_DIR` | Kafka directory for certificate files. | `${KAFKA_CONF_DIR}/certs` |
| `KAFKA_INITSCRIPTS_DIR` | Kafka directory for init scripts. | `/docker-entrypoint-initdb.d` |
| `KAFKA_LOG_DIR` | Directory where Kafka logs are stored. | `${KAFKA_BASE_DIR}/logs` |
| `KAFKA_HOME` | Kafka home directory. | `$KAFKA_BASE_DIR` |
| `KAFKA_DAEMON_USER` | Kafka system user. | `kafka` |
| `KAFKA_DAEMON_GROUP` | Kafka system group. | `kafka` |
Additionally, any environment variable beginning with `KAFKA_CFG_` will be mapped to its corresponding Apache Kafka key. For example, use `KAFKA_CFG_BACKGROUND_THREADS` to set `background.threads`, or `KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE` to configure `auto.create.topics.enable`.
```console
docker run --name kafka -e KAFKA_CFG_PROCESS_ROLES ... -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true bitnami/kafka:latest
```
or by modifying the docker-compose.yml file present in this repository:
```yaml
kafka:
  ...
  environment:
    - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
  ...
```
To use Apache Kafka in a development setup, create the following docker-compose.yml file:
yamlversion: "3" services: kafka: image: bitnami/kafka:latest ports: - 9092:9092 environment: - KAFKA_CFG_NODE_ID=0 - KAFKA_CFG_PROCESS_ROLES=controller,broker - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
To deploy it, run the following command in the directory where the docker-compose.yml file is located:
```console
docker-compose up -d
```
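Once the broker is up, you can run a quick smoke test, for example by creating a topic from inside the container (the service name `kafka` matches the compose file above):

```console
docker-compose exec kafka kafka-topics.sh --create --topic test --bootstrap-server localhost:9092
```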
In order to use internal and external clients to access Apache Kafka brokers you need to configure one listener for each kind of client.
To do so, add the following environment variables to your docker-compose:
```diff
     environment:
       - KAFKA_CFG_NODE_ID=0
       - KAFKA_CFG_PROCESS_ROLES=controller,broker
+      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
+      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094
+      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
       - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
```
And expose the external port (the internal client port can still be used within the Docker network):
```diff
     ports:
-      - 9092:9092
+      - 9094:9094
```
Note: To connect from an external machine, change localhost above to your host's external IP/hostname and include EXTERNAL://0.0.0.0:9094 in KAFKA_CFG_LISTENERS to allow for remote connections.
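For example, for a broker reachable at a public address (the IP below is an illustrative placeholder from the documentation range):

```yaml
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://203.0.113.10:9094
```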
These clients, from the same host, will use localhost to connect to Apache Kafka.
```console
kafka-console-producer.sh --producer.config /opt/bitnami/kafka/config/producer.properties --bootstrap-server 127.0.0.1:9094 --topic test
kafka-console-consumer.sh --consumer.config /opt/bitnami/kafka/config/consumer.properties --bootstrap-server 127.0.0.1:9094 --topic test --from-beginning
```
If running these commands from another machine, change the address accordingly.
These clients, from other containers on the same Docker network, will use the kafka container service hostname to connect to Apache Kafka.
```console
kafka-console-producer.sh --producer.config /opt/bitnami/kafka/config/producer.properties --bootstrap-server kafka:9092 --topic test
kafka-console-consumer.sh --consumer.config /opt/bitnami/kafka/config/consumer.properties --bootstrap-server kafka:9092 --topic test --from-beginning
```
Similarly, application code will need to use `bootstrap.servers=kafka:9092`.
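For instance, a client properties file could contain just that entry (an illustrative sketch; any Kafka client library accepts the equivalent setting):

```properties
bootstrap.servers=kafka:9092
```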
More info about Apache Kafka listeners can be found in this great article.
In order to configure authentication, you must configure the Apache Kafka listeners properly. Let's look at an example that configures Apache Kafka with SASL_SSL authentication for communications with clients, and SASL authentication for controller-related communications.
The environment variables below should be defined to configure the listeners, and the SASL credentials for client communications:
```console
KAFKA_CFG_LISTENERS=SASL_SSL://:9092,CONTROLLER://:9093
KAFKA_CFG_ADVERTISED_LISTENERS=SASL_SSL://localhost:9092
KAFKA_CLIENT_USERS=user
KAFKA_CLIENT_PASSWORDS=password
KAFKA_CLIENT_LISTENER_NAME=SASL_SSL
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
KAFKA_CONTROLLER_USER=controller_user
KAFKA_CONTROLLER_PASSWORD=controller_password
```
You must also use your own certificates for SSL. You can drop your Java Key Stores or PEM files into /opt/bitnami/kafka/config/certs. If the JKS or PEM certs are password protected (recommended), you will need to provide the password to get access to the keystores:
```console
KAFKA_CERTIFICATE_PASSWORD=myCertificatePassword
```
If the truststore is mounted in a different location than /opt/bitnami/kafka/config/certs/kafka.truststore.jks, /opt/bitnami/kafka/config/certs/kafka.truststore.pem, /bitnami/kafka/config/certs/kafka.truststore.jks or /bitnami/kafka/config/certs/kafka.truststore.pem, set the KAFKA_TLS_TRUSTSTORE_FILE variable.
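For example (the mount path is illustrative):

```console
docker run ... -e KAFKA_TLS_TRUSTSTORE_FILE=/opt/bitnami/kafka/config/certs/custom/kafka.truststore.jks bitnami/kafka:latest
```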
The following script can help you with the creation of the JKS and certificates:
Keep in mind the following notes:

- Set the Common Name or FQDN value to your Apache Kafka container hostname, e.g. `kafka.example.com`. After entering this value, when prompted "What is your first and last name?", enter this value as well.
- If you do not want to enable hostname verification, set `KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM` to an empty string.

The following docker-compose file is an example showing how to mount your JKS certificates protected by the password `certificatePassword123`. Additionally, it specifies the Apache Kafka container hostname and the credentials for the client user.
```yaml
version: '2'

services:
  kafka:
    image: bitnami/kafka:latest
    hostname: kafka.example.com
    ports:
      - 9092
    environment:
      # KRaft
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      # Listeners
      - KAFKA_CFG_LISTENERS=SASL_SSL://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
      - KAFKA_CFG_ADVERTISED_LISTENERS=SASL_SSL://:9092
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=SASL_SSL
      - KAFKA_CLIENT_LISTENER_NAME=SASL_SSL # Remove this line if consumer/producer.properties are not required
      # SASL
      - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
      - KAFKA_CONTROLLER_USER=controller_user
      - KAFKA_CONTROLLER_PASSWORD=controller_password
      - KAFKA_INTER_BROKER_USER=interbroker_user
      - KAFKA_INTER_BROKER_PASSWORD=interbroker_password
      - KAFKA_CLIENT_USERS=user
      - KAFKA_CLIENT_PASSWORDS=password
      # SSL
      - KAFKA_TLS_TYPE=JKS # or PEM
      - KAFKA_CERTIFICATE_PASSWORD=certificatePassword123
    volumes:
      # Both .jks and .pem files are supported
      # - ./kafka.keystore.pem:/opt/bitnami/kafka/config/certs/kafka.keystore.pem:ro
      # - ./kafka.keystore.key:/opt/bitnami/kafka/config/certs/kafka.keystore.key:ro
      # - ./kafka.truststore.pem:/opt/bitnami/kafka/config/certs/kafka.truststore.pem:ro
      - ./kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro
      - ./kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro
```
In order to consume and produce messages, you need to provide the configured credentials in your Apache Kafka client.
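For example, a console producer or consumer could use a properties file along these lines (a sketch; the user, password, and truststore path follow the compose example above, so adjust them to your deployment):

```properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="user" \
    password="password";
ssl.truststore.location=/opt/bitnami/kafka/config/certs/kafka.truststore.jks
ssl.truststore.password=certificatePassword123
```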
_Note: the README for this container is longer than the DockerHub length limit of 25,000 characters, so it has been trimmed. The full README can be found at [***]_