Tweak the environment variables in docker-compose.yml as needed, e.g. in order to increase the message.max.bytes parameter set the environment to KAFKA_MESSAGE_MAX_BYTES: 2000000. To turn off automatic topic creation set KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'.

Kafka's log4j logging can be customized by adding environment variables prefixed with LOG4J_. These will be mapped to log4j.properties. For example: LOG4J_LOGGER_KAFKA_AUTHORIZER_LOGGER=DEBUG, authorizerAppender

NOTE: There are several 'gotchas' with configuring networking. If you are not sure about what the requirements are, please check out the Connectivity Guide in the Wiki.
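For illustration, a minimal sketch of a service definition combining these overrides (the image tag is assumed; the values are examples, not recommendations):

    kafka:
      image: wurstmeister/kafka
      environment:
        KAFKA_MESSAGE_MAX_BYTES: 2000000            # maps to message.max.bytes
        KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'    # maps to auto.create.topics.enable
        LOG4J_LOGGER_KAFKA_AUTHORIZER_LOGGER: "DEBUG, authorizerAppender"  # maps into log4j.properties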
Start a cluster:
    docker-compose up -d

Add more brokers:

    docker-compose scale kafka=3

Destroy a cluster:

    docker-compose stop

The default docker-compose.yml should be seen as a starting point. By default each broker will get a new port number and broker id on restart. Depending on your use case this might not be desirable. If you need to use specific ports and broker ids, modify the docker-compose configuration accordingly, e.g. docker-compose-single-broker.yml:

    docker-compose -f docker-compose-single-broker.yml up

You can configure the broker id in different ways:

1. explicitly, using KAFKA_BROKER_ID
2. via a command, using BROKER_ID_COMMAND, e.g. BROKER_ID_COMMAND: "hostname | awk -F'-' '{print $$2}'"

If you don't specify a broker id in your docker-compose file, it will automatically be generated (see [***]). This allows scaling up and down. In this case it is recommended to use the --no-recreate option of docker-compose to ensure that containers are not re-created and thus keep their names and ids.
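As a sketch, the two approaches look like this in a compose file (values are illustrative; use one or the other):

    environment:
      KAFKA_BROKER_ID: 1
      # or derive the id from the container name, e.g. 'kafka-2' -> 2:
      # BROKER_ID_COMMAND: "hostname | awk -F'-' '{print $$2}'"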
If you want to have kafka-docker automatically create topics in Kafka during
creation, a KAFKA_CREATE_TOPICS environment variable can be
added in docker-compose.yml.
Here is an example snippet from docker-compose.yml:
    environment:
      KAFKA_CREATE_TOPICS: "Topic1:1:3,Topic2:1:1:compact"
Topic 1 will have 1 partition and 3 replicas, Topic 2 will have 1 partition, 1 replica and a cleanup.policy set to compact.
If you wish to use multi-line YAML or some other delimiter between your topic definitions, override the default ',' separator by specifying the KAFKA_CREATE_TOPICS_SEPARATOR environment variable.
For example, KAFKA_CREATE_TOPICS_SEPARATOR: "$$'\n'" would use a newline to split the topic definitions. Syntax has to follow docker-compose escaping rules, and ANSI-C quoting.
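A hedged sketch of the multi-line form, assuming the create-topics script tolerates the trailing newline that a YAML block scalar adds:

    environment:
      KAFKA_CREATE_TOPICS_SEPARATOR: "$$'\n'"
      KAFKA_CREATE_TOPICS: |
        Topic1:1:3
        Topic2:1:1:compact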
You can configure the advertised hostname in different ways:

1. explicitly, using KAFKA_ADVERTISED_HOST_NAME
2. via a command, using HOSTNAME_COMMAND, e.g. HOSTNAME_COMMAND: "route -n | awk '/UG[ \t]/{print $$2}'"

When using commands, make sure you review the "Variable Substitution" section in [***]
If KAFKA_ADVERTISED_HOST_NAME is specified, it takes precedence over HOSTNAME_COMMAND
For AWS deployment, you can use the Metadata service to get the container host's IP:
    HOSTNAME_COMMAND=wget -t3 -T2 -qO- [***]
Reference: [***]
If you require the value of HOSTNAME_COMMAND in any of your other KAFKA_XXX variables, use the _{HOSTNAME_COMMAND} string in your variable value, i.e.
    KAFKA_ADVERTISED_LISTENERS=SSL://_{HOSTNAME_COMMAND}:9093,PLAINTEXT://9092
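For example, a hedged compose-file sketch that advertises the host's gateway-derived address on an SSL listener (the ports and security setup are assumptions):

    environment:
      HOSTNAME_COMMAND: "route -n | awk '/UG[ \t]/{print $$2}'"
      KAFKA_ADVERTISED_LISTENERS: "SSL://_{HOSTNAME_COMMAND}:9093,PLAINTEXT://9092"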
If the required advertised port is not static, it may be necessary to determine this programmatically. This can be done with the PORT_COMMAND environment variable.

    PORT_COMMAND: "docker port $$(hostname) 9092/tcp | cut -d: -f2"
This can be then interpolated in any other KAFKA_XXX config using the _{PORT_COMMAND} string, i.e.
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://1.2.3.4:_{PORT_COMMAND}
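Putting both pieces together, a sketch of a broker whose container port 9092 is published to an ephemeral host port (mounting the Docker socket is assumed here so the docker CLI works inside the container; the IP is a placeholder):

    ports:
      - "9092"                     # publish container port 9092 to a random host port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      PORT_COMMAND: "docker port $$(hostname) 9092/tcp | cut -d: -f2"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://1.2.3.4:_{PORT_COMMAND}"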
It may be useful to have the Kafka Documentation open, to understand the various broker listener configuration options.
Since 0.9.0, Kafka has supported multiple listener configurations for brokers to help support different protocols and discriminate between internal and external traffic. Later versions of Kafka have deprecated advertised.host.name and advertised.port.
NOTE: advertised.host.name and advertised.port still work as expected, but should not be used if configuring the listeners.
The example environment below:
    HOSTNAME_COMMAND: curl [***]
    KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
    KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
Will result in the following broker config:
    advertised.listeners = OUTSIDE://ec2-xx-xx-xxx-xx.us-west-2.compute.amazonaws.com:9094,INSIDE://:9092
    listeners = OUTSIDE://:9094,INSIDE://:9092
    inter.broker.listener.name = INSIDE
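As a sketch, the full service entry for this example might look as follows (the image tag and published port are assumptions; the metadata URL is elided as above):

    kafka:
      image: wurstmeister/kafka
      ports:
        - "9094:9094"              # expose only the OUTSIDE listener on the host
      environment:
        HOSTNAME_COMMAND: curl [***]
        KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
        KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
        KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
        KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE

Clients on the same Docker network then connect to the broker on port 9092, while external clients use the advertised hostname on port 9094.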
You can configure the broker rack affinity in different ways:

1. explicitly, using KAFKA_BROKER_RACK
2. via a command, using RACK_COMMAND, e.g. RACK_COMMAND: "curl [***]"

In the above example the AWS metadata service is used to put the instance's availability zone in the broker.rack property.
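A sketch of both options in a compose file (the availability-zone value is a placeholder; the metadata endpoint is elided as above):

    environment:
      KAFKA_BROKER_RACK: "us-west-2a"
      # or resolve it at startup from the metadata service:
      # RACK_COMMAND: "curl [***]"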
For monitoring purposes you may wish to configure JMX. In addition to the standard JMX parameters, problems could arise from the underlying RMI protocol used to connect; in particular, the RMI hostname (java.rmi.server.hostname) and RMI port (com.sun.management.jmxremote.rmi.port) usually need to be set explicitly, as in the example below.
For example, to connect to a kafka running locally (assumes exposing port 1099)
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=1099"
    JMX_PORT: 1099
JConsole can now connect:

    jconsole 192.168.99.100:1099
The listener configuration above is necessary when deploying Kafka in a Docker Swarm using an overlay network. By separating OUTSIDE and INSIDE listeners, a host can communicate with clients outside the overlay network while still benefiting from it from within the swarm.
In addition to the multiple-listener configuration, best practices for operating Kafka in a Docker Swarm include:

- Use "deploy: global" in a compose file to launch one and only one Kafka broker per swarm node.
- Use the "long" port definition with the port in "host" mode instead of the default "ingress" load-balanced port binding. This ensures that outside requests are always routed to the correct broker. For example:

    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
Older compose files using the short version of port mapping may encounter Kafka client issues if their connection to individual brokers cannot be guaranteed.
See the included sample compose file docker-compose-swarm.yml
[***]