bitnamicharts/rabbitmq
Bitnami's RabbitMQ Helm chart, for conveniently and reliably deploying and managing the RabbitMQ message queue in Kubernetes environments.

Bitnami Secure Images Helm chart for RabbitMQ

RabbitMQ is an open source general-purpose message broker that is designed for consistent, highly-available messaging scenarios (both synchronous and asynchronous).

Overview of RabbitMQ

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

console
helm install my-release oci://registry-1.docker.io/bitnamicharts/rabbitmq

Tip: Did you know that this app is also available as a Kubernetes App on the Azure Marketplace? Kubernetes Apps are the easiest way to deploy Bitnami on AKS. Click here to see the listing on Azure Marketplace.

Why use Bitnami Secure Images?

These are hardened, minimal-CVE images built and maintained by Bitnami. Bitnami Secure Images are based on the cloud-optimized, security-hardened enterprise OS Photon Linux. Why choose BSI images?

  • Hardened secure images of popular open source software with Near-Zero Vulnerabilities
  • Vulnerability Triage & Prioritization with VEX Statements, KEV and EPSS Scores
  • Compliance focus with FIPS, STIG, and air-gap options, including secure bill of materials (SBOM)
  • Software supply chain provenance attestation through in-toto
  • First class support for the internet’s favorite Helm charts

Each image comes with valuable security metadata. You can view the metadata in our public catalog here. Note: Some data is only available with commercial subscriptions to BSI.


If you are looking for our previous generation of images based on Debian Linux, please see the Bitnami Legacy registry.

Introduction

This chart bootstraps a RabbitMQ deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/rabbitmq

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys RabbitMQ on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, in production workloads using resourcesPreset is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
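
For illustration, here is a minimal sketch of explicit resource settings in values form (the figures are placeholders to adapt to your workload, not recommendations):

yaml
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi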

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This enables the rabbitmq_prometheus plugin and exposes a metrics endpoint in all pods and in the RabbitMQ service. The service will have the necessary annotations to be automatically scraped by Prometheus.

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. There is a separate ServiceMonitor object per RabbitMQ metrics endpoint. The chart includes:

  • metrics.serviceMonitor.default for the /metrics endpoint.
  • metrics.serviceMonitor.perObject for the /metrics/per-object endpoint.
  • metrics.serviceMonitor.detailed for the /metrics/detailed endpoint.

Enable each ServiceMonitor by setting metrics.serviceMonitor.*.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:

text
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart to get the necessary CRDs and the Prometheus Operator installed.
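
For example, a hedged install command enabling metrics and the default ServiceMonitor (registry placeholders as elsewhere in this document):

console
helm install my-release \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.default.enabled=true \
  oci://REGISTRY_NAME/REPOSITORY_NAME/rabbitmq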

Rolling vs Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities need to be addressed.

Set pod affinity

This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod's affinity in the kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.

Scale horizontally

To horizontally scale this chart once it has been deployed, two options are available:

  • Use the kubectl scale command.
  • Upgrade the chart modifying the replicaCount parameter.
text
    replicaCount=3
    auth.password="$RABBITMQ_PASSWORD"
    auth.erlangCookie="$RABBITMQ_ERLANG_COOKIE"

NOTE: It is mandatory to specify the password and Erlang cookie that was set the first time the chart was installed when upgrading the chart. Otherwise, new pods won't be able to join the cluster.
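
For example, a sketch of such an upgrade (the release name, registry placeholders, and environment variables are assumptions):

console
helm upgrade my-release \
  --set replicaCount=3 \
  --set auth.password="$RABBITMQ_PASSWORD" \
  --set auth.erlangCookie="$RABBITMQ_ERLANG_COOKIE" \
  oci://REGISTRY_NAME/REPOSITORY_NAME/rabbitmq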

When scaling down the solution, unnecessary RabbitMQ nodes are automatically stopped, but they are not removed from the cluster. These nodes must be manually removed via the rabbitmqctl forget_cluster_node command.

For instance, if RabbitMQ was initially installed with three replicas and then scaled down to two replicas, run the commands below (assuming that the release name is rabbitmq and the clustering type is hostname):

console
    kubectl exec rabbitmq-0 --container rabbitmq -- rabbitmqctl forget_cluster_node ***
    kubectl delete pvc data-rabbitmq-2

NOTE: It is mandatory to specify the password and Erlang cookie that was set the first time the chart was installed when upgrading the chart.

Securing traffic using TLS

To enable TLS support, first generate the certificates as described in the RabbitMQ documentation for SSL certificate generation.

Once the certificates are generated, you have two alternatives:

  • Create a secret with the certificates and associate the secret when deploying the chart
  • Include the certificates in the values.yaml file when deploying the chart

Set the auth.tls.failIfNoPeerCert parameter to false to allow a TLS connection if the client fails to provide a certificate.

Set the auth.tls.sslOptionsVerify to verify_peer to force a node to perform peer verification. When set to verify_none, peer verification will be disabled and certificate exchange won't be performed.
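
For example, a hedged values sketch for the first alternative, referencing a pre-created secret (the secret name is a placeholder, and auth.tls.existingSecret is an assumption about the chart's parameter for an externally managed certificate secret):

yaml
auth:
  tls:
    enabled: true
    failIfNoPeerCert: true
    sslOptionsVerify: verify_peer
    existingSecret: rabbitmq-certificates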

This chart also facilitates the creation of TLS secrets for use with the Ingress controller (although this is not mandatory). There are several common use cases:

  • Generate certificate secrets based on chart parameters.
  • Enable externally generated certificates.
  • Manage application certificates via an external service (like cert-manager).
  • Create self-signed certificates within the chart (if supported).

In the first two cases, a certificate and a key are needed. Files are expected in .pem format.

Here is an example of a certificate file:

NOTE: There may be more than one certificate if there is a certificate chain.

text
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
...
jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7
-----END CERTIFICATE-----

Here is an example of a certificate key:

text
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4
...
wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc=
-----END RSA PRIVATE KEY-----

  • If using Helm to manage the certificates based on the parameters, copy these values into the certificate and key values for a given *.ingress.secrets entry (see the sketch after this list).
  • If managing TLS secrets separately, it is necessary to create a TLS secret with name INGRESS_HOSTNAME-tls (where INGRESS_HOSTNAME is a placeholder to be replaced with the hostname you set using the *.ingress.hostname parameter).
  • If your cluster has a cert-manager add-on to automate the management and issuance of TLS certificates, add to *.ingress.annotations the corresponding ones for cert-manager.
  • If using self-signed certificates created by Helm, set both *.ingress.tls and *.ingress.selfSigned to true.
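
Following the first approach, a hedged sketch of an ingress configuration with an inline certificate (hostname and secret name are placeholders; the exact layout of the secrets entries should be checked against the chart's values):

yaml
ingress:
  enabled: true
  hostname: rabbitmq.local
  tls: true
  secrets:
    - name: rabbitmq.local-tls
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
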
Load custom definitions

It is possible to load a RabbitMQ definitions file to configure RabbitMQ. Follow the steps below:

Because definitions may contain RabbitMQ credentials, store the JSON as a Kubernetes secret. Within the secret's data, choose a key name that corresponds with the desired load definitions filename (i.e. load_definition.json) and use the JSON object as the value.

Next, specify the load_definitions property as an extraConfiguration pointing to the load definition file path within the container (i.e. /app/load_definition.json) and set loadDefinition.enabled to true. Any load definitions specified will be available within the container at /app.

NOTE: Loading a definition will take precedence over any configuration done through Helm values.

If needed, you can use extraSecrets to let the chart create the secret for you. This way, you don't need to manually create it before deploying a release. These secrets can also be templated to use supplied chart values. Here is an example:

yaml
auth:
  password: CHANGEME
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "users": [
          {
            "name": "{{ .Values.auth.username }}",
            "password": "{{ .Values.auth.password }}",
            "tags": "administrator"
          }
        ],
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }
loadDefinition:
  enabled: true
  existingSecret: load-definition
extraConfiguration: |
  load_definitions = /app/load_definition.json

Update credentials

The Bitnami RabbitMQ chart, when upgrading, reuses the secret previously rendered by the chart or the one specified in auth.existingSecret. To update credentials, use one of the following:

  • Run helm upgrade specifying a new password in auth.password and auth.updatePassword=true.
  • Run helm upgrade specifying a new secret in auth.existingSecret and auth.updatePassword=true.
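
For example, the first option might look like this (the new password is a placeholder):

console
helm upgrade my-release \
  --set auth.password="NEW_PASSWORD" \
  --set auth.updatePassword=true \
  oci://REGISTRY_NAME/REPOSITORY_NAME/rabbitmq
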
Configure LDAP support

LDAP support can be enabled in the chart by specifying the ldap.* parameters while creating a release. For example:

text
ldap.enabled="true"
ldap.server="my-ldap-server"
ldap.port="389"
ldap.user_dn_pattern="cn=${username},dc=example,dc=org"

If ldap.tls.enabled is set to true, consider using ldap.port=636 and checking the settings in the advancedConfiguration chart parameter.
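
Expressed in a values file, the ldap.* settings above might look like this (a sketch based only on the parameter names shown; verify them against your chart version):

yaml
ldap:
  enabled: true
  server: my-ldap-server
  port: "389"
  user_dn_pattern: cn=${username},dc=example,dc=org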

Configure memory high watermark

It is possible to configure a memory high watermark on RabbitMQ to define memory thresholds using the memoryHighWatermark.* parameters. To do so, you have two alternatives:

  • Set an absolute limit of RAM to be used on each RabbitMQ node, as shown in the configuration example below:
text
memoryHighWatermark.enabled="true"
memoryHighWatermark.type="absolute"
memoryHighWatermark.value="512Mi"
  • Set a relative limit of RAM to be used on each RabbitMQ node. To enable this feature, define the memory limits at pod level too. An example configuration is shown below:
text
memoryHighWatermark.enabled="true"
memoryHighWatermark.type="relative"
memoryHighWatermark.value="0.4"
resources.limits.memory="2Gi"

Add extra environment variables

In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property.

yaml
extraEnvVars:
  - name: LOG_LEVEL
    value: error

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret properties.
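
For instance, a sketch referencing an existing ConfigMap and Secret by name (both names are hypothetical and must already exist in the release namespace):

yaml
extraEnvVarsCM: rabbitmq-extra-env
extraEnvVarsSecret: rabbitmq-extra-env-secret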

Configure the default user/vhost

If you want to create a default user/vhost and set the default permissions, you can use extraConfiguration:

yaml
auth:
  username: default-user
extraConfiguration: |-
  default_vhost = default-vhost
  default_permissions.configure = .*
  default_permissions.read = .*
  default_permissions.write = .*

Use plugins

The Bitnami Docker RabbitMQ image ships a set of plugins by default. By default, this chart enables rabbitmq_management and rabbitmq_peer_discovery_k8s since they are required for RabbitMQ to work on K8s.

To enable extra plugins, set the extraPlugins parameter with the list of plugins you want to enable. In addition to this, the communityPlugins parameter can be used to specify a list of URLs (separated by spaces) for custom plugins for RabbitMQ.

text
communityPlugins="[***]"
extraPlugins="my-custom-plugin"

Advanced logging

In case you want to configure RabbitMQ logging, set the logs value to false and set the log configuration in extraConfiguration following the official documentation.

An example:

yaml
logs: false # custom logging
extraConfiguration: |
  log.default.level = warning
  log.file = false
  log.console = true
  log.console.level = warning
  log.console.formatter = json

How to Avoid Deadlocked Deployments After a Cluster-Wide Restart

RabbitMQ nodes assume their peers come back online within five minutes (by default). When the OrderedReady pod management policy is used with a readiness probe that implicitly requires a fully booted node, the deployment can deadlock:

  • Kubernetes will expect the first node to pass a readiness probe
  • The readiness probe may require a fully booted node
  • The node will fully boot after it detects that its peers have come online
  • Kubernetes will not start any more pods until the first one boots

The following combination of deployment settings avoids the problem:

  • Use podManagementPolicy: "Parallel" to boot multiple cluster nodes in parallel
  • Use rabbitmq-diagnostics ping for readiness probe
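
A hedged values sketch of that combination is shown below; customReadinessProbe is an assumption about the chart's mechanism for overriding the default probe, so verify it against your chart version:

yaml
podManagementPolicy: "Parallel"
customReadinessProbe:
  exec:
    command:
      - /bin/bash
      - -ec
      - rabbitmq-diagnostics -q ping
  initialDelaySeconds: 10
  periodSeconds: 30
  timeoutSeconds: 20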

To learn more, please consult RabbitMQ documentation guides:

  • RabbitMQ Clustering guide: Node Restarts
  • RabbitMQ Clustering guide: Restarts and Readiness Probes
  • Recommendations for Operator-less (DIY) deployments to Kubernetes

Do Not Force Boot Nodes on a Regular Basis

Note that forcing nodes to boot is not a solution and doing so can be dangerous. Forced booting is a last resort mechanism in RabbitMQ that helps make remaining cluster nodes recover and rejoin each other after a permanent loss of some of their former peers. In other words, forced booting a node is an emergency event recovery procedure.

Known issues
  • Changing the password through RabbitMQ's UI can make the pod fail due to the default liveness probes. If you do so, remember to make the chart aware of the new password. Updating the default secret with the password you set through RabbitMQ's UI will automatically recreate the pods. If you are using your own secret, you may have to manually recreate the pods.

Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

Persistence

The Bitnami RabbitMQ image stores the RabbitMQ data and configurations at the /opt/bitnami/rabbitmq/var/lib/rabbitmq/ path of the container.

The chart mounts a Persistent Volume at this location. By default, the volume is created using dynamic volume provisioning. An existing PersistentVolumeClaim can also be defined.

Use existing PersistentVolumeClaims
  1. Create the PersistentVolume
  2. Create the PersistentVolumeClaim
  3. Install the chart
console
helm install my-release --set persistence.existingClaim=PVC_NAME oci://REGISTRY_NAME/REPOSITORY_NAME/rabbitmq

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
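
As an illustration of steps 1 and 2, a minimal PersistentVolumeClaim sketch (the name, size, and access mode are placeholders; pair it with a matching PersistentVolume or a pre-provisioned one):

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: PVC_NAME
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi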

Adjust permissions of the persistence volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data into it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.

Prometheus Metrics

RabbitMQ has built-in support for Prometheus metrics exposed at GET /metrics. However, these metrics are all cluster-wide, and do not show any per-queue or per-node metrics.

To get per-object metrics, there is a second metrics endpoint at GET /metrics/detailed that accepts query parameters to choose which metric families you would like to see. For instance, you can pass family=node_coarse_metrics&family=queue_coarse_metrics to see per-node and per-queue metrics, but with no need to see Erlang, connection, or channel metrics.
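
For instance, a hedged query against the detailed endpoint (15692 is the default port used by the rabbitmq_prometheus plugin; the service hostname is a placeholder):

console
curl "http://my-release-rabbitmq:15692/metrics/detailed?family=node_coarse_metrics&family=queue_coarse_metrics"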

Additionally, there is a third metrics endpoint, GET /metrics/per-object, which returns all per-object metrics. However, this can be computationally expensive on a large cluster with many objects, so the RabbitMQ docs suggest using the GET /metrics/detailed endpoint mentioned above to filter your scraping and only fetch the per-object metrics that are needed for a given monitoring application.

Because they expose different sets of data, a valid use case is to scrape metrics from both GET /metrics and GET /metrics/detailed, ingesting both cluster-level and per-object metrics. The metrics.serviceMonitor.default and metrics.serviceMonitor.detailed values support configuring a ServiceMonitor that targets one or both of these metrics.

Parameters

Global parameters
| Name                                | Description                                           | Value |
| ----------------------------------- | ----------------------------------------------------- | ----- |
| global.imageRegistry                | Global Docker image registry                          | ""    |
| global.imagePullSecrets             | Global Docker registry secret names as an array       | []    |
| global.defaultStorageClass          | Global default StorageClass for Persistent Volume(s)  | ""    |
| global.storageClass                 | DEPRECATED: use global.defaultStorageClass instead    | ""    |
| global.security.allowInsecureImages | Allows skipping image verification                    |       |

Note: the README for this chart is longer than the DockerHub length limit of 25000 characters, so it has been trimmed. The full README can be found at [***]

