prom/alertmanager Docker Image - 轩辕镜像

prom/alertmanager
Automated build

prom/alertmanager is the alert-management component of the Prometheus ecosystem: it handles alerts from clients such as the Prometheus server, deduplicates, groups, and routes them to receivers such as email and PagerDuty, and supports alert silencing and inhibition for efficient alert delivery and management.

Alertmanager

The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations such as email, PagerDuty, OpsGenie, or many other mechanisms thanks to the webhook receiver. It also takes care of silencing and inhibition of alerts.

  • Documentation

Install

There are various ways of installing Alertmanager.

Precompiled binaries

Precompiled binaries for released versions are available in the download section on prometheus.io. Using the latest production release binary is the recommended way of installing Alertmanager.

Docker images

Docker images are available on Quay.io or Docker Hub.

You can launch an Alertmanager container for trying it out with

$ docker run --name alertmanager -d -p 127.0.0.1:9093:9093 quay.io/prometheus/alertmanager

Alertmanager will now be reachable at http://localhost:9093/.
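
To use your own configuration, a common approach is to bind-mount it over the image's default config path (a sketch, assuming the official image's default of /etc/alertmanager/alertmanager.yml):

$ docker run --name alertmanager -d -p 127.0.0.1:9093:9093 \
    -v "$(pwd)/alertmanager.yml:/etc/alertmanager/alertmanager.yml" \
    quay.io/prometheus/alertmanager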

Compiling the binary

You can either go install it:

$ go install github.com/prometheus/alertmanager/cmd/...@latest
$ alertmanager --config.file=<your_file>

Or clone the repository and build manually:

$ mkdir -p $GOPATH/src/github.com/prometheus
$ cd $GOPATH/src/github.com/prometheus
$ git clone https://github.com/prometheus/alertmanager.git
$ cd alertmanager
$ make build
$ ./alertmanager --config.file=<your_file>

You can also build just one of the binaries in this repo by passing a name to the build function:

$ make build BINARIES=amtool

Example

This is an example configuration that should cover most relevant aspects of the YAML configuration format. The full configuration documentation can be found on prometheus.io.

global:
  # The smarthost and SMTP sender used for mail notifications.
  smtp_smarthost: 'localhost:25'
  smtp_from: '***'

# The root route on which each incoming alert enters.
route:
  # The root route must not have any matchers as it is the entry point for
  # all alerts. It needs to have a receiver configured so alerts that do not
  # match any of the sub-routes are sent to someone.
  receiver: 'team-X-mails'

  # The labels by which incoming alerts are grouped together. For example,
  # multiple alerts coming in for cluster=A and alertname=LatencyHigh would
  # be batched into a single group.
  #
  # To aggregate by all possible labels use '...' as the sole label name.
  # This effectively disables aggregation entirely, passing through all
  # alerts as-is. This is unlikely to be what you want, unless you have
  # a very low alert volume or your upstream notification system performs
  # its own grouping. Example: group_by: [...]
  group_by: ['alertname', 'cluster']

  # When a new group of alerts is created by an incoming alert, wait at
  # least 'group_wait' to send the initial notification.
  # This ensures that multiple alerts for the same group that start firing
  # shortly after one another are batched together in the first
  # notification.
  group_wait: 30s

  # When the first notification was sent, wait 'group_interval' to send a batch
  # of new alerts that started firing for that group.
  group_interval: 5m

  # If an alert has successfully been sent, wait 'repeat_interval' to
  # resend it.
  repeat_interval: 3h

  # All the above attributes are inherited by all child routes and can be
  # overwritten on each.

  # The child route trees.
  routes:
  # This route performs a regular expression match on alert labels to
  # catch alerts that are related to a list of services.
  - matchers:
    - service=~"^(foo1|foo2|baz)$"
    receiver: team-X-mails

    # The service has a sub-route for critical alerts; any alerts that do
    # not match (i.e. severity != critical) fall back to the parent node
    # and are sent to 'team-X-mails'.
    routes:
    - matchers:
      - severity="critical"
      receiver: team-X-pager

  - matchers:
    - service="files"
    receiver: team-Y-mails

    routes:
    - matchers:
      - severity="critical"
      receiver: team-Y-pager

  # This route handles all alerts coming from a database service. If there's
  # no team to handle it, it defaults to the DB team.
  - matchers:
    - service="database"

    receiver: team-DB-pager
    # Also group alerts by affected database.
    group_by: [alertname, cluster, database]

    routes:
    - matchers:
      - owner="team-X"
      receiver: team-X-pager

    - matchers:
      - owner="team-Y"
      receiver: team-Y-pager


# Inhibition rules allow muting a set of alerts while another alert is
# firing.
# We use this to mute any warning-level notifications if the same alert is
# already critical.
inhibit_rules:
- source_matchers:
    - severity="critical"
  target_matchers:
    - severity="warning"
  # Apply inhibition if the alertname is the same.
  # CAUTION: 
  #   If all label names listed in `equal` are missing 
  #   from both the source and target alerts,
  #   the inhibition rule will apply!
  equal: ['alertname']


receivers:
- name: 'team-X-mails'
  email_configs:
  - to: '***, ***'

- name: 'team-X-pager'
  email_configs:
  - to: '***'
  pagerduty_configs:
  - routing_key: <team-X-key>

- name: 'team-Y-mails'
  email_configs:
  - to: '***'

- name: 'team-Y-pager'
  pagerduty_configs:
  - routing_key: <team-Y-key>

- name: 'team-DB-pager'
  pagerduty_configs:
  - routing_key: <team-DB-key>
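
A written configuration can be validated with amtool before (re)loading Alertmanager; a quick check, assuming the file above is saved as alertmanager.yml:

$ amtool check-config alertmanager.yml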

API

The current Alertmanager API is version 2. This API is fully generated via the OpenAPI project and Go Swagger, with the exception of the HTTP handlers themselves. The API specification can be found in api/v2/openapi.yaml. An HTML-rendered version is also available. Clients can easily be generated via any OpenAPI generator for all major languages.

APIv2 is accessed via the /api/v2 prefix. APIv1 was deprecated in 0.16.0 and is removed as of version 0.27.0. The v2 /status endpoint would be /api/v2/status. If --web.route-prefix is set then API routes are prefixed with that as well, so --web.route-prefix=/alertmanager/ would relate to /alertmanager/api/v2/status.
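
For example, you can query the status endpoint or push a test alert straight into a local instance with curl (a sketch; the label values are arbitrary):

$ curl -s http://localhost:9093/api/v2/status
$ curl -s -XPOST -H 'Content-Type: application/json' \
    -d '[{"labels":{"alertname":"Test_Alert","severity":"warning"}}]' \
    http://localhost:9093/api/v2/alerts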

amtool

amtool is a CLI tool for interacting with the Alertmanager API. It is bundled with all releases of Alertmanager.

Install

You can install it with:

$ go install github.com/prometheus/alertmanager/cmd/amtool@latest

Examples

View all currently firing alerts:

$ amtool alert
Alertname        Starts At                Summary
Test_Alert       2017-08-02 18:30:18 UTC  This is a testing alert!
Test_Alert       2017-08-02 18:30:18 UTC  This is a testing alert!
Check_Foo_Fails  2017-08-02 18:30:18 UTC  This is a testing alert!
Check_Foo_Fails  2017-08-02 18:30:18 UTC  This is a testing alert!

View all currently firing alerts with extended output:

$ amtool -o extended alert
Labels                                        Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node0"       link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]
alertname="Test_Alert" instance="node1"       link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]
alertname="Check_Foo_Fails" instance="node0"  link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]
alertname="Check_Foo_Fails" instance="node1"  link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]

In addition to viewing alerts, you can use the rich query syntax provided by Alertmanager:

$ amtool -o extended alert query alertname="Test_Alert"
Labels                                   Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node0"  link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]
alertname="Test_Alert" instance="node1"  link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]

$ amtool -o extended alert query instance=~".+1"
Labels                                        Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node1"       link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]
alertname="Check_Foo_Fails" instance="node1"  link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]

$ amtool -o extended alert query alertname=~"Test.*" instance=~".+1"
Labels                                   Annotations                                                    Starts At                Ends At                  Generator URL
alertname="Test_Alert" instance="node1"  link="[***]" summary="This is a testing alert!"  2017-08-02 18:31:24 UTC  0001-01-01 00:00:00 UTC  [***]

Silence an alert:

$ amtool silence add alertname=Test_Alert
b3ede22e-ca14-4aa0-932c-ca2f3445f926

$ amtool silence add alertname="Test_Alert" instance=~".+0"
e48cb58a-0b17-49ba-b734-3585139b1d25
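
Silences can also carry an explicit duration, author, and comment; a sketch using amtool's --duration, --author, and --comment flags:

$ amtool silence add --duration=2h --author=oncall --comment="planned maintenance" alertname=Test_Alert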

View silences:

$ amtool silence query
ID                                    Matchers              Ends At                  Created By  Comment
b3ede22e-ca14-4aa0-932c-ca2f3445f926  alertname=Test_Alert  2017-08-02 19:54:50 UTC  kellel

$ amtool silence query instance=~".+0"
ID                                    Matchers                            Ends At                  Created By  Comment
e48cb58a-0b17-49ba-b734-3585139b1d25  alertname=Test_Alert instance=~.+0  2017-08-02 22:41:39 UTC  kellel

Expire a silence:

$ amtool silence expire b3ede22e-ca14-4aa0-932c-ca2f3445f926

Expire all silences matching a query:

$ amtool silence query instance=~".+0"
ID                                    Matchers                            Ends At                  Created By  Comment
e48cb58a-0b17-49ba-b734-3585139b1d25  alertname=Test_Alert instance=~.+0  2017-08-02 22:41:39 UTC  kellel

$ amtool silence expire $(amtool silence query -q instance=~".+0")

$ amtool silence query instance=~".+0"

Expire all silences:

$ amtool silence expire $(amtool silence query -q)

Try out how a template works. Let's say you have this in your configuration file:

templates:
  - '/foo/bar/*.tmpl'

Then you can test how a template renders with example data using this command:

amtool template render --template.glob='/foo/bar/*.tmpl' --template.text='{{ template "slack.default.markdown.v1" . }}'
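
As a fuller sketch, the template file itself might define a named template (the path and the template name here are hypothetical) that you then render the same way against amtool's example data:

$ cat /foo/bar/test.tmpl
{{ define "myorg.short" }}{{ .Alerts | len }} alert(s) firing{{ end }}
$ amtool template render --template.glob='/foo/bar/*.tmpl' --template.text='{{ template "myorg.short" . }}'
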
Configuration

amtool allows a configuration file to specify some options for convenience. The default configuration file paths are $HOME/.config/amtool/config.yml or /etc/amtool/config.yml.

An example configuration file might look like the following:

# Define the URL at which `amtool` can find your `alertmanager` instance
alertmanager.url: "http://localhost:9093"

# Override the default author. (unset defaults to your username)
author: ***

# Force amtool to give you an error if you don't include a comment on a silence
comment_required: true

# Set a default output format. (unset defaults to simple)
output: extended

# Set a default receiver
receiver: team-X-pager

Routes

amtool allows you to visualize the routes of your configuration as a text tree view. You can also use it to test routing by passing it the label set of an alert; it prints all receivers the alert would match, ordered and separated by commas. (If you use --verify.receivers, amtool returns error code 1 on a mismatch.)

Example of usage:

# View routing tree of remote Alertmanager
$ amtool config routes --alertmanager.url=http://localhost:9093

# Test if alert matches expected receiver
$ amtool config routes test --config.file=doc/examples/simple.yml --tree --verify.receivers=team-X-pager service=database owner=team-X

High Availability

Alertmanager's high availability is in production use at many companies and is enabled by default.

Important: Both UDP and TCP are needed in Alertmanager 0.15 and higher for the cluster to work.

  • If you are using a firewall, make sure to whitelist the clustering port for both protocols.
  • If you are running in a container, make sure to expose the clustering port for both protocols (see the sketch below).
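
For example, when running under Docker you might publish the default clustering port 9094 over both TCP and UDP alongside the web port (a minimal sketch, reusing the image from the install section):

$ docker run --name alertmanager -d \
    -p 127.0.0.1:9093:9093 \
    -p 9094:9094 -p 9094:9094/udp \
    quay.io/prometheus/alertmanager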

To create a highly available cluster of the Alertmanager the instances need to be configured to communicate with each other. This is configured using the --cluster.* flags.

  • --cluster.listen-address string: cluster listen address (default "0.0.0.0:9094"; empty string disables HA mode)
  • --cluster.advertise-address string: cluster advertise address
  • --cluster.peer value: initial peers (repeat flag for each additional peer)
  • --cluster.peer-timeout value: peer timeout period (default "15s")
  • --cluster.peers-resolve-timeout value: peers resolve timeout period (default "15s")
  • --cluster.gossip-interval value: cluster message propagation speed (default "200ms")
  • --cluster.pushpull-interval value: lower values will increase convergence speeds at expense of bandwidth (default "1m0s")
  • --cluster.settle-timeout value: maximum time to wait for cluster connections to settle before evaluating notifications.
  • --cluster.tcp-timeout value: timeout value for tcp connections, reads and writes (default "10s")
  • --cluster.probe-timeout value: time to wait for ack before marking node unhealthy (default "500ms")
  • --cluster.probe-interval value: interval between random node probes (default "1s")
  • --cluster.reconnect-interval value: interval between attempting to reconnect to lost peers (default "10s")
  • --cluster.reconnect-timeout value: length of time to attempt to reconnect to a lost peer (default: "6h0m0s")
  • --cluster.label value: the label is an optional string to include on each packet and stream. It uniquely identifies the cluster and prevents cross-communication issues when sending gossip messages (default: "")

The chosen port in the cluster.listen-address flag is the port that needs to be specified in the cluster.peer flag of the other peers.

The cluster.advertise-address flag is required if the instance doesn't have an IP address that is part of RFC 6890 with a default route.

To start a cluster of three peers on your local machine, use goreman and the Procfile within this repository.

goreman start
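
Equivalently, you can start the peers by hand; a minimal sketch of a three-instance cluster on one machine (the ports and storage paths are arbitrary choices):

$ alertmanager --config.file=alertmanager.yml --storage.path=/tmp/am1 \
    --web.listen-address=:9093 --cluster.listen-address=127.0.0.1:8001
$ alertmanager --config.file=alertmanager.yml --storage.path=/tmp/am2 \
    --web.listen-address=:9094 --cluster.listen-address=127.0.0.1:8002 \
    --cluster.peer=127.0.0.1:8001
$ alertmanager --config.file=alertmanager.yml --storage.path=/tmp/am3 \
    --web.listen-address=:9095 --cluster.listen-address=127.0.0.1:8003 \
    --cluster.peer=127.0.0.1:8001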

To point your Prometheus (1.4 or later) instance to multiple Alertmanagers, configure them in your prometheus.yml configuration file, for example:

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - alertmanager1:9093
      - alertmanager2:9093
      - alertmanager3:9093

Important: Do not load balance traffic between Prometheus and its Alertmanagers, but instead point Prometheus to a list of all Alertmanagers. The Alertmanager implementation expects all alerts to be sent to all Alertmanagers to ensure high availability.

Turn off high availability

If running Alertmanager in high availability mode is not desired, setting --cluster.listen-address= to an empty value prevents Alertmanager from listening for incoming peer requests.
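
For example (a sketch):

$ alertmanager --config.file=alertmanager.yml --cluster.listen-address=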

Contributing

Check the Prometheus contributing page.

To contribute to the user interface, refer to ui/app/CONTRIBUTING.md.

Architecture

License

Apache License 2.0, see LICENSE.

See more alertmanager-related images →

bitnami/alertmanager
by VMware
Verified
Bitnami's secure image for alertmanager
1410M+ pulls
Last updated: 5 months ago

ubuntu/alertmanager
by Canonical
Verified
Alertmanager packaged as an Ubuntu rock: handles alerts sent by client applications such as Prometheus, with deduplication, grouping, and routing to receivers (such as email and PagerDuty), plus alert silencing and inhibition; built on Ubuntu and receives security updates.
8.3K pulls
Last updated: 9 months ago

ibmcom/alertmanager
by ibmcom
Alert Manager Docker image for the AMD64 architecture of IBM Cloud Private-CE (Community Edition)
1M+ pulls
Last updated: 6 years ago

portworx/alertmanager
by portworx
Prometheus Alertmanager is the alert-management component of the Prometheus ecosystem. It handles alerts generated by Prometheus, providing deduplication, grouping, and routing, and can deliver alerts to receivers such as email and Slack for centralized alert management and distribution.
50K+ pulls
Last updated: 5 days ago

signoz/alertmanager
by signoz
No description
500K+ pulls
Last updated: 9 months ago

giantswarm/alertmanager
by giantswarm
No description
100K+ pulls
Last updated: 10 months ago


Common image-pull questions

What is the difference between the free and professional editions of 轩辕镜像?

The free edition only supports Docker Hub access, with no guarantees on availability or speed; the professional edition supports more registries, guarantees availability and stable speed, and offers priority customer support.

Which registries does 轩辕镜像 support?

The professional edition supports docker.io, gcr.io, ghcr.io, registry.k8s.io, nvcr.io, quay.io, mcr.microsoft.com, docker.elastic.co, and more; the free edition supports docker.io only.

Out-of-traffic errors

When a 402 Payment Required error is returned, your traffic quota is exhausted; top up a traffic package to restore service.

410 errors

Usually caused by an outdated Docker version; upgrade to 20.x or later for V2 protocol support.

manifest unknown errors

First check your Docker version and upgrade it if it is too old; if the version is fine, verify that the image name and tag are correct.

After a successful pull, how do I remove the 轩辕镜像 domain prefix?

Use the docker tag command to give the image a new tag without the domain prefix, keeping image names clean.
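
For example (a sketch; docker.xuanyuan.example stands in for your actual mirror domain):

$ docker pull docker.xuanyuan.example/prom/alertmanager:latest
$ docker tag docker.xuanyuan.example/prom/alertmanager:latest prom/alertmanager:latest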

View all questions →

