DevHelm’s alerting system has two layers: alert channels that define where notifications go, and notification policies that define when and how they’re sent. Together they give you precise control over who gets alerted, through which integration, and in what order.
Define this in code: manage alert channels and notification policies as part of your monitoring-as-code workflow. See YAML format, Terraform, and CI/CD patterns.

How alerting works

When an incident is confirmed, DevHelm evaluates your notification policies to decide what to do:
Incident CONFIRMED
    ↓
Evaluate all notification policies (in priority order)
    ↓
For each matching policy → execute its escalation chain:
    Step 1: notify channels (immediately or after a delay)
    If requireAck is set and no acknowledgment arrives → wait the configured delay → Step 2
    … repeat through all remaining steps

Incident RESOLVED → notify channels based on the onResolve setting
Key behaviors:
  • All matching policies run — there is no “first match wins” behavior
  • Priority controls evaluation order — higher-priority policies are evaluated first
  • Escalation chains execute independently — each matching policy runs its own chain

The two layers

Alert channels

An alert channel is a configured destination — a Slack webhook, a PagerDuty routing key, an email address, or any of the seven supported integrations. Channels are reusable; you create them once and reference them from multiple notification policies.
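For example, a reusable Slack channel and a PagerDuty channel could each be declared once and then referenced by name from any number of policies. This is an illustrative sketch only — the field names (`alertChannels`, `webhookUrl`, `routingKey`) are assumptions here, not the documented schema; see the YAML format reference for the exact fields:

```yaml
# Sketch only: field names are assumed, not the documented schema.
alertChannels:
  - name: ops-slack            # referenced by name from notification policies
    type: slack
    webhookUrl: https://hooks.slack.com/services/T000/B000/XXXX   # placeholder
  - name: oncall-pagerduty
    type: pagerduty
    routingKey: ${PAGERDUTY_ROUTING_KEY}
```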

Notification policies

A notification policy defines:
  1. Match rules — which incidents trigger this policy (by severity, monitor, region, etc.)
  2. Escalation chain — the ordered sequence of alert steps to execute
  3. Priority — the evaluation order relative to other policies

Alert flow example

Imagine a setup with two notification policies:

Policy A (priority 10) — “Critical alerts”
  • Match: severity DOWN
  • Escalation: Step 1 → Slack (immediate), Step 2 → PagerDuty (after 5 minutes, require ack)
Policy B (priority 5) — “All alerts”
  • Match: catch-all (no rules)
  • Escalation: Step 1 → Email (immediate)
When a DOWN incident is confirmed:
  1. Policy A matches — Slack is notified immediately, PagerDuty after 5 minutes
  2. Policy B matches — Email is sent immediately
When a DEGRADED incident is confirmed:
  1. Policy A does not match (severity is not DOWN)
  2. Policy B matches — Email is sent
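Expressed as configuration, the two policies above might look like this — a hedged sketch in which the field names (`match`, `escalation`, `delayMinutes`) are assumed, not the documented schema:

```yaml
# Hypothetical sketch of Policies A and B from the example above.
notificationPolicies:
  - name: critical-alerts        # Policy A
    priority: 10                 # evaluated before lower-priority policies
    match:
      severity: DOWN
    escalation:
      - notify: [ops-slack]            # Step 1: immediate
      - notify: [oncall-pagerduty]     # Step 2
        delayMinutes: 5
        requireAck: true
  - name: all-alerts             # Policy B
    priority: 5
    match: {}                    # catch-all: no rules
    escalation:
      - notify: [oncall-email]         # Step 1: immediate
```

Because all matching policies run, a DOWN incident triggers both escalation chains independently.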

Suppression

Alerts are suppressed in two cases, regardless of policy configuration:
  1. Maintenance windows — Active windows suppress all notifications for covered monitors
  2. Resource group suppression — Group-level incidents suppress member-level alerts
See Alert suppression for details.
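As an illustration of the first case, a maintenance window might be declared alongside your other monitoring config. The block below is a hypothetical sketch — `maintenanceWindows` and its fields are assumptions, not the documented schema:

```yaml
# Hypothetical sketch: suppress all notifications for a monitor during a deploy window.
maintenanceWindows:
  - name: weekly-deploy
    monitors: [api-checkout]     # monitors covered by the window
    schedule: "Sat 02:00-03:00 UTC"
```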

Notification dispatches

Every notification sent by DevHelm is tracked as a dispatch. Dispatches record the channel, delivery status, and acknowledgment state. Use them for audit trails and debugging delivery issues.
curl "https://api.devhelm.io/api/v1/notification-dispatches?incident_id=<incident-id>" \
  -H "Authorization: Bearer $DEVHELM_API_TOKEN"
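Each dispatch record carries at least the fields described above. An illustrative record (shape and field names assumed from that description, not the documented response format) might look like:

```yaml
# Illustrative dispatch record — field names assumed, not the documented API response.
dispatch:
  incidentId: <incident-id>
  channel: ops-slack
  deliveryStatus: delivered
  acknowledged: false
```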

Next steps

Alert channels

Configure Slack, PagerDuty, email, and other destinations.

Notification policies

Route alerts based on severity, monitor, and region.

Escalation chains

Build multi-step escalation with delays and acknowledgment.

Alert suppression

Suppress alerts during maintenance and via resource groups.