By the end of this guide, you’ll have a multi-step escalation chain that notifies progressively more people until someone acknowledges the incident.
For conceptual background, see Escalation chains.

Build a three-tier chain

This example implements a common escalation pattern: team Slack → on-call PagerDuty → management email.

Step 1: Create the alert channels

devhelm alert-channels create \
  --name "Team Slack" --type SLACK \
  --webhook-url "$SLACK_WEBHOOK_URL"

devhelm alert-channels create \
  --name "On-Call PagerDuty" --type PAGERDUTY \
  --routing-key "$PAGERDUTY_ROUTING_KEY"

devhelm alert-channels create \
  --name "Management Email" --type EMAIL \
  --recipients "eng-leads@example.com"

Step 2: Create the notification policy

curl -X POST https://api.devhelm.io/api/v1/notification-policies \
  -H "Authorization: Bearer $DEVHELM_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Critical escalation",
    "priority": 100,
    "matchRules": [
      { "type": "severity_gte", "value": "DOWN" }
    ],
    "escalation": {
      "steps": [
        {
          "delayMinutes": 0,
          "channelIds": ["<team-slack-id>"],
          "requireAck": true
        },
        {
          "delayMinutes": 10,
          "channelIds": ["<oncall-pagerduty-id>"],
          "requireAck": true,
          "repeatIntervalSeconds": 300
        },
        {
          "delayMinutes": 30,
          "channelIds": ["<management-email-id>"]
        }
      ],
      "onResolve": "notify_all_steps",
      "onReopen": "restart_from_beginning"
    }
  }'
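
If you prefer to create the policy from a script, the same request can be issued with Python's standard library. This is a sketch that mirrors the curl call above (same endpoint, bearer token, and payload); the `create_policy` helper name is ours, not part of any DevHelm SDK:

```python
# Build the same notification-policy request as the curl example above,
# using only the Python standard library.
import json
import os
import urllib.request

policy = {
    "name": "Critical escalation",
    "priority": 100,
    "matchRules": [{"type": "severity_gte", "value": "DOWN"}],
    "escalation": {
        "steps": [
            {"delayMinutes": 0, "channelIds": ["<team-slack-id>"], "requireAck": True},
            {"delayMinutes": 10, "channelIds": ["<oncall-pagerduty-id>"],
             "requireAck": True, "repeatIntervalSeconds": 300},
            {"delayMinutes": 30, "channelIds": ["<management-email-id>"]},
        ],
        "onResolve": "notify_all_steps",
        "onReopen": "restart_from_beginning",
    },
}

def create_policy(policy: dict) -> urllib.request.Request:
    """Build the POST request; pass it to urllib.request.urlopen() to send."""
    return urllib.request.Request(
        "https://api.devhelm.io/api/v1/notification-policies",
        data=json.dumps(policy).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['DEVHELM_API_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```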

Step 3: Verify the escalation

Test with a failing monitor:
devhelm monitors create \
  --name "Escalation Test (delete me)" \
  --type HTTP \
  --url https://httpstat.us/500 \
  --frequency 30 \
  --regions us-east
Watch for:
  1. Slack notification arrives immediately
  2. If no one acknowledges within 10 minutes → PagerDuty pages on-call
  3. PagerDuty repeats every 5 minutes until acknowledged
  4. If still unacknowledged at 30 minutes → management email
Delete the test monitor when done.

How the chain executes

t=0     → Step 1: Slack (requireAck: true)
         ↓ wait for ack...
t=10min → Step 2: PagerDuty (requireAck: true, repeat every 5min)
         ↓ wait for ack...
t=30min → Step 3: Email (final step)
If someone acknowledges at any step, the chain stops escalating. The incident stays open until resolved, but no further steps execute.
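
The timeline above can be sketched as a small simulation. This is plain Python for illustration, not DevHelm's implementation; the step fields mirror the policy JSON from step 2:

```python
# Illustrative sketch of the escalation timeline: given the policy's steps
# and the minute an acknowledgment arrives (if any), compute which
# notifications fire and when.

def escalation_timeline(steps, ack_minute=None):
    """Return a list of (minute, channel_ids) notification events.

    A step with repeatIntervalSeconds re-fires on that interval until the
    next step's delay is reached. An acknowledgment stops all further
    escalation, including repeats.
    """
    events = []
    for i, step in enumerate(steps):
        start = step["delayMinutes"]
        if ack_minute is not None and start >= ack_minute:
            break  # acknowledged before this step fired
        events.append((start, step["channelIds"]))
        repeat = step.get("repeatIntervalSeconds")
        if repeat:
            # Repeat until the next step takes over, the ack arrives, or
            # (for a final step) an arbitrary one-hour cutoff in this sketch.
            end = steps[i + 1]["delayMinutes"] if i + 1 < len(steps) else start + 60
            if ack_minute is not None:
                end = min(end, ack_minute)
            t = start + repeat // 60
            while t < end:
                events.append((t, step["channelIds"]))
                t += repeat // 60
    return events

steps = [
    {"delayMinutes": 0, "channelIds": ["slack"], "requireAck": True},
    {"delayMinutes": 10, "channelIds": ["pagerduty"], "requireAck": True,
     "repeatIntervalSeconds": 300},
    {"delayMinutes": 30, "channelIds": ["email"]},
]

# Unacknowledged: Slack at t=0, PagerDuty at t=10/15/20/25, email at t=30.
print(escalation_timeline(steps))
# Acknowledged at t=12: PagerDuty fired once at t=10, then the chain stops.
print(escalation_timeline(steps, ack_minute=12))
```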

Acknowledgment options

Method             | How
PagerDuty/OpsGenie | Acknowledge the alert in the external system
DevHelm API        | POST /api/v1/notification-dispatches/<id>/acknowledge

Resolution behavior

The onResolve field controls what happens when the incident resolves:
Setting             | Behavior
notify_all_steps    | All notified steps get a resolution message
notify_current_step | Only the active step gets notified
silent              | No resolution message (PagerDuty/OpsGenie still auto-close)
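
The three settings can be sketched as a small fan-out rule. This is an illustrative helper, not DevHelm source; `notified_steps` stands for the steps that actually fired before the incident resolved:

```python
# Illustrative sketch of the onResolve settings: given the steps that were
# notified before resolution, return the channel IDs that should receive a
# resolution message.

def resolution_targets(notified_steps, on_resolve):
    if on_resolve == "silent" or not notified_steps:
        return []
    if on_resolve == "notify_current_step":
        # Only the most recently escalated step is told.
        return notified_steps[-1]["channelIds"]
    if on_resolve == "notify_all_steps":
        return [c for step in notified_steps for c in step["channelIds"]]
    raise ValueError(f"unknown onResolve setting: {on_resolve}")

notified = [
    {"channelIds": ["slack"]},
    {"channelIds": ["pagerduty"]},
]
print(resolution_targets(notified, "notify_all_steps"))    # ['slack', 'pagerduty']
print(resolution_targets(notified, "notify_current_step"))  # ['pagerduty']
print(resolution_targets(notified, "silent"))               # []
```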

Variations

A shorter chain that skips the management tier and pages on-call after 15 minutes:
{
  "steps": [
    { "delayMinutes": 0, "channelIds": ["<slack-id>"], "requireAck": true },
    { "delayMinutes": 15, "channelIds": ["<pagerduty-id>"], "requireAck": true, "repeatIntervalSeconds": 300 }
  ],
  "onResolve": "notify_all_steps"
}
A single step that notifies Slack and email together, with no escalation:
{
  "steps": [
    { "delayMinutes": 0, "channelIds": ["<slack-id>", "<email-id>"] }
  ],
  "onResolve": "notify_all_steps"
}
Create two notification policies at the same priority — one matching business-hours tags and one matching after-hours. The escalation chains can differ (e.g., Slack-only during the day, PagerDuty after hours).
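
A sketch of that pair of policies is below. The "tag_equals" match rule type is a hypothetical placeholder; the actual tag-matching syntax is documented in Alert routing by tag:

```json
[
  {
    "name": "Business hours escalation",
    "priority": 100,
    "matchRules": [{ "type": "tag_equals", "value": "business-hours" }],
    "escalation": {
      "steps": [{ "delayMinutes": 0, "channelIds": ["<slack-id>"] }],
      "onResolve": "notify_all_steps"
    }
  },
  {
    "name": "After hours escalation",
    "priority": 100,
    "matchRules": [{ "type": "tag_equals", "value": "after-hours" }],
    "escalation": {
      "steps": [
        {
          "delayMinutes": 0,
          "channelIds": ["<pagerduty-id>"],
          "requireAck": true,
          "repeatIntervalSeconds": 300
        }
      ],
      "onResolve": "notify_all_steps"
    }
  }
]
```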

Next steps

Escalation chains reference

Full step configuration and behavior options.

Alert routing by tag

Route different monitors to different escalation chains.

Testing your alerts

Validate the full escalation pipeline.