Notification policies control which incidents trigger which alert channels. Each policy combines match rules (what to alert on) with an escalation chain (how to alert).
Define this in code: manage notification policies as part of your monitoring-as-code workflow. See: YAML format · Terraform · CI/CD patterns

How matching works

When an incident is confirmed, DevHelm evaluates all enabled notification policies in priority order (highest priority first). All matching policies execute — there is no “first match wins” behavior. A policy matches when all of its match rules pass (AND logic). A policy with no match rules is a catch-all that matches every incident.
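As a sketch of the AND logic, a policy intended to fire only for DOWN incidents on monitors tagged `production` needs both match rules to pass; a DOWN incident on an untagged monitor does not match. (The tag value and channel ID placeholder here are illustrative.)

```json
{
  "name": "Production outages",
  "priority": 100,
  "matchRules": [
    { "type": "severity_gte", "value": "DOWN" },
    { "type": "monitor_tag_in", "values": ["production"] }
  ],
  "escalation": {
    "steps": [{ "delayMinutes": 0, "channelIds": ["<pagerduty-channel-id>"] }]
  }
}
```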

Match rules

Each match rule filters incidents on a specific attribute:
| Rule type | Matches on | Value field |
| --- | --- | --- |
| severity_gte | Incident severity at or above the specified level | value (e.g., "DOWN") |
| monitor_id_in | Specific monitor IDs | monitorIds |
| monitor_type_in | Monitor types (HTTP, TCP, DNS, etc.) | values |
| monitor_tag_in | Monitor tags | values |
| region_in | Affected probe regions | regions |
| incident_status | Incident event type (created, resolved, reopened) | value |
| service_id_in | Status Data service IDs | values |
| resource_group_id_in | Resource group IDs | values |
| component_name_in | Service component names | values |

Severity ordering

Severity comparison uses: DOWN > DEGRADED > MAINTENANCE. A severity_gte rule with value DEGRADED matches both DOWN and DEGRADED incidents but not MAINTENANCE.
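For example, this rule matches DOWN and DEGRADED incidents but not MAINTENANCE:

```json
{ "type": "severity_gte", "value": "DEGRADED" }
```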

Match rule fields

| Field | Type | Description |
| --- | --- | --- |
| type | string | Rule type from the table above |
| value | string | Single-value match (for severity_gte, incident_status) |
| monitorIds | UUID[] | Monitor IDs for monitor_id_in |
| regions | string[] | Region codes for region_in |
| values | string[] | Multi-value match (for monitor_type_in, monitor_tag_in, etc.) |
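To illustrate which value field pairs with which rule type, here is a hypothetical matchRules array (the monitor UUIDs are placeholders, and the region codes are assumed example values, not a documented list):

```json
[
  { "type": "incident_status", "value": "created" },
  { "type": "monitor_id_in", "monitorIds": ["<monitor-uuid-1>", "<monitor-uuid-2>"] },
  { "type": "region_in", "regions": ["us-east", "eu-west"] },
  { "type": "monitor_type_in", "values": ["HTTP", "TCP"] }
]
```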

Priority

The priority field (integer, default 0) controls evaluation order. Higher values are evaluated first. While all matching policies run, priority determines which escalation chains start first when multiple policies match simultaneously. Use priority to structure a tiered alerting strategy:
| Priority | Policy | Purpose |
| --- | --- | --- |
| 100 | Critical — PagerDuty | Page on-call for DOWN incidents |
| 50 | Warning — Slack | Post to a channel for DEGRADED incidents |
| 0 | Catch-all — Email | Email summary for everything else |
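A sketch of this tiered strategy as three policy definitions (channel IDs are placeholders, and each escalation is reduced to a single immediate step for brevity):

```json
[
  {
    "name": "Critical — PagerDuty",
    "priority": 100,
    "matchRules": [{ "type": "severity_gte", "value": "DOWN" }],
    "escalation": { "steps": [{ "delayMinutes": 0, "channelIds": ["<pagerduty-id>"] }] }
  },
  {
    "name": "Warning — Slack",
    "priority": 50,
    "matchRules": [{ "type": "severity_gte", "value": "DEGRADED" }],
    "escalation": { "steps": [{ "delayMinutes": 0, "channelIds": ["<slack-id>"] }] }
  },
  {
    "name": "Catch-all — Email",
    "priority": 0,
    "matchRules": [],
    "escalation": { "steps": [{ "delayMinutes": 0, "channelIds": ["<email-id>"] }] }
  }
]
```

Because all matching policies execute, a DOWN incident here triggers both the PagerDuty and the Slack policy (DOWN satisfies severity_gte DEGRADED); add further match rules to the warning policy if that overlap is not desired.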

Creating a policy

devhelm notification-policies create \
  --name "Critical alerts" \
  --priority 100 \
  --match-severity-gte DOWN \
  --escalation-channel-ids "<slack-id>,<pagerduty-id>"

Request fields

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| name | string | Yes | | Human-readable name |
| matchRules | MatchRule[] | No | [] (catch-all) | Filtering rules |
| escalation | EscalationChain | Yes | | Steps, channels, and behavior |
| enabled | boolean | No | true | Whether the policy is active |
| priority | integer | No | 0 | Evaluation order (higher = first) |
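Equivalently, the CLI command above might be expressed as a JSON request body (channel IDs are placeholders, and the single-step escalation shape follows the catch-all example below):

```json
{
  "name": "Critical alerts",
  "priority": 100,
  "enabled": true,
  "matchRules": [{ "type": "severity_gte", "value": "DOWN" }],
  "escalation": {
    "steps": [{ "delayMinutes": 0, "channelIds": ["<slack-id>", "<pagerduty-id>"] }]
  }
}
```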

Catch-all policies

A policy with an empty matchRules array matches every incident. Use a low-priority catch-all as a safety net:
{
  "name": "Catch-all — Email digest",
  "priority": 0,
  "matchRules": [],
  "escalation": {
    "steps": [{
      "delayMinutes": 0,
      "channelIds": ["<email-channel-id>"]
    }]
  }
}

Testing a policy

Dry-run match rules against a hypothetical incident to verify routing before real incidents arrive:
curl -X POST https://api.devhelm.io/api/v1/notification-policies/<policy-id>/test \
  -H "Authorization: Bearer $DEVHELM_API_TOKEN"

Policy fields

| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Unique policy identifier |
| name | string | Human-readable name |
| matchRules | MatchRule[] | All must pass for a match (AND logic) |
| escalation | EscalationChain | Ordered alert steps |
| enabled | boolean | Whether the policy is active |
| priority | integer | Evaluation order (higher = first) |

Next steps

Escalation chains

Build multi-step escalation with delays and acknowledgment.

Alert channels

Configure the destinations referenced by escalation steps.

Alert suppression

Suppress alerts during maintenance and via resource groups.