AI assistants can generate devhelm.yml files, suggest assertions, and help you maintain monitoring configs as your infrastructure evolves.

Generating configs from scratch

Describe your infrastructure and let the AI create a monitoring config:
“I have a REST API at api.example.com with /health, /v1/users, and /v1/orders endpoints. Create a DevHelm config that monitors all three with appropriate assertions.”
An AI assistant with access to DevHelm’s MCP tools or YAML schema knowledge can produce:
version: 1
monitors:
  - name: API Health
    type: HTTP
    config:
      url: https://api.example.com/health
      method: GET
    frequency: 30
    assertions:
      - type: StatusCodeAssertion
        config:
          expected: "200"
          operator: equals
      - type: ResponseTimeAssertion
        config:
          maxResponseTime: 2000

  - name: Users API
    type: HTTP
    config:
      url: https://api.example.com/v1/users
      method: GET
    frequency: 60
    assertions:
      - type: StatusCodeAssertion
        config:
          expected: "200"
          operator: equals

  - name: Orders API
    type: HTTP
    config:
      url: https://api.example.com/v1/orders
      method: GET
    frequency: 60
    assertions:
      - type: StatusCodeAssertion
        config:
          expected: "200"
          operator: equals

Reviewing and improving existing configs

Share your current devhelm.yml and ask for improvements:
“Review this config. Are there any gaps in monitoring coverage or assertions that could reduce false positives?”
The AI can identify:
  • Missing assertions — endpoints without response time checks
  • Inconsistent frequencies — critical and non-critical services at the same interval
  • Missing alerting — monitors without notification policies
  • Coverage gaps — services mentioned in code but not monitored
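As a concrete example of closing the first gap, the AI might extend the Users monitor from the generated config with a response-time assertion (a sketch reusing the schema shown above; the 2000 ms threshold is an illustrative value, not a recommendation):

```yaml
- name: Users API
  type: HTTP
  config:
    url: https://api.example.com/v1/users
    method: GET
  frequency: 60
  assertions:
    - type: StatusCodeAssertion
      config:
        expected: "200"
        operator: equals
    # Added: catch latency regressions, not just hard failures
    - type: ResponseTimeAssertion
      config:
        maxResponseTime: 2000
```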

Maintaining configs as code evolves

When you add new API endpoints, the AI can update your monitoring config:
“I just added a /v1/payments endpoint. Add a monitor with the same pattern as the existing ones, but check every 30 seconds since it’s payment-critical.”
This keeps monitoring in sync with your codebase without manual editing.
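For the payments example, the generated monitor might look like this (a sketch following the pattern of the monitors above, with the 30-second frequency from the prompt; the response-time threshold is an assumed value to tune against your payment SLOs):

```yaml
- name: Payments API
  type: HTTP
  config:
    url: https://api.example.com/v1/payments
    method: GET
  frequency: 30          # payment-critical: check twice as often
  assertions:
    - type: StatusCodeAssertion
      config:
        expected: "200"
        operator: equals
    - type: ResponseTimeAssertion
      config:
        maxResponseTime: 2000
```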

Best practices for AI-assisted config

Always validate

Run devhelm validate on any AI-generated config before deploying:
devhelm validate -f devhelm.yml
AI models can produce syntactically valid YAML that uses wrong field names or invalid enum values. Validation catches these.

Use plan before deploy

Preview changes before applying them:
devhelm plan -f devhelm.yml
Review the plan output to confirm the AI made the changes you expected.

Keep humans in the loop

AI-generated configs are a starting point. Review:
  • Are the assertions appropriate for each endpoint?
  • Are the frequencies correct for the service criticality?
  • Are secrets referenced correctly (not hardcoded)?
  • Are the right alert channels attached?

Version control everything

Commit AI-generated configs to Git like any other code change. This gives you review through PRs and rollback through Git history.

Workflow: AI agent + monitoring as code (MaC)

The most powerful pattern combines AI agents with monitoring-as-code workflows:
  1. Agent generates config based on your description
  2. You review the generated YAML in a PR
  3. CI validates with devhelm validate
  4. CI previews with devhelm plan --detailed-exitcode
  5. You approve the PR
  6. CI deploys with devhelm deploy --yes
The AI handles the tedious config generation; your CI/CD pipeline handles safe deployment.
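Steps 3-6 can be wired into CI. The following is a hypothetical GitHub Actions sketch, not an official pipeline: the CLI install step, the `DEVHELM_API_KEY` secret name, and the assumption that `--detailed-exitcode` returns nonzero when changes are pending are all placeholders to adapt to your setup.

```yaml
name: monitoring
on:
  pull_request:
  push:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... install the devhelm CLI here ...
      - run: devhelm validate -f devhelm.yml
      # Assumed semantics: nonzero exit when the plan contains changes,
      # so reviewers can see pending changes without failing the PR.
      - run: devhelm plan -f devhelm.yml --detailed-exitcode
        continue-on-error: true

  deploy:
    if: github.ref == 'refs/heads/main'
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... install the devhelm CLI here ...
      - run: devhelm deploy --yes
        env:
          # Hypothetical secret name for CLI authentication
          DEVHELM_API_KEY: ${{ secrets.DEVHELM_API_KEY }}
```

Gating `deploy` on the `main` branch means the apply only runs after the PR is approved and merged, matching steps 5 and 6 above.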

YAML file format

Complete YAML schema reference.

MCP Server

Connect your AI agent to DevHelm.