
Notifications

Breeze RMM delivers notifications through multiple channels whenever alerts trigger, automations execute, or other platform events occur. Notification channels are configured per organization and dispatched asynchronously through a BullMQ queue backed by Redis. Every alert automatically generates in-app notifications for all active users in the affected organization; additional channels (email, webhook, Slack, Teams, PagerDuty, SMS) are routed based on alert rule configuration or organization defaults.


Architecture Overview

Notification delivery follows a two-stage pipeline:

  1. Event Bus — When an alert is triggered, acknowledged, or resolved, the internal event bus emits an event (alert.triggered, alert.acknowledged, alert.resolved).
  2. Notification Dispatcher — A BullMQ worker picks up the event, sends in-app notifications immediately, then queues individual send jobs for each configured notification channel.

The dispatcher runs with a concurrency of 5 workers and tracks delivery status (pending, sent, failed) in the alert_notifications table. Escalation policies can schedule delayed follow-up notifications that are automatically cancelled when an alert is acknowledged or resolved.
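
The exact wiring is internal, but a minimal sketch of this two-stage pattern, assuming BullMQ with a Redis connection and illustrative names such as notificationQueue and onAlertTriggered, looks roughly like this:

```ts
import { Queue, Worker } from "bullmq";
import IORedis from "ioredis";

// Shared Redis connection; BullMQ workers require maxRetriesPerRequest: null.
const connection = new IORedis(process.env.REDIS_URL ?? "redis://localhost:6379", {
  maxRetriesPerRequest: null,
});

// One send job per (alert, channel) pair.
const notificationQueue = new Queue("notifications", { connection });

// Stage 1 (conceptual): on alert.triggered, create in-app notifications,
// then enqueue a send job for every channel that should be notified.
export async function onAlertTriggered(alertId: string, channelIds: string[]) {
  for (const channelId of channelIds) {
    await notificationQueue.add("send", { alertId, channelId });
  }
}

// Stage 2: dispatcher worker with a concurrency of 5.
new Worker(
  "notifications",
  async (job) => {
    const { alertId, channelId } = job.data as { alertId: string; channelId: string };
    // Load the channel config, render the payload, and call the matching sender here.
    console.log(`delivering alert ${alertId} via channel ${channelId}`);
  },
  { connection, concurrency: 5 },
);
```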


Notification Channels

Channels define where notifications are delivered. Each channel belongs to an organization, has a type, a JSON configuration object, and an enabled/disabled flag.

Supported channel types

| Type | Description | Config key(s) |
| --- | --- | --- |
| email | Sends alert emails via the configured EmailService | recipients or to |
| slack | Posts to a Slack channel via incoming webhook | webhookUrl |
| teams | Posts to a Microsoft Teams channel via incoming webhook | webhookUrl |
| webhook | Sends an HTTP request to any HTTPS endpoint | url, method, headers, authType, … |
| pagerduty | Creates an incident via PagerDuty Events API v2 | routingKey or integrationKey |
| sms | Sends text messages via Twilio Programmable Messaging | phoneNumbers, from, messagingServiceSid |

Channel database schema

| Column | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| orgId | UUID | Owning organization (required) |
| name | varchar(255) | Human-readable channel name |
| type | enum | One of: email, slack, teams, webhook, pagerduty, sms |
| config | JSONB | Type-specific configuration object |
| enabled | boolean | Whether the channel is active (default true) |
| createdAt | timestamp | Creation timestamp |
| updatedAt | timestamp | Last modification timestamp |
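
Expressed as a TypeScript type, a channel record looks roughly like this (a sketch derived from the schema above, not the actual entity definition):

```ts
type ChannelType = "email" | "slack" | "teams" | "webhook" | "pagerduty" | "sms";

interface NotificationChannel {
  id: string;                       // UUID primary key
  orgId: string;                    // owning organization
  name: string;                     // human-readable channel name
  type: ChannelType;
  config: Record<string, unknown>;  // type-specific configuration (JSONB)
  enabled: boolean;                 // default true
  createdAt: Date;
  updatedAt: Date;
}
```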

Sender Details

Email

The email sender uses the platform EmailService to deliver formatted alert notification emails.

Configuration fields:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| recipients or to | string or string[] | Yes | One or more email addresses |

Payload includes: alert name, severity, summary, device name, occurrence time, dashboard URL, and organization name.

The sender validates email addresses against the pattern ^[^\s@]+@[^\s@]+\.[^\s@]+$. If the EmailService is not configured (no SMTP settings), the sender returns a failure with the error “Email service not configured”.
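
For reference, the validation described above can be approximated like this (an illustrative helper, not the sender's actual code):

```ts
// Mirrors the documented recipient pattern and the recipients/to config fields.
const EMAIL_PATTERN = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function resolveRecipients(config: { recipients?: string | string[]; to?: string | string[] }): string[] {
  const raw = config.recipients ?? config.to ?? [];
  const list = Array.isArray(raw) ? raw : [raw];
  return list.filter((addr) => EMAIL_PATTERN.test(addr));
}

// resolveRecipients({ to: "[email protected]" }) -> ["[email protected]"]
```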

Webhook

The webhook sender delivers alert data as structured JSON to any HTTPS endpoint. It includes SSRF protection that blocks requests to private/loopback IP ranges, localhost, and .local hostnames. DNS resolution is also checked to prevent rebinding attacks.

Configuration fields:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| url | string | Yes | HTTPS webhook URL |
| method | string | No | HTTP method: POST, PUT, or PATCH (default POST) |
| headers | object | No | Custom HTTP headers |
| authType | string | No | none, bearer, basic, or api_key |
| authToken | string | If bearer | Bearer token value |
| authUsername | string | If basic | Basic auth username |
| authPassword | string | If basic | Basic auth password |
| apiKeyHeader | string | If api_key | Header name for the API key |
| apiKeyValue | string | If api_key | API key value |
| timeout | number | No | Request timeout in ms (1000–60000, default 30000) |
| retryCount | number | No | Number of retries on failure (default 0) |
| payloadTemplate | string | No | Custom JSON template with {{variable}} placeholders |
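
As an illustration, a webhook channel config using bearer authentication might look like this (all values are placeholders):

```json
{
  "url": "https://example.com/hooks/breeze-alerts",
  "method": "POST",
  "headers": { "X-Source": "breeze-rmm" },
  "authType": "bearer",
  "authToken": "<token>",
  "timeout": 30000,
  "retryCount": 2
}
```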

Default payload structure:

{
  "event": "alert.triggered",
  "timestamp": "2026-02-18T12:00:00.000Z",
  "alert": {
    "id": "uuid",
    "name": "CPU High",
    "severity": "high",
    "summary": "CPU usage exceeded 95%",
    "triggeredAt": "2026-02-18T11:59:00.000Z",
    "ruleId": "uuid",
    "ruleName": "High CPU Rule"
  },
  "device": {
    "id": "uuid",
    "name": "web-server-01"
  },
  "organization": {
    "id": "uuid",
    "name": "Acme Corp"
  },
  "context": {}
}

Payload templates support {{variable}} syntax with dot-notation paths. Available variables: alertId, alertName, severity, summary, deviceId, deviceName, orgId, orgName, triggeredAt, ruleId, ruleName, timestamp, plus any keys from the alert context object.
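
A renderer for this placeholder syntax can be sketched as follows (illustrative only; the real implementation may differ in escaping and error handling):

```ts
// Replace {{path.to.value}} placeholders with values from a variable object,
// resolving dot-notation paths; unknown paths render as an empty string.
function renderTemplate(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_match, path: string) => {
    const value = path
      .split(".")
      .reduce<unknown>((acc, key) => (acc as Record<string, unknown> | undefined)?.[key], vars);
    return value === undefined || value === null ? "" : String(value);
  });
}

// renderTemplate('{"text":"[{{severity}}] {{alertName}}"}', { severity: "high", alertName: "CPU High" })
```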

Retry behavior: Failed requests are retried with exponential backoff (2^attempt seconds). Client errors (4xx) are not retried.
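
A rough sketch of that retry loop, assuming a fetch-based sender:

```ts
// Retry with exponential backoff (2^attempt seconds); 4xx responses are treated as
// permanent failures and are not retried.
async function sendWithRetry(url: string, body: string, retryCount: number): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body,
    });
    if (res.ok) return res;
    const clientError = res.status >= 400 && res.status < 500;
    if (clientError || attempt >= retryCount) return res;
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 1000));
  }
}
```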

Slack and Teams

Slack and Teams channels use incoming webhook URLs and deliver notifications through the webhook sender with a text-based payload template.

Configuration:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| webhookUrl | string | Yes | Incoming webhook URL from Slack or Teams |

Message format: [SEVERITY] Alert Name: Summary message

Both channel types reuse the webhook sender internally with a hardcoded payload template: {"text":"[{{severity}}] {{alertName}}: {{summary}}{{dashboardUrl}}"}. The dashboard URL is appended when the DASHBOARD_URL environment variable is set.
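
For example, a Slack channel needs only the incoming webhook URL (placeholder value shown):

```json
{
  "name": "Ops Slack",
  "type": "slack",
  "config": {
    "webhookUrl": "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"
  },
  "enabled": true
}
```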

PagerDuty

The PagerDuty sender creates incidents via the PagerDuty Events API v2 at https://events.pagerduty.com/v2/enqueue.

Configuration fields:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| routingKey or integrationKey | string | Yes | PagerDuty service integration key |
| severity | string | No | Override severity: critical, error, warning, info |
| source | string | No | Source field (defaults to device name or breeze-rmm) |
| component | string | No | Component field (defaults to device name) |
| group | string | No | Group field (defaults to organization name) |
| class | string | No | Class field (defaults to rule name) |
| dedupKey | string | No | Deduplication key (defaults to alert ID) |
| customDetails | object | No | Additional key-value pairs merged into custom_details |
| timeout | number | No | Request timeout in ms (1000–60000, default 15000) |

Severity mapping (when no override is configured):

| Alert Severity | PagerDuty Severity |
| --- | --- |
| critical | critical |
| high | error |
| medium | warning |
| low / info | info |

The sender submits events with the trigger event action. The dedup key defaults to the alert ID, enabling PagerDuty to group repeated alerts into a single incident.
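
Put together, the Events API v2 request body looks roughly like this (the summary wording and values are illustrative; field defaults follow the table above):

```json
{
  "routing_key": "<integration key>",
  "event_action": "trigger",
  "dedup_key": "<alert id>",
  "payload": {
    "summary": "[HIGH] CPU High: CPU usage exceeded 95%",
    "source": "web-server-01",
    "severity": "error",
    "component": "web-server-01",
    "group": "Acme Corp",
    "class": "High CPU Rule",
    "custom_details": {}
  }
}
```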

SMS

The SMS sender delivers text messages via Twilio Programmable Messaging.

Configuration fields:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| phoneNumbers | string[] | Yes | Array of recipient phone numbers in E.164 format |
| from | string | No | Sender phone number in E.164 format |
| messagingServiceSid | string | No | Twilio Messaging Service SID |
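
An example SMS channel config (placeholder numbers; use from or messagingServiceSid depending on your Twilio setup):

```json
{
  "phoneNumbers": ["+15551234567", "+15559876543"],
  "from": "+15550001111"
}
```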

Message format: [SEVERITY] Alert Name on Device (Org): Summary message

Messages are truncated to 1400 characters. Phone numbers must be in E.164 format (e.g., +15551234567). The SMS service must be configured via the Twilio integration; if not, the sender returns “SMS service not configured”.

The send result includes sentCount and failedCount for multi-recipient deliveries, along with per-recipient error details.

In-App

In-app notifications are created directly in the user_notifications database table and appear in each user’s notification center within the Breeze dashboard.

Delivery scope: When an alert triggers, in-app notifications are sent to:

  • All active users directly assigned to the alert’s organization
  • All active partner users with access to the organization (either all org access or selected access that includes the org)

User IDs are deduplicated before notification records are created.

No configuration required. In-app notifications are always sent as the baseline delivery mechanism for every alert. They do not need a notification channel record.

Notification fields:

| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Notification ID |
| userId | UUID | Target user |
| orgId | UUID | Organization context |
| type | enum | alert, device, script, automation, system, user, security |
| priority | enum | low, normal, high, urgent (mapped from alert severity) |
| title | varchar(255) | Notification title (alert name) |
| message | text | Notification body |
| link | varchar(500) | Deep link (e.g., /alerts/{alertId}) |
| metadata | JSONB | Additional context (alert ID, severity, device info) |
| read | boolean | Read status (default false) |
| readAt | timestamp | When marked as read |
| createdAt | timestamp | Creation timestamp |

Priority mapping from alert severity:

| Alert Severity | Notification Priority |
| --- | --- |
| critical | urgent |
| high | high |
| medium | normal |
| low / info | low |
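
As a sketch, the mapping amounts to a simple switch (illustrative helper, not the actual code):

```ts
type AlertSeverity = "critical" | "high" | "medium" | "low" | "info";
type NotificationPriority = "low" | "normal" | "high" | "urgent";

// Map alert severity to in-app notification priority (per the table above).
function toNotificationPriority(severity: AlertSeverity): NotificationPriority {
  switch (severity) {
    case "critical":
      return "urgent";
    case "high":
      return "high";
    case "medium":
      return "normal";
    default:
      return "low";
  }
}
```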

In-App Notification Management

The /notifications API endpoints allow authenticated users to manage their in-app notifications.

Listing notifications

GET /notifications?limit=50&offset=0&unreadOnly=true&type=alert

Query parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| limit | number | 50 | Max notifications to return |
| offset | number | 0 | Pagination offset |
| unreadOnly | boolean | false | Filter to unread notifications only |
| type | string | | Filter by type: alert, device, script, automation, system, user, security |

Response:

{
  "notifications": [...],
  "total": 142,
  "unreadCount": 7,
  "limit": 50,
  "offset": 0
}

Notifications are returned in reverse chronological order (newest first). The unreadCount is always returned regardless of filters, showing the user’s total unread count.

Getting the unread count

GET /notifications/unread-count

Returns { "count": 7 } — the number of unread notifications for the current user.

Marking notifications as read or unread

PATCH /notifications/read
Content-Type: application/json

{
  "ids": ["uuid1", "uuid2"],
  "read": true
}

Request body:

| Field | Type | Description |
| --- | --- | --- |
| ids | string[] | Specific notification IDs to update |
| all | boolean | Set to true to update all notifications |
| read | boolean | Target read state (default true) |

Use "all": true to mark all notifications as read in a single operation. Use "read": false to mark notifications as unread.

Deleting notifications

Delete a single notification:

DELETE /notifications/:id

Returns 404 if the notification does not exist or belongs to another user.

Delete all notifications:

DELETE /notifications

Removes all notifications for the current user.


Mobile Push Notifications

Breeze also supports push notifications to registered mobile devices via Firebase Cloud Messaging (FCM) for Android and Apple Push Notification Service (APNS) for iOS.

Key features:

  • Severity filtering — Mobile devices can specify which alert severities they want to receive via the alertSeverities array
  • Quiet hours — Each mobile device can configure quiet hours (start/end time with timezone) during which push notifications are suppressed
  • Per-device delivery — Push notifications are sent individually to each registered mobile device for a user

The push notification pipeline subscribes to the same alert.triggered event on the event bus and runs in parallel with the notification dispatcher. Delivery status is tracked in the push_notifications table with the states pending, sent, stubbed, and failed.


Notification Routing

From alerts

When an alert triggers, the notification dispatcher determines which channels to notify:

  1. In-app notifications are sent immediately to all active users in the organization (always).
  2. If the alert was created by an alert rule with notificationChannelIds in its override settings, those specific channels are used.
  3. If no rule-level channel overrides exist (or the alert has no associated rule), all enabled channels for the organization are used.
  4. Each channel receives a queued send job via BullMQ.
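
A simplified sketch of steps 2–3 (the lookup helpers are assumptions):

```ts
type Channel = { id: string; orgId: string; enabled: boolean };

// Placeholder lookups, assumed to query the notification channel table.
declare function getChannelsByIds(ids: string[]): Promise<Channel[]>;
declare function getEnabledChannels(orgId: string): Promise<Channel[]>;

// Decide which channels receive a queued send job for a triggered alert.
async function selectChannels(alert: {
  orgId: string;
  ruleOverrides?: { notificationChannelIds?: string[] };
}): Promise<Channel[]> {
  const overrideIds = alert.ruleOverrides?.notificationChannelIds;
  if (overrideIds && overrideIds.length > 0) {
    // Rule-level override: only these specific channels are notified.
    return getChannelsByIds(overrideIds);
  }
  // Fallback: every enabled channel in the organization.
  return getEnabledChannels(alert.orgId);
}
```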

From automations

Automations can send notifications as an action step using the send_notification action type, which targets a specific notification channel by ID.
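
The action step is shaped roughly like this (the channelId key is an illustrative assumption; check the automation schema for the exact field name):

```json
{
  "type": "send_notification",
  "config": {
    "channelId": "channel-uuid-1"
  }
}
```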

Escalation policies

Alert rules can reference an escalation policy via escalationPolicyId in their override settings. Escalation policies define timed steps that send additional notifications if an alert remains unacknowledged.

Escalation policy structure:

{
  "name": "Critical Alert Escalation",
  "steps": [
    {
      "delayMinutes": 15,
      "channelIds": ["channel-uuid-1"]
    },
    {
      "delayMinutes": 60,
      "channelIds": ["channel-uuid-1", "channel-uuid-2"]
    }
  ]
}

Each step schedules delayed notification jobs. When an alert is acknowledged or resolved, all pending escalation jobs for that alert are automatically cancelled.
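
Conceptually this maps onto BullMQ delayed jobs; a sketch of scheduling and cancellation (queue name and job IDs are assumptions):

```ts
import { Queue } from "bullmq";

const escalationQueue = new Queue("escalations", {
  connection: { host: "localhost", port: 6379 },
});

// Schedule one delayed job per escalation step when the alert triggers.
async function scheduleEscalations(
  alertId: string,
  steps: { delayMinutes: number; channelIds: string[] }[],
) {
  for (const [index, step] of steps.entries()) {
    await escalationQueue.add(
      "escalate",
      { alertId, channelIds: step.channelIds },
      { delay: step.delayMinutes * 60_000, jobId: `escalation:${alertId}:${index}` },
    );
  }
}

// Cancel pending steps on alert.acknowledged / alert.resolved.
async function cancelEscalations(alertId: string, stepCount: number) {
  for (let index = 0; index < stepCount; index++) {
    const job = await escalationQueue.getJob(`escalation:${alertId}:${index}`);
    if (job) await job.remove();
  }
}
```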


API Reference

In-App Notification Endpoints

| Method | Path | Description |
| --- | --- | --- |
| GET | /notifications | List notifications for the current user |
| GET | /notifications/unread-count | Get unread notification count |
| PATCH | /notifications/read | Mark notifications as read/unread (by IDs or all) |
| DELETE | /notifications/:id | Delete a single notification |
| DELETE | /notifications | Delete all notifications for the current user |

All endpoints require authentication via authMiddleware.

Notification Channel Endpoints

| Method | Path | Description |
| --- | --- | --- |
| GET | /alerts/channels | List notification channels (?orgId=&type=&enabled=) |
| POST | /alerts/channels | Create a notification channel |
| PUT | /alerts/channels/:id | Update a notification channel |
| DELETE | /alerts/channels/:id | Delete a notification channel |
| POST | /alerts/channels/:id/test | Send a test notification through the channel |

Channel endpoints require organization, partner, or system scope.

Creating a channel

curl -X POST /alerts/channels \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Ops Team Email",
    "type": "email",
    "config": {
      "recipients": ["[email protected]"]
    },
    "enabled": true
  }'

Testing a channel

Send a test notification to verify the channel is configured correctly:

POST /alerts/channels/:id/test

The test endpoint sends a real notification with a test payload through the selected channel type and returns the result:

{
  "channelId": "uuid",
  "channelName": "Ops Team Email",
  "channelType": "email",
  "testMessage": {
    "title": "Test Alert from Breeze RMM",
    "message": "This is a test notification sent to channel \"Ops Team Email\" at 2026-02-18T...",
    "severity": "info",
    "source": "manual_test"
  },
  "testResult": {
    "success": true,
    "message": "Test email sent successfully",
    "details": { "recipients": ["[email protected]"] }
  },
  "testedAt": "2026-02-18T12:00:00.000Z",
  "testedBy": "user-uuid"
}

Troubleshooting

No notifications being sent for alerts. Verify that notification channels exist and are enabled for the organization. Check whether the alert rule has notificationChannelIds in its override settings — if empty, the dispatcher falls back to all enabled org channels. If no channels are configured at all, only in-app notifications will be delivered. Confirm the notification dispatcher was initialized at application startup.

Email notifications not arriving. Ensure the EmailService is configured with valid SMTP settings. Check that the channel config has a recipients or to field with valid email addresses. The sender validates addresses against a basic pattern; typos in email addresses will cause validation failures.

Webhook returning errors. Check that the URL uses HTTPS — HTTP URLs are rejected. Verify the endpoint is publicly reachable; private IP ranges, localhost, and .local hostnames are blocked. If the webhook requires authentication, confirm the authType, authToken, or credential fields are correctly configured. Review the statusCode in the error response for HTTP-level failures. 4xx errors are not retried.

Slack or Teams messages not appearing. Confirm the webhookUrl is correctly copied from the Slack/Teams incoming webhook configuration. The URL must be a valid HTTPS URL. Test the channel using POST /alerts/channels/:id/test to get a detailed error response.

PagerDuty incidents not created. Verify the routingKey or integrationKey is valid and belongs to an active PagerDuty service integration. Check that the Events API v2 integration is enabled on the PagerDuty service. The default timeout is 15 seconds.

SMS not delivered. Confirm the Twilio integration is configured (Twilio credentials are set). All phone numbers must be in E.164 format (e.g., +15551234567). Check the send result for per-recipient error details. The SMS body is truncated at 1400 characters.

Escalation notifications still firing after alert was acknowledged. Escalation cancellation listens for alert.acknowledged and alert.resolved events on the event bus. If the event bus is not running or the events are not emitted, delayed escalation jobs will continue to fire. Verify that alert acknowledge/resolve operations emit the correct events.

In-app notifications not appearing for partner users. Partner users only receive in-app notifications if they have orgAccess: 'all' or orgAccess: 'selected' with the specific organization included in their orgIds array. Verify the partner user’s access configuration.

Notification queue backed up. Use getNotificationQueueStatus() to check queue metrics (waiting, active, completed, failed, delayed counts). The worker runs with concurrency 5. A large number of failed jobs may indicate a downstream service outage. Check Redis connectivity and the BullMQ worker health.
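
If you need to inspect the queue directly, BullMQ exposes job counts; a helper along the lines of getNotificationQueueStatus() might look like this (a sketch, not the exact implementation):

```ts
import { Queue } from "bullmq";

const notificationQueue = new Queue("notifications", {
  connection: { host: "localhost", port: 6379 },
});

// Returns waiting/active/completed/failed/delayed counts for the notification queue.
async function getNotificationQueueStatus() {
  return notificationQueue.getJobCounts("waiting", "active", "completed", "failed", "delayed");
}
```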