Alerts & Notifications
Signal Without the Noise
The worst alert is the one that wakes your whole team for nothing. Sitewatch delivers evidence-rich alerts across six independent channels with three layers of noise control: retry confirmation kills false positives, a 30-minute cooldown prevents alert storms, and fingerprint deduplication means one problem equals one incident -- not twenty.
- 2-of-3 retry confirmation before any alert fires
- 30-minute per-incident cooldown prevents alert fatigue
- SHA-256 fingerprint deduplication -- one problem, one incident
- Slack: #web-ops channel
- Email: team@acme.com
- SMS: +1 *** ***-4821
- Webhook: POST /hooks/alerts
- PagerDuty: P1 severity triggered
- Opsgenie: P2 alert created
Why it matters
Alerts that respect your attention
Retry confirmation
Every detected issue goes through 2-of-3 retry confirmation before an alert fires. A single failed request never wakes anyone up. Near-zero false positives by design.
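For the curious, a minimal sketch of how 2-of-3 confirmation can work; the `runCheck` probe and the counter are illustrative, not Sitewatch's actual internals:

```ts
// Sketch: the initial detection counts as check 1 of 3; up to two retries follow.
async function confirmIncident(
  runCheck: () => Promise<boolean>, // resolves true if the failure reproduces
): Promise<boolean> {
  let failures = 1; // the original detection is check 1 of 3
  for (let attempt = 0; attempt < 2; attempt++) {
    if (await runCheck()) failures++;
    if (failures >= 2) return true; // 2 of 3 confirmed: promote to incident
  }
  return false; // transient glitch: no alert
}
```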
30-minute cooldown
Once an incident is reported, repeat alerts for the same issue are suppressed for 30 minutes. No alert storms during a prolonged outage. No inbox flooding.
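A sketch of the cooldown gate, assuming each incident stores the time it last alerted (names are hypothetical):

```ts
const COOLDOWN_MS = 30 * 60 * 1000; // 30-minute per-incident cooldown

// Suppress repeat alerts while the incident's cooldown is still running.
function shouldAlert(lastAlertedAt: Date | null, now = new Date()): boolean {
  if (lastAlertedAt === null) return true; // first alert for this incident
  return now.getTime() - lastAlertedAt.getTime() >= COOLDOWN_MS;
}
```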
Fingerprint deduplication
Each problem gets a unique SHA-256 fingerprint. One broken script means one incident, not a separate alert for every page that references it.
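One plausible way to build such a fingerprint, keyed on the broken resource rather than the page referencing it; the field names are assumptions, not Sitewatch's schema:

```ts
import { createHash } from "node:crypto";

// Hashing site + issue type + resource (not the page URL) means one broken
// script maps to one incident, no matter how many pages reference it.
function fingerprint(issue: { site: string; type: string; resource: string }): string {
  return createHash("sha256")
    .update(`${issue.site}|${issue.type}|${issue.resource}`)
    .digest("hex");
}
```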
Slack with Block Kit
Rich Slack messages with severity indicators, structured fields (site, page URL, severity), and a "View Incident" button linking directly to the report.
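The message likely resembles standard Block Kit JSON along these lines; all values and the incident URL are placeholders, not Sitewatch's actual payload:

```ts
// Sketch of a Block Kit alert: header, structured fields, action button.
const slackPayload = {
  blocks: [
    {
      type: "header",
      text: { type: "plain_text", text: "HIGH: Broken script detected" },
    },
    {
      type: "section",
      fields: [
        { type: "mrkdwn", text: "*Site:*\nacme.com" },
        { type: "mrkdwn", text: "*Page:*\nhttps://acme.com/pricing" },
        { type: "mrkdwn", text: "*Severity:*\nHigh" },
      ],
    },
    {
      type: "actions",
      elements: [
        {
          type: "button",
          text: { type: "plain_text", text: "View Incident" },
          url: "https://app.sitewatch.example/incidents/123", // placeholder link
        },
      ],
    },
  ],
};
```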
Styled email reports
HTML emails with incident type, evidence table, suggested fix steps, and a direct link to the incident -- everything your team needs in one message.
SMS alerts
Concise incident summaries to verified phone numbers. When Slack and email go unnoticed, SMS gets through.
Webhook alerts
Structured JSON payloads to any endpoint with automatic retry. Pipe alerts into any tool or workflow you already use.
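Based on the fields listed here, the payload shape is roughly the following; the actual property names are an assumption:

```ts
// Assumed webhook body, inferred from the documented fields.
interface WebhookAlert {
  site: string;
  pageUrl: string;
  severity: "critical" | "high" | "medium" | "low";
  type: string;                      // e.g. "broken_asset", "redirect_loop"
  evidence: Record<string, unknown>; // supporting detail for the incident
  incidentUrl: string;               // direct link to the incident report
}
```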
PagerDuty
Route incidents to the right on-call rotation. Native Events API v2 with severity mapping, dedup keys, and full trigger/acknowledge/resolve lifecycle.
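Events API v2 accepts the severities critical, error, warning, and info, so a trigger call could look like this sketch; the exact severity mapping shown is an assumption, not Sitewatch's documented one:

```ts
// Map Sitewatch severity levels onto PagerDuty's four accepted values.
const pdSeverity: Record<string, "critical" | "error" | "warning" | "info"> = {
  critical: "critical",
  high: "error",
  medium: "warning",
  low: "info",
};

async function triggerPagerDuty(
  routingKey: string,
  incident: { fingerprint: string; summary: string; site: string; severity: string },
) {
  await fetch("https://events.pagerduty.com/v2/enqueue", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      routing_key: routingKey,
      event_action: "trigger",         // also: "acknowledge" | "resolve"
      dedup_key: incident.fingerprint, // prevents duplicate PagerDuty incidents
      payload: {
        summary: incident.summary,
        source: incident.site,
        severity: pdSeverity[incident.severity] ?? "error",
      },
    }),
  });
}
```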
Opsgenie
Plug directly into your escalation policies. Native Alert API with P1–P5 priority mapping, alias-based dedup, and full alert lifecycle management.
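A create-alert call against the Opsgenie Alert API looks roughly like this; using the incident fingerprint as the alias is an assumption consistent with the dedup behavior described:

```ts
// Sketch: create an Opsgenie alert; the alias drives deduplication.
async function createOpsgenieAlert(
  apiKey: string,
  incident: { fingerprint: string; summary: string; priority: "P1" | "P2" | "P3" | "P4" | "P5" },
) {
  await fetch("https://api.opsgenie.com/v2/alerts", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `GenieKey ${apiKey}`,
    },
    body: JSON.stringify({
      message: incident.summary,
      alias: incident.fingerprint, // same alias -> same alert, no duplicates
      priority: incident.priority,
    }),
  });
}
```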
Independent channel delivery
All six channels are dispatched independently. If one channel fails, the others still deliver. Partial success still updates the cooldown timer so you are never double-alerted.
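A minimal sketch of independent dispatch, assuming one sender function per configured channel; `Promise.allSettled` guarantees one rejection never blocks the rest:

```ts
// Dispatch every channel in parallel; report whether any delivery succeeded.
async function dispatchAll(senders: Array<() => Promise<void>>): Promise<boolean> {
  const results = await Promise.allSettled(senders.map((send) => send()));
  const anySucceeded = results.some((r) => r.status === "fulfilled");
  if (anySucceeded) {
    // start/refresh the 30-minute cooldown exactly once per dispatch
  }
  return anySucceeded;
}
```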
- 6 alert channels
- 30-minute per-incident cooldown
- Near-zero false positives
What each channel delivers
Six channels, zero gaps
Slack alerts
- Block Kit format with header and severity indicator
- Structured fields: site, page URL, severity level
- "View Incident" action button linking to full report
- Webhook URL validated against hooks.slack.com
- Test connection from workspace settings
Email alerts
- Styled HTML template with incident type and evidence table
- Suggested fix steps included in every email
- Subject line: [Sitewatch] SEVERITY: Type on hostname (see the sketch after this list)
- Recipients: explicit list or fallback to workspace owner
- CTA button linking directly to the incident report
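A formatter matching that subject pattern might look like this; it is illustrative only:

```ts
// Build the alert email subject from severity, incident type, and hostname.
function emailSubject(severity: string, type: string, hostname: string): string {
  return `[Sitewatch] ${severity.toUpperCase()}: ${type} on ${hostname}`;
}

// emailSubject("high", "Broken asset", "acme.com")
//   => "[Sitewatch] HIGH: Broken asset on acme.com"
```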
SMS alerts
- Twilio-powered delivery with carrier-grade reliability
- Phone number verification before activation
- Concise incident summaries optimized for mobile
- Reaches on-call engineers when Slack and email go unnoticed
Webhook alerts
- Custom JSON payloads with structured incident data
- Send to any HTTP endpoint you control
- Includes site, page URL, severity, evidence, and incident link
- Automatic retry on delivery failure
PagerDuty alerts
- Native Events API v2 integration
- Severity mapping: critical, high, medium, low
- Dedup keys prevent duplicate PagerDuty incidents
- Full trigger/acknowledge/resolve lifecycle
Opsgenie alerts
- Native Alert API integration
- Priority mapping from P1 to P5
- Alias-based deduplication prevents duplicate alerts
- Full alert lifecycle management: create, acknowledge, close
From detection to delivery
How an alert goes from detection to your inbox
Issue detected
A daily or on-demand check finds a broken asset, redirect loop, host drift, or other silent failure on one of your monitored pages.
Retry confirmation
The failing check is retried up to 2 more times. Only if 2 of 3 checks confirm the failure is the issue promoted to a real incident. Transient glitches are filtered out.
Fingerprint and deduplicate
A SHA-256 fingerprint is generated for the issue. If an active incident with the same fingerprint exists and its 30-minute cooldown has not expired, no new alert is sent.
Six-channel dispatch
Alerts fire independently across all configured channels -- Slack, email, SMS, webhook, PagerDuty, and Opsgenie. If any channel fails, the others still deliver. The cooldown timer starts once at least one channel succeeds.
Alerts that tell you what broke and how to fix it
Free plan. No credit card. Connect your channels in under two minutes.
Why Sitewatch
Sitewatch alerts vs basic monitoring notifications
| Feature | Basic monitoring alerts | Sitewatch |
|---|---|---|
| False positive handling | Alert on first failure | 2-of-3 retry confirmation |
| Alert storms | Every check cycle = new alert | 30-min per-incident cooldown |
| Duplicate incidents | Same issue, many alerts | SHA-256 fingerprint deduplication |
| Delivery channels | Email only | 6 channels: Slack, email, SMS, webhook, PagerDuty, Opsgenie |
| Alert content | "Your site is down" | Evidence table, fix steps, direct incident link |
| Channel failure | Silent failure, no alert at all | Channels are independent -- one failing never blocks the others |
FAQ
Frequently asked questions
What alert channels does Sitewatch support?
Six channels total: email (all plans), Slack and webhooks (Starter and Pro), and PagerDuty, Opsgenie, and SMS (Pro only). Slack uses Block Kit formatting with action buttons. Email includes styled HTML with evidence tables and fix steps. SMS delivers to verified phone numbers (30/month on Pro). Webhooks send custom JSON payloads to any endpoint. PagerDuty uses native Events API v2 with severity mapping. Opsgenie uses native Alert API with P1–P5 priority mapping.
How does Sitewatch avoid false positives?
Every detected issue goes through 2-of-3 retry confirmation. A single transient failure never triggers an alert. The issue must be confirmed on at least two out of three consecutive checks before an incident is created and alerts are sent.
How are duplicate alerts prevented?
Each issue gets a unique SHA-256 fingerprint. If an active incident with the same fingerprint already exists and the 30-minute cooldown has not expired, no additional alert is sent. This prevents alert storms during prolonged outages.
What happens if one alert channel fails?
All six channels are dispatched independently. If Slack delivery fails, the other configured channels still fire. The cooldown timer updates as long as at least one channel succeeds.
Can different sites alert to different channels?
Yes. Per-site alert routing is supported. You can override notification settings on a per-site basis, so different sites can alert to different channels, recipients, or escalation paths.
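A hypothetical sketch of how site-level overrides could merge with workspace defaults; the types and field names are illustrative:

```ts
// Assumed routing settings: every field optional at both levels.
interface AlertRouting {
  slackWebhook?: string;
  emails?: string[];
  pagerdutyRoutingKey?: string;
}

// Resolve effective routing for a site: site settings win where present.
function resolveRouting(
  workspace: AlertRouting,
  siteOverride: Partial<AlertRouting> = {},
): AlertRouting {
  return { ...workspace, ...siteOverride };
}
```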
What information do alerts include?
Slack alerts include a header, severity indicator, site name, page URL, severity level, and a "View Incident" button. Email alerts include the incident type, an evidence table with affected resources, suggested fix steps, and a link to the full incident report.