
Alerts & Notifications

Signal Without the Noise

The worst alert is the one that wakes your whole team for nothing. Sitewatch delivers evidence-rich alerts across six independent channels with three layers of noise control: retry confirmation kills false positives, a 30-minute cooldown prevents alert storms, and fingerprint deduplication means one problem equals one incident -- not twenty.

  • 2-of-3 retry confirmation before any alert fires
  • 30-minute per-incident cooldown prevents alert fatigue
  • SHA-256 fingerprint deduplication -- one problem, one incident

Why it matters

Alerts that respect your attention

Retry confirmation

Every detected issue must pass 2-of-3 retry confirmation before an alert fires. A single failed request never wakes anyone up. Near-zero false positives by design.

30-minute cooldown

Once an incident is reported, repeat alerts for the same issue are suppressed for 30 minutes. No alert storms during a prolonged outage. No inbox flooding.

Fingerprint deduplication

Each problem gets a unique SHA-256 fingerprint. One broken script means one incident, not a separate alert for every page that references it.
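
One way such a fingerprint could be derived, as a minimal sketch (the field names and hashing scheme are illustrative assumptions, not Sitewatch's actual schema): hash the failing resource rather than the page that references it, so every page hitting the same broken script collapses into one incident.

```python
import hashlib

def incident_fingerprint(site_id: str, issue_type: str, resource_url: str) -> str:
    """Derive a stable SHA-256 fingerprint for an issue.

    Hypothetical scheme: the key covers the failing resource, not the
    page that references it, so a script broken on twenty pages still
    collapses into a single incident.
    """
    key = f"{site_id}|{issue_type}|{resource_url}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```

Because the page URL is deliberately left out of the key, re-checking a different page that references the same broken resource reproduces the same fingerprint.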

Slack with Block Kit

Rich Slack messages with severity indicators, structured fields (site, page URL, severity), and a "View Incident" button linking directly to the report.

Styled email reports

HTML emails with incident type, evidence table, suggested fix steps, and a direct link to the incident -- everything your team needs in one message.

SMS alerts

Concise incident summaries to verified phone numbers. When Slack and email go unnoticed, SMS gets through.

Webhook alerts

Structured JSON payloads to any endpoint with automatic retry. Pipe alerts into any tool or workflow you already use.

PagerDuty

Route incidents to the right on-call rotation. Native Events API v2 with severity mapping, dedup keys, and full trigger/acknowledge/resolve lifecycle.

Opsgenie

Plug directly into your escalation policies. Native Alert API with P1–P5 priority mapping, alias-based dedup, and full alert lifecycle management.

Independent channel delivery

All six channels are dispatched independently. If one channel fails, the others still deliver. Partial success still updates the cooldown timer so you are never double-alerted.
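
The dispatch behavior described above can be sketched roughly like this (a hypothetical outline; the `channels` mapping, incident dict, and `fingerprint` field are assumed names, not Sitewatch's real internals):

```python
import time

def dispatch_alert(incident: dict, channels: dict, cooldowns: dict, now=None) -> bool:
    """Dispatch one incident to every configured channel independently.

    `channels` maps a channel name to a send callable. A failing channel
    never blocks the others, and the cooldown timer starts as soon as at
    least one channel succeeds, so partial success still suppresses repeats.
    """
    now = time.time() if now is None else now
    delivered = False
    for name, send in channels.items():
        try:
            send(incident)
            delivered = True
        except Exception:
            # One channel's failure is isolated; keep going.
            continue
    if delivered:
        # Start the 30-minute per-incident cooldown.
        cooldowns[incident["fingerprint"]] = now
    return delivered
```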

6

Alert channels

30 min

Per-incident cooldown

Near-zero

False positives

What each channel delivers

Six channels, zero gaps

Slack alerts

  • Block Kit format with header and severity indicator
  • Structured fields: site, page URL, severity level
  • "View Incident" action button linking to full report
  • Webhook URL validated against hooks.slack.com
  • Test connection from workspace settings
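
A Block Kit message with this structure might be assembled like so (an illustrative sketch; the exact wording, emoji, and fields in Sitewatch's real messages may differ):

```python
def slack_alert_payload(site: str, page_url: str, severity: str, incident_url: str) -> dict:
    """Build a Slack Block Kit message body for an incident alert."""
    # Severity indicator shown in the header (assumed mapping).
    icon = {"critical": "🔴", "high": "🟠", "medium": "🟡", "low": "🟢"}.get(severity, "⚪")
    return {
        "blocks": [
            {"type": "header",
             "text": {"type": "plain_text", "text": f"{icon} Incident on {site}"}},
            {"type": "section",
             "fields": [
                 {"type": "mrkdwn", "text": f"*Site:*\n{site}"},
                 {"type": "mrkdwn", "text": f"*Page:*\n{page_url}"},
                 {"type": "mrkdwn", "text": f"*Severity:*\n{severity}"},
             ]},
            {"type": "actions",
             "elements": [
                 {"type": "button",
                  "text": {"type": "plain_text", "text": "View Incident"},
                  "url": incident_url},
             ]},
        ]
    }
```

The resulting dict is what gets POSTed as JSON to the validated `hooks.slack.com` webhook URL.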

Email alerts

  • Styled HTML template with incident type and evidence table
  • Suggested fix steps included in every email
  • Subject line: [Sitewatch] SEVERITY: Type on hostname
  • Recipients: explicit list or fallback to workspace owner
  • CTA button linking directly to the incident report

SMS alerts

  • Twilio-powered delivery with carrier-grade reliability
  • Phone number verification before activation
  • Concise incident summaries optimized for mobile
  • Reaches on-call engineers when Slack and email go unnoticed

Webhook alerts

  • Custom JSON payloads with structured incident data
  • Send to any HTTP endpoint you control
  • Includes site, page URL, severity, evidence, and incident link
  • Automatic retry on delivery failure
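
A payload with those fields plus a simple retry loop could look like this (a sketch under assumed field names; the transport callable is injected so you can plug in any HTTP client):

```python
import json

def build_webhook_payload(incident: dict) -> bytes:
    """Serialize the structured incident fields to a JSON body."""
    return json.dumps({
        "site": incident["site"],
        "page_url": incident["page_url"],
        "severity": incident["severity"],
        "evidence": incident.get("evidence", []),
        "incident_url": incident["incident_url"],
    }).encode("utf-8")

def deliver_with_retry(send, payload: bytes, retries: int = 3) -> bool:
    """Call `send` (e.g. an HTTP POST to your endpoint) up to `retries`
    times; return True on the first success, False if all attempts fail."""
    for _ in range(retries):
        try:
            send(payload)
            return True
        except Exception:
            continue
    return False
```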

PagerDuty alerts

  • Native Events API v2 integration
  • Severity mapping: critical, high, medium, low
  • Dedup keys prevent duplicate PagerDuty incidents
  • Full trigger/acknowledge/resolve lifecycle
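
An Events API v2 event with those properties might be built like this (a sketch with assumed incident field names; the Events API itself accepts only the four severities `critical`, `error`, `warning`, `info`, hence the mapping):

```python
# Sitewatch severities mapped onto the four severities the
# PagerDuty Events API v2 accepts (assumed mapping).
PD_SEVERITY = {"critical": "critical", "high": "error",
               "medium": "warning", "low": "info"}

def pagerduty_event(routing_key: str, incident: dict, action: str = "trigger") -> dict:
    """Build an Events API v2 body (POSTed to
    https://events.pagerduty.com/v2/enqueue).

    Reusing the incident fingerprint as the dedup_key is what keeps
    PagerDuty from opening duplicate incidents for the same problem.
    """
    event = {
        "routing_key": routing_key,
        "event_action": action,  # trigger / acknowledge / resolve
        "dedup_key": incident["fingerprint"],
    }
    if action == "trigger":
        event["payload"] = {
            "summary": f'{incident["issue_type"]} on {incident["site"]}',
            "source": incident["site"],
            "severity": PD_SEVERITY.get(incident["severity"], "error"),
        }
    return event
```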

Opsgenie alerts

  • Native Alert API integration
  • Priority mapping from P1 to P5
  • Alias-based deduplication prevents duplicate alerts
  • Full alert lifecycle management: create, acknowledge, close
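
A create-alert body for the Opsgenie Alert API could be sketched like this (assumed incident field names and an illustrative priority mapping; the request is POSTed to https://api.opsgenie.com/v2/alerts with a `GenieKey` Authorization header):

```python
# Sitewatch severities mapped onto Opsgenie P1-P5 priorities
# (illustrative; tune to your own escalation policies).
OPSGENIE_PRIORITY = {"critical": "P1", "high": "P2",
                     "medium": "P3", "low": "P4"}

def opsgenie_alert(incident: dict) -> dict:
    """Build an Opsgenie create-alert body.

    Reusing the incident fingerprint as the alias is what gives
    alias-based deduplication: same alias, no duplicate alert.
    """
    return {
        "message": f'{incident["issue_type"]} on {incident["site"]}',
        "alias": incident["fingerprint"],
        "priority": OPSGENIE_PRIORITY.get(incident["severity"], "P5"),
        "details": {
            "page_url": incident["page_url"],
            "incident_url": incident["incident_url"],
        },
    }
```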

From detection to delivery

How an alert goes from detection to your inbox

01

Issue detected

A daily or on-demand check finds a broken asset, redirect loop, host drift, or other silent failure on one of your monitored pages.

02

Retry confirmation

The check is retried up to 2 more times; the issue is promoted to a real incident only if 2 of 3 checks confirm the failure. Transient glitches are filtered out.
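
The 2-of-3 rule can be sketched as follows (a hypothetical outline, with early exit once confirmation becomes certain or impossible; `check` stands in for a single probe that returns True while the failure is still observed):

```python
def confirm_incident(check, max_checks: int = 3, needed: int = 2) -> bool:
    """2-of-3 retry confirmation sketch.

    Runs `check` up to `max_checks` times and returns True only if at
    least `needed` runs observe the failure.
    """
    failures = 0
    for attempt in range(max_checks):
        if check():
            failures += 1
        if failures >= needed:
            return True  # confirmed: promote to a real incident
        remaining = max_checks - attempt - 1
        if failures + remaining < needed:
            return False  # can no longer reach 2 of 3: transient glitch
    return False
```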

03

Fingerprint and deduplicate

A SHA-256 fingerprint is generated for the issue. If an active incident with the same fingerprint exists and its 30-minute cooldown has not expired, no new alert is sent.
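
The cooldown-gated deduplication check might look like this (a minimal sketch; the `last_alerted` map of fingerprint to last-alert timestamp is an assumed representation of incident state):

```python
COOLDOWN_SECONDS = 30 * 60  # 30-minute per-incident cooldown

def should_alert(fingerprint: str, last_alerted: dict, now: float) -> bool:
    """Return True only if no alert for this fingerprint fired inside
    the cooldown window; record the new alert time when allowed."""
    last = last_alerted.get(fingerprint)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False  # same incident, cooldown still active: suppress
    last_alerted[fingerprint] = now
    return True
```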

04

Six-channel dispatch

Alerts fire independently across all configured channels -- Slack, email, SMS, webhook, PagerDuty, and Opsgenie. If any channel fails, the others still deliver. The cooldown timer starts once at least one channel succeeds.

Alerts that tell you what broke and how to fix it

Free plan. No credit card. Connect your channels in under two minutes.

Why Sitewatch

Sitewatch alerts vs basic monitoring notifications

False positive handling

Basic monitoring alerts: Alert on first failure
Sitewatch: 2-of-3 retry confirmation

Alert storms

Basic monitoring alerts: Every check cycle = new alert
Sitewatch: 30-min per-incident cooldown

Duplicate incidents

Basic monitoring alerts: Same issue, many alerts
Sitewatch: SHA-256 fingerprint deduplication

Delivery channels

Basic monitoring alerts: Email only
Sitewatch: 6 channels: Slack, email, SMS, webhook, PagerDuty, Opsgenie

Alert content

Basic monitoring alerts: "Your site is down"
Sitewatch: Evidence table, fix steps, direct incident link

Channel failure

Basic monitoring alerts: Silent failure, no alert at all
Sitewatch: Channels are independent -- one failing never blocks the others

FAQ

Frequently asked questions