Dev team guide
Why CI Passing Doesn't Mean Your Deploy Is Safe
Your CI passed. Slack says deployed. And 47 minutes later a user tweets that your checkout is broken. Nothing in your test suite caught it — because CI doesn't see production. This post covers the four failure types that slip past a green CI run, and what post-deploy monitoring does instead.
- 4 failure types CI misses by design
- Real scenario: Stripe.js blocked by CSP in production
- How deploy hooks catch what CI can't
[Diagram: unit tests, integration tests, and build all green in CI — while Stripe.js on production returns 404 from a CDN edge miss]
The problem
The false confidence of a green CI run
CI pipelines are excellent at what they're designed for: testing your code in a controlled, reproducible environment. Your unit tests verify logic. Your integration tests verify service contracts. Your build step verifies the bundle compiles. Green across the board.
But that environment is not production. It has no CDN edge. It has no third-party script dependencies fetched live. It has no hosting-provider-specific MIME type configuration. It has no real cache state. When your pipeline reports success, it means your code is correct — not that your site is working.
The gap shows up in the 20–30% of outages that happen in the first hour after a deploy. The deploy is "successful." The server is responding. HTTP status is 200. And users are staring at a broken checkout, a blank page, or an unstyled UI that makes the site look abandoned.
CI is a development tool. It runs in a sandbox. Post-deploy monitoring is a production tool. It runs against the real thing. Both are necessary — and most teams only have one.
11 detection rules · 5–30 min check intervals · Free plan for 1 site
The 4 failure types
What CI misses — systematically
These four failure modes are not edge cases. They are structural gaps between the CI environment and production.
1. CDN asset 404s
Your build produces main-a4f9c2.js. Your HTML references it. CI says: build passed. But on the CDN edge, the previous asset (main-9d3e1a.js) is still cached — and the new one hasn't propagated yet, or was uploaded to the wrong bucket path. Result: 404 on every JS request. Page renders as a blank shell. Your CI had no idea — it never touched the CDN.
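Catching this class of failure doesn't require CI at all — it requires hitting the same CDN edge users hit. A minimal sketch (illustrative only, not how Sitewatch works internally; the function names and naive regex are simplifications): fetch the live page, pull out the hashed asset URLs it references, and HEAD-request each one against production.

```python
import re
import urllib.error
import urllib.request
from urllib.parse import urljoin

def extract_asset_urls(html: str, base_url: str) -> list[str]:
    """Pull the JS/CSS asset URLs a page references (naive regex sketch)."""
    refs = re.findall(r'(?:src|href)="([^"]+\.(?:js|css))"', html)
    return [urljoin(base_url, ref) for ref in refs]

def broken_assets(page_url: str) -> list[str]:
    """HEAD-check every referenced asset on production; return the failures."""
    html = urllib.request.urlopen(page_url).read().decode()
    broken = []
    for asset in extract_asset_urls(html, page_url):
        req = urllib.request.Request(asset, method="HEAD")
        try:
            urllib.request.urlopen(req)
        except urllib.error.HTTPError:
            broken.append(asset)
    return broken
```

Run against the production URL right after a deploy, a check like this surfaces the stale-edge 404 immediately, because it exercises the real CDN rather than a build artifact.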
2. MIME type mismatches
Your hosting provider or CDN serves your JS bundle as text/plain instead of application/javascript. Browsers enforce MIME types as a security measure: module scripts with a non-JavaScript type are always rejected, and any script is blocked when the response carries X-Content-Type-Options: nosniff, a header many hosts set by default. The file is there — a request for it returns 200 OK. But the browser refuses to run it, leaving only a console error. No CI test checks the Content-Type header of assets served from the production CDN, because CI doesn't use the production CDN.
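The browser-side rule fits in a few lines. A simplified sketch (the real JavaScript MIME type list in the HTML spec is longer, and the function name is made up for illustration):

```python
# Abbreviated version of the HTML spec's JavaScript MIME type list.
EXECUTABLE_JS_TYPES = {
    "application/javascript", "text/javascript",
    "application/ecmascript", "text/ecmascript",
}

def script_blocked(content_type: str, *, is_module: bool = False,
                   nosniff: bool = False) -> bool:
    """Would the browser refuse to run a script served with this Content-Type?
    Module scripts are always MIME-checked; classic scripts only when the
    response carries X-Content-Type-Options: nosniff."""
    mime = content_type.split(";")[0].strip().lower()
    if mime in EXECUTABLE_JS_TYPES:
        return False
    return is_module or nosniff
```

Note the asymmetry this captures: a misconfigured bundle can keep working for classic script tags on one host and break the moment you migrate to a host that sends nosniff.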
3. Redirect loops introduced by deployment
A new environment variable changes a redirect rule. Or a framework upgrade modifies trailing-slash behavior. Or a CDN rule conflicts with application-level redirects. The resulting infinite redirect loop is immediately visible to users — ERR_TOO_MANY_REDIRECTS — but completely invisible to CI, which runs against a local server with none of those redirect layers.
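Detecting a loop needs nothing fancy: follow each Location header and stop when a URL repeats. A sketch, with the actual HTTP request abstracted behind a callback so the loop logic is visible (in practice the callback would issue a request and read the Location header):

```python
def trace_redirects(start: str, redirect_for, max_hops: int = 20):
    """Follow redirects from `start`; return (chain, looped).
    `redirect_for(url)` returns the redirect target for a URL, or None
    if the URL responds without redirecting."""
    chain, seen = [start], {start}
    url = start
    for _ in range(max_hops):
        nxt = redirect_for(url)
        if nxt is None:
            return chain, False
        chain.append(nxt)
        if nxt in seen:
            return chain, True
        seen.add(nxt)
        url = nxt
    return chain, True  # exceeded the hop budget: treat as a loop
```

The classic trailing-slash loop shows up as a two-entry cycle: /pricing redirects to /pricing/, which the application layer redirects straight back.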
4. Third-party script outages
Stripe.js, Segment, Intercom, HubSpot — any third-party script loaded from an external CDN. CI tests mock these or skip them entirely. But on production, a third-party CDN outage or a script version being yanked can break your checkout, auth flow, or analytics without a single change to your codebase. Your CI is green because you didn't change anything. Your users can't pay.
Walkthrough
Scenario: Stripe.js blocked by CSP — 2 hours undetected
Here is a concrete failure that plays out across teams every week.
What happened: A routine frontend deploy goes out at 14:47. The build compiles cleanly. All 94 tests pass. The deployment platform confirms success. A Slack message appears: "Deployed to production. ✓"
What CI missed: The deploy updated index.html with a new Content Security Policy that inadvertently omitted js.stripe.com. The browser blocks the request to Stripe.js and logs a CSP violation to the console. The payment form still renders visually — but clicking "Pay" does nothing.
The detection gap: No unit test exercises CSP headers. The integration test for payments mocks Stripe. The uptime monitor sees 200 OK on the homepage. Nobody on the team makes a purchase at 14:47.
How it was discovered: At 16:52, a user replies to a support email saying their team has been trying to upgrade for two hours. The team rolls back at 17:04. 2 hours and 17 minutes of broken checkout, zero revenue from payment intents in that window.
What post-deploy monitoring would have caught: A deploy hook triggers a Sitewatch check at 14:48. The check loads the pricing page, validates every loaded script including Stripe.js, and detects the CSP header blocking the external domain. Alert fires at 14:51. Rollback happens before 15:00.
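The CSP portion of such a check can be approximated with a small parser. A deliberately simplified sketch: it handles only exact source matches in script-src (falling back to default-src), with no wildcard hosts, path matching, or 'unsafe-inline' logic, and the function name is illustrative:

```python
def csp_allows_script(csp: str, origin: str) -> bool:
    """Rough check: would this Content-Security-Policy header permit
    loading a script from `origin`? Exact-match sources only."""
    directives = {}
    for part in csp.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0].lower()] = tokens[1:]
    sources = directives.get("script-src",
                             directives.get("default-src", ["*"]))
    return origin in sources or "*" in sources
```

Applied to the scenario above: the new policy's script-src list simply no longer contains https://js.stripe.com, which is a string comparison a machine can make at 14:48, not a discovery a customer has to make at 16:52.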
The solution
What post-deploy monitoring looks like
1. Connect your pipeline
Add a Sitewatch deploy hook to your Vercel, Netlify, or GitHub Actions workflow. One webhook URL. The hook fires the moment your deploy completes.
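For GitHub Actions, the hook can be a single step at the end of the deploy job. This is a sketch: the secret name and JSON payload below are placeholders, not Sitewatch's documented webhook format.

```yaml
# Hypothetical post-deploy step; SITEWATCH_HOOK_URL and the payload
# shape are placeholders, not a documented Sitewatch API.
- name: Trigger Sitewatch check
  if: success()
  env:
    SITEWATCH_HOOK_URL: ${{ secrets.SITEWATCH_HOOK_URL }}
  run: |
    curl -fsS -X POST "$SITEWATCH_HOOK_URL" \
      -H "Content-Type: application/json" \
      -d "{\"sha\": \"$GITHUB_SHA\"}"
```

The `if: success()` guard matters: you only want the production check to fire after a deploy that actually shipped, not after a failed build.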
2. Production is checked immediately
Sitewatch loads your critical pages from the real production URL, validates every JS and CSS asset, checks MIME types on CDN-served files, and follows redirect chains — all against the live site, not a sandbox.
3. Alert with root cause, not just "broken"
If a Stripe.js request is blocked by CSP, you get an alert that says exactly that — not "something went wrong." Plain-English diagnosis so you can act immediately, not debug from scratch.
Common questions
Don't E2E tests (Playwright, Cypress) already cover this?
E2E tests are closer to production than unit tests — but they still run in CI against a preview URL or staging environment, not the production CDN. They also don't validate MIME types, redirect loops at the CDN layer, or third-party script availability. They're a valuable layer, but they don't replace post-deploy production monitoring.
What's the difference between a scheduled monitor and a deploy hook?
A scheduled monitor checks your site every N minutes regardless of deploys. A deploy hook triggers an immediate check the moment a deploy completes. For catching deploy regressions, deploy hooks are far more effective — the check fires while the issue is fresh and rollback is easiest. Scheduled monitors are the safety net between deploys.
How long does a post-deploy check take?
Typically 2–4 minutes for a full check including asset validation, MIME type inspection, and redirect tracing. The check starts within seconds of the hook firing. For most teams, this means alerts arrive before the deploy notification has left everyone's attention.
Do I need to write test scripts?
No. Sitewatch validates your pages by loading them as a browser would — following all asset requests, checking response headers, tracing redirect chains. You configure which pages to check, not what assertions to run. No test scripts, no selectors, no maintenance overhead.
Keep reading
Related resources
Deploy Hooks
Trigger checks from your CI/CD pipeline.
Post-Deployment Monitoring Checklist
What to verify after every deploy.
Broken Assets Monitoring
Detect broken JS, CSS, and images.
Deploy Verification Reports
Automatic reports after every ship.
CDN Cache Issues Detection
Catch CDN edge misses and stale assets.
Run a free scan on your production site after your next deploy
No test scripts. No configuration overhead. Free plan available.