Add visual regression testing to your CI/CD without managing infrastructure
Automatically detect CSS regressions on every PR. Capture baseline screenshots via API, compare with pixelmatch — no Puppeteer pools, no infrastructure.
Your frontend team pushed a CSS change. Looks good locally. But in production, the checkout button is misaligned on Safari. Three customers abandoned carts. You didn't know until the support team called.
Visual regression testing catches this before production. But it's fragile:
- Flaky Puppeteer setups fail intermittently
- Managing screenshot baselines is error-prone
- Device matrix testing (desktop, mobile, tablet) explodes your test suite
- Maintaining headless Chrome in CI/CD is a nightmare
There's a better way: Capture screenshots via API, compare pixel-by-pixel with pixelmatch. No browser infrastructure. No flakiness. Automatic detection of CSS regressions.
The problem: visual bugs slip through
Manual QA catches visual bugs. But manual QA:
- Doesn't scale (can't test every page variant)
- Misses edge cases (dark mode, different browsers, mobile)
- Is inconsistent (human eyes miss small shifts)
- Slows down deployment (visual QA is a bottleneck)
Puppeteer-based visual testing solves some of this, but introduces new problems:
- Infrastructure overhead — Managing browser pools, handling timeouts, scaling under CI/CD load
- Flakiness — Timing issues, font rendering differences, network delays cause false positives
- Device matrix complexity — Testing desktop plus 5 mobile variants means 6 separate browser configurations to provision and keep in sync, multiplying your suite's size and runtime
- Baseline maintenance — Updating 50 baseline screenshots when the design changes is tedious and error-prone
Teams end up choosing: manual visual QA (slow, doesn't scale) or no visual QA (regressions slip through).
Solution: API-captured screenshots + client-side comparison
The architecture is simple:
- PageBolt captures screenshots — no Puppeteer, no Chrome, consistent rendering across devices
- pixelmatch compares the baseline vs. current — pure Node.js, runs anywhere, zero infrastructure
You own the comparison logic. PageBolt owns the browser.
Step 1: Capture a baseline (run once on main)
curl -X POST https://pagebolt.dev/api/v1/screenshot \
-H "x-api-key: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://mysite.com/checkout",
"viewportDevice": "iphone_14_pro",
"fullPage": true,
"blockBanners": true,
"blockAds": true
}' \
--output baselines/checkout-iphone.png
Repeat for each device you care about, saving one PNG per device. Keep the filenames consistent: the PR capture in the next step must use the same name so the comparison script can pair each baseline with its current capture. Store the PNGs in your repo (or S3). These are your baselines.
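If you'd rather keep captures in Node than shell out to curl, the same request can be sketched as a small helper. The endpoint, header, and body fields are copied from the curl example above; the fetch usage at the bottom is illustrative, so verify the details against your own dashboard:

```javascript
// buildScreenshotRequest — assembles the same request the curl command sends.
// Field names are taken from the example above; adjust if your plan differs.
function buildScreenshotRequest(url, device, apiKey) {
  return {
    endpoint: 'https://pagebolt.dev/api/v1/screenshot',
    options: {
      method: 'POST',
      headers: { 'x-api-key': apiKey, 'Content-Type': 'application/json' },
      body: JSON.stringify({
        url,
        viewportDevice: device,
        fullPage: true,
        blockBanners: true,
        blockAds: true,
      }),
    },
  };
}

// Usage with Node 18+ built-in fetch (run inside an async function):
//   const { endpoint, options } = buildScreenshotRequest(
//     'https://mysite.com/checkout', 'iphone_14_pro', process.env.PAGEBOLT_API_KEY);
//   const res = await fetch(endpoint, options);
//   fs.writeFileSync('baselines/checkout-iphone.png', Buffer.from(await res.arrayBuffer()));
```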
Step 2: Capture the PR version
In CI, after deploying to staging:
curl -X POST https://pagebolt.dev/api/v1/screenshot \
-H "x-api-key: $PAGEBOLT_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://pr-staging.mysite.com/checkout",
"viewportDevice": "iphone_14_pro",
"fullPage": true,
"blockBanners": true,
"blockAds": true
}' \
--output current/checkout-iphone.png
Step 3: Compare with pixelmatch
// compare.js — compares every baseline in baselines/ against the file with
// the same name in current/, writing a visual diff image for each pair.
// Note: newer pixelmatch releases ship as an ES module; for require(),
// install pixelmatch@5 or switch this file to import syntax.
const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

fs.mkdirSync('diff', { recursive: true });
let failed = false;

for (const file of fs.readdirSync('baselines')) {
  const baseline = PNG.sync.read(fs.readFileSync(`baselines/${file}`));
  const current = PNG.sync.read(fs.readFileSync(`current/${file}`));
  const { width, height } = baseline;
  if (current.width !== width || current.height !== height) {
    // pixelmatch requires equal dimensions — a size change is itself a regression
    console.error(`${file}: dimensions changed (${width}x${height} vs ${current.width}x${current.height})`);
    failed = true;
    continue;
  }
  const diff = new PNG({ width, height });
  const numDiffPixels = pixelmatch(
    baseline.data, current.data, diff.data,
    width, height,
    { threshold: 0.1 } // per-pixel color tolerance, 0–1
  );
  fs.writeFileSync(`diff/${file.replace('.png', '-diff.png')}`, PNG.sync.write(diff));
  const diffPct = (numDiffPixels / (width * height)) * 100;
  console.log(`${file}: ${diffPct.toFixed(2)}% changed (${numDiffPixels} pixels)`);
  if (diffPct > 0.5) failed = true; // fail CI above 0.5% changed pixels
}

if (failed) {
  console.error('Visual regression detected — failing CI');
  process.exit(1);
}
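To build intuition for what the comparison does, here is a toy version of the core loop: walk two equal-sized RGBA buffers and count pixels whose channel difference exceeds a tolerance. Real pixelmatch also weights color differences perceptually and detects anti-aliasing, so prefer it in CI; this sketch only illustrates the idea:

```javascript
// A toy pixel diff: counts pixels whose summed RGB channel difference
// exceeds a tolerance. Simplified for illustration only.
function countDiffPixels(a, b, width, height, tolerance = 25) {
  if (a.length !== b.length) throw new Error('images must have equal dimensions');
  let diffs = 0;
  for (let i = 0; i < width * height; i++) {
    const o = i * 4; // RGBA: 4 bytes per pixel
    const delta =
      Math.abs(a[o] - b[o]) +         // red
      Math.abs(a[o + 1] - b[o + 1]) + // green
      Math.abs(a[o + 2] - b[o + 2]);  // blue
    if (delta > tolerance) diffs++;
  }
  return diffs;
}

// Two 2x2 "images": identical except one pixel flipped from black to red.
const base = Buffer.alloc(2 * 2 * 4);  // all-zero RGBA buffer
const changed = Buffer.from(base);     // copy
changed[0] = 255;                      // first pixel: R channel to 255
console.log(countDiffPixels(base, changed, 2, 2)); // → 1
```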
Full GitHub Actions workflow
name: Visual Regression Tests
on: [pull_request]
jobs:
visual:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- run: npm install pngjs pixelmatch
- name: Deploy PR to staging
run: npm run build && npm run deploy-staging
env:
STAGING_URL: https://pr-${{ github.event.number }}.staging.mysite.com
- name: Capture current screenshots
run: |
mkdir -p current diff
for device in iphone_14_pro macbook_pro_14; do
curl -sX POST https://pagebolt.dev/api/v1/screenshot \
-H "x-api-key: ${{ secrets.PAGEBOLT_API_KEY }}" \
-H "Content-Type: application/json" \
-d "{
\"url\": \"https://pr-${{ github.event.number }}.staging.mysite.com/checkout\",
\"viewportDevice\": \"$device\",
\"fullPage\": true,
\"blockBanners\": true
}" \
--output "current/checkout-${device}.png"
done
- name: Compare against baselines
run: node compare.js
- name: Upload diff images
if: failure()
uses: actions/upload-artifact@v4
with:
name: visual-regression-diffs
path: diff/
- name: Comment diff on PR
if: failure()
uses: actions/github-script@v7
with:
script: |
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: '❌ Visual regression detected. Download the diff artifact from this workflow run.'
})
This workflow:
- Deploys the PR branch to staging
- Captures screenshots via PageBolt on 2 devices — ~10 seconds total
- Compares pixel-by-pixel with pixelmatch
- Uploads diff images as artifacts if regression detected
- Comments on the PR with a failure notice
Total additional CI time: ~15 seconds. No Puppeteer. No flakiness from browser-version drift on your runners.
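The baseline-maintenance tedium mentioned earlier can itself be automated with a second workflow that re-captures and commits baselines on every push to main. This is a sketch assuming the same endpoint, secret, and device naming as the PR workflow; the filename, target URL, and commit identity are placeholders:

```yaml
# .github/workflows/update-baselines.yml (hypothetical filename)
name: Update Visual Baselines
on:
  push:
    branches: [main]
jobs:
  baselines:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Re-capture baseline screenshots
        run: |
          mkdir -p baselines
          for device in iphone_14_pro macbook_pro_14; do
            curl -sX POST https://pagebolt.dev/api/v1/screenshot \
              -H "x-api-key: ${{ secrets.PAGEBOLT_API_KEY }}" \
              -H "Content-Type: application/json" \
              -d "{\"url\": \"https://mysite.com/checkout\", \"viewportDevice\": \"$device\", \"fullPage\": true, \"blockBanners\": true}" \
              --output "baselines/checkout-${device}.png"
          done
      - name: Commit refreshed baselines
        run: |
          git config user.name "visual-baseline-bot"
          git config user.email "bot@users.noreply.github.com"
          git add baselines
          # Only commit when the pixels actually changed
          git diff --cached --quiet || (git commit -m "chore: refresh visual baselines" && git push)
```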
Device coverage with zero extra work
PageBolt includes 25+ device presets. Change one field to test any combination:
"viewportDevice": "iphone_14_pro" // 390x844 @3x
"viewportDevice": "ipad_pro_12_9" // 1024x1366 @2x
"viewportDevice": "macbook_pro_14" // 1512x982 @2x
"viewportDevice": "desktop_1920x1080" // 1920x1080 @1x
"viewportDevice": "samsung_galaxy_s23" // 360x780 @3x
A Puppeteer setup would need separate Chrome instances per device. PageBolt returns any device from the same API call.
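If you want to sanity-check that a capture came back at the expected size before diffing, the preset dimensions above can be kept in a small lookup. The numbers are copied from the list; whether a preset renders at CSS pixels or at the device's scale factor is an assumption here, so verify against one real capture first:

```javascript
// Viewport sizes copied from the preset list above.
const DEVICES = {
  iphone_14_pro:      { width: 390,  height: 844,  scale: 3 },
  ipad_pro_12_9:      { width: 1024, height: 1366, scale: 2 },
  macbook_pro_14:     { width: 1512, height: 982,  scale: 2 },
  desktop_1920x1080:  { width: 1920, height: 1080, scale: 1 },
  samsung_galaxy_s23: { width: 360,  height: 780,  scale: 3 },
};

// Expected PNG dimensions if the API renders at the device's scale factor
// (an assumption to confirm; a fullPage capture will be taller than this).
function expectedPixels(device) {
  const d = DEVICES[device];
  return { width: d.width * d.scale, height: d.height * d.scale };
}

console.log(expectedPixels('iphone_14_pro')); // { width: 1170, height: 2532 }
```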
Cost comparison: Puppeteer vs hosted
| Factor | Self-hosted Puppeteer | PageBolt |
|---|---|---|
| Setup time | 3-4 hours | 15 minutes |
| Monthly maintenance | 8-10 hours | 0 hours |
| Infrastructure cost | $100-300/mo (EC2) | $29/mo (Starter) |
| Device coverage | 1 device (your runner) | 25+ presets |
| CI flakiness | 10-15% | <1% |
| Baseline comparison | Separate tooling needed | Separate tooling needed |
| Screenshots per month | Unlimited (your infra) | 5,000 (Starter) |
Both approaches need a comparison library like pixelmatch. The difference is who runs the browser.
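As a rough worked example of the table's arithmetic (the $75/hour engineering rate is a hypothetical assumption, not a figure from the table):

```javascript
// Back-of-envelope numbers from the comparison table above; the hourly
// rate is a hypothetical assumption for illustration.
const hosted = { monthly: 29, included: 5000 };                          // Starter plan
const selfHosted = { infra: 100, maintenanceHours: 8, hourlyRate: 75 };  // low-end table estimates

const hostedPerShot = hosted.monthly / hosted.included;  // dollars per screenshot
const selfHostedMonthly = selfHosted.infra + selfHosted.maintenanceHours * selfHosted.hourlyRate;

console.log(hostedPerShot.toFixed(4));  // "0.0058"
console.log(selfHostedMonthly);         // 700 (mostly engineer time)
```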
When to use this pattern
Use PageBolt + pixelmatch if:
- ✅ You want multi-device regression coverage (mobile + desktop)
- ✅ Your team doesn't want to maintain headless Chrome in CI
- ✅ You're testing public-facing pages or component library docs
- ✅ You want sub-minute CI feedback on visual changes
Stick with self-hosted Puppeteer if:
- ✅ You need to test pages behind complex auth (sessions, 2FA)
- ✅ You have a dedicated DevOps team already managing browser infrastructure
- ✅ Screenshot volume is very high (>50k/month makes hosted less cost-effective)
Getting started
- Sign up at pagebolt.dev — free tier: 100 requests/month, no credit card
- Get your API key from the dashboard
- Save a baseline screenshot from your main branch:
curl -X POST ... --output baselines/page.png
- Add pixelmatch to your project: npm install pngjs pixelmatch
- Wire up the comparison script and CI workflow above
First visual regression caught before production is worth the 15 minutes.
Try it free — 100 requests/month, no credit card required.