Why We Replaced Our Puppeteer-Based CI Screenshot Checks with an API Call
We ran Puppeteer in CI for visual regression checks. It broke constantly, added 8 minutes to builds, and consumed 1.5GB of RAM. Here's what we replaced it with and what we learned.
We had Puppeteer running in our GitHub Actions CI pipeline. It took screenshots of five pages after every deploy, diffed them against baselines, and failed the build if anything changed unexpectedly.
In theory: great. In practice: it was the most fragile thing in our entire CI pipeline.
What was breaking
Puppeteer version drift. Puppeteer bundles Chromium, but the version it bundles changes between minor releases. We'd upgrade a dependency, Puppeteer would download a new Chromium binary, and suddenly our screenshots looked slightly different — different font rendering, different anti-aliasing — causing false failures on diffs that had nothing to do with our UI.
Memory. A headless Chrome instance uses about 300MB. We were screenshotting five pages, so 1.5GB of RAM just for the screenshot step. Our ubuntu-latest runners have 7GB total. Fine on its own, but not fine when it ran in parallel with our test suite.
Timing. We had await page.waitForSelector('#content') on every page. Works most of the time. Fails silently when the selector appears before data loads, capturing a skeleton state instead of the real page. We added sleep(2000) as a hack. That made our CI 2 seconds slower per page — 10 seconds total — and still occasionally captured the wrong state.
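In hindsight, the sleep(2000) hack could have been a predicate poll that returns as soon as the page is actually ready instead of after a fixed delay. A minimal, generic sketch (not Puppeteer's API; the helper name and the timeout/interval defaults are our own):

```javascript
// Poll a predicate instead of sleeping for a fixed duration.
// Resolves as soon as `predicate` returns truthy; rejects after `timeout` ms.
async function waitFor(predicate, { timeout = 10000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await predicate()) return;
    await new Promise((r) => setTimeout(r, interval));
  }
  throw new Error(`waitFor: condition not met within ${timeout}ms`);
}
```

In a Puppeteer context the predicate would check for real data in the DOM, not just the selector's presence, so a skeleton state never satisfies it.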
The install step. npm install puppeteer downloads a ~170MB Chromium binary. Even with caching, this added 30–45 seconds to cold builds.
Here's the CI step that replaced all of it:
- name: Visual regression check
  env:
    PAGEBOLT_API_KEY: ${{ secrets.PAGEBOLT_API_KEY }}
  run: node scripts/visual-check.js
That's it. No apt-get install for Chrome dependencies. No Puppeteer install. No --no-sandbox flags.
The migration
Before — our Puppeteer setup:
// scripts/screenshot-puppeteer.js (deleted)
import fs from "fs/promises";
import puppeteer from "puppeteer";

const BASE_URL = process.env.STAGING_URL;

const browser = await puppeteer.launch({
  args: ["--no-sandbox", "--disable-setuid-sandbox", "--disable-dev-shm-usage"],
  headless: "new",
});

const PAGES = ["/", "/pricing", "/docs", "/login", "/dashboard"];

for (const path of PAGES) {
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 900 });
  await page.goto(`${BASE_URL}${path}`, { waitUntil: "networkidle2" });
  await page.waitForSelector("#content");
  await new Promise((r) => setTimeout(r, 2000)); // the hack
  const screenshot = await page.screenshot({ fullPage: true });
  await fs.writeFile(`screenshots/${path.replace("/", "") || "home"}.png`, screenshot);
  await page.close();
}

await browser.close();
After — the API replacement:
// scripts/visual-check.js
import fs from "fs/promises";
import { PNG } from "pngjs";
import pixelmatch from "pixelmatch";
const BASE_URL = process.env.STAGING_URL;
const PAGEBOLT_API_KEY = process.env.PAGEBOLT_API_KEY;
if (!BASE_URL || !PAGEBOLT_API_KEY) {
  throw new Error("STAGING_URL and PAGEBOLT_API_KEY must be set");
}
const PAGES = [
  { name: "home", path: "/" },
  { name: "pricing", path: "/pricing" },
  { name: "docs", path: "/docs" },
  { name: "login", path: "/login" },
  { name: "dashboard", path: "/dashboard" },
];
async function screenshot(url) {
  const res = await fetch("https://pagebolt.dev/api/v1/screenshot", {
    method: "POST",
    headers: { "x-api-key": PAGEBOLT_API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ url, fullPage: true, blockBanners: true }),
  });
  if (!res.ok) throw new Error(`${res.status}: ${await res.text()}`);
  return Buffer.from(await res.arrayBuffer());
}
// Screenshot all pages in parallel (no browser pool to manage)
const screenshots = await Promise.all(
  PAGES.map(async (page) => ({
    ...page,
    image: await screenshot(`${BASE_URL}${page.path}`),
  }))
);
// Diff against baselines
await fs.mkdir("baselines", { recursive: true });
await fs.mkdir("diff-output", { recursive: true });

let failed = false;
for (const { name, image } of screenshots) {
  const baselinePath = `baselines/${name}.png`;
  let baseline;
  try {
    baseline = await fs.readFile(baselinePath);
  } catch {
    // No baseline yet: save this capture as the new baseline.
    await fs.writeFile(baselinePath, image);
    console.log(`[${name}] Baseline saved`);
    continue;
  }
  const img1 = PNG.sync.read(baseline);
  const img2 = PNG.sync.read(image);
  // pixelmatch throws if dimensions differ, so treat a size change as a failure.
  if (img1.width !== img2.width || img1.height !== img2.height) {
    console.error(`❌ ${name}: size changed (${img1.width}x${img1.height} -> ${img2.width}x${img2.height})`);
    failed = true;
    continue;
  }
  const diff = new PNG({ width: img1.width, height: img1.height });
  const changed = pixelmatch(img1.data, img2.data, diff.data, img1.width, img1.height, {
    threshold: 0.1,
  });
  const ratio = changed / (img1.width * img1.height);
  if (ratio > 0.02) {
    console.error(`❌ ${name}: ${(ratio * 100).toFixed(2)}% changed`);
    await fs.writeFile(`diff-output/${name}-diff.png`, PNG.sync.write(diff));
    failed = true;
  } else {
    console.log(`✓ ${name}: ${(ratio * 100).toFixed(2)}% (ok)`);
  }
}

if (failed) process.exit(1);
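One thing this script leaves out is transient failures: a single network blip or 5xx from the API would fail the whole CI step. A small retry wrapper is enough to absorb those. A sketch, with an attempt count and backoff we chose ourselves rather than anything the API prescribes:

```javascript
// Retry an async operation with exponential backoff.
// Intended to wrap the screenshot() call so a transient error
// doesn't fail the entire CI step.
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Back off: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Wrapping the call is a one-line change: `image: await withRetry(() => screenshot(url))`.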
The numbers
| | Puppeteer | API |
|---|---|---|
| CI step duration | ~8 min | ~25 sec |
| RAM usage | ~1.5 GB | ~50 MB |
| Install size | ~200 MB | 0 MB |
| False failures/month | 3–5 | 0 |
| `--no-sandbox` hacks | Yes | No |
| Parallel screenshots | Sequential (browser pool) | Native |
The false failure rate was the number that mattered most. Every false failure meant someone had to investigate, confirm it was a Puppeteer artifact, and re-run the build. That's 10–15 minutes of developer time, three to five times a month: roughly 30 to 75 minutes of pure overhead.
What we still use Puppeteer for
Nothing, as of six months ago. The use cases we had for it:
- Visual regression in CI → API
- PDF generation → API
- Screenshot on deploy → API
- Authenticated page capture → API with cookies
If you're running Puppeteer in CI, the question isn't whether to migrate — it's which jobs to migrate first. Start with the ones that break most often.
Try it free
100 requests/month, no credit card required. OG images, screenshots, PDFs, and video — one API.
Get API Key — Free