MCP Server Security — Why Hosted Browser Automation Creates a Safer Audit Trail Than Self-Hosted Puppeteer
Most self-hosted MCP tools ship without basic security controls, and self-hosted Puppeteer means direct browser access risks. Here's why hosted APIs create a safer audit trail.
You're building AI agents that automate browser tasks. You have two options: self-hosted Puppeteer MCP, or a hosted API.
The security implications are not obvious. Let's break them down.
The Self-Hosted Puppeteer Problem
Self-hosted Puppeteer gives your AI agent direct browser access:
// Your agent code
const puppeteer = require('puppeteer');
const browser = await puppeteer.launch(); // launch() returns a Promise
const page = await browser.newPage();
await page.goto('https://app.example.com');
This looks clean. But here's what's actually happening:
Direct access: Your agent has full browser control. Every cookie it touches, every page it visits, every screenshot it takes is handled locally on your infrastructure.
Credential exposure: If your agent runs in an untrusted environment (cloud, shared hardware, compromised container), anyone with access can intercept credentials, session tokens, and data the browser touches.
Infrastructure burden: You manage Chromium, memory, crashes, timeouts, cleanup. At scale, this becomes a DevOps nightmare.
Audit trail: None. When something goes wrong, you have local logs. You don't have centralized, tamper-proof records of what happened.
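To make the audit gap concrete: with self-hosted Puppeteer, any logging is code you write, run, and protect yourself. A minimal sketch (the wrapper name and log-entry shape are illustrative, not part of Puppeteer's API):

```javascript
// Sketch: a hand-rolled audit wrapper around page.goto().
// `page` is a Puppeteer Page; `log` is any array-like sink.
function auditedGoto(page, log) {
  return async (url) => {
    const entry = { ts: new Date().toISOString(), action: 'goto', url, ok: false };
    try {
      const result = await page.goto(url);
      entry.ok = true;
      return result;
    } finally {
      // Local only: an attacker with host access can alter or delete this.
      log.push(entry);
    }
  };
}
```

Even with this wrapper, the log lives on the same machine as the agent, so it offers no tamper resistance: whoever compromises the host controls the evidence.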
The Security Data
Security evaluations of MCP tools across the ecosystem reveal a consistent pattern. The majority ship with:
- No rate limiting
- No request logging
- No access control boundaries
- Direct filesystem exposure
- Credential handling in plaintext
Self-hosted Puppeteer MCP has all of these problems by default.
The Hosted Alternative
A hosted browser automation API inverts the model:
// Your agent code
const response = await fetch('https://pagebolt.dev/api/v1/screenshot', {
  method: 'POST',
  headers: { 'x-api-key': apiKey, 'Content-Type': 'application/json' },
  body: JSON.stringify({ url: 'https://app.example.com' })
});
const screenshot = await response.blob();
Your agent doesn't get a browser. It gets an API call.
What changes:
- Rate limiting is built in. Limits apply per key (10–300 req/min depending on plan), so brute-force attacks fail immediately and runaway agents get throttled.
- Audit trails are automatic. Every call is logged with timestamp, user, action, success/failure. You can query: "What did this API key do on March 2?" Compliance teams can audit it.
- Credentials stay isolated. Your agent passes cookies via headers, but the hosted service never logs or stores them. Session tokens don't leak into your infrastructure.
- Access boundaries are enforced. Your agent can't read local files. It can only call the screenshot API. A compromised agent is limited to taking screenshots, not pivoting into your infrastructure.
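On the client side, a throttled call typically surfaces as an HTTP 429 response. A minimal backoff wrapper might look like this (assuming the service signals throttling with 429; the retry policy here is illustrative, not PageBolt's documented behavior):

```javascript
// Sketch: retry a rate-limited API call with exponential backoff.
// `doFetch` is injected so the wrapper works with any fetch-compatible client.
async function callWithBackoff(doFetch, url, options, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await doFetch(url, options);
    if (res.status !== 429) return res;      // success, or a non-throttle error
    const delayMs = 100 * 2 ** attempt;      // simple exponential backoff
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`still rate-limited after ${maxAttempts} attempts`);
}
```

Injecting `doFetch` keeps the wrapper testable and lets you swap in a client that adds the API key header automatically.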
Real Comparison: Puppeteer vs Hosted
Scenario: Your AI agent processes customer data and takes screenshots for compliance.
Self-hosted Puppeteer:
- Agent runs with direct browser access
- Agent navigates to /customer/123/dashboard and takes a screenshot
- Agent navigates to /admin/settings ← oops, it can reach this
- Agent extracts API keys from the page
- Attacker now has credentials
- You have only local logs; you discover the breach three days later
Hosted API:
- Agent calls POST /screenshot?url=/customer/123/dashboard
- API enforces the rate limit: OK
- API logs the call with timestamp, agent ID, and URL, then returns the screenshot
- Agent calls POST /screenshot?url=/admin/settings
- API logs the call
- You review the audit trail five minutes later and see the suspicious call immediately
- You revoke the API key; the attacker's access is terminated
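A centralized, structured log also makes that review scriptable. A minimal sketch, assuming a hypothetical log-entry shape ({ ts, keyId, url }) rather than PageBolt's documented format:

```javascript
// Sketch: flag audit-log entries whose URL path falls outside an allow-list.
function flagSuspicious(entries, allowedPathPrefixes) {
  return entries.filter((entry) => {
    const path = new URL(entry.url).pathname;
    return !allowedPathPrefixes.some((prefix) => path.startsWith(prefix));
  });
}
```

Run against the scenario above with `['/customer/']` as the allow-list, this surfaces the /admin/settings call and nothing else, turning "review the audit trail" into a one-line alert rule.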
The Tradeoff Is Real
Self-hosted Puppeteer gives you code visibility and full control. You can audit the Puppeteer source code. You own all the data locally.
Hosted APIs trade some control for:
- Instant attack mitigation (revoke an API key instead of hunting down a compromised agent)
- Rate limiting (brute force protection out of the box)
- Audit trails (compliance, incident response, forensics)
- Zero infrastructure management (no Chromium crashes, no memory leaks)
When Self-Hosted Makes Sense
If your AI agents run in a fully trusted environment (your own machines, your own data center, air-gapped network), self-hosted Puppeteer is fine.
If your agents run anywhere else — cloud, shared infrastructure, customer devices, untrusted containers — a hosted API with audit trails is safer.
The Numbers
- Most self-hosted MCP tools lack security controls (rate limiting, logging, access boundaries)
- Self-hosted Puppeteer: no audit trail by default
- Hosted API: Automatic logging, rate limiting, access boundaries
The security model isn't a nice-to-have. It's fundamental.
Getting Started
PageBolt's hosted model is simple: call an API, get a screenshot (or PDF, or video). All calls are logged, rate-limited, and auditable.
Free tier: 100 requests/month.
If you're evaluating MCP tools for production AI workflows, the security model matters as much as the capability. Hosted with audit trails beats self-hosted without them.
Hosted browser automation with automatic audit trails
Every API call logged, rate-limited, and auditable. No Chromium to manage, no credential exposure. Free tier: 100 requests/month.
Get API Key — Free