Edge Functions Cold Start Optimization
Vercel serverless functions experience cold starts when scaling to zero. Enable Fluid Compute, lazy-load dependencies, and use Edge Functions to reduce initialization latency.
The Edge Case: Cold Start Latency Undermining Performance
Deploy a serverless function to Vercel, invoke it once, and you get a 50ms response. Wait 15 minutes and invoke it again—you might wait 800ms. That's a cold start. The execution environment spun down, and Vercel had to initialize a new runtime, load your code, and bootstrap dependencies before handling the request. For Edge Functions, cold starts are less severe (50-150ms) but still measurable. For Node.js serverless functions, cold starts can reach 1-2 seconds.
The problem isn't just the first slow request—it's unpredictable latency. Users hitting a low-traffic endpoint at 2 AM experience 2-second responses. Users hitting the same endpoint at 2 PM experience 50ms responses. Your p95 latency looks terrible even though your hot path is blazing fast. Vercel tries to keep functions warm with minimum instances on paid plans, but traffic spikes or infrequently used endpoints still trigger cold starts.
What Causes Cold Starts
Cold starts happen when Vercel scales your function instances down to zero. When a new request arrives, Vercel must:
- Provision a new container or V8 isolate
- Bootstrap the runtime (Node.js, Python, or Edge)
- Load your function code and dependencies
- Execute initialization code (top-level imports, database connections)
- Finally, handle the request
Each step adds latency. For Node.js functions, the runtime boot alone is 200-300ms. Dependencies matter—if you import heavy libraries like sharp or prisma at the top level, cold starts increase. For Edge Functions, V8 isolates boot faster (50-100ms) but have stricter limits on initialization work.
Vercel's Edge Functions run on V8 isolates that boot in milliseconds, eliminating container cold start penalty. But Edge Functions have resource limits (128MB memory, 50ms CPU time) and can't use Node.js APIs. If you're hitting cold starts on Node.js serverless functions, migrating to Edge Functions reduces latency—but only if your code fits Edge constraints.
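One way to observe these phases directly is to instrument a function: module scope only executes during initialization, so a module-level timestamp and counter distinguish cold invocations from warm ones. A minimal sketch (the `x-cold-start` and `x-instance-age-ms` header names are made up for illustration):

```typescript
// Module scope runs once per instance, during the cold start.
const bootedAt = Date.now();
let invocations = 0;

// Hypothetical probe handler: reports whether this invocation paid
// the initialization cost (first call on a fresh instance) or not.
export default async function handler(req: any, res: any) {
  invocations += 1;
  const coldStart = invocations === 1;
  res.setHeader('x-cold-start', String(coldStart)); // made-up header name
  res.setHeader('x-instance-age-ms', String(Date.now() - bootedAt));
  res.status(200).json({ coldStart });
}
```

Curling the endpoint repeatedly and watching the headers shows warm reuse; a jump back to `x-cold-start: true` means the instance was recycled.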
Fluid Compute: Vercel's Cold Start Optimization
Vercel introduced Fluid Compute in early 2025 as a new execution model that dramatically reduces cold starts. Fluid keeps function instances warm longer and runs multiple invocations on the same instance, amortizing initialization costs across requests. Instead of scaling to zero immediately, Fluid maintains idle instances and reuses them for subsequent requests.
Fluid Compute also uses bytecode caching for Node.js functions. After the first invocation, Vercel compiles your JavaScript to bytecode and caches it. Subsequent cold starts load the cached bytecode instead of recompiling, cutting initialization time by 30-40%. For a function with a 1-second cold start, Fluid drops it to 600-700ms.
Fluid is enabled by default on new Vercel projects. For existing projects, enable it in vercel.json:
// vercel.json
{
  "functions": {
    "api/**/*.ts": {
      "runtime": "nodejs20.x",
      "memory": 1024,
      "maxDuration": 30
    }
  },
  "build": {
    "env": {
      "VERCEL_ENABLE_FLUID_COMPUTE": "1"
    }
  }
}
Fluid Compute changes the pricing model from invocation count to CPU time consumed. For I/O-bound functions (database queries, API calls), Fluid is cheaper. For CPU-bound functions (image processing, heavy computation), Fluid can be more expensive—you're paying for actual CPU usage instead of per-invocation minimums.
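To make the pricing difference concrete, here's a back-of-the-envelope comparison with made-up rates (illustrative only, not Vercel's actual prices). Under CPU-time billing, an I/O-bound function that spends most of its wall time waiting costs a fraction of a CPU-bound one with the same wall time:

```typescript
// Hypothetical rates for illustration only - NOT Vercel's actual prices.
const PRICE_PER_MILLION_INVOCATIONS = 0.6; // invocation-based billing
const PRICE_PER_CPU_HOUR = 0.18;           // CPU-time-based billing

function perInvocationCost(invocations: number, pricePerMillion: number): number {
  return (invocations / 1_000_000) * pricePerMillion;
}

function cpuTimeCost(invocations: number, cpuMsPerInvocation: number, pricePerCpuHour: number): number {
  const cpuHours = (invocations * cpuMsPerInvocation) / 3_600_000;
  return cpuHours * pricePerCpuHour;
}

// I/O-bound: 500ms wall time but only ~5ms of CPU work per request.
const ioBound = cpuTimeCost(1_000_000, 5, PRICE_PER_CPU_HOUR);
// CPU-bound: the full 500ms is CPU work.
const cpuBound = cpuTimeCost(1_000_000, 500, PRICE_PER_CPU_HOUR);
// Flat per-invocation billing charges both workloads identically.
const flat = perInvocationCost(1_000_000, PRICE_PER_MILLION_INVOCATIONS);
```

With these assumed rates, the I/O-bound workload comes out far cheaper under CPU-time billing than under flat billing, while the CPU-bound workload costs 100x more than its I/O-bound twin.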
Code Optimization: Reducing Cold Start Time
Regardless of Fluid Compute, your code structure affects cold start latency. Three optimizations matter most:
Lazy-Load Dependencies
Don't import heavy libraries at the top level. Import them inside your function handler or use dynamic imports. This defers dependency loading until the first request that needs it, and subsequent requests on a warm instance reuse the already-loaded module.
// BAD: Imports at top level - all loaded during cold start
import sharp from 'sharp';
import prisma from './lib/prisma';

export default async function handler(req, res) {
  // Function body
}

// GOOD: Lazy load dependencies
export default async function handler(req, res) {
  const sharp = (await import('sharp')).default;
  const prisma = (await import('./lib/prisma')).default;
  // Function body
}
For database clients like Prisma, lazy-loading cuts cold starts by 200-300ms because Prisma's query engine doesn't initialize until the first import. For image processing libraries like sharp, deferring the import avoids loading native modules during function initialization.
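One wrinkle with the dynamic-import pattern: with concurrency (as under Fluid, where several invocations can share an instance), it helps to memoize the import as a promise so concurrent callers share a single in-flight load. A generic sketch—the `lazy` helper and the fake loader are illustrative, not a Vercel API:

```typescript
// Memoized lazy loader: loadFn stands in for a dynamic import().
// The promise itself is cached, so concurrent callers share a single
// in-flight load and warm invocations pay nothing.
function lazy<T>(loadFn: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null;
  return () => (cached ??= loadFn());
}

// Hypothetical usage: pretend this closure is `() => import('sharp')`.
let loads = 0;
const getHeavyLib = lazy(async () => {
  loads += 1;
  return { ready: true };
});
```

Calling `getHeavyLib()` any number of times, even concurrently, triggers exactly one load; every caller awaits the same cached promise.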
Avoid Synchronous Initialization
Top-level synchronous code blocks cold starts. If you have a large array, heavy computation, or file system access at the module level, Vercel must execute it before the function can handle requests. Move initialization inside the handler or use lazy initialization.
// BAD: Synchronous initialization at module level
import fs from 'fs';

const CONFIG = JSON.parse(fs.readFileSync('./config.json', 'utf8'));

export default async function handler(req, res) {
  // Function body
}

// GOOD: Lazy initialization
let CONFIG = null;

export default async function handler(req, res) {
  if (!CONFIG) {
    CONFIG = JSON.parse(await fs.promises.readFile('./config.json', 'utf8'));
  }
  // Function body
}
This pattern avoids file system access during cold starts and defers the cost to the first request only.
Use Edge Functions for Stateless, Fast Requests
If your function is stateless, completes in under 50ms, and doesn't need Node.js APIs, use Edge Functions. Edge Functions run on V8 isolates that boot in milliseconds and don't have container cold start penalties. They're ideal for authentication redirects, header manipulation, and simple API responses.
// middleware.ts - Edge runtime, cold start: 50-100ms
import { NextResponse } from 'next/server';

export const config = {
  runtime: 'edge',
};

export default function middleware(req) {
  const auth = req.headers.get('authorization');
  if (!auth) {
    // NextResponse.redirect requires an absolute URL
    return NextResponse.redirect(new URL('/login', req.url));
  }
  return NextResponse.next();
}
Edge Functions have strict limits: 128MB memory, 50ms CPU time, no Node.js APIs. If you need database access, heavy computation, or longer execution times, use Node.js serverless functions with Fluid Compute.
Function Warming: Keeping Instances Hot
Vercel automatically keeps a minimum of one function instance warm on paid plans for production environments. But this applies per function, not globally across your API: if you have 100 API routes, each compiled to its own function, covering all of them would mean 100 warm instances. In practice, if your traffic is uneven, infrequently invoked functions still experience cold starts.
You can warm specific functions by invoking them on a schedule using Vercel Cron Jobs or an external service like Pingdom. Schedule a warmup request every 5-10 minutes to keep the instance hot:
// vercel.json - Cron job warming
{
  "crons": [
    {
      "path": "/api/warmup",
      "schedule": "*/5 * * * *"
    }
  ]
}
// api/warmup.ts
export default async function handler(req, res) {
  // Server-side fetch needs absolute URLs; Vercel sets VERCEL_URL at runtime
  const base = `https://${process.env.VERCEL_URL}`;
  // Invoke critical functions to keep them warm
  await Promise.all([
    fetch(`${base}/api/critical-endpoint-1`),
    fetch(`${base}/api/critical-endpoint-2`),
  ]);
  res.status(200).json({ status: 'warmed' });
}
This approach adds cost—you're paying for warmup invocations that don't serve user traffic. But for latency-sensitive endpoints (checkout, login, API gateway), the cost is worth it.
Database Connection Reuse
Establishing a database connection during cold start adds 100-300ms of latency. Reuse connections across invocations to avoid this cost. For serverless functions, use connection pooling with a library like pg's pool or Prisma's data proxy.
// BAD: New connection per invocation
import { Pool } from 'pg';

export default async function handler(req, res) {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL });
  const result = await pool.query('SELECT * FROM users');
  res.json(result.rows);
}

// GOOD: Reuse pool across invocations
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 1, // Single connection per instance
});

export default async function handler(req, res) {
  const result = await pool.query('SELECT * FROM users');
  res.json(result.rows);
}
For Prisma, use the Prisma Data Proxy for serverless environments. The proxy maintains connections externally and avoids connection establishment cold starts. Configure it in your schema file:
// schema.prisma
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")
  directUrl = env("DIRECT_URL") // Direct connection for migrations
}
Set DIRECT_URL to your database's direct connection string (for migrations) and DATABASE_URL to the Prisma Data Proxy URL (for runtime queries).
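As a sketch, the two variables might look like this (placeholder values, not real credentials; the `prisma://` scheme is the Data Proxy's connection-string format, while `DIRECT_URL` points straight at Postgres):

```shell
# Hypothetical .env values - substitute your own project's strings
DATABASE_URL="prisma://aws-us-east-1.prisma-data.com/?api_key=YOUR_API_KEY"
DIRECT_URL="postgresql://user:password@db.example.com:5432/mydb"
```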
Streaming Responses: Improving Perceived Performance
Even if cold start latency is unavoidable, streaming responses improve perceived performance. Send response headers and partial body as soon as possible, then stream the rest. Users see content loading incrementally instead of waiting for the full response.
// api/stream.ts - Streaming response (Edge runtime)
export const config = { runtime: 'edge' };

export default async function handler(req) {
  const encoder = new TextEncoder();
  // Stream newline-delimited JSON rows as they become available
  const stream = new ReadableStream({
    async start(controller) {
      const results = await fetchFromDatabase();
      for (const result of results) {
        controller.enqueue(encoder.encode(JSON.stringify(result) + '\n'));
      }
      controller.close();
    },
  });
  // Headers go out immediately; the body streams in afterwards
  return new Response(stream, {
    headers: { 'Content-Type': 'application/x-ndjson' },
  });
}
This pattern works well with Edge Functions, which have faster cold starts. For database queries, stream rows as they're returned instead of waiting for the full result set. For API calls, stream the response body as chunks arrive.
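On the consuming side, the stream can be parsed incrementally as well. A sketch of a newline-delimited JSON reader (`readNdjson` is a hypothetical helper, assuming the endpoint emits one JSON object per line):

```typescript
// Hypothetical client-side helper: parses a newline-delimited JSON stream
// incrementally, yielding each object as soon as its line is complete.
async function* readNdjson(stream: ReadableStream<Uint8Array>) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // keep any partial trailing line
    for (const line of lines) {
      if (line) yield JSON.parse(line);
    }
  }
  if (buffer) yield JSON.parse(buffer); // flush a final unterminated line
}
```

Because the buffer holds incomplete trailing lines between chunks, objects split across network chunk boundaries still parse correctly.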
The Bottom Line
Cold starts are the hidden latency tax of serverless computing. Vercel's Fluid Compute reduces but doesn't eliminate them. For production applications, combine Fluid Compute with code optimization: lazy-load dependencies, avoid synchronous initialization, and reuse database connections. For latency-sensitive endpoints, use Edge Functions or function warming. Measure cold start impact with A/B testing—if your users see 2-second cold starts once per hour, is that acceptable? If not, pay for warming or migrate to a long-running server for those endpoints.