Mar 2026 · Performance · 6 min read

Vercel Edge Function Resource Limits

The hidden constraints of Vercel's edge runtime that can break your functions in production — memory, CPU, and execution time gotchas

Vercel
Edge Functions
Performance
Serverless
Debugging

Vercel Edge Functions promise ultra-fast performance at the network edge, but they come with strict resource constraints that can silently break your application. Understanding these limits is crucial for building reliable edge-based functionality.

Understanding Edge Runtime Constraints

Vercel Edge Functions run on the Edge Runtime, a lightweight JavaScript environment optimized for speed. This optimization comes with strict limitations:

Resource           | Limit           | Impact
Memory             | ~128MB total    | Automatic termination on exceed
CPU Time           | ~30ms execution | Function timeout after limit
Bundle Size        | 1MB compressed  | Deploy failure
Execution Duration | 25 seconds max  | Request timeout

The Edge Runtime Difference:

// ❌ Not available in Edge Runtime
import fs from 'fs'              // Node.js APIs
import { Connection } from 'mysql2' // Native modules
import crypto from 'crypto'      // Some Node.js built-ins

// ✅ Available in Edge Runtime  
import { NextResponse } from 'next/server'
crypto.subtle.digest('SHA-256', data)      // Web Crypto API (global)
fetch('https://api.example.com')           // Web APIs

Memory Limit Gotchas

The ~128MB memory limit is shared across all concurrent executions on the same edge location:

Memory Accumulation Pattern:

// ❌ Problematic: Memory accumulates with concurrent requests
import { NextResponse } from 'next/server'

let globalCache = new Map() // Module scope is shared across requests

export default async function handler(req) {
  const data = await fetch('https://api.example.com/large-dataset')
  const result = await data.json() // Could be 50MB+
  
  // Cache persists between requests
  globalCache.set(req.url, result)
  
  return NextResponse.json(result)
}

Memory-Safe Patterns:

// ✅ Good: Process data in chunks with a size guard
import { NextResponse } from 'next/server'

export default async function handler(req) {
  const response = await fetch('https://api.example.com/stream')
  const reader = response.body?.getReader()
  if (!reader) {
    return NextResponse.json({ error: 'No response body' }, { status: 502 })
  }
  
  const chunks = []
  let totalSize = 0
  const MAX_SIZE = 50 * 1024 * 1024 // 50MB guard, well under the memory limit
  
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    
    totalSize += value.length
    if (totalSize > MAX_SIZE) {
      return NextResponse.json(
        { error: 'Response too large for edge function' }, 
        { status: 413 }
      )
    }
    
    chunks.push(value)
  }
  
  // Concatenate Uint8Array chunks manually — Array.prototype.flat()
  // does not flatten typed arrays
  const body = new Uint8Array(totalSize)
  let offset = 0
  for (const chunk of chunks) {
    body.set(chunk, offset)
    offset += chunk.length
  }
  
  return new Response(body)
}
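The concatenation step at the end of that handler can be factored into a small helper. This is a plain-JS sketch — `concatChunks` is an illustrative name, not a Vercel or Web API:

```javascript
// Concatenate an array of Uint8Array chunks into one buffer.
// Array.prototype.flat() does not flatten typed arrays, so copy manually.
function concatChunks(chunks) {
  const totalSize = chunks.reduce((sum, c) => sum + c.length, 0)
  const out = new Uint8Array(totalSize)
  let offset = 0
  for (const chunk of chunks) {
    out.set(chunk, offset)
    offset += chunk.length
  }
  return out
}
```

For example, `concatChunks([new Uint8Array([1, 2]), new Uint8Array([3])])` produces a single three-byte `Uint8Array`.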

CPU Time and Execution Limits

Edge Functions have both CPU time limits (~30ms of active computation) and total execution limits (25 seconds):

CPU Time vs Wall Clock Time:

// ❌ CPU-intensive operations hit limits quickly
import { NextResponse } from 'next/server'

export default async function handler(req) {
  const startTime = Date.now()
  
  // This burns CPU time rapidly
  let result = 0
  for (let i = 0; i < 10000000; i++) {
    result += Math.sqrt(i) * Math.sin(i)
  }
  
  const endTime = Date.now()
  console.log(`Wall time: ${endTime - startTime}ms`) // Might be 100ms
  // But CPU time could be 50ms+ = TIMEOUT
  
  return NextResponse.json({ result })
}

I/O Bound vs CPU Bound:

// ✅ I/O operations don't count toward the CPU limit
import { NextResponse } from 'next/server'

export default async function handler(req) {
  // These await calls use wall time, not CPU time
  const [userData, analytics, preferences] = await Promise.all([
    fetch('https://api.example.com/user').then(r => r.json()),
    fetch('https://api.example.com/analytics').then(r => r.json()),
    fetch('https://api.example.com/preferences').then(r => r.json())
  ])
  
  // Minimal CPU usage for JSON manipulation
  const combined = {
    ...userData,
    stats: analytics,
    settings: preferences
  }
  
  return NextResponse.json(combined)
}

Bundle Size and Import Constraints

The 1MB compressed bundle limit affects what libraries you can import:

Bundle Size Culprits:

// ❌ These will likely exceed bundle limits
import moment from 'moment'                    // ~67KB gzipped
import lodash from 'lodash'                    // ~25KB gzipped
import AWS from 'aws-sdk'                      // ~400KB+ gzipped
import { PrismaClient } from '@prisma/client'  // ~500KB+ gzipped

// Bundle size calculation:
// Your code + dependencies + Next.js runtime > 1MB = Deploy failure

Edge-Optimized Alternatives:

// ✅ Lightweight alternatives
import { format, parseISO } from 'date-fns'    // ~2KB per function
import { get, set } from 'lodash-es'           // Tree-shakeable
import { S3Client } from '@aws-sdk/client-s3'  // Modular AWS SDK

// Web APIs instead of libraries
const date = new Intl.DateTimeFormat('en-US').format(new Date())
const encoder = new TextEncoder()
const hash = await crypto.subtle.digest('SHA-256', encoder.encode('data'))

Detection and Debugging Strategies

Edge Function failures can be silent or produce cryptic errors. Here's how to detect and debug them:

Error Patterns to Watch For:

import { NextResponse } from 'next/server'

export default async function handler(req) {
  try {
    return await processRequest(req)
  } catch (error) {
    // Edge Function specific errors
    if (error.message.includes('memory limit exceeded')) {
      return NextResponse.json(
        { error: 'Request too large for edge processing' },
        { status: 413 }
      )
    }
    
    if (error.message.includes('execution timeout')) {
      return NextResponse.json(
        { error: 'Processing timeout - try serverless function' },
        { status: 504 }
      )
    }
    
    if (error.message.includes('module not found')) {
      return NextResponse.json(
        { error: 'Feature not available on edge runtime' },
        { status: 501 }
      )
    }
    
    throw error // Re-throw unknown errors
  }
}
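The string matching in that handler can be pulled into a lookup table so each route stays small. The error substrings below mirror the handler above and are assumptions about what the runtime reports, not documented messages — verify them against your own logs:

```javascript
// Map suspected Edge Runtime failure messages to HTTP status codes.
// The substrings are illustrative; check them against actual runtime logs.
const EDGE_ERROR_MAP = [
  { match: 'memory limit exceeded', status: 413, hint: 'Request too large for edge processing' },
  { match: 'execution timeout', status: 504, hint: 'Processing timeout - try serverless function' },
  { match: 'module not found', status: 501, hint: 'Feature not available on edge runtime' },
]

function classifyEdgeError(error) {
  const entry = EDGE_ERROR_MAP.find((e) => error.message.includes(e.match))
  return entry ?? null // null => unknown error, caller should re-throw
}
```

A handler can then do `const known = classifyEdgeError(error)` and fall through to `throw error` when `known` is null.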

Performance Monitoring:

import { NextResponse } from 'next/server'

export default async function handler(req) {
  const startTime = performance.now()
  
  const result = await processData()
  
  const endTime = performance.now()
  const duration = endTime - startTime
  
  // Log slow operations
  if (duration > 10000) { // 10 seconds
    console.warn(`Slow edge function: ${duration}ms`)
  }
  
  return NextResponse.json({ 
    result,
    meta: {
      executionTimeMs: Math.round(duration),
      isSlowFunction: duration > 5000
    }
  })
}
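The timing boilerplate above can be wrapped once and reused across handlers. `withTiming` is an illustrative helper, not a Vercel API:

```javascript
// Wrap an async operation, report its wall-clock duration alongside the result,
// and warn when it approaches the 25-second execution ceiling.
async function withTiming(label, fn) {
  const start = performance.now()
  const result = await fn()
  const durationMs = Math.round(performance.now() - start)
  if (durationMs > 10000) {
    console.warn(`Slow edge operation [${label}]: ${durationMs}ms`)
  }
  return { result, durationMs }
}
```

Usage inside a handler: `const { result, durationMs } = await withTiming('fetch-user', () => fetchUser(id))`, then attach `durationMs` to the response metadata as above.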

Workarounds and Alternative Patterns

When Edge Functions can't handle your use case, here are proven alternatives:

1. Hybrid Approach (Edge + Serverless):

// pages/api/edge-proxy.js (Edge Function)
import { NextResponse } from 'next/server'

export default async function handler(req) {
  const { searchParams } = new URL(req.url)
  const size = searchParams.get('size')
  
  // Route based on complexity
  if (size === 'large' || req.method === 'POST') {
    // Redirect to serverless function
    const serverlessUrl = `https://${process.env.VERCEL_URL}/api/serverless`
    return NextResponse.redirect(serverlessUrl)
  }
  
  // Handle simple cases at edge
  return NextResponse.json({ message: 'Fast edge response' })
}

export const config = {
  runtime: 'edge',
}

2. Streaming for Large Responses:

export default async function handler(req) {
  const stream = new ReadableStream({
    async start(controller) {
      try {
        const response = await fetch('https://api.example.com/stream')
        const reader = response.body?.getReader()
        if (!reader) {
          controller.close()
          return
        }
        
        while (true) {
          const { done, value } = await reader.read()
          if (done) break
          
          // Forward chunks without accumulating them in memory
          controller.enqueue(value)
        }
        // close() only on success — it cannot follow error()
        controller.close()
      } catch (error) {
        controller.error(error)
      }
    }
  })
  
  return new Response(stream, {
    headers: {
      // Transfer-Encoding is managed by the runtime for streamed responses
      'Content-Type': 'application/json'
    }
  })
}

Migration Strategy: When to Use Edge vs Serverless

Choosing the right runtime for each function:

Use Case           | Edge Function | Serverless | Why
Auth middleware    | ✅            | —          | Low latency critical
Database queries   | —             | ✅         | Memory/connection limits
File uploads       | —             | ✅         | Size limits
Image processing   | —             | ✅         | CPU/memory intensive
API proxying       | ✅            | —          | Simple pass-through
Heavy computation  | —             | ✅         | CPU time limits
Real-time features | ✅            | —          | Global distribution

The key is using Edge Functions for lightweight, latency-sensitive operations and Serverless Functions for resource-intensive processing.
