Error Reporting Batch
Efficiently batch and report errors to reduce network overhead and costs
Sending every error immediately to your monitoring service can overwhelm your network and inflate costs. Batching errors improves performance and reduces monitoring expenses.
Why Batch Errors?
Problems with immediate reporting:
- High network overhead
- Rate limiting from monitoring services
- Increased costs (per-event pricing)
- Poor performance on slow connections
- Can impact user experience
Benefits of batching:
- Fewer network requests
- Lower monitoring costs
- Better performance
- Avoids rate limits
- Groups related errors
Basic Batch Implementation
// lib/error-batch.ts
interface SerializedError {
  name: string;
  message: string;
  stack?: string;
  [key: string]: unknown; // custom properties copied off the original error
}
interface ErrorEvent {
  error: SerializedError; // serialized, since Error instances don't survive JSON.stringify
  context?: Record<string, any>;
  timestamp: number;
  id: string;
}
class ErrorBatcher {
private queue: ErrorEvent[] = [];
private flushInterval: NodeJS.Timeout | null = null;
constructor(
private config: {
maxSize?: number; // Max errors before auto-flush
flushIntervalMs?: number; // Time between flushes
endpoint: string; // Where to send errors
}
) {
this.config.maxSize = config.maxSize || 10;
this.config.flushIntervalMs = config.flushIntervalMs || 5000; // 5s
this.startAutoFlush();
this.setupPageUnloadHandler();
}
add(error: Error, context?: Record<string, any>) {
const event: ErrorEvent = {
error: this.serializeError(error),
context,
timestamp: Date.now(),
id: this.generateId(),
};
this.queue.push(event);
// Auto-flush if queue is full
if (this.queue.length >= this.config.maxSize!) {
this.flush();
}
}
private startAutoFlush() {
this.flushInterval = setInterval(() => {
if (this.queue.length > 0) {
this.flush();
}
}, this.config.flushIntervalMs);
}
private setupPageUnloadHandler() {
  if (typeof window === 'undefined') return; // No-op during SSR
  // Flush when the page is hidden (tab switch, navigation, close)
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') {
this.flush();
}
});
window.addEventListener('pagehide', () => {
this.flush();
});
}
private async flush() {
if (this.queue.length === 0) return;
const batch = [...this.queue];
this.queue = [];
try {
// Use sendBeacon for reliability (it still delivers while the page is closing);
// wrap the payload in a Blob so it goes out as application/json
const blob = new Blob([JSON.stringify({ errors: batch })], {
  type: 'application/json',
});
const success = navigator.sendBeacon(this.config.endpoint, blob);
if (!success) {
// Fallback to fetch with keepalive
await fetch(this.config.endpoint, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ errors: batch }),
keepalive: true,
});
}
console.log(`[Error Batch] Sent ${batch.length} errors`);
} catch (error) {
console.error('[Error Batch] Failed to send batch:', error);
// Re-queue on failure (with limit to prevent infinite growth)
if (this.queue.length < 100) {
this.queue.unshift(...batch);
}
}
}
private serializeError(error: Error): SerializedError {
  return {
    name: error.name,
    message: error.message,
    stack: error.stack,
    // The spread copies any custom enumerable properties (e.g. error.code);
    // Error's own name/message/stack are non-enumerable, so they stay intact
    ...error,
  };
}
private generateId(): string {
  return `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}
destroy() {
if (this.flushInterval) {
clearInterval(this.flushInterval);
}
this.flush();
}
}
export const errorBatcher = new ErrorBatcher({
maxSize: 10,
flushIntervalMs: 5000,
endpoint: '/api/errors/batch',
});
Usage
// lib/error-handler.ts
import { errorBatcher } from './error-batch';
export function reportError(error: Error, context?: any) {
// Add to batch instead of sending immediately
errorBatcher.add(error, context);
}
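To feed the batcher without sprinkling try/catch everywhere, you can also hook the browser's global error events. A minimal sketch (register it once from a client-side entry point):
// Sketch: route uncaught errors and promise rejections into the batch
if (typeof window !== 'undefined') {
  window.addEventListener('error', (event) => {
    // event.error can be undefined (e.g. cross-origin scripts)
    reportError(event.error ?? new Error(event.message));
  });
  window.addEventListener('unhandledrejection', (event) => {
    const reason = event.reason;
    reportError(reason instanceof Error ? reason : new Error(String(reason)));
  });
}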
// Usage throughout app
try {
await fetchData();
} catch (error) {
  reportError(error as Error, { // cast: catch variables are typed unknown
userId: user.id,
page: '/dashboard',
});
}
Advanced Batching Strategies
Priority-Based Batching
// Different priorities flush at different rates
enum ErrorPriority {
LOW = 'LOW', // Analytics, tracking
MEDIUM = 'MEDIUM', // Non-critical errors
HIGH = 'HIGH', // User-facing errors
CRITICAL = 'CRITICAL', // Payment, auth errors
}
class PriorityErrorBatcher {
private queues = {
[ErrorPriority.LOW]: [] as ErrorEvent[],
[ErrorPriority.MEDIUM]: [] as ErrorEvent[],
[ErrorPriority.HIGH]: [] as ErrorEvent[],
[ErrorPriority.CRITICAL]: [] as ErrorEvent[],
};
// Per-priority flush settings (the timer wiring is sketched after this class)
private config = {
[ErrorPriority.LOW]: { maxSize: 50, intervalMs: 30000 }, // 30s
[ErrorPriority.MEDIUM]: { maxSize: 20, intervalMs: 10000 }, // 10s
[ErrorPriority.HIGH]: { maxSize: 10, intervalMs: 5000 }, // 5s
[ErrorPriority.CRITICAL]: { maxSize: 1, intervalMs: 0 }, // Immediate
};
add(error: Error, priority: ErrorPriority, context?: any) {
const event = this.createEvent(error, context);
this.queues[priority].push(event);
const config = this.config[priority];
// Critical errors send immediately
if (priority === ErrorPriority.CRITICAL) {
this.flushQueue(priority);
return;
}
// Auto-flush if queue is full
if (this.queues[priority].length >= config.maxSize) {
this.flushQueue(priority);
}
}
private createEvent(error: Error, context?: any): ErrorEvent {
  return {
    error: { name: error.name, message: error.message, stack: error.stack },
    context,
    timestamp: Date.now(),
    id: `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`,
  };
}
private async flushQueue(priority: ErrorPriority) {
const queue = this.queues[priority];
if (queue.length === 0) return;
const batch = [...queue];
this.queues[priority] = [];
await this.sendBatch(batch, priority);
}
private async sendBatch(batch: ErrorEvent[], priority: ErrorPriority) {
try {
await fetch('/api/errors/batch', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
errors: batch,
priority,
}),
keepalive: true,
});
} catch (error) {
console.error(`Failed to send ${priority} errors:`, error);
}
}
}
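Note that intervalMs above is declared but never wired up. A sketch of the missing timer setup, which would be called from the constructor:
// Sketch: one flush timer per non-critical priority (CRITICAL needs none)
private startTimers() {
  const timed = [ErrorPriority.LOW, ErrorPriority.MEDIUM, ErrorPriority.HIGH];
  for (const priority of timed) {
    setInterval(() => this.flushQueue(priority), this.config[priority].intervalMs);
  }
}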
export const priorityBatcher = new PriorityErrorBatcher();
// Usage (the error variables are illustrative)
priorityBatcher.add(validationError, ErrorPriority.LOW);
priorityBatcher.add(networkError, ErrorPriority.MEDIUM);
priorityBatcher.add(renderError, ErrorPriority.HIGH);
priorityBatcher.add(paymentError, ErrorPriority.CRITICAL); // Sent immediately
Deduplication
// Avoid sending duplicate errors
class DeduplicatingBatcher {
private queue: ErrorEvent[] = [];
private seen = new Set<string>();
add(error: Error, context?: any) {
const fingerprint = this.getErrorFingerprint(error);
// Skip if we've seen this error recently
if (this.seen.has(fingerprint)) {
console.log('[Error Batch] Duplicate error skipped:', fingerprint);
return;
}
this.seen.add(fingerprint);
// createEvent as defined in PriorityErrorBatcher above
this.queue.push(this.createEvent(error, context));
// Clear seen errors after 1 minute
setTimeout(() => {
this.seen.delete(fingerprint);
}, 60000);
// checkFlush: flush once the queue hits its max size, as in ErrorBatcher
this.checkFlush();
}
private getErrorFingerprint(error: Error): string {
// Create unique identifier for error type + location
const stack = error.stack?.split('\n')[1] || '';
return `${error.name}:${error.message}:${stack}`;
}
}
Compression
// Compress large batches before sending
async function sendCompressedBatch(errors: ErrorEvent[]) {
const json = JSON.stringify({ errors });
// Check if CompressionStream is available
if ('CompressionStream' in window) {
const blob = new Blob([json]);
const stream = blob.stream().pipeThrough(new CompressionStream('gzip'));
const compressedBlob = await new Response(stream).blob();
await fetch('/api/errors/batch', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Encoding': 'gzip',
},
body: compressedBlob,
keepalive: true,
});
} else {
// Fallback to uncompressed
await fetch('/api/errors/batch', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: json,
keepalive: true,
});
}
}
Server-Side Handler
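One caveat first: if you use the gzip path above, the handler must decompress the request body before parsing it; req.json() won't do that for you. A minimal sketch, assuming DecompressionStream is available in your runtime (readBody is a hypothetical helper, not a Next.js API):
// Hypothetical helper: decompress gzip bodies before JSON parsing
async function readBody(req: NextRequest): Promise<any> {
  if (req.headers.get('content-encoding') === 'gzip' && req.body) {
    const stream = req.body.pipeThrough(new DecompressionStream('gzip'));
    return JSON.parse(await new Response(stream).text());
  }
  return req.json();
}
The handler below would then call readBody(req) in place of req.json().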
// app/api/errors/batch/route.ts
import { NextRequest, NextResponse } from 'next/server';
import * as Sentry from '@sentry/nextjs';
export async function POST(req: NextRequest) {
try {
const { errors, priority } = await req.json();
if (!Array.isArray(errors)) {
return NextResponse.json(
{ error: 'Invalid payload' },
{ status: 400 }
);
}
console.log(`Received batch of ${errors.length} errors (${priority || 'MEDIUM'})`);
// Process each error
for (const errorEvent of errors) {
// Reconstruct error object
const error = new Error(errorEvent.error.message);
error.name = errorEvent.error.name;
error.stack = errorEvent.error.stack;
// Add context
Sentry.withScope(scope => {
scope.setContext('error_batch', {
batchId: errorEvent.id,
timestamp: new Date(errorEvent.timestamp),
...errorEvent.context,
});
// Set priority as tag
if (priority) {
scope.setTag('priority', priority);
}
Sentry.captureException(error);
});
}
// Store in database for analytics
await storeErrorBatch(errors);
return NextResponse.json({
success: true,
processed: errors.length,
});
} catch (error) {
console.error('Error processing batch:', error);
return NextResponse.json(
{ error: 'Failed to process batch' },
{ status: 500 }
);
}
}
async function storeErrorBatch(errors: any[]) {
// Store in your database
// await prisma.errorLog.createMany({ data: errors });
}
Monitoring Dashboard
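The dev-only monitor below reads stats via a getStats() method that the basic ErrorBatcher above doesn't define. A minimal sketch of what adding it might look like (the counter fields and where they're incremented are assumptions):
// Sketch: counters added inside ErrorBatcher (names are illustrative)
private sentCount = 0;                 // increment after each successful flush
private failedCount = 0;               // increment in the flush catch branch
private lastFlush: Date | null = null; // set whenever a flush completes

getStats() {
  return {
    queueSize: this.queue.length,
    sentCount: this.sentCount,
    failedCount: this.failedCount,
    lastFlush: this.lastFlush,
  };
}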
// components/ErrorBatchMonitor.tsx
'use client';
import { useState, useEffect } from 'react';
// Adjust the import path to wherever error-batch.ts lives in your project
import { errorBatcher } from '@/lib/error-batch';
interface BatchStats {
queueSize: number;
sentCount: number;
failedCount: number;
lastFlush: Date | null;
}
export function ErrorBatchMonitor() {
const [stats, setStats] = useState<BatchStats>({
queueSize: 0,
sentCount: 0,
failedCount: 0,
lastFlush: null,
});
useEffect(() => {
// Get stats from error batcher
const interval = setInterval(() => {
// Access stats from your error batcher instance
const currentStats = errorBatcher.getStats();
setStats(currentStats);
}, 1000);
return () => clearInterval(interval);
}, []);
if (process.env.NODE_ENV !== 'development') {
return null;
}
return (
<div className="error-batch-monitor">
<h3>Error Batch Stats</h3>
<div className="stats-grid">
<div className="stat">
<label>Queue Size</label>
<span className={stats.queueSize > 5 ? 'warning' : ''}>
{stats.queueSize}
</span>
</div>
<div className="stat">
<label>Sent</label>
<span>{stats.sentCount}</span>
</div>
<div className="stat">
<label>Failed</label>
<span className={stats.failedCount > 0 ? 'error' : ''}>
{stats.failedCount}
</span>
</div>
<div className="stat">
<label>Last Flush</label>
<span>
{stats.lastFlush
? new Date(stats.lastFlush).toLocaleTimeString()
: 'Never'}
</span>
</div>
</div>
</div>
);
}
Batching with IndexedDB
// For offline support and persistence
import { openDB, DBSchema } from 'idb';
interface ErrorDB extends DBSchema {
errors: {
key: string;
value: ErrorEvent;
};
}
class PersistentErrorBatcher {
private db = openDB<ErrorDB>('error-queue', 1, {
upgrade(db) {
db.createObjectStore('errors', { keyPath: 'id' });
},
});
async add(error: Error, context?: any) {
const db = await this.db;
// createEvent as defined in PriorityErrorBatcher above
const event = this.createEvent(error, context);
await db.add('errors', event);
// Try to flush
this.flush();
}
async flush() {
const db = await this.db;
const errors = await db.getAll('errors');
if (errors.length === 0) return;
try {
await fetch('/api/errors/batch', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ errors }),
keepalive: true,
});
// Clear sent errors
const tx = db.transaction('errors', 'readwrite');
await Promise.all([
...errors.map(e => tx.store.delete(e.id)),
tx.done,
]);
} catch (error) {
console.error('Failed to send batch, will retry later:', error);
}
}
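// Assumption: also retry when connectivity returns, so batches queued
// offline persist until they can be sent (call this from the constructor)
private setupOnlineListener() {
  window.addEventListener('online', () => this.flush());
}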
}
Rate Limiting
// Prevent overwhelming the monitoring service
class RateLimitedBatcher {
private sent = 0;
private window = 60000; // 1 minute
private maxPerWindow = 100;
private windowStart = Date.now();
async flush() {
const now = Date.now();
// Reset window
if (now - this.windowStart > this.window) {
this.sent = 0;
this.windowStart = now;
}
// Check rate limit
if (this.sent >= this.maxPerWindow) {
console.warn('[Error Batch] Rate limit reached, queuing for later');
return;
}
// getBatch() and sendBatch() as in the batchers above
const batch = this.getBatch();
await this.sendBatch(batch);
this.sent += batch.length;
}
}
Best Practices
- Set Sensible Limits: 10-50 errors or 5-30 seconds
- Flush on Unload: Use sendBeacon or keepalive: true
- Deduplicate: Avoid sending the same error repeatedly
- Prioritize: Critical errors send immediately
- Compress Large Batches: Save bandwidth
- Handle Failures: Re-queue on network errors
- Monitor Queue Size: Alert if the queue grows too large (see the sketch after this list)
- Test Offline: Ensure batches persist
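A minimal sketch of that queue-size check, assuming the getStats() method sketched earlier and run from a client-side entry point (the threshold is illustrative):
// Sketch: warn when the queue backs up
const MAX_QUEUE_WARNING = 50;
setInterval(() => {
  const { queueSize } = errorBatcher.getStats();
  if (queueSize > MAX_QUEUE_WARNING) {
    console.warn(`[Error Batch] Queue backing up: ${queueSize} errors pending`);
  }
}, 10000);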
Common Pitfalls
❌ Batches too large: timeouts or payload size limits
✅ Keep batches < 100 errors
❌ No deduplication: Same error sent 1000x
✅ Track and skip duplicates
❌ Batching critical errors: Delayed alerting
✅ Send critical immediately
❌ Not flushing on unload: Lose errors
✅ Use sendBeacon/keepalive
Configuration Recommendations
| App Type | Batch Size | Interval | Priority Levels |
|---|---|---|---|
| Low Traffic | 10-20 | 10-30s | 2-3 |
| Medium Traffic | 20-50 | 5-10s | 3-4 |
| High Traffic | 50-100 | 3-5s | 3-4 |
| Mobile App | 5-10 | 30-60s | 2-3 |
Cost Savings Example
Without Batching:
- 1000 errors/day
- 1000 network requests
- ~$10/month (monitoring service)
With Batching (10 errors/batch):
- 1000 errors/day
- 100 network requests
- ~$2/month (monitoring service)
Savings: 90% fewer requests, 80% cost reduction
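The arithmetic behind those numbers:
// 1000 errors/day sent in batches of 10
const requests = Math.ceil(1000 / 10); // 100 requests instead of 1000
const reduction = 1 - requests / 1000; // 0.9, i.e. 90% fewer requests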
Batching is essential for production apps with significant traffic—it saves money, reduces overhead, and improves performance.