Connection Pooling
Reuse connections efficiently to reduce latency and improve performance
Connection pooling reuses existing connections instead of creating new ones for each request. This eliminates the overhead of DNS lookup, TCP handshake, and TLS negotiation.
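The acquire/release cycle at the heart of every pool can be sketched in a few lines. This is an illustrative toy, not a real pool: `SimplePool` and its `factory` parameter are hypothetical names, and production pools add health checks, wait queues, timeouts, and idle eviction.

```typescript
// Minimal illustrative pool: reuse an idle connection when one exists,
// create a new one only while under capacity.
class SimplePool<T> {
  private idle: T[] = [];
  private created = 0;

  constructor(private factory: () => T, private max: number) {}

  acquire(): T {
    const conn = this.idle.pop();
    if (conn !== undefined) return conn; // reuse: no setup cost
    if (this.created >= this.max) throw new Error('pool exhausted');
    this.created += 1;
    return this.factory(); // pay the setup cost once per connection
  }

  release(conn: T): void {
    this.idle.push(conn); // return to the pool instead of closing
  }
}
```

Every pool in this article (HTTP agents, pg, MongoDB) is a hardened version of this same idea.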
Connection Overhead
Creating a new connection is expensive:
DNS Lookup: 20-200ms
TCP Handshake: 30-100ms (3-way handshake)
TLS Negotiation: 50-200ms (SSL/TLS handshake)
Total: 100-500ms
Reusing connection: ~0ms (instant)
HTTP/1.1 Keep-Alive
HTTP/1.1 introduced persistent connections (keep-alive).
Without Keep-Alive (HTTP/1.0)
Request 1:
1. DNS lookup (100ms)
2. TCP handshake (50ms)
3. TLS handshake (100ms)
4. Send request + receive response
5. Close connection
Request 2:
1. DNS lookup (100ms) ← Repeat everything!
2. TCP handshake (50ms)
3. TLS handshake (100ms)
4. Send request + receive response
5. Close connection
Total overhead: ~250ms of setup per request
With Keep-Alive (HTTP/1.1)
Request 1:
1. DNS lookup (100ms)
2. TCP handshake (50ms)
3. TLS handshake (100ms)
4. Send request + receive response
5. Keep connection open
Request 2:
1. Reuse existing connection (0ms)
2. Send request + receive response
Request 3:
1. Reuse existing connection (0ms)
2. Send request + receive response
Savings: 250ms+ per subsequent request
Server Configuration
# Nginx
http {
# Enable keep-alive
keepalive_timeout 65s;
keepalive_requests 100; # Max requests per connection
# Upstream connection pooling
upstream api {
server api1.example.com;
server api2.example.com;
keepalive 32; # Idle upstream connections cached per worker process
}
server {
location /api {
proxy_pass http://api;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
}
# Apache (httpd.conf — the KeepAlive directives below are server-level, not .htaccess)
<IfModule mod_headers.c>
# Enable keep-alive
Header set Connection keep-alive
</IfModule>
# Timeout
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
HTTP/2 Connection Pooling
HTTP/2 uses a single connection per domain with multiplexing.
HTTP/1.1: 6-8 connections per domain
HTTP/2: 1 connection per domain
Benefits:
- No connection overhead after first request
- All requests share one connection
- Multiplexing: parallel requests
- No HTTP-level head-of-line blocking (TCP-level blocking can still occur)
Server Configuration
# Nginx with HTTP/2
server {
listen 443 ssl http2;
# Connection reuse
http2_max_concurrent_streams 128;
http2_recv_timeout 30s;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
}
Client-Side Connection Pooling
Fetch API (Browser)
// Browser automatically pools connections
// No configuration needed - just use fetch()
async function fetchData() {
const response = await fetch('https://api.example.com/data');
return response.json();
}
// Multiple requests reuse connection
await Promise.all([
fetch('https://api.example.com/users'),
fetch('https://api.example.com/posts'),
fetch('https://api.example.com/comments'),
]);
// Over HTTP/2, all three typically share one pooled connection
Axios (Node.js)
import axios from 'axios';
import https from 'https';
// Create custom agent with keep-alive
const httpsAgent = new https.Agent({
keepAlive: true,
keepAliveMsecs: 1000,
maxSockets: 50, // Max connections per host
maxFreeSockets: 10, // Max idle connections
timeout: 60000,
scheduling: 'lifo', // Last In First Out (better for hot connections)
});
// Use agent in axios
const api = axios.create({
baseURL: 'https://api.example.com',
httpsAgent,
});
// All requests reuse connections from pool
await Promise.all([
api.get('/users'),
api.get('/posts'),
api.get('/comments'),
]);
Node.js HTTP Agent
import https from 'https';
// Custom agent with connection pooling
const agent = new https.Agent({
keepAlive: true,
keepAliveMsecs: 1000, // Keep connection alive for 1s
maxSockets: 50, // Max connections per host
maxFreeSockets: 10, // Max idle connections to keep
maxTotalSockets: 100, // Max total connections
timeout: 60000, // Connection timeout
scheduling: 'lifo', // Use most recently used connection
});
// Make requests with agent
https.get('https://api.example.com/data', { agent }, (res) => {
// Response handling
});
React Query with Connection Pooling
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import axios from 'axios';
import https from 'https';
// Create agent (only applies when requests run in Node.js, e.g. during SSR;
// browsers manage connection pooling themselves)
const httpsAgent = new https.Agent({
keepAlive: true,
maxSockets: 50,
maxFreeSockets: 10,
});
// Create axios instance
const api = axios.create({
baseURL: 'https://api.example.com',
httpsAgent,
});
// Use in React Query
const queryClient = new QueryClient({
defaultOptions: {
queries: {
queryFn: async ({ queryKey }) => {
const response = await api.get(queryKey[0] as string);
return response.data;
},
},
},
});
export default function App() {
return (
<QueryClientProvider client={queryClient}>
<YourApp />
</QueryClientProvider>
);
}
Database Connection Pooling
PostgreSQL (pg-pool)
import { Pool } from 'pg';
// Create connection pool
const pool = new Pool({
user: 'postgres',
host: 'localhost',
database: 'mydb',
password: 'password',
port: 5432,
// Pool configuration
max: 20, // Max clients in pool
min: 5, // Min clients to keep
idleTimeoutMillis: 30000, // Close idle clients after 30s
connectionTimeoutMillis: 2000, // Fail if can't connect in 2s
});
// Use pool
async function getUsers() {
const client = await pool.connect(); // Get client from pool
try {
const result = await client.query('SELECT * FROM users');
return result.rows;
} finally {
client.release(); // Return to pool (don't close!)
}
}
// Or use pool.query (automatically manages connection)
async function getUser(id: number) {
const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
return result.rows[0];
}
Prisma (Built-in Pooling)
import { PrismaClient } from '@prisma/client';
// Prisma has built-in connection pooling
const prisma = new PrismaClient({
datasources: {
db: {
url: 'postgresql://user:pass@localhost:5432/db?connection_limit=10',
},
},
});
// Use normally - pooling handled automatically
async function getUsers() {
return await prisma.user.findMany();
}
MongoDB
import { MongoClient } from 'mongodb';
const uri = 'mongodb://localhost:27017';
// Connection pool configuration
const client = new MongoClient(uri, {
maxPoolSize: 50, // Max connections
minPoolSize: 10, // Min connections to keep
maxIdleTimeMS: 30000, // Close idle after 30s
waitQueueTimeoutMS: 5000, // Wait up to 5s for connection
});
await client.connect();
// Use client - automatically uses pool
const db = client.db('mydb');
const users = await db.collection('users').find().toArray();
Redis Connection Pooling
import Redis from 'ioredis';
// Create Redis client (ioredis pipelines commands over one persistent
// connection, so a separate pool is rarely needed)
const redis = new Redis({
host: 'localhost',
port: 6379,
maxRetriesPerRequest: 3,
retryStrategy: (times) => Math.min(times * 50, 2000),
});
// Use client
await redis.set('key', 'value');
const value = await redis.get('key');
// For multiple connections
const cluster = new Redis.Cluster([
{ host: 'localhost', port: 6379 },
{ host: 'localhost', port: 6380 },
], {
redisOptions: {
maxRetriesPerRequest: 3,
},
clusterRetryStrategy: (times) => Math.min(times * 50, 2000),
});
API Gateway Connection Pooling
AWS API Gateway
# serverless.yml
provider:
name: aws
runtime: nodejs18.x
# RDS Proxy for connection pooling
environment:
DB_PROXY_ENDPOINT: !GetAtt RDSProxy.Endpoint
resources:
Resources:
RDSProxy:
Type: AWS::RDS::DBProxy
Properties:
DBProxyName: my-db-proxy
EngineFamily: POSTGRESQL
Auth:
- AuthScheme: SECRETS
IAMAuth: REQUIRED
SecretArn: !Ref DBSecret
RoleArn: !GetAtt RDSProxyRole.Arn
VpcSubnetIds:
- !Ref Subnet1
- !Ref Subnet2
GraphQL DataLoader
import DataLoader from 'dataloader';
// Batch and cache database requests
const userLoader = new DataLoader(async (ids: readonly number[]) => {
const users = await db.user.findMany({
where: { id: { in: [...ids] } },
});
// Return in same order as requested
return ids.map(id => users.find(user => user.id === id));
}, {
cache: true, // Cache results
maxBatchSize: 100, // Max items per batch
batchScheduleFn: (callback) => setTimeout(callback, 10), // Wait 10ms to batch
});
// Use in resolver
async function getUser(id: number) {
return await userLoader.load(id); // Batched + cached
}
// Multiple calls are batched
const [user1, user2, user3] = await Promise.all([
getUser(1),
getUser(2),
getUser(3),
]);
// Results in single query: SELECT * FROM users WHERE id IN (1, 2, 3)
Monitoring Connection Pools
Node.js Agent Monitoring
import https from 'https';
const agent = new https.Agent({
keepAlive: true,
maxSockets: 50,
maxFreeSockets: 10,
});
// Monitor pool stats
setInterval(() => {
console.log('Sockets:', agent.sockets);
console.log('Free sockets:', agent.freeSockets);
console.log('Requests:', agent.requests);
}, 5000);
Database Pool Monitoring
import { Pool } from 'pg';
const pool = new Pool({ /* config */ });
// Monitor pool
pool.on('connect', () => {
console.log('New client connected');
});
pool.on('acquire', () => {
console.log('Client acquired from pool');
});
pool.on('remove', () => {
console.log('Client removed from pool');
});
// Get pool stats
setInterval(() => {
console.log('Total clients:', pool.totalCount);
console.log('Idle clients:', pool.idleCount);
console.log('Waiting requests:', pool.waitingCount);
}, 5000);
Best Practices
- Enable Keep-Alive: Always use persistent connections
- Right Pool Size: Too small = waits, too large = overhead
- Idle Timeout: Close unused connections
- Monitor: Track pool usage and adjust
- Connection Limits: Set max connections per host
- Error Handling: Retry failed connections
- Graceful Shutdown: Close pools on app shutdown
- HTTP/2: Use for multiplexing benefits
- TLS Session Resumption: Cache TLS sessions
- Warm Pool: Keep minimum connections ready
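The graceful-shutdown point above can be sketched as a small registry of closeable resources. The `register`/`shutdown` helpers here are hypothetical names; the real close calls you would register are things like pg's `pool.end()` or ioredis's `redis.quit()`.

```typescript
// Hypothetical shutdown registry: close pools in reverse registration
// order so dependents shut down before the backends they rely on.
type Closeable = { name: string; close: () => Promise<void> };

const resources: Closeable[] = [];

function register(r: Closeable): void {
  resources.push(r);
}

async function shutdown(): Promise<string[]> {
  const closed: string[] = [];
  for (const r of [...resources].reverse()) {
    await r.close(); // e.g. pool.end() or redis.quit()
    closed.push(r.name);
  }
  return closed;
}

// Wire it up once at startup:
// process.on('SIGTERM', () => shutdown().then(() => process.exit(0)));
```

Closing pools explicitly lets in-flight queries finish and frees server-side connection slots instead of leaving them to time out.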
Pool Sizing Guidelines
CPU-bound workloads (API servers):
Pool size ≈ (2 * CPU cores) + 1
I/O-bound workloads (databases):
Pool size can be larger, e.g. (2 * CPU cores) * 2, since connections spend most of their time waiting
Formula:
connections = ((core_count * 2) + effective_spindle_count)
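The formula can be written as a tiny helper (the function name is hypothetical; `effectiveSpindleCount` is roughly 1 for a single SSD). Treat the result as a starting point to tune against measured pool wait times, not an exact answer.

```typescript
// Starting pool size from the formula above:
// connections = (core_count * 2) + effective_spindle_count
function poolSize(coreCount: number, effectiveSpindleCount: number): number {
  return coreCount * 2 + effectiveSpindleCount;
}
```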
Example (4 cores, SSDs):
Min: 10 connections
Max: 20 connections
Common Pitfalls
❌ No keep-alive: New connection overhead
✅ Enable keep-alive everywhere
❌ Pool too small: Requests wait
✅ Size based on load
❌ No timeout: Connections leak
✅ Set idle timeout
❌ Not monitoring: Can't optimize
✅ Track pool metrics
❌ Closing connections: Wastes resources
✅ Return to pool with release()
Connection pooling is one of the easiest performance wins: configure it properly and see dramatic latency improvements!