KV, D1, R2, Durable Objects: Choosing the Right Storage
A decision framework for Cloudflare's storage options: when to use each, typical latency and cost figures, and migration patterns from production systems.
Cloudflare offers four distinct storage primitives. Choosing the wrong one leads to either performance problems or cost overruns. Here's the framework for making the right choice.
The Four Options
| | Workers KV | D1 | R2 | Durable Objects |
|---|---|---|---|---|
| Model | Key-Value | Relational (SQL) | Object/Blob | Actor + Storage |
| Read Latency | ~10ms global | ~30ms | ~50-100ms | ~0ms (in-memory) |
| Write Latency | ~60s propagation | ~30ms | ~100ms | ~1ms |
| Max Size | 25 MB per value | 10 GB per database | 5 TB per object | 50 GB per object |
| Consistency | Eventual | Strong | Strong | Strong + Transactional |
| Cost | $0.50/M reads | $0.75/M rows read | $0 egress (!) | $0.15/M requests |
The Decision Framework
Deep Dive: Workers KV
Best for: Configuration, feature flags, cached API responses, session data, any read-heavy workload where eventual consistency is acceptable.
// Basic read/write
await env.KV.put('user:123', JSON.stringify(userData));
const user = await env.KV.get('user:123', 'json');

// With expiration (TTL)
await env.KV.put('cache:api-response', data, {
  expirationTtl: 3600 // 1 hour
});

// With metadata (for filtering)
await env.KV.put('lead:456', leadData, {
  metadata: { status: 'new', score: 85 }
});

// List with prefix (pagination)
const { keys } = await env.KV.list({ prefix: 'lead:', limit: 100 });
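Because cached API responses are the canonical KV workload, here is a minimal read-through cache sketch. The listing ID, the cache key prefix, and the upstream URL are illustrative assumptions, not anything the platform defines.

// Hedged sketch: read-through cache on KV; the upstream URL is hypothetical
async function getListing(env: Env, id: string): Promise<unknown> {
  const cacheKey = `cache:listing:${id}`;

  // Serve from KV when present; eventual consistency is fine for a cache
  const cached = await env.KV.get(cacheKey, 'json');
  if (cached !== null) return cached;

  // Miss: fetch from the origin, then populate KV with a one-hour TTL
  const res = await fetch(`https://api.example.com/listings/${id}`);
  const fresh = await res.json();
  await env.KV.put(cacheKey, JSON.stringify(fresh), { expirationTtl: 3600 });
  return fresh;
}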
Deep Dive: D1 Database
Best for: Relational data, complex queries, data that needs JOINs, reporting, anything that would traditionally use PostgreSQL or MySQL.
// Query with parameters (safe from SQL injection)
const { results } = await env.DB.prepare(`
  SELECT leads.*, properties.address
  FROM leads
  JOIN properties ON leads.property_id = properties.id
  WHERE leads.status = ? AND leads.score > ?
  ORDER BY leads.created_at DESC
  LIMIT 50
`).bind('new', 70).all();

// Batch operations (single round-trip)
const batch = [
  env.DB.prepare('INSERT INTO logs VALUES (?)').bind(log1),
  env.DB.prepare('INSERT INTO logs VALUES (?)').bind(log2),
  env.DB.prepare('UPDATE stats SET count = count + 1'),
];
await env.DB.batch(batch);
// Raw execution for DDL.
// Note: D1's exec() splits its input on newlines, so keep each statement on one line.
await env.DB.exec(
  'CREATE TABLE IF NOT EXISTS leads (' +
    'id TEXT PRIMARY KEY, ' +
    'name TEXT NOT NULL, ' +
    'score INTEGER DEFAULT 0, ' +
    'created_at TEXT DEFAULT CURRENT_TIMESTAMP)'
);
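Since reporting is on the "Best for" list, here is a hedged sketch of the kind of aggregate a dashboard might run. The status column matches the JOIN example above; the weekly bucketing via SQLite's strftime is an assumption about how you would group dates.

// Hedged sketch: weekly lead counts and average score by status
const report = await env.DB.prepare(`
  SELECT status,
         strftime('%Y-%W', created_at) AS week,
         COUNT(*) AS total,
         AVG(score) AS avg_score
  FROM leads
  GROUP BY status, week
  ORDER BY week DESC
`).all();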
Deep Dive: R2 Storage
Best for: File uploads, images, PDFs, backups, any large binary data. The killer feature is zero egress fees, unlike S3.
// Upload file
await env.BUCKET.put('documents/contract-123.pdf', pdfBuffer, {
  httpMetadata: { contentType: 'application/pdf' },
  customMetadata: { uploadedBy: 'user-456' }
});

// Download file
const object = await env.BUCKET.get('documents/contract-123.pdf');
if (object) {
  return new Response(object.body, {
    headers: { 'Content-Type': object.httpMetadata?.contentType ?? 'application/pdf' }
  });
}
// List objects with prefix
const { objects } = await env.BUCKET.list({ prefix: 'documents/' });
// Multipart upload for large files
const upload = await env.BUCKET.createMultipartUpload('large-file.zip');
// ... upload parts with upload.uploadPart(partNumber, chunk) ...
await upload.complete(parts); // parts: R2UploadedPart[] returned by uploadPart()
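For the upload path itself, here is a minimal sketch of streaming an incoming request body straight into R2 without buffering it in the Worker. The PUT /upload/<name> route and the uploads/ key prefix are illustrative assumptions.

// Hedged sketch: stream an uploaded file from the request directly into R2
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== 'PUT') {
      return new Response('Method not allowed', { status: 405 });
    }
    const name = new URL(request.url).pathname.replace('/upload/', '');
    await env.BUCKET.put(`uploads/${name}`, request.body, {
      httpMetadata: {
        contentType: request.headers.get('Content-Type') ?? 'application/octet-stream'
      }
    });
    return new Response(`Stored uploads/${name}`, { status: 201 });
  }
};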
Deep Dive: Durable Objects
Best for: Real-time coordination, counters, rate limiting, WebSocket state, anything requiring strong consistency or transactional guarantees.
export class RateLimiter {
  state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request) {
    const ip = request.headers.get('CF-Connecting-IP') ?? 'unknown';

    // Transactional read-modify-write: input gates keep this safe under concurrency
    const count = (await this.state.storage.get<number>(ip)) || 0;
    if (count >= 100) {
      return new Response('Rate limited', { status: 429 });
    }
    await this.state.storage.put(ip, count + 1);

    // Schedule a reset one minute out, but only if no alarm is already pending,
    // so steady traffic doesn't keep pushing the reset further back
    if ((await this.state.storage.getAlarm()) === null) {
      await this.state.storage.setAlarm(Date.now() + 60_000);
    }

    return new Response('OK');
  }

  async alarm() {
    // Reset all counters
    await this.state.storage.deleteAll();
  }
}
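The class above only defines the object; a Worker still has to route requests to it. Here is a hedged sketch of that routing, assuming a Durable Object namespace binding named RATE_LIMITER (the binding name and the pass-through response are assumptions).

// Hedged sketch: route each client IP to its own RateLimiter instance
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ip = request.headers.get('CF-Connecting-IP') ?? 'unknown';

    // Same name -> same object, so a single instance owns each IP's counter
    const id = env.RATE_LIMITER.idFromName(ip);
    const stub = env.RATE_LIMITER.get(id);

    const decision = await stub.fetch(request);
    if (decision.status === 429) return decision;

    return new Response('Request allowed');
  }
};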
Comparison Matrix
| Use Case | KV | D1 | R2 | DO |
|---|---|---|---|---|
| Config/Feature Flags | Best | OK | No | OK |
| User Sessions | Good | OK | No | Best |
| File Storage | No | No | Best | No |
| Relational Data | No | Best | No | OK |
| Counters/Rate Limits | No | OK | No | Best |
| Real-time Collaboration | No | No | No | Best |
| Analytics/Reporting | No | Best | OK | No |
| API Response Cache | Best | OK | Good | No |
Real-World Architecture
In production, most systems use multiple storage types together:
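As one hedged illustration (routes, binding names, and the query reuse conventions from the examples above; they are assumptions, not a reference design): a single Worker that serves documents from R2, answers a leads report from a KV cache when it can, and falls back to D1 when it can't.

// Hedged sketch: one Worker combining R2 (files), KV (cache), and D1 (queries)
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // R2: serve stored documents on /files/<key>
    if (url.pathname.startsWith('/files/')) {
      const object = await env.BUCKET.get(url.pathname.slice('/files/'.length));
      if (!object) return new Response('Not found', { status: 404 });
      return new Response(object.body, {
        headers: { 'Content-Type': object.httpMetadata?.contentType ?? 'application/octet-stream' }
      });
    }

    // KV: return the cached report when present
    const cached = await env.KV.get('cache:top-leads');
    if (cached) {
      return new Response(cached, { headers: { 'Content-Type': 'application/json' } });
    }

    // D1: build the report from relational data, then cache it briefly in KV
    const { results } = await env.DB.prepare(
      'SELECT id, name, score FROM leads ORDER BY score DESC LIMIT 10'
    ).all();
    const body = JSON.stringify(results);
    await env.KV.put('cache:top-leads', body, { expirationTtl: 300 });
    return new Response(body, { headers: { 'Content-Type': 'application/json' } });
  }
};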
Migration Patterns
KV → D1 (When you need queries)
async function migrateKVtoD1(env: Env) {
  // List all KV keys, page by page
  let cursor: string | undefined;
  do {
    const { keys, list_complete, cursor: next } =
      await env.KV.list({ cursor });

    // Batch insert into D1 (one round-trip per page)
    const batch = keys.map(async (key) => {
      const value = await env.KV.get(key.name, 'json');
      return env.DB.prepare(
        'INSERT INTO migrated (key, data) VALUES (?, ?)'
      ).bind(key.name, JSON.stringify(value));
    });
    await env.DB.batch(await Promise.all(batch));

    cursor = list_complete ? undefined : next;
  } while (cursor);
}
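One hedged way to run this after deploying it: gate it behind a one-off admin route. The path is a placeholder and no authentication is shown, so treat it as a sketch rather than something to ship.

// Hedged sketch: trigger the migration once via a placeholder admin route
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (new URL(request.url).pathname === '/admin/migrate-kv') {
      await migrateKVtoD1(env);
      return new Response('Migration complete');
    }
    return new Response('Not found', { status: 404 });
  }
};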
Cost Optimization
- KV reads are cheap. Cache aggressively. $0.50 per million reads.
- D1 batching. Combine multiple operations into single batch calls.
- R2 egress is free. If you're paying S3 egress, migrate immediately.
- DO duration billing. Objects are billed while active. Let them hibernate (see the WebSocket hibernation sketch below).
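On that last point: for WebSocket-heavy Durable Objects, the WebSocket Hibernation API is what makes "let them hibernate" practical, because the runtime can evict the object from memory between messages while keeping connections open. A minimal sketch; the Room class and its broadcast behavior are assumptions, separate from the rate limiter above.

// Hedged sketch: a Durable Object using the WebSocket Hibernation API
export class Room {
  state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request): Promise<Response> {
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);

    // acceptWebSocket() (rather than server.accept()) lets the runtime
    // hibernate this object between messages without dropping the socket
    this.state.acceptWebSocket(server);

    return new Response(null, { status: 101, webSocket: client });
  }

  // Invoked on each message, waking the object if it was hibernated
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer) {
    for (const socket of this.state.getWebSockets()) {
      socket.send(message);
    }
  }

  async webSocketClose(ws: WebSocket, code: number, reason: string, wasClean: boolean) {
    ws.close(code, 'room closing');
  }
}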
The right storage choice isn't about features; it's about matching access patterns to consistency and latency requirements. Get this wrong and you'll fight the system forever.