Event-Driven Cloudflare

Real-Time Data Pipelines
at the Edge

Webhook processing at scale, event-driven architecture, fan-out patterns, and solving exactly-once delivery on Cloudflare Workers.

📖 14 min read · January 24, 2026

When a lead submits a form, seven things need to happen: store in database, notify Slack, send email confirmation, update CRM, trigger AI analysis, log analytics, and schedule follow-up. All in real-time. All reliably.

This is how we built event-driven data pipelines on Cloudflare Workers that process thousands of events daily with sub-100ms latency.

The Architecture

Event Fan-Out Pipeline

📥 Webhook
   ↓
⚡ Event Router
   ↓ Fan-Out
💬 Slack   📊 CRM   🤖 AI Analysis   💾 Database

Pattern 1: The Event Router

One webhook endpoint, multiple downstream handlers. The router validates, transforms, and dispatches:

event-router.ts
interface Event {
  id: string;
  type: 'lead.created' | 'lead.updated' | 'offer.submitted';
  data: Record<string, any>;
  timestamp: number;
}

type Handler = (event: Event, env: Env) => Promise<void>;

const handlers: Record<string, Handler[]> = {
  'lead.created': [
    notifySlack,
    sendWelcomeEmail,
    createCRMContact,
    runAIAnalysis,
    storeInDatabase,
    scheduleFollowUp,
  ],
  'lead.updated': [
    notifySlack,
    updateCRMContact,
    storeInDatabase,
  ],
  'offer.submitted': [
    notifySlack,
    sendOfferEmail,
    runOfferAnalysis,
    storeInDatabase,
  ],
};

export async function routeEvent(event: Event, env: Env) {
  const eventHandlers = handlers[event.type] || [];

  // Execute all handlers in parallel
  const results = await Promise.allSettled(
    eventHandlers.map(handler => handler(event, env))
  );

  // Log any failures
  results.forEach((result, i) => {
    if (result.status === 'rejected') {
      console.error(`Handler ${i} failed:`, result.reason);
    }
  });

  return results;
}
Why Promise.allSettled?
Promise.all fails fast; one rejection kills everything. Promise.allSettled runs all handlers regardless of individual failures. Slack can fail without blocking email delivery.
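A minimal, runnable contrast makes the difference concrete (the three promises stand in for the Slack, email, and CRM handlers; the names are illustrative):

```typescript
// Promise.all rejects as soon as one handler fails and the other results
// are lost; Promise.allSettled records every handler's outcome.
async function demo(): Promise<{ allResult: unknown; statuses: string[] }> {
  const tasks = [
    Promise.resolve('slack ok'),
    Promise.reject(new Error('email down')),
    Promise.resolve('crm ok'),
  ];

  // Fails fast: resolves to the single error message, nothing else survives
  const allResult = await Promise.all(tasks).catch(e => e.message);

  // Every handler's fate is reported, failures and successes alike
  const settled = await Promise.allSettled(tasks);
  return { allResult, statuses: settled.map(s => s.status) };
}
```

With the event router above, that means a flaky Slack webhook shows up as one `rejected` entry in the results array instead of aborting the whole fan-out.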

Pattern 2: Idempotent Handlers

Webhooks retry. Your handlers must be idempotent, meaning safe to run multiple times with the same input:

idempotent-handler.ts
async function idempotentHandler(
  event: Event,
  env: Env,
  handlerName: string,
  handlerFn: Handler
): Promise<void> {
  // The name must be passed explicitly: an anonymous arrow function has an
  // empty .name, which would collapse every handler onto the same key
  const idempotencyKey = `processed:${event.id}:${handlerName}`;

  // Check if already processed
  const existing = await env.KV.get(idempotencyKey);
  if (existing) {
    console.log(`Skipping duplicate: ${event.id}`);
    return;
  }

  // Process the event
  await handlerFn(event, env);

  // Mark as processed (TTL: 24 hours)
  await env.KV.put(idempotencyKey, Date.now().toString(), {
    expirationTtl: 86400
  });
}

// Wrap handlers with idempotency
async function notifySlack(event: Event, env: Env) {
  return idempotentHandler(event, env, 'notifySlack', async (e, env) => {
    await fetch(env.SLACK_WEBHOOK, {
      method: 'POST',
      body: JSON.stringify({
        text: `New lead: ${e.data.name} - ${e.data.email}`
      })
    });
  });
}

Pattern 3: Dead Letter Queue

When handlers fail repeatedly, store failed events for later inspection and retry:

dead-letter.ts
interface FailedEvent {
  event: Event;
  handler: string;
  error: string;
  attempts: number;
  lastAttempt: number;
}

async function withRetryAndDLQ(
  event: Event,
  handler: Handler,
  env: Env,
  maxAttempts = 3
): Promise<void> {
  const attemptKey = `attempts:${event.id}:${handler.name}`;
  let attempts = parseInt(await env.KV.get(attemptKey) || '0', 10);

  try {
    await handler(event, env);
    // Success - clear attempts
    await env.KV.delete(attemptKey);
  } catch (error) {
    attempts++;

    if (attempts >= maxAttempts) {
      // Move to dead letter queue
      const dlqEntry: FailedEvent = {
        event,
        handler: handler.name,
        // Caught values are `unknown` in TypeScript; narrow before reading .message
        error: error instanceof Error ? error.message : String(error),
        attempts,
        lastAttempt: Date.now()
      };

      await env.KV.put(
        `dlq:${event.id}:${handler.name}`,
        JSON.stringify(dlqEntry)
      );

      // Alert on DLQ entries
      await sendSlackAlert(`⚠️ Event ${event.id} moved to DLQ after ${attempts} failures`);
    } else {
      // Store attempt count for next retry
      await env.KV.put(attemptKey, attempts.toString(), {
        expirationTtl: 3600
      });
      throw error; // Re-throw to trigger webhook retry
    }
  }
}

Pattern 4: Async Processing with Queues

For heavy processing, don't block the webhook response. Queue the work:

queue-processor.ts
// Webhook handler - responds immediately
export async function handleWebhook(request: Request, env: Env) {
  const event = await request.json();

  // Quick validation
  if (!isValidEvent(event)) {
    return new Response('Invalid event', { status: 400 });
  }

  // Add to queue for async processing
  await env.QUEUE.send({
    id: crypto.randomUUID(),
    type: event.type,
    data: event,
    timestamp: Date.now()
  });

  // Respond immediately
  return new Response('Accepted', { status: 202 });
}

// Queue consumer - processes async
export async function queue(batch: MessageBatch, env: Env) {
  for (const message of batch.messages) {
    try {
      await routeEvent(message.body, env);
      message.ack();
    } catch (error) {
      console.error('Queue processing failed:', error);
      message.retry();
    }
  }
}
Cloudflare Queues Limitation
Cloudflare Queues have a maximum message size of 128KB. For larger payloads, store the data in R2 or D1 and pass only a reference ID in the queue message.

Pattern 5: Exactly-Once Delivery

True exactly-once is impossible in distributed systems. We achieve "effectively once" through idempotency + deduplication:

exactly-once.ts
class ExactlyOnceProcessor {
  constructor(private env: Env) {}

  async process(event: Event): Promise<{ processed: boolean; reason?: string }> {
    const processedKey = `done:${event.id}`;

    // 1. Check if already processed
    if (await this.env.KV.get(processedKey)) {
      return { processed: false, reason: 'duplicate' };
    }

    // 2. Try to acquire lock (using Durable Objects for strong consistency)
    const lock = this.env.LOCKS.get(
      this.env.LOCKS.idFromName(event.id)
    );
    const acquired = await lock.fetch('http://lock/acquire');
    if (!acquired.ok) {
      return { processed: false, reason: 'locked' };
    }

    try {
      // 3. Double-check not processed (inside lock)
      if (await this.env.KV.get(processedKey)) {
        return { processed: false, reason: 'duplicate' };
      }

      // 4. Process the event
      await routeEvent(event, this.env);

      // 5. Mark as processed
      await this.env.KV.put(processedKey, Date.now().toString(), {
        expirationTtl: 604800 // 7 days
      });

      return { processed: true };
    } finally {
      // 6. Release lock
      await lock.fetch('http://lock/release');
    }
  }
}
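The `LOCKS` binding refers to a Durable Object the article doesn't show. A minimal sketch of what it could look like, assuming a single in-memory flag per instance (Durable Objects are single-threaded, so this is race-free within one instance; a production version would persist the flag via `state.storage` so it survives eviction, and take the usual `(state, env)` constructor):

```typescript
// Minimal lock object answering the /acquire and /release calls used by
// ExactlyOnceProcessor. One instance exists per event id (idFromName).
export class LockDO {
  private held = false;
  private heldSince = 0;
  private readonly ttlMs = 30_000; // auto-expire stale locks from crashed holders

  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const now = Date.now();

    if (url.pathname === '/acquire') {
      if (this.held && now - this.heldSince < this.ttlMs) {
        // Someone else holds a fresh lock: 409 makes acquired.ok false
        return new Response('locked', { status: 409 });
      }
      this.held = true;
      this.heldSince = now;
      return new Response('acquired', { status: 200 });
    }

    if (url.pathname === '/release') {
      this.held = false;
      return new Response('released', { status: 200 });
    }

    return new Response('not found', { status: 404 });
  }
}
```

The TTL matters: without it, a Worker that crashes between acquire and release would wedge that event's lock forever.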

Comparison: Edge vs Traditional

Factor | Cloudflare Workers | AWS Lambda + SQS
Cold Start | 0ms (isolates) | 100-500ms
Global Latency | <50ms worldwide | Varies by region
Built-in Queue | Yes (Queues) | Yes (SQS)
Strong Consistency | Durable Objects | DynamoDB
Cost at 1M events | ~$5 | ~$15-25
Complexity | Low | High (IAM, VPC, etc.)

Implementation Checklist

  • Event router with handler registry
  • All handlers are idempotent
  • Dead letter queue for failed events
  • Async processing via Cloudflare Queues
  • Distributed locking for exactly-once
  • Monitoring and alerting on failures
  • Event schema validation
  • Retry logic with exponential backoff
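The last checklist item, retry with exponential backoff, can be sketched as a pure helper (the names and defaults are illustrative); full jitter keeps thousands of Workers from retrying in lockstep after a shared outage:

```typescript
// Delay doubles per attempt, is capped, and gets full jitter: a random
// delay in [0, min(cap, base * 2^attempt)).
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * exp);
}

// Run fn, sleeping with backoff between failures; rethrow the last error
// once attempts are exhausted (at which point a DLQ entry is the next stop).
async function withBackoff<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  throw lastError;
}
```

This composes with the earlier patterns: wrap a handler in `withBackoff` inside the queue consumer, and let `withRetryAndDLQ` catch whatever still fails.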

Real-time at the edge isn't about speed; it's about reliability. Events will arrive out of order, duplicated, and during outages. Design for chaos, and your pipelines will handle anything.

Related Articles

Building a Service Mesh on Cloudflare Workers
Read more →
API Gateway Patterns at the Edge
Read more →
Designing for Model Failure: AI Resilience
Read more →

Need Event-Driven Architecture?

We build real-time data pipelines that scale to millions of events.

→ Start Your Pipeline