Lifecycle
evlog events follow a pipeline from creation to delivery. The pipeline differs slightly depending on which logging mode you use, but the core stages (emit, sample, enrich, drain) are shared.
Overview by Mode
| Stage | log (simple) | createLogger / createRequestLogger | Framework middleware |
|---|---|---|---|
| Create | Implicit per call | createLogger({...}) or createRequestLogger({...}) | Auto on request start |
| Accumulate | N/A (single call) | log.set() multiple times | log.set() via useLogger(event) |
| Emit | Immediate | Manual log.emit() | Auto on response end |
| Sample | Head sampling only | Head + tail sampling | Head + tail sampling |
| Enrich | Via global drain | Via global drain | Via hooks or callbacks |
| Drain | Via global drain | Via global drain | Via hooks or callbacks |
Request Logging Pipeline
For framework-managed request logging, every request follows this pipeline. The middleware creates the logger and useLogger(event) retrieves it:
```
Request In
     │
     ▼
┌──────────┐     Route excluded?
│  Filter  │────── yes ──▶ skip (no logging)
└──────────┘
     │ no
     ▼
┌──────────────────┐
│  Create Logger   │  requestId, method, path, startTime
└──────────────────┘
     │
     ▼
┌──────────────────┐
│  Handler runs    │  log.set() accumulates context
│                  │  log.error() records errors
└──────────────────┘
     │
     ▼
┌──────────────────┐
│  Request ends    │  status + duration computed
└──────────────────┘
     │
     ▼
┌──────────────────┐
│  Tail Sampling   │  evlog:emit:keep hook
│  (keep?)         │  force-keep based on outcome
└──────────────────┘
     │
     ▼
┌──────────────────┐
│  Head Sampling   │  random % per level
│  (sample?)       │  skipped if tail said "keep"
└──────────────────┘
     │ sampled out? ──▶ discard (no output)
     │
     ▼
┌──────────────────┐
│      Emit        │  WideEvent built + console output
└──────────────────┘
     │
     ▼
┌──────────────────┐
│     Enrich       │  evlog:enrich hook
│                  │  user-agent, geo, trace, custom
└──────────────────┘
     │
     ▼
┌──────────────────┐
│     Drain        │  evlog:drain hook
│                  │  Axiom, OTLP, Sentry, custom
└──────────────────┘
     │
     ▼
Done
```
Step by Step
1. Route Filtering
When a request arrives, evlog checks whether the path matches the configured include / exclude patterns. If the route is excluded, no logger is created and the request proceeds without any logging overhead.
By default, all routes are logged. Use include to restrict logging to specific patterns:
```ts
export default defineNuxtConfig({
  modules: ['evlog/nuxt'],
  evlog: {
    include: ['/api/**'],
  },
})
```
2. Logger Creation
For matched routes, evlog creates a RequestLogger and attaches it to the request context. The logger is pre-populated with:
| Field | Source |
|---|---|
| `method` | HTTP method (GET, POST, ...) |
| `path` | Request path |
| `requestId` | Auto-generated UUID (or `cf-ray` on Cloudflare) |
| `startTime` | `Date.now()` for duration calculation |
The logger is stored on the event context; `useLogger(event)` is a shortcut that retrieves the existing logger rather than creating a new one.
3. Context Accumulation
During the handler, you call log.set() to attach context. Each call deep-merges into the existing context, so you can call it as many times as needed:
```ts
import { useLogger } from 'evlog'

const log = useLogger(event)

const user = await getUser(event)
log.set({ user: { id: user.id, plan: user.plan } })

const cart = await getCart(user.id)
log.set({ cart: { items: cart.items.length, total: cart.total } })
```
If an error is thrown, evlog's error hook captures it automatically and records it on the logger with the status code.
4. Request End
When the response is sent (or an error is thrown), evlog computes:
- Status code - from the response (or from the error's `status`/`statusCode`)
- Duration - `Date.now() - startTime`
- Level - `error` if an error was recorded, `warn` if status >= 400, otherwise `info`
If an error triggered the emit, the request is marked as already emitted to prevent double-emission in the response hook.
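The level selection above condenses to a small rule. A sketch for illustration (`deriveLevel` is not part of evlog's API):

```ts
type Level = 'info' | 'warn' | 'error'

// Sketch of the precedence: recorded error > failed status > info.
function deriveLevel(errorRecorded: boolean, status: number): Level {
  if (errorRecorded) return 'error' // a recorded error always wins
  if (status >= 400) return 'warn'  // failed response without a recorded error
  return 'info'                     // normal success path
}
```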
5. Tail Sampling (evlog:emit:keep)
Before the event is sampled, evlog evaluates tail sampling rules. These run after the request completes, so they can inspect the outcome:
```ts
evlog: {
  sampling: {
    keep: [
      { duration: 1000 }, // slow requests
      { status: 400 }, // client/server errors
      { path: '/api/critical/**' }, // critical paths
    ],
  },
}
```
The evlog:emit:keep hook also fires, letting you force-keep based on custom business logic:
```ts
export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:emit:keep', (ctx) => {
    if (ctx.context.user?.premium) {
      ctx.shouldKeep = true
    }
  })
})
```
If any rule or hook sets shouldKeep = true, the event bypasses head sampling entirely.
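One way to picture rule evaluation, as a sketch under assumed semantics: `duration` and `status` act as minimum thresholds, and the path glob is simplified to a prefix match. evlog's real matcher may differ on all three points.

```ts
interface Outcome { duration: number; status: number; path: string }
interface KeepRule { duration?: number; status?: number; path?: string }

// A rule matches when every field it specifies is satisfied by the outcome.
function matchesRule(o: Outcome, r: KeepRule): boolean {
  if (r.duration !== undefined && o.duration < r.duration) return false
  if (r.status !== undefined && o.status < r.status) return false
  if (r.path !== undefined && !o.path.startsWith(r.path.replace('/**', '/'))) return false
  return true
}

// The event is force-kept as soon as any rule matches.
const shouldForceKeep = (o: Outcome, rules: KeepRule[]) =>
  rules.some(r => matchesRule(o, r))
```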
6. Head Sampling
If the event wasn't force-kept by tail sampling, head sampling applies. This is a random coin flip per log level.
By default, all levels are kept at 100% (no sampling). Configure sampling.rates to reduce volume in production:
```ts
evlog: {
  sampling: {
    rates: { info: 10, warn: 50, debug: 0 },
  },
}
```
- `info: 10` - keep 10% of info-level events
- `warn: 50` - keep 50% of warnings
- `error` defaults to 100% (never sampled out, even if you set a rate)
If the event is sampled out, processing stops entirely: no console output, no enrichment, no drain.
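The coin flip amounts to something like this (an illustrative sketch, not evlog's internals):

```ts
type LogLevel = 'debug' | 'info' | 'warn' | 'error'

// Sketch: rates map levels to a keep percentage (0-100). Levels without
// an explicit rate default to 100, and error events are always kept.
function headSample(level: LogLevel, rates: Partial<Record<LogLevel, number>>): boolean {
  if (level === 'error') return true // never sampled out
  const rate = rates[level] ?? 100   // default: keep everything
  return Math.random() * 100 < rate  // random per-event coin flip
}
```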
7. Emit
The WideEvent object is built from the accumulated context:
```json
{
  "timestamp": "2026-01-15T10:30:00.000Z",
  "level": "info",
  "service": "my-app",
  "method": "POST",
  "path": "/api/checkout",
  "requestId": "abc-123",
  "duration": 234,
  "status": 200,
  "user": { "id": 1, "plan": "pro" },
  "cart": { "items": 3, "total": 9999 }
}
```
The event is printed to the console: pretty-formatted in development, JSON in production. This is the default behavior and needs no configuration.
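The dev/prod switch reduces to something like this (a stand-in sketch; evlog's pretty formatter is richer than indented JSON):

```ts
// Sketch: one compact JSON line per event in production,
// human-readable output in development.
function formatEvent(event: Record<string, unknown>, production: boolean): string {
  return production
    ? JSON.stringify(event)          // compact, machine-parseable
    : JSON.stringify(event, null, 2) // indented stand-in for pretty output
}
```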
8. Enrich (evlog:enrich)
After emission, enrichers add derived context to the event. Built-in enrichers extract data from request headers:
| Enricher | Adds | Source |
|---|---|---|
| User Agent | userAgent (browser, OS, device) | User-Agent header |
| Geo | geo (country, region, city) | Platform headers (Vercel, Cloudflare) |
| Request Size | requestSize (request/response bytes) | Content-Length headers |
| Trace Context | traceContext (traceId, spanId) | traceparent header |
```ts
import { createUserAgentEnricher, createGeoEnricher } from 'evlog/enrichers'

export default defineNitroPlugin((nitroApp) => {
  const enrichers = [createUserAgentEnricher(), createGeoEnricher()]

  nitroApp.hooks.hook('evlog:enrich', (ctx) => {
    for (const enricher of enrichers) enricher(ctx)
  })
})
```
Enrichers receive the full EnrichContext with the mutable event, request metadata, safe headers, and response info.
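A custom enricher has the same shape as the built-ins: a function that receives the context and mutates the event. A minimal sketch, where the `region` field is made up for illustration and `EnrichContextSketch` reduces the real `EnrichContext` to the one part the sketch touches:

```ts
// Simplified stand-in for evlog's EnrichContext.
interface EnrichContextSketch {
  event: Record<string, unknown> // mutable wide event
}

// Hypothetical enricher: stamp every event with the deployment region.
function createRegionEnricher(region: string) {
  return (ctx: EnrichContextSketch) => {
    ctx.event.region = region
  }
}
```

Register it in the same `evlog:enrich` hook, alongside the built-in enrichers.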
9. Drain (evlog:drain)
The final step sends the enriched event to your observability platform. The evlog:drain hook receives a DrainContext with the complete event:
```ts
import { createAxiomDrain } from 'evlog/axiom'

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', createAxiomDrain())
})
```
On platforms with `waitUntil` (Cloudflare Workers, Vercel Edge), the drain runs after the response is sent to avoid adding latency. Elsewhere, the drain is awaited so events aren't lost when the runtime shuts down immediately after responding (as in serverless cold shutdowns).
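A custom drain is just another handler on the same hook. A minimal sketch that POSTs each event as JSON to a hypothetical HTTP collector (the URL and the reduced `DrainContextSketch` shape are assumptions):

```ts
// Simplified stand-in for evlog's DrainContext.
interface DrainContextSketch {
  event: Record<string, unknown> // complete, enriched wide event
}

// Split out serialization so the payload shape is easy to verify.
function serializeEvent(ctx: DrainContextSketch): string {
  return JSON.stringify(ctx.event)
}

// Hypothetical drain: ship each event to an HTTP collector endpoint.
function createHttpDrain(url: string) {
  return async (ctx: DrainContextSketch) => {
    await fetch(url, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: serializeEvent(ctx),
    })
  }
}
```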
Hook Execution Order
| Order | Hook | When | Purpose |
|---|---|---|---|
| 1 | evlog:emit:keep | After request ends, before sampling | Force-keep events based on outcome |
| 2 | evlog:enrich | After emit, before drain | Add derived context to the event |
| 3 | evlog:drain | After enrichment | Send event to external services |
Error vs Success Path
Both paths converge at the same emit/enrich/drain pipeline. The only difference is when the emit is triggered:
| Success | Error | |
|---|---|---|
| Trigger | afterResponse / response hook | error hook |
| Level | info (or warn if status >= 400) | error |
| Status | From response | From error's status field (default 500) |
| Error context | None | error field with message, stack, why, fix |
| Double-emit guard | Checks _evlogEmitted flag | Sets _evlogEmitted = true |
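The guard in the last row can be pictured as follows (a sketch of the idea; only the `_evlogEmitted` flag name is taken from the table above):

```ts
interface RequestCtxSketch { _evlogEmitted?: boolean }

// Emit at most once per request: the error path emits first and sets the
// flag, so the later response hook becomes a no-op.
function emitOnce(ctx: RequestCtxSketch, emit: () => void): void {
  if (ctx._evlogEmitted) return
  ctx._evlogEmitted = true
  emit()
}
```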
Simple Logging Pipeline
When using the log singleton, the pipeline is shorter:
- Call: `log.info({ action: 'deploy' })` or `log.info('tag', 'message')`
- Emit: the event is built and printed immediately
- Drain: if a global `drain` was configured via `initLogger()`, the event is sent to external services

Tagged logs (`log.info('tag', 'message')`) are console-only in pretty mode. Object-form logs (`log.info({ ... })`) always flow through the drain pipeline.
Standalone Wide Event Pipeline
When using createLogger() outside a framework:
- Create: `createLogger({ jobId: 'sync-001' })`
- Accumulate: `log.set()`, `log.info()`, `log.warn()`, `log.error()` over the operation
- Emit: manual `log.emit()` call
- Sample: head sampling applies based on the computed level; tail sampling via `initLogger({ sampling: { keep: [...] } })`
- Drain: if a global `drain` was configured, the event is sent
```ts
import { initLogger, createLogger } from 'evlog'
import { createAxiomDrain } from 'evlog/axiom'

initLogger({
  env: { service: 'worker' },
  drain: createAxiomDrain(),
  sampling: { rates: { info: 10 } },
})

const log = createLogger({ task: 'migrate' })
log.set({ records: 500, status: 'complete' })
log.emit()
```
Next Steps
- Wide Events - Design effective wide events
- Sampling - Configure head and tail sampling
- Adapters - Send events to external platforms
- Enrichers - Add derived context automatically