# System Logs & Request Correlation
The `system_log` table is Floh's structured operational log: every HTTP
access entry, error, workflow engine event, connector call, and
authorization denial lands here so operators can search, filter, and trace
activity from the System Logs screen and the `/api/logs` API.

This page describes the columns that make a log entry useful for tracing and the conventions that HTTP routes, services, and background jobs follow when emitting entries.
## Correlation columns
Each `system_log` row carries enough context to follow a single request
end-to-end across the API, workflow engine, and connectors:
| Column | Source | Use |
|---|---|---|
| `request_id` | Fastify `request.id` (UUID, or echoed `X-Request-Id` header) | Pivot every entry produced while handling one HTTP call |
| `trace_id` | W3C `traceparent` header (32 hex chars) when the caller sends one | Join with upstream/downstream telemetry in a tracing backend |
| `user_id` | Authenticated principal id (set after auth resolves) | Filter by who triggered the activity |
| `route` | Fastify route template, e.g. `/api/workflows/:id` | Group access entries and errors by endpoint |
| `method` | HTTP method | Distinguish reads from writes |
| `status_code` | HTTP response status (access and error entries) | Filter for 4xx / 5xx storms |
| `duration_ms` | Wall-clock duration of the request | Find slow endpoints |
| `error_code` | Stable code from the thrown error (e.g. `FST_ERR_VALIDATION`) | Aggregate by failure mode |
| `step_id` | Workflow step instance id (engine and connector entries) | Trace one step's emissions across log lines |
| `workflow_id` | Workflow definition id | Filter by workflow |
| `run_id` | Workflow run id | Filter by run |
| `connector_id` | Connector id | Filter by connector |
`request_id` is also written onto each `audit_log` row produced inside an
HTTP request, so a config change can be joined back to its access entry on
`request_id` without parsing JSON metadata.
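The join described above can be sketched in SQL. This is an illustrative query only: the `request_id` columns are from this page, but the other selected columns and the `$1` placeholder are assumptions about the schema, not its canonical definition.

```sql
-- Sketch: find the HTTP access entry behind a given config change.
SELECT a.timestamp, a.request_id, s.route, s.method, s.status_code
FROM audit_log AS a
JOIN system_log AS s
  ON s.request_id = a.request_id
 AND s.source = 'access-log'   -- restrict the join to access entries
WHERE a.request_id = $1;
```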
## How correlation propagates

Floh uses Node's `AsyncLocalStorage` to make request-scoped context
available to any code path running inside the request without threading it
through every function signature. The flow is:
- The Fastify `onRequest` hook generates a `request.id` (UUID, or echoes the
  inbound `X-Request-Id` header), parses any `traceparent` header, and starts
  a `RequestContext` covering the rest of the request:

  ```ts
  runWithRequestContext(
    {
      requestId: request.id,
      traceId: extractTraceId(request.headers.traceparent),
      route: request.routeOptions?.url,
      method: request.method,
    },
    () => done(),
  );
  ```

- The `preHandler` hook patches `userId` onto the active context once
  `authenticate` resolves the principal.
- `LogService.log` and `AuditService.log` read the active context with
  `getRequestContext()` and auto-fill `request_id`, `trace_id`, `user_id`,
  `route`, and `method` (and the audit `request_id`) when the caller does not
  pass them explicitly.
- Background work that genuinely outlives the request (BullMQ jobs, the
  stuck-run recovery cron, the workflow engine running off a worker) starts
  outside this scope, so `getRequestContext()` returns `undefined` and the
  columns are stored as `NULL`. This is intentional: those entries should not
  be misattributed to whatever HTTP request happened to be in flight when the
  job kicked off.
`updateRequestContext()` mutates the active context in place rather than
replacing it, so the post-auth `userId` patch flows to every downstream
`LogService.log` call without copy-pasting the value at each emission site.
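The helpers above can be approximated with Node's `AsyncLocalStorage` directly. This is a minimal, self-contained sketch of the behavior this page describes; the names match the document, but the real implementations in Floh's codebase may differ in shape.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Hypothetical shape of the request-scoped context described above.
interface RequestContext {
  requestId: string;
  traceId?: string;
  userId?: string;
  route?: string;
  method?: string;
}

const storage = new AsyncLocalStorage<RequestContext>();

// Run `fn` with `ctx` as the active context (one scope per HTTP request).
function runWithRequestContext<T>(ctx: RequestContext, fn: () => T): T {
  return storage.run(ctx, fn);
}

// Returns undefined outside a request scope (e.g. in a BullMQ job).
function getRequestContext(): RequestContext | undefined {
  return storage.getStore();
}

// Mutates the active context in place, so later readers see the patch.
function updateRequestContext(patch: Partial<RequestContext>): void {
  const ctx = storage.getStore();
  if (ctx) Object.assign(ctx, patch);
}

// Simulated flow: onRequest starts the scope, preHandler patches userId,
// and a later log call reads the merged context.
const seen = runWithRequestContext({ requestId: "req-1", method: "GET" }, () => {
  updateRequestContext({ userId: "user-42" }); // post-auth patch
  return getRequestContext();
});
console.log(seen?.requestId, seen?.userId);
console.log(getRequestContext()); // undefined: outside any request scope
```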
## Source taxonomy

The `source` column on `system_log` carries a short tag identifying which
subsystem emitted the entry. The shared `LogSource` type
(`packages/shared/src/log.types.ts`) lists the canonical values; new code
should reuse them rather than inventing free-form strings:
| Source | Emitter |
|---|---|
| `access-log` | Fastify access entries from the `onResponse` hook |
| `fastify-error` | Fastify error entries from the `onError` hook |
| `authn` | Authentication outcomes |
| `authz` | Authorization decisions (e.g. `requirePermission`) |
| `auth-oidc` | OIDC / Authifi callback flow |
| `engine` | Workflow engine |
| `run-creator` | Run setup / variable resolution |
| `scheduler` | BullMQ scheduler |
| `worker` | Worker mode entrypoints |
| `client` | Browser-emitted entries (via `/api/logs/client`) |
| `pino` | Bridged Pino entries (when enabled) |
Connector emitters use `connector:<connector-name>` so each connector is
its own filterable bucket without cluttering the canonical list.
## API filters

`GET /api/logs` accepts the following filter parameters in addition to the
existing `level` / `category` / `search` / `from` / `to` / `workflowId` /
`runId` / `connectorId` / `source` set:

- `requestId`, `traceId`, `userId`, `stepId`: exact match
- `route`: substring match (`ILIKE %route%`)
- `method`: exact match (e.g. `POST`)
- `statusCode`: exact match, integer (`?statusCode=403`)
- `errorCode`: exact match (e.g. `?errorCode=FST_ERR_VALIDATION`)
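The typed filters compose as ordinary query parameters. A small sketch (the filter values are illustrative placeholders, not real data):

```typescript
// Build a typed-filter query string for GET /api/logs.
const params = new URLSearchParams({
  statusCode: "403",                // exact integer match
  errorCode: "FST_ERR_VALIDATION",  // exact match on the stable error code
  route: "workflows",               // matched server-side as ILIKE %workflows%
  method: "POST",                   // exact match
});
const url = `/api/logs?${params.toString()}`;
console.log(url);
```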
The endpoint also accepts a `where` query parameter that the System Logs UI
uses to send compiled advanced-filter expressions (parsed server-side by
`parseAdvancedFilter`). It is internal only: its grammar is not part of
the public API contract and may change without notice. Automation and
external clients should always use the typed filters above instead.
All response fields are camelCase (`requestId`, `statusCode`, `durationMs`,
…). Sorting accepts `timestamp`, `level`, `category`, `source`, `message`,
`route`, `status_code`, `duration_ms`.
## Indexes

Migration `018_log_traceability` adds B-tree indexes on each new
correlation column, plus a GIN index on `system_log.metadata` so JSONB
filtering inside `metadata` stays cheap. `audit_log.request_id` is also
indexed for the `request_id` join with `system_log`.
## Adding a new emitter

When you write code that emits log entries:

- Inside an HTTP request: call `app.logService.log(level, category, message, opts)`
  and let the request context fill `requestId`, `traceId`, `userId`, `route`,
  and `method`. Pass `statusCode` / `durationMs` / `errorCode` explicitly when
  they apply.
- Inside the workflow engine: pass `workflowId`, `runId`, and the active
  `stepId` so step-scoped queries surface only that step's lines.
- Inside a connector: use `ConnectorLogger`. Call `withStep(stepId)` to get a
  child logger bound to the step so each connector emission carries `step_id`.
- Inside a background job: emit with an explicit `source` (e.g. `scheduler`,
  `worker`) and the relevant business ids; the request-scoped fields will
  correctly remain `NULL`.
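The connector child-logger pattern above can be illustrated with a toy class. The class shape, the returned entry object, and the `"http"` connector id are assumptions for illustration only; the real `ConnectorLogger` writes to the database and its API may differ.

```typescript
// Toy sketch of a step-bound child logger, per the pattern above.
class ConnectorLogger {
  constructor(
    private readonly connectorId: string,
    private readonly stepId?: string,
  ) {}

  // Returns a child logger bound to one step instance.
  withStep(stepId: string): ConnectorLogger {
    return new ConnectorLogger(this.connectorId, stepId);
  }

  // Shapes an entry the way the system_log columns suggest; the real
  // logger persists entries rather than returning plain objects.
  log(level: string, message: string) {
    return {
      source: `connector:${this.connectorId}`, // per-connector bucket
      step_id: this.stepId ?? null,            // NULL outside a step
      level,
      message,
    };
  }
}

const base = new ConnectorLogger("http"); // "http" is a hypothetical connector id
const entry = base.withStep("step-123").log("info", "request sent");
console.log(entry.source, entry.step_id);
```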