# Replay API

Session replay endpoint. Reconstructs a session step by step with cumulative context at each point.

```
GET /api/sessions/:id/replay
```

Returns a ReplayState for the given session, containing ordered steps with paired events and reconstructed context.

## Path Parameters

| Parameter | Type   | Description |
| --------- | ------ | ----------- |
| `id`      | string | Session ID  |

## Query Parameters

| Parameter        | Type    | Default | Range  | Description |
| ---------------- | ------- | ------- | ------ | ----------- |
| `offset`         | number  | 0       | ≥ 0    | Start at this step index |
| `limit`          | number  | 1000    | 1–5000 | Maximum steps to return |
| `eventTypes`     | string  | —       | —      | Comma-separated event type filter (e.g., `llm_call,tool_call`) |
| `includeContext` | boolean | true    | —      | Include reconstructed context per step |

## Valid Event Types

`session_started`, `session_ended`, `llm_call`, `llm_response`, `tool_call`, `tool_response`, `error`, `decision`, `observation`, `custom`
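As a sketch, a client can assemble the request URL from the parameters documented above, validating them before the round trip. The helper name `buildReplayUrl` and the `ReplayQuery` shape are ours, not part of the API; the validation ranges come from the tables above.

```typescript
// Hypothetical client helper (not part of the API): builds a replay request
// URL from the documented query parameters, validating them client-side.

const VALID_EVENT_TYPES = new Set([
  "session_started", "session_ended", "llm_call", "llm_response",
  "tool_call", "tool_response", "error", "decision", "observation", "custom",
]);

interface ReplayQuery {
  offset?: number;          // >= 0, default 0
  limit?: number;           // 1-5000, default 1000
  eventTypes?: string[];    // subset of VALID_EVENT_TYPES
  includeContext?: boolean; // default true
}

function buildReplayUrl(sessionId: string, q: ReplayQuery = {}): string {
  const params = new URLSearchParams();
  if (q.offset !== undefined) {
    if (q.offset < 0) throw new RangeError("offset must be >= 0");
    params.set("offset", String(q.offset));
  }
  if (q.limit !== undefined) {
    if (q.limit < 1 || q.limit > 5000) throw new RangeError("limit must be 1-5000");
    params.set("limit", String(q.limit));
  }
  if (q.eventTypes?.length) {
    for (const t of q.eventTypes) {
      if (!VALID_EVENT_TYPES.has(t)) throw new Error(`unknown event type: ${t}`);
    }
    params.set("eventTypes", q.eventTypes.join(","));
  }
  if (q.includeContext !== undefined) {
    params.set("includeContext", String(q.includeContext));
  }
  const qs = params.toString();
  return `/api/sessions/${encodeURIComponent(sessionId)}/replay${qs ? `?${qs}` : ""}`;
}

// buildReplayUrl("ses_abc123", { limit: 50 })
// → "/api/sessions/ses_abc123/replay?limit=50"
```

Note that `URLSearchParams` percent-encodes the comma in `eventTypes` (as `%2C`); servers decode it back before splitting the filter.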

## Response (200)

```json
{
  "session": {
    "id": "ses_abc123",
    "agentId": "my-agent",
    "status": "completed",
    "startedAt": "2026-02-08T10:30:00.000Z",
    "endedAt": "2026-02-08T10:32:15.000Z"
  },
  "chainValid": true,
  "totalSteps": 42,
  "steps": [
    {
      "index": 0,
      "event": {
        "id": "evt_001",
        "eventType": "session_started",
        "timestamp": "2026-02-08T10:30:00.000Z",
        "payload": {}
      },
      "pairedEvent": null,
      "pairDurationMs": null,
      "context": {
        "eventIndex": 0,
        "totalEvents": 42,
        "cumulativeCostUsd": 0.0,
        "elapsedMs": 0,
        "eventCounts": { "session_started": 1 },
        "llmHistory": [],
        "toolResults": [],
        "pendingApprovals": [],
        "errorCount": 0,
        "warnings": []
      }
    },
    {
      "index": 1,
      "event": {
        "id": "evt_002",
        "eventType": "llm_call",
        "timestamp": "2026-02-08T10:30:01.000Z",
        "payload": { "provider": "openai", "model": "gpt-4o" }
      },
      "pairedEvent": {
        "id": "evt_003",
        "eventType": "llm_response",
        "timestamp": "2026-02-08T10:30:02.200Z"
      },
      "pairDurationMs": 1200,
      "context": {
        "eventIndex": 1,
        "totalEvents": 42,
        "cumulativeCostUsd": 0.012,
        "elapsedMs": 1000,
        "eventCounts": { "session_started": 1, "llm_call": 1 },
        "llmHistory": [
          {
            "callId": "call_001",
            "provider": "openai",
            "model": "gpt-4o",
            "messages": [],
            "response": "...",
            "costUsd": 0.012,
            "latencyMs": 1200
          }
        ],
        "toolResults": [],
        "pendingApprovals": [],
        "errorCount": 0,
        "warnings": []
      }
    }
  ],
  "pagination": {
    "offset": 0,
    "limit": 1000,
    "hasMore": false
  },
  "summary": {
    "totalCost": 0.0342,
    "totalDurationMs": 135000,
    "totalLlmCalls": 4,
    "totalToolCalls": 6,
    "totalErrors": 0,
    "models": ["gpt-4o"],
    "tools": ["read_file", "search_code", "write_file"]
  }
}
```

## Response Fields

| Field | Type | Description |
| ----- | ---- | ----------- |
| `session` | object | Session metadata |
| `chainValid` | boolean | Whether the event hash chain is valid |
| `totalSteps` | number | Total steps in the full replay |
| `steps` | array | Ordered replay steps (may be paginated) |
| `steps[].index` | number | 0-based step index |
| `steps[].event` | object | The event at this step |
| `steps[].pairedEvent` | object \| null | Paired event (e.g., `tool_call` → `tool_response`) |
| `steps[].pairDurationMs` | number \| null | Duration between paired events (ms) |
| `steps[].context` | object | Reconstructed context at this step |
| `steps[].context.cumulativeCostUsd` | number | Total cost up to this step |
| `steps[].context.elapsedMs` | number | Time since session start |
| `steps[].context.eventCounts` | object | Event type counts up to this step |
| `steps[].context.llmHistory` | array | LLM conversation history (capped at 50) |
| `steps[].context.toolResults` | array | Tool call results available at this step |
| `steps[].context.pendingApprovals` | array | Approval statuses |
| `steps[].context.errorCount` | number | Cumulative error count |
| `steps[].context.warnings` | array | Warnings triggered at this step |
| `pagination` | object | Pagination info |
| `pagination.hasMore` | boolean | Whether more steps are available |
| `summary` | object | Session-level summary |
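Because each step carries its paired event and `pairDurationMs`, clients can recompute latency aggregates locally without trusting the server-side `summary`. A minimal sketch, with field names taken from the response shape above (the `pairedDurations` function itself is ours, not part of the API):

```typescript
// Hypothetical client-side aggregation over replay steps. The interfaces
// mirror (a subset of) the documented response fields.

interface ReplayEvent {
  id: string;
  eventType: string;
  timestamp: string;
}

interface ReplayStep {
  index: number;
  event: ReplayEvent;
  pairedEvent: ReplayEvent | null;
  pairDurationMs: number | null;
}

// Count the steps that have a paired event and sum their durations.
function pairedDurations(steps: ReplayStep[]): { count: number; totalMs: number } {
  let count = 0;
  let totalMs = 0;
  for (const s of steps) {
    if (s.pairedEvent !== null && s.pairDurationMs !== null) {
      count += 1;
      totalMs += s.pairDurationMs;
    }
  }
  return { count, totalMs };
}
```

Run against the two sample steps above, this yields one paired step (`llm_call` → `llm_response`) totaling 1200 ms, matching the timestamp delta between `evt_002` and `evt_003`.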

## Caching

Replay states are cached server-side in an LRU cache:

- Max entries: 100
- TTL: 10 minutes
- LLM history cap: 50 entries per step (memory guard)
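Since responses are paginated, a client that wants the full replay walks pages with `offset`/`limit` until `pagination.hasMore` is false. A sketch under the assumption of a `fetchPage` callback standing in for the HTTP `GET` (the callback and `fetchAllSteps` are ours, not part of the API):

```typescript
// Hypothetical pagination walk. `fetchPage(offset, limit)` stands in for an
// HTTP GET of /api/sessions/:id/replay?offset=...&limit=...

interface Page<T> {
  steps: T[];
  pagination: { offset: number; limit: number; hasMore: boolean };
}

async function fetchAllSteps<T>(
  fetchPage: (offset: number, limit: number) => Promise<Page<T>>,
  limit = 1000, // documented default; server maximum per request is 5000
): Promise<T[]> {
  const all: T[] = [];
  let offset = 0;
  for (;;) {
    const page = await fetchPage(offset, limit);
    all.push(...page.steps);
    // Stop when the server reports no more steps (or returns an empty page,
    // which can happen if the session shrank out of the cache window).
    if (!page.pagination.hasMore || page.steps.length === 0) break;
    offset += page.steps.length; // advance by steps actually returned
  }
  return all;
}
```

Advancing `offset` by the number of steps actually returned (rather than by `limit`) keeps the walk correct even if a page comes back short.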

## Errors

| Status | Condition |
| ------ | --------- |
| 400    | Invalid query parameter |
| 404    | Session not found |

Released under the MIT License.