Receipt telemetry

Receipt telemetry is an optional, structured object inside the signed receipt payload. Use it to attach runtime signals like run IDs, model usage, tool activity, timing, and error hints without flattening everything into free-form metadata. Telemetry is part of the canonical receipt JSON. When present, VaultGraph hashes it, signs it, validates it during ingestion, stores it with the receipt, and includes it in exports.

When to use telemetry

Use telemetry for content-free execution details that help operators and auditors understand how a run behaved:
  • model provider and model name
  • token counts
  • start and completion timing
  • run IDs and parent run IDs
  • tool names
  • finish reasons
  • error names
  • ordered execution events
Keep business annotations, billing fields, or workflow labels in receipt.metadata when they are not part of the runtime trace itself.
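To make the split concrete, here is a hypothetical payload sketch. The telemetry field names follow this page; the metadata keys (billing_project, workflow) are placeholders invented for illustration, not real SDK names.

```typescript
// Hypothetical receipt payload showing the telemetry vs. metadata split.
// metadata keys here are placeholders; telemetry fields follow this page.
const receipt = {
  metadata: {
    // Business annotations that are not part of the runtime trace:
    billing_project: "acme-prod",
    workflow: "support-triage",
  },
  telemetry: {
    // Content-free execution details:
    schema_version: "v1",
    source: "manual",
    run_kind: "generate",
    external_run_id: "run_123",
    usage: { input_tokens: 120, output_tokens: 45, total_tokens: 165 },
  },
};
```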

Top-level fields

  • schema_version: Telemetry schema version. Current value: v1.
  • source: Telemetry producer: manual, ai-sdk, or langchain.
  • run_kind: High-level run type such as generate, stream, or a vendor-defined workflow label.
  • capture_phase: Integration capture stage, for example response_ready or stream_start.
  • external_run_id: Framework or vendor run identifier for the current execution.
  • parent_run_id: Parent execution identifier when the framework exposes one.
  • tags: Short labels emitted by the integration or caller.
  • flags: Boolean hints such as has_inputs, has_output, has_error, and has_action.
  • model: Optional model summary with provider and name.
  • usage: Optional token counts: input_tokens, output_tokens, total_tokens.
  • timing: Optional run timing summary with started_at, completed_at, and latency_ms.
  • error: Optional error summary. Current shape includes name.
  • events: Ordered list of structured execution events used to build the run timeline.
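The fields above can be summarized as a TypeScript shape. This is a sketch derived from the table, not the SDK's canonical type definitions; which fields are optional is an assumption.

```typescript
// Sketch of the telemetry object shape described above.
// Type names and optionality are illustrative; see the SDK for canonical types.
type TelemetrySource = "manual" | "ai-sdk" | "langchain";

interface TelemetryEvent {
  kind:
    | "run_started" | "run_finished" | "run_failed"
    | "llm_started" | "llm_finished" | "llm_failed"
    | "tool_started" | "tool_finished" | "tool_failed";
  started_at?: string;   // ISO 8601 timestamp
  completed_at?: string;
  latency_ms?: number;
  step?: number;
  tool_name?: string;
  finish_reason?: string;
  error_name?: string;
}

interface ReceiptTelemetry {
  schema_version: "v1";
  source: TelemetrySource;
  run_kind?: string;       // e.g. "generate", "stream"
  capture_phase?: string;  // e.g. "response_ready", "stream_start"
  external_run_id?: string;
  parent_run_id?: string;
  tags?: string[];
  flags?: {
    has_inputs?: boolean;
    has_output?: boolean;
    has_error?: boolean;
    has_action?: boolean;
  };
  model?: { provider: string; name: string };
  usage?: { input_tokens?: number; output_tokens?: number; total_tokens?: number };
  timing?: { started_at?: string; completed_at?: string; latency_ms?: number };
  error?: { name: string };
  events?: TelemetryEvent[];
}

// Minimal valid-looking value under this sketch:
const sample: ReceiptTelemetry = { schema_version: "v1", source: "manual" };
```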

Event timeline

Telemetry events let VaultGraph render a structured execution timeline in the portal. The current event kinds are:
  • run_started
  • run_finished
  • run_failed
  • llm_started
  • llm_finished
  • llm_failed
  • tool_started
  • tool_finished
  • tool_failed
Each event can also include timestamps, latency, step number, model summary, usage summary, tool name, finish reason, and error name.
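A consumer of the events list can derive coarse run timing by pairing run_started and run_finished timestamps. The helper below is illustrative, not a portal or SDK API:

```typescript
// Illustrative helper: compute overall run latency from an ordered event list
// by pairing the run_started and run_finished timestamps.
interface TimelineEvent {
  kind: string;
  started_at?: string;
  completed_at?: string;
}

function runLatencyMs(events: TimelineEvent[]): number | undefined {
  const start = events.find((e) => e.kind === "run_started")?.started_at;
  const end = events.find((e) => e.kind === "run_finished")?.completed_at;
  if (!start || !end) return undefined;
  return Date.parse(end) - Date.parse(start);
}

// Using the timestamps from the example at the end of this page:
const latency = runLatencyMs([
  { kind: "run_started", started_at: "2026-05-03T12:34:56.000Z" },
  { kind: "run_finished", completed_at: "2026-05-03T12:34:57.250Z" },
]);
// latency === 1250
```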

Portal run detail surface

In the portal, the agent and deployment receipt tables can open a run detail surface for a selected receipt. That view renders the signed telemetry alongside the canonical receipt proof material. When telemetry is present, operators can inspect:
  • model name and provider
  • total, input, and output token counts
  • source, run kind, capture phase, and run IDs
  • boolean flags for inputs, output, tools, and errors
  • ordered execution timeline derived from events
  • telemetry tags
  • receipt JSON, signature, and context hash
If a receipt does not include telemetry, the receipt detail still renders, but the telemetry card shows an empty state instead of the summary and timeline signals.

Safety guidelines

Telemetry should stay content-free. Good telemetry fields describe execution structure, not the underlying conversation or tool payload. Safe examples:
  • model IDs
  • token usage
  • timestamps and latency
  • finish reasons
  • tool names
  • run IDs
  • high-level workflow labels
Avoid putting these into telemetry:
  • raw prompts
  • raw model outputs
  • tool arguments
  • transcript bodies
  • API keys, secrets, or access tokens
  • customer PII that is not already intended to live in signed receipt metadata
If you need private audit context, hash it locally and store only the resulting context_hash in the receipt.
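As a sketch of that local-hashing step, assuming SHA-256 over a caller-chosen serialization (the exact canonicalization VaultGraph expects is not specified on this page):

```typescript
import { createHash } from "node:crypto";

// Hash private audit context locally so only the digest travels in the
// receipt. SHA-256 over a UTF-8 string is an assumption here; check the
// SDK for the canonicalization the context_hash field actually expects.
function contextHash(privateContext: string): string {
  return createHash("sha256").update(privateContext, "utf8").digest("hex");
}

// The raw value never enters telemetry or metadata; only the digest does.
const hash = contextHash('{"customer_email":"a@example.com"}');
```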

Integration behavior

The built-in integrations populate telemetry automatically:
  • Vercel AI SDK records source, run kind, capture phase, usage, and error hints when available.
  • LangChain.js records callback-derived run type, run IDs, tags, and execution hints when available.
For manual submissions, the SDK exports createTelemetry(...) so callers can normalize telemetry before signing. See SDK for the full helper API and API Reference for the ingestion contract.
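A hedged sketch of that manual flow. createTelemetry(...) is the helper named above, but its signature is not shown on this page, so the stand-in below assumes an input shape; consult the SDK reference for the real API.

```typescript
// Stand-in for the SDK's createTelemetry helper with an ASSUMED shape:
// it normalizes caller input by pinning schema_version to "v1".
function createTelemetry(input: {
  source: "manual" | "ai-sdk" | "langchain";
  run_kind?: string;
  external_run_id?: string;
  usage?: { input_tokens?: number; output_tokens?: number; total_tokens?: number };
}) {
  return { schema_version: "v1" as const, ...input };
}

// Normalize telemetry for a manual submission before signing the receipt:
const telemetry = createTelemetry({
  source: "manual",
  run_kind: "generate",
  external_run_id: "run_123",
  usage: { input_tokens: 120, output_tokens: 45, total_tokens: 165 },
});
```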

Example

{
  "schema_version": "v1",
  "source": "ai-sdk",
  "run_kind": "generate",
  "capture_phase": "response_ready",
  "external_run_id": "run_123",
  "flags": {
    "has_output": true,
    "has_action": true
  },
  "model": {
    "provider": "openai",
    "name": "gpt-4o"
  },
  "usage": {
    "input_tokens": 120,
    "output_tokens": 45,
    "total_tokens": 165
  },
  "events": [
    {
      "kind": "run_started",
      "started_at": "2026-05-03T12:34:56.000Z",
      "step": 1
    },
    {
      "kind": "llm_finished",
      "completed_at": "2026-05-03T12:34:56.700Z",
      "step": 2,
      "finish_reason": "stop"
    },
    {
      "kind": "tool_finished",
      "completed_at": "2026-05-03T12:34:57.050Z",
      "step": 3,
      "tool_name": "search_docs"
    },
    {
      "kind": "run_finished",
      "completed_at": "2026-05-03T12:34:57.250Z",
      "step": 4
    }
  ]
}