xAPI Profile v1.0
The complete specification of every xAPI statement emitted by Svelto's audit evidence engine. Defines all verbs, context fields, result fields, integrity metadata, and the export document structure delivered to customer SIEMs. Referenced in Methodology §3.1 and §4.2.
Purpose & Design Philosophy
Svelto uses the xAPI (Experience API / Tin Can) statement model as its canonical format for compliance learning evidence. Each meaningful event in the system — a scenario delivered, an answer submitted, a question asked to the chatbot, a scenario approved or rejected by an admin — is recorded as a tamper-evident, RFC 3161-timestamped xAPI statement.
Design constraints that shape this profile:
- No LRS. Svelto does not export to a Learning Record Store. Statements are stored in Svelto's own database and exported as a signed JSON document delivered directly to the customer's SIEM.
- Pseudonymization by default. Exported statements carry pseudonymizedActorId, not the learner's real identity. The mapping is held in an isolated audit registry accessible only during an authorized export.
- Tamper-evident timestamps. Every statement is timestamped at write time with an RFC 3161 token from a trusted timestamp authority (TSA). This makes retroactive modification of the evidence record cryptographically detectable.
- Control traceability is mandatory. Statements for learning events (experienced, answered) must carry primaryControlId and focusTopicIds. Statements without these fields are rejected at write time.
- Self-verifying exports. Each export document carries an exportIntegrityHash (SHA-256 over all statements) and a wrapping RFC 3161 token covering the entire export. Any deletion, insertion, or modification of a statement after export breaks the hash and invalidates the timestamp proof.
Statement Anatomy
Every Svelto xAPI statement shares the same outer envelope. taskId is omitted on asked statements in the exported document. tsaProviderId and tsaProviderUrl identify the exact TSA that issued tstToken, and isManuallyDispatched distinguishes super-admin manual dispatches from scheduler-driven tasks.
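A minimal sketch of this envelope, using the field names defined in this profile; the values, key ordering, and TSA URL below are illustrative assumptions, not the platform's actual output:

```python
# Illustrative envelope sketch; values are placeholders and the exact key set
# may differ from the platform's internal representation.
statement = {
    "id": "statement-uuid",
    "actorId": "anon-7d2e",            # appears as pseudonymizedActorId in exports
    "verb": "answered",
    "roomId": "room-uuid",
    "taskId": "task-uuid",             # omitted on `asked` statements in exports
    "timestamp": "2026-03-12T09:14:47+00:00",
    "statementData": {                 # full xAPI body: verb, object, result, context
        "verb": "answered",
        "object": "microlearning:micro_simulation:task-uuid",
        "result": {"success": True, "response": "B"},
        "context": {"primaryControlId": "A.6.3", "focusTopicIds": ["A.5.10"]},
    },
    "integrityHash": "hex-sha256-of-canonical-payload",
    "tstToken": "base64-rfc3161-token",
    "tsaProviderId": "digicert",       # identifies the issuing TSA
    "tsaProviderUrl": "https://tsa.example",  # hypothetical URL
    "isManuallyDispatched": False,     # True for super-admin manual dispatches
}
```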
Object URI Patterns
| Verb(s) | Object pattern | Notes |
|---|---|---|
| experienced, answered | microlearning:micro_simulation:{taskId} | The micro-simulation task UUID |
| approved, rejected | microlearning:task:{taskId} | Admin action on a generated task |
| asked | chatbot-question | Literal string; no UUID — chatbot interactions are not task-bound |
Stamp Payload
The integrityHash and RFC 3161 token are computed over a deterministic canonical payload that includes both the statement envelope and the full statementData (verb, object, result, context). This ensures an auditor can independently verify not just when an event occurred, but what the employee actually saw and answered. The SHA-256 of this payload (using stable JSON serialization) is submitted to the TSA; the TSA returns a token proving this exact payload — including the employee response content — existed at that point in time.
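A sketch of the canonicalization and hashing step, assuming key-sorted compact JSON as the stable serialization (the exact separator and encoding conventions are assumptions):

```python
import hashlib
import json

def compute_integrity_hash(payload: dict) -> str:
    """SHA-256 over a key-sorted, compact JSON serialization (illustrative sketch)."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order in the source dict must not affect the hash:
a = compute_integrity_hash({"verb": "answered", "id": "s1"})
b = compute_integrity_hash({"id": "s1", "verb": "answered"})
assert a == b
```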
Verb Reference
Five verbs are actively emitted across learner, chatbot, and admin interactions.
experienced
Emitted once per employee per broadcast when a micro-simulation is delivered to the chat platform. Records what the employee saw before they responded, including the compliance context used to generate the scenario. Has no result block.
answered
Emitted when an employee selects an answer option. Always paired with a prior experienced statement for the same taskId. The reactionTimeInSeconds is computed as the difference between the two statement timestamps. When the session is a remediation simulation, sessionType is "remediation" and remediationTrackingId links this statement to the active remediation cycle.
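The reaction-time computation described above can be sketched as follows, assuming ISO 8601 statement timestamps (a sketch, not the platform's implementation):

```python
from datetime import datetime

def reaction_time_seconds(experienced_ts: str, answered_ts: str) -> float:
    """Seconds between the paired experienced and answered statement timestamps."""
    t0 = datetime.fromisoformat(experienced_ts)
    t1 = datetime.fromisoformat(answered_ts)
    return (t1 - t0).total_seconds()

reaction_time_seconds("2026-03-12T09:14:05+00:00", "2026-03-12T09:14:47+00:00")  # 42.0
```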
asked
Emitted each time an employee sends a message to the compliance chatbot. The question and response are truncated to 500 characters before storage. These statements are not task-bound, so taskId, primaryControlId, and focusTopicIds are omitted.
approved
Emitted when an admin explicitly approves a generated scenario task for delivery. This statement is the Admin Attestation Gate event described in Methodology §4.1 — no scenario reaches learners without a preceding approved statement. The full scenario content is embedded in the statement so that the attestation record is self-contained: an auditor can verify exactly what was approved without platform access.
rejected
Emitted when an admin rejects a generated scenario task. The result.response field carries a structured rejection reason (see table below). An optional free-text feedbackGiven.content note may be included. A task with a rejected statement is never delivered to learners.
Rejection Reason Values (result.response)
| Value | Meaning |
|---|---|
| content_inaccurate | Scenario contains factually wrong or misleading information |
| content_off_topic | Scenario subject matter does not relate to the room's compliance focus |
| content_too_easy | Difficulty level is too low for the target audience |
| content_too_difficult | Difficulty level is too high for the target audience |
| control_mapping_wrong | The AI assigned the wrong compliance control to the scenario |
| other | Any other reason — should always be accompanied by a feedbackGiven note |
Context Field Reference
The context object fields and their presence per verb:
| Field | Type | Verbs | Description |
|---|---|---|---|
| primaryControlId | string | exp ans appr rej | required. The primary compliance control this scenario trains (e.g. A.6.3, CC6.1). |
| focusTopicIds | string[] | exp ans appr rej | required. Secondary controls reinforced by the scenario. |
| mappingRationale | string | exp ans appr rej | required. Human-readable explanation of why this scenario maps to the assigned controls. AI-generated at scenario creation time. Embedded in admin statements so the attestation record is self-contained. |
| difficultyLevel | string | exp ans appr rej | required. Scenario difficulty: basic, intermediate, or advanced. Embedded in admin statements so auditors can verify the difficulty of what was attested. |
| riskLevel | string | exp ans appr rej | required. Compliance risk level of the scenario context: low, medium, high, or critical. Embedded in admin statements. |
| questionText | string | exp ans ask appr rej | required. The question posed to the learner. Truncated to 500 chars for chatbot statements. Embedded in admin statements to make the attested scenario fully legible from the audit record alone. |
| scenarioContext | string | ans appr rej | required. The full narrative/scenario text shown to the learner. Embedded in admin statements — the complete scenario text is preserved in the attestation record for auditor review. |
| correctAnswerText | string | ans | optional. Full text of the correct answer option, shown for audit readability. |
| selectedOptionText | string | ans | optional. Full text of the option actually selected by the learner. |
| optionCount | number | exp | optional. Number of answer options presented. |
| allOptionsPresented | string[] | ans | optional. IDs of all options shown (e.g. ["A","B","C","D"]). |
| reactionTimeInSeconds | number | ans | optional. Seconds elapsed between the experienced and answered timestamps for the same task. |
| sessionType | string | exp ans ask appr rej | required. "regular" for standard assessments; "remediation" for remediation simulations. Allows auditors to distinguish remediation evidence from the baseline record. |
| remediationTrackingId | string | exp ans | optional. UUID of the active remediation cycle. Present only when sessionType = "remediation". Links this statement to the remediation tracking record, providing a direct audit trail from the triggering failure event to the corrective simulation delivered. |
| policyReference | string | ans | optional. The specific policy clause or section that grounded the scenario generation when organization training materials were used (e.g. "Section 3.1 — Acceptable Use of Information Systems"). |
| microlearning | object | exp ans appr rej | required. Nested object — see table below. Embedded in admin statements to preserve the full scenario metadata in the attestation record. |
| chatbot | object | ask | optional. Nested object — see table below. |
context.microlearning fields
| Field | Type | Description |
|---|---|---|
| contentType | string | required. problem_solving or scenario_based |
| conceptsTested | string[] | required. Human-readable concepts the scenario exercises |
| deliverySequence | number | required. Sequence marker within the room's microlearning lifecycle. Admin curation statements use 0; learner delivery and answer statements use the positive session ordinal (1, 2, ...). |
| expectedClassification | string | required. "suspicious" or "legitimate" — the intended correct answer classification |
| sourceCitations | Array<{citation: string}> | Exact citations extracted from the customer's policy documents that grounded this scenario. Present on learner statements and preserved on admin attestation statements when available. |
context.chatbot fields
| Field | Type | Description |
|---|---|---|
| questionCategory | string | optional. AI-extracted category of the question (e.g. policy_clarification) |
| questionTopic | string | optional. AI-extracted topic |
| responseProvided | boolean | required. Whether the chatbot successfully returned a response |
Result Field Reference
| Field | Type | Verbs | Description |
|---|---|---|---|
| success | boolean | ans ask appr rej | Whether the event represents a positive outcome. true for correct answers and approvals; false for incorrect answers, chatbot failures, and rejections. |
| score.scaled | number | ans | 0.0 (incorrect) or 1.0 (correct) |
| score.raw | number | ans | 0 or 100 |
| score.min / score.max | number | ans | Always 0 / 100 |
| response | string | ans ask rej | For answered: the selected option ID (e.g. "B"). For asked: chatbot response (500 char max). For rejected: the RejectionReason enum value. |
| feedbackGiven.type | string | ans rej | "immediate" for answered; "rejection_note" for rejected. |
| feedbackGiven.content | string | ans rej | For answered: the feedback text shown to the learner. For rejected: admin's free-text note (optional). |
| detailedScoring | object | ans | Breakdown of evaluation criteria. Contains: criteriaEvaluated[], misconceptionsIdentified[], strengthsObserved[], gapsIdentified[]. |
Integrity & Timestamping
Each statement is individually stamped at write time. The process is:
- A canonical stamp payload is assembled from 7 fields: id, actorId, verb, roomId, taskId, timestamp, and the full statementData object (verb, object, result, context).
- The payload is serialized using stable JSON (key-sorted) to guarantee deterministic output.
- A SHA-256 hash is computed and stored as integrityHash.
- The hash is submitted to a trusted RFC 3161 TSA (DigiCert, Sectigo, or Freetsa.org per Methodology §6).
- The TSA returns a signed timestamp token, stored as tstToken.
- Both integrityHash and tstToken are stored alongside the statement and included verbatim in every export.
An auditor verifies any statement offline by reconstructing the canonical payload (the envelope fields plus statementData), re-computing SHA-256 and confirming it matches integrityHash, then validating the RFC 3161 token against that hash using any standard TSA verification library. A mismatch on either check indicates the statement — including its response content — was modified after issuance.
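The hash-recomputation half of this check can be sketched as follows, assuming key-sorted compact JSON as the stable serialization; validating tstToken itself requires an RFC 3161 library and is omitted here:

```python
import hashlib
import json

def verify_statement_hash(statement: dict) -> bool:
    """Recompute the canonical payload hash and compare it to the stored integrityHash."""
    # The 7 canonical stamp-payload fields named in this profile.
    payload = {k: statement[k] for k in
               ("id", "actorId", "verb", "roomId", "taskId", "timestamp", "statementData")}
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest() == statement["integrityHash"]
```

Because statementData is inside the payload, editing the stored response content after issuance makes this check fail, even before the RFC 3161 token is examined.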
Export Document Structure
When an auditor requests an evidence export via the admin API, the system produces
a single signed JSON document. The document ID follows the pattern
XAPI-EXPORT-{date}-{hash-prefix}.
The export itself is also RFC 3161-timestamped at generation time — the
exportIntegrityHash (SHA-256 over all statements using stable JSON)
is submitted to the TSA and the resulting rfc3161Token covers the
entire document. Every export generation is recorded in the audit report registry
with the generating user's ID, IP address, and timestamp.
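The export-level hash described above might be computed like this; the exact statement ordering and serialization conventions are assumptions:

```python
import hashlib
import json

def compute_export_integrity_hash(statements: list[dict]) -> str:
    """SHA-256 over the full statement list, serialized as stable (key-sorted) JSON."""
    canonical = json.dumps(statements, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Deleting, inserting, or modifying any statement changes this hash, which in turn invalidates the wrapping RFC 3161 token issued against it.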
Versioning Record
| Field | Value |
|---|---|
| Profile Version | 1.0 |
| Publication Date | 2026-03-12 |
| Canonical URL | docs.svelto.io/methodology/xapi-profile/v1 |
| Active Verbs | experienced, answered, asked, approved, rejected |
| LRS Export | Not supported — SIEM delivery only |
| Actor Representation | Pseudonymized ID in exports; real ID in internal DB only |
| Integrity Method | SHA-256 + RFC 3161 (per-statement and per-export) |
| Governing Methodology | Methodology Framework v1.0 |
| Status | Public — Approved for External Audit Review |