
Capability: Semantic Translation Engine

Make every vendor speak one schema. Know when to trust the mapping.

Concord by IaxaI takes raw events from any tool in your stack and emits OCSF-aligned output with a calibrated confidence score on every field decision. The reverse transpiler turns those OCSF events back into the vendor's native dialect so detections written once run everywhere. Patent-pending.

Want to see the engine on your stack?

The Problem

OCSF won the schema war. That doesn't mean your pipeline is done.

Every credible platform now claims OCSF support. The hard part was never publishing a schema. It's mapping a thousand vendor field shapes into it correctly, knowing which mappings you can trust, and keeping the pipeline alive when a vendor quietly renames a field on a Tuesday. Most pipelines hide that uncertainty. Analysts find out at incident time.

The Impact

Detection rules rewritten per vendor. Mappings nobody verified. Surprises in production.

The same detection has to be authored five times across SPL, KQL, LogScale, Sigma, and whatever the EDR ships this quarter. Field mappings live in spreadsheets. Confidence in the translation lives nowhere. When an examiner asks "how do you know that's the same user across these three tools?" the answer is a screenshot.

What Concord Does Differently

Concord ships every translated field with a calibrated probability. Not a guess. Mappings above the auto-approve threshold flow straight through. Mappings in the middle band queue for human review. Mappings below the floor get rejected with a reason. The reverse transpiler then takes OCSF detections back to vendor-native query languages so one rule runs across the whole stack.
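The three-tier routing described above can be sketched as a single pure function. The threshold values below are illustrative defaults, not Concord's shipped configuration; in the product they are operator-controlled.

```python
def classify(confidence: float,
             approve_at: float = 0.90,
             reject_below: float = 0.60) -> str:
    """Route a calibrated mapping confidence into one of three tiers.

    Thresholds are operator-configurable; the values here are
    illustrative, not Concord's defaults.
    """
    if confidence >= approve_at:
        return "auto-approve"      # flows straight through
    if confidence < reject_below:
        return "reject"            # rejected with a reason
    return "human-review"          # middle band queues for an analyst
```

Because the function is pure, the same confidence always routes the same way, which is what makes the decision replayable from the audit record.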

The Outcome

Cross-tool truth analysts can defend, and detections that travel.

One schema everything funnels into. One detection authored once, deployed across every vendor surface. Every mapping decision signed and written to the audit ledger so you can show your work to an examiner without rebuilding the trail from logs.

How the Engine Works

Forward translation, calibrated confidence, reverse transpilation.

Forward translation: vendor to OCSF

Every event flowing through Concord lands in OCSF-canonical form with the original payload preserved verbatim for drill-down. The hot path is deterministic: a vendor mapping pack matches the source, fields snap to their OCSF targets, the event ships. Sub-millisecond, no model inference, no surprises.
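A minimal sketch of what a deterministic mapping-pack lookup can look like. The pack contents, vendor field names, and dotted OCSF paths are illustrative stand-ins, not Concord's actual packs.

```python
# Hypothetical vendor mapping pack: plain field renames, no inference.
EXAMPLE_EDR_PACK = {
    "UserName":         "actor.user.name",
    "LocalAddressIP4":  "src_endpoint.ip",
    "RemoteAddressIP4": "dst_endpoint.ip",
}

def translate(event: dict, pack: dict) -> dict:
    """Snap vendor fields to their OCSF targets; keep the original verbatim."""
    out = {"raw_data": dict(event)}        # original payload preserved for drill-down
    for src_field, ocsf_field in pack.items():
        if src_field in event:
            out[ocsf_field] = event[src_field]
    return out
```

A dictionary lookup per field is why the hot path stays sub-millisecond: same input, same output, no model in the loop.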

Calibrated confidence on every field decision

Concord runs Platt scaling on top of the underlying similarity score so what you see is a real probability, not a raw cosine number with no scale. Three tiers govern what happens next (auto-approve, human-review, or reject) with thresholds the operator controls. OCSF-only translation is table stakes. The calibration is the part you trust at 3am.
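Platt scaling fits a sigmoid over raw scores so the output behaves like a probability. The coefficients below are illustrative seed values, not Concord's trained checkpoint:

```python
import math

def platt_calibrate(score: float, a: float = -4.0, b: float = 2.0) -> float:
    """Map a raw similarity score to a calibrated probability.

    Platt scaling fits p = 1 / (1 + exp(a*s + b)) to labeled pairs;
    `a` and `b` here are illustrative seeds, not trained values.
    """
    return 1.0 / (1.0 + math.exp(a * score + b))
```

With a negative `a`, higher similarity monotonically maps to higher probability, and the fitted curve puts the raw score on a scale an analyst can act on.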

Reverse Transpiler: OCSF back to vendor-native

The V1 addition. Concord compiles OCSF field references and detection logic back into Splunk SPL, Sentinel KQL, CrowdStrike LogScale, and Sigma YAML. Author a detection once against the canonical schema; the transpiler emits the vendor-specific version that runs unchanged on the target platform. This is the engine-side foundation for the Detection Portability Layer.
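In miniature, transpilation is compiling one canonical predicate into several dialect emitters. The syntax fragments below are illustrative of each query language's shape, not Concord's grammar or coverage:

```python
# One OCSF equality predicate, three hypothetical dialect emitters.
DIALECTS = {
    "spl":   lambda f, v: f'search {f}="{v}"',
    "kql":   lambda f, v: f'| where {f} == "{v}"',
    "sigma": lambda f, v: (
        f"detection:\n  selection:\n    {f}: {v}\n  condition: selection"
    ),
}

def transpile(field: str, value: str, dialect: str) -> str:
    """Emit a vendor-native query for one canonical field predicate."""
    return DIALECTS[dialect](field, value)
```

The detection is authored once against the canonical field name; only the emitter differs per target platform.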

Provenance threaded through every translation

Each translated field carries a tuple (source field, target field, mapping method, confidence, model version) signed with Ed25519 and written to the append-only audit ledger. When a regulator or an exec asks how a number was derived, the answer is replayable. Not reconstructed from logs after the fact.
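The ledger entry itself is simple: a canonical serialization of the provenance tuple plus a signature. Concord signs with Ed25519; HMAC-SHA256 stands in below so the sketch runs on the standard library alone, and the field values are illustrative:

```python
import hashlib
import hmac
import json

def sign_record(record: dict, key: bytes) -> dict:
    """Append one signed provenance entry.

    Concord uses Ed25519; HMAC-SHA256 is a stdlib stand-in here.
    Sorting keys makes the serialization, and thus the signature,
    deterministic and replayable.
    """
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "sig": hmac.new(key, payload, hashlib.sha256).hexdigest()}

entry = sign_record(
    {
        "source_field":  "UserName",           # illustrative values
        "target_field":  "actor.user.name",
        "method":        "vendor_pack",
        "confidence":    0.97,
        "model_version": "cal-2024.06",
    },
    key=b"ledger-demo-key",
)
```

Because serialization is deterministic, replaying the same tuple reproduces the same signature, which is what makes the answer to "how was this derived?" a query rather than a reconstruction.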

Hot Path Discipline

No machine learning in the live ingest path. On purpose.

Concord's live translation path is deterministic: cached mappings or rule-based extractors. Embedding generation, calibration training, and language-model-assisted mapping all run out of band: during onboarding for a new vendor, during drift-triggered repair cycles, or in overnight batch jobs that propose updates for human approval. Audit-grade systems can't afford non-deterministic inference between the event and the alert, so Concord's hot path never runs any.

Hot path

Deterministic vendor pack lookup. Cached schema-hash match. Sub-millisecond per event. Same input, same output, every time.

Onboarding path

Embedding-based alignment plus optional language-model inference for unknown sources. Proposed mappings queue for human approval before they touch live traffic.

Repair path

When drift detection fires, the auto-repair worker proposes a new mapping, runs it in shadow, and only promotes after operator approval. Every step lands in the ledger.
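The trigger for the repair path can be as simple as a set difference between the fields a vendor pack expects and the fields the live stream actually carries. A minimal sketch, with illustrative field names:

```python
def detect_field_drift(expected: set, observed: set) -> set:
    """Fields the vendor pack expects that the live stream no longer carries.

    A non-empty result is the kind of signal that would fire the
    repair path: propose a new mapping, run it in shadow, promote
    only after operator approval.
    """
    return expected - observed

# Example: the vendor quietly renamed UserName to AccountName.
missing = detect_field_drift(
    expected={"UserName", "ComputerName"},
    observed={"AccountName", "ComputerName"},
)
```

Real drift detection watches distributions as well as field names, but the rename case above is the one that silently breaks most pipelines.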

Status

Shipped today versus on the V1 build list.

Shipped

  • Forward translation engine with 768-dimensional embeddings, bidirectional alignment, and Platt-calibrated confidence.
  • Three-tier risk classification (auto-approve, human-review, reject) with configurable thresholds and per-decision audit records.
  • 30+ vendor mappings and 6 production-ready connectors covering EDR, SIEM, identity, network, and cloud.
  • Schema-hash cached translation for repeat sources. Deterministic fallback when no model is available.
  • Ed25519-signed translation records ready to thread into the audit ledger.
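Schema-hash caching keys on the shape of an event rather than its values, so repeat sources hit the cached mapping without re-matching. A minimal sketch, assuming a flat event and a hash over sorted field names; Concord's exact key derivation may differ:

```python
import hashlib
import json

def schema_hash(event: dict) -> str:
    """Hash the event's field names, not its values.

    Two events from the same source with the same shape produce the
    same key, so the second one hits the cached mapping pack.
    """
    shape = json.dumps(sorted(event.keys())).encode()
    return hashlib.sha256(shape).hexdigest()[:16]
```

Hashing names only is the design choice that makes the cache stable across event volume: values change per event, shapes change per vendor release.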

V1 build list

  • Reverse Transpiler: OCSF detections back to SPL, KQL, LogScale, and Sigma. The engine-side enabler for the Detection Portability Layer.
  • Wire the deterministic vendor packs into the live ingest path so the language model never touches hot traffic.
  • Train calibration on real labeled alignment pairs; replace the seed coefficients with a model checkpoint.
  • Park-and-flag onboarding for unknown vendors. Analysts approve a mapping once, the engine remembers it forever.

Benchmarks

What the translation layer measures today.

Numbers below come from internal benchmarks. Security-domain validation is in progress and will be published as the engine picks up labeled pairs from the first MSSP deployments.

83.4%

Avg semantic similarity, unseen vendor formats

30+

Vendor mappings, 6 production-ready connectors

<1ms

Cached deterministic translation, p99 target

Why It Matters

Translation is table stakes. Calibrated translation is the difference.

Every credible SOC platform now ships an OCSF pipe. That part of the moat closed. What separates Concord is the surrounding apparatus: calibrated probabilities so analysts know when a mapping is honest, deterministic vendor packs so the live path never depends on a model, drift detection so silent vendor schema changes never quietly break the pipeline, and a signed audit trail so every translation is replayable.

The reverse transpiler closes the loop. Detections written once against OCSF run on every vendor surface a customer cares about. For a multi-tenant security practice this is the difference between maintaining the same rule in five dialects and shipping a single canonical detection across the whole client base.

For a regulated end-customer the difference shows up the first time an examiner asks a question. The lineage from raw vendor event to OCSF field to dashboard number is one query against the ledger. The mapping that produced the number was either auto-approved above the calibration floor, reviewed by a named analyst, or rejected. There's no missing step. No screenshot reconstruction. No "trust us."

Where It Sits in the Stack

One engine, four downstream consumers.

Translation is the foundation everything else in Concord depends on. Once events are in canonical form with a confidence score, the rest of the platform can do its job.

Entity Resolution

Reads canonical actor, source, and destination fields to build entity match candidates. Translation confidence flows into the resolver as a prior on the conformal score, so a shaky mapping never anchors a high-confidence identity claim.

Drift Detection

Translation outputs are one of three streams the drift detector watches. When a vendor quietly renames a field, drift fires, the auto-repair loop proposes an updated mapping, and the new version pins after operator approval.

Audit Ledger

Every translation decision (rule pack version, calibration model version, signature) appends to the append-only ledger. The same ledger every other Concord engine writes to. One spine, one source of truth.

Compliance Auto-Packets

FFIEC, SOC 2, HIPAA, and PCI evidence packets draw from the ledger that the translation engine writes to. Control evidence inherits the calibration and signature trail at no extra cost to the analyst.

Deployment

Runs where regulated customers need it to.

The translation layer is air-gap deployable. The embedding model ships in the container. Language-model-assisted mapping runs locally by default. No external API calls in the translation path. Cloud inference is available, off by default. Customer telemetry never has to leave the customer network for the engine to do its job.

Stop reconciling. Start trusting one timeline.

30-minute walkthrough. Your tools. Your tenants. Your audit cycle. We will show you exactly where Concord earns its keep.