Core Engine: Drift Detection + Auto-Repair

Every competitor detects drift. Concord closes the loop.

Concord by IaxaI watches every vendor stream for schema and distribution drift, re-derives the mapping the moment it breaks, calibrates the new mapping, runs it in shadow, and only promotes after the numbers say it's safe. Every step ledgered. Every change reversible.

Tired of vendor releases breaking your pipeline?

The Problem

Vendors silently change schemas. Static mappings rot.

Splunk renames a field in a point release. CrowdStrike adds a key. An MSP swaps Defender for SentinelOne on a Tuesday and forgets to file a ticket. The pipeline keeps running. The dashboards keep updating. Nobody notices until an audit asks why you stopped seeing logon failures from the file servers in March.

The Impact

Detection coverage rots silently. The MSSP eats the cleanup.

Mid-market MSPs run dozens of customer stacks. Every vendor release is a potential silent break. When the alert finally fires it's a Slack ping and a human ticket, and the ticket sits in a queue while the customer's coverage gap widens. Drift becomes a backlog, the backlog becomes silent failure, and silent failure becomes the audit finding that ends a CISO's career.

How Concord Helps

Concord runs three drift detectors continuously on every live stream: input distribution, output distribution, and schema shape. When any of them fires, an auto-repair worker pulls a sample of the new shape, re-derives the mapping out-of-band, calibrates it against held-out events, and either ships it automatically or escalates with the diff pre-rendered. Every state transition is hash-chained into the audit ledger.

The Outcome

The pipeline catches itself when it breaks.

High-confidence repairs ship without paging anyone. Low-confidence repairs land in an analyst queue with a two-click approve or reject and the rendered diff. The mapping table is versioned and reversible. And every drift event, proposal, shadow run, and promotion is signed evidence the next time an examiner asks how you knew the pipeline was working.

How It Works: Detection

Three detectors. One correlator. No ML in the hot path.

Detection is statistical, deterministic, and lives in the live pipeline. The expensive work (re-deriving the mapping) runs on a worker queue and never blocks ingest.

Input distribution drift

Streaming Maximum Mean Discrepancy on raw events from each source, with a rolling reference window and a rolling test window. Median-heuristic RBF kernel, full permutation test for the p-value, severity binned from none through critical. Cheap, statistical, sub-100ms per check on small windows.
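
In sketch form, assuming numeric feature vectors per window (the function name, window handling, and defaults below are illustrative, not the engine's API), the check looks like this:

```python
import numpy as np
from scipy.spatial.distance import cdist

def mmd_permutation_test(ref: np.ndarray, test: np.ndarray,
                         n_perms: int = 200, seed: int = 0):
    """Biased (V-statistic) MMD^2 with a median-heuristic RBF kernel
    and a permutation p-value. Defaults here are illustrative."""
    pooled = np.vstack([ref, test])
    sq = cdist(pooled, pooled, "sqeuclidean")
    sigma2 = np.median(sq[sq > 0])          # median-heuristic bandwidth
    K = np.exp(-sq / sigma2)
    n = len(ref)

    def stat(order):
        a, b = order[:n], order[n:]
        return (K[np.ix_(a, a)].mean() + K[np.ix_(b, b)].mean()
                - 2.0 * K[np.ix_(a, b)].mean())

    observed = stat(np.arange(len(pooled)))
    rng = np.random.default_rng(seed)       # the recorded seed makes the test replayable
    exceed = sum(stat(rng.permutation(len(pooled))) >= observed
                 for _ in range(n_perms))
    p_value = (1 + exceed) / (1 + n_perms)
    return observed, p_value
```

Severity binning then maps the p-value into the none-through-critical bands.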

Output distribution drift

The same MMD test on the post-Translation OCSF event stream. When the input distribution and the schema look stable but the output starts shifting (null rates climbing, field values bunching) the engine catches it before the analyst notices the dashboard going dark.

Schema-shape drift

Per-event field-path hashing against the known-good baseline for that source. Added fields, removed fields, renamed paths, all caught at sub-millisecond cost on the event itself. This is the detector that catches a CrowdStrike point release the first time the new shape arrives.
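
A minimal sketch of the added/removed half of that check; the helper names are ours, and rename detection needs more state than shown here:

```python
import hashlib
import json

def field_paths(event: dict, prefix: str = "") -> set:
    """Flatten an event into its set of dotted field paths."""
    paths = set()
    for key, value in event.items():
        path = f"{prefix}.{key}" if prefix else key
        paths.add(path)
        if isinstance(value, dict):
            paths |= field_paths(value, path)
    return paths

def shape_hash(event: dict) -> str:
    """Order-independent hash of the shape; equal hashes mean no shape drift."""
    canon = json.dumps(sorted(field_paths(event)))
    return hashlib.sha256(canon.encode()).hexdigest()

def shape_drift(event: dict, baseline: set) -> dict:
    """Diff one event's shape against the known-good baseline for its source."""
    paths = field_paths(event)
    return {"added": paths - baseline, "removed": baseline - paths}
```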

Three-channel correlation

Drift events on the same source within a 15-minute window collapse into one logical event. An input shift, a schema change, and a downstream output divergence become one row with three pieces of evidence, not three separate alerts on the analyst's queue.
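
The collapse logic is small. A sketch, with hypothetical names and an in-memory store standing in for the real one:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

@dataclass
class LogicalDriftEvent:
    source: str
    opened_at: datetime
    evidence: list = field(default_factory=list)   # input, output, and schema detections

_open: dict = {}

def correlate(source: str, channel: str, detection: dict,
              now: datetime) -> LogicalDriftEvent:
    """Fold a new detection into the open logical event for its source,
    or open a fresh one if the 15-minute window has lapsed."""
    event = _open.get(source)
    if event is None or now - event.opened_at > WINDOW:
        event = _open[source] = LogicalDriftEvent(source, now)
    event.evidence.append({"channel": channel, **detection})
    return event
```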

How It Works: Repair

Detect. Propose. Calibrate. Shadow. Promote. Ledger.

The auto-repair worker subscribes to drift events. It runs entirely off the live path. The Translation engine never waits on it.

Step 1: Propose

Worker pulls 50 sample events of the new shape from the rolling source buffer. Calls the Universal Adapter with the previous mapping plus the new sample as context. Gets a proposed mapping back. The LLM never touches the live event stream. It only ever sees a held-out batch on a worker.
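
A sketch of that worker step; `buffer`, `adapter`, and `mappings` stand in for Concord internals, and their method names are assumptions:

```python
SAMPLE_SIZE = 50

def propose(source_id: str, buffer, adapter, mappings):
    """Off-path repair step: sample the new shape, ask for a candidate mapping."""
    sample = buffer.latest(source_id, n=SAMPLE_SIZE)   # held-out batch from the rolling buffer
    previous = mappings.active(source_id)
    # The model only ever sees the previous mapping plus this sample,
    # never the live event stream.
    return adapter.derive_mapping(previous=previous, sample=sample)
```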

Step 2: Calibrate

The proposed mapping is run against a separate held-out batch. Coverage and consistency are scored through the existing Platt calibrator, the same statistical machinery Concord uses everywhere else for confidence calibration. No new ML. The output is one number between zero and one.
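
Platt scaling itself is one logistic squash. A sketch with an illustrative raw score, since the exact coverage and consistency definitions live in the engine:

```python
import math

def raw_score(outputs: list, n_heldout: int) -> float:
    """Illustrative raw score: coverage of the held-out batch weighted by
    consistency of the outputs. Both definitions here are assumptions."""
    mapped = [o for o in outputs if o is not None]
    coverage = len(mapped) / n_heldout
    consistent = sum(o.get("valid", False) for o in mapped) / max(len(mapped), 1)
    return coverage * consistent

def platt(raw: float, a: float = -6.0, b: float = 3.0) -> float:
    """Platt scaling: squash the raw score into a calibrated confidence
    in [0, 1]. a and b would be fit on labeled history; these values
    are illustrative."""
    return 1.0 / (1.0 + math.exp(a * raw + b))
```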

Step 3: Confidence gate

A configurable floor decides what happens next. Above the floor, the proposed mapping is promoted to shadow mode. Below the floor, it's pushed to an analyst queue with the rendered diff: added fields in green, removed in red, sample input and sample output under both the old and new mappings. Two clicks to approve. Two clicks to reject.
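
The gate itself reduces to a few lines; the interfaces below are stand-ins:

```python
CONFIDENCE_FLOOR = 0.85   # configurable; this default is illustrative

def gate(proposal, confidence: float, shadow, queue):
    """Route a calibrated proposal. `shadow` and `queue` stand in for
    Concord internals; their interfaces are assumptions."""
    if confidence >= CONFIDENCE_FLOOR:
        shadow.start(proposal)                                 # promoted to shadow mode
    else:
        queue.escalate(proposal, diff=proposal.rendered_diff)  # two-click approve/reject
```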

Step 4: Shadow

The proposed mapping runs in parallel with the active mapping for a configurable window. Output divergence is logged per event. If divergence stays under threshold and no errors fire, the proposed mapping promotes to active and the old one retires with a timestamp. If divergence spikes, shadow promotion blocks and an analyst gets the divergent events surfaced.
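
In sketch form, with exact-output comparison standing in for the real divergence metric:

```python
def new_window_stats() -> dict:
    return {"events": 0, "diverged": 0, "samples": []}

def shadow_step(event, active, proposed, stats):
    """Run both mappings on one event; `active` and `proposed` are callables."""
    old_out, new_out = active(event), proposed(event)
    stats["events"] += 1
    if old_out != new_out:
        stats["diverged"] += 1
        stats["samples"].append((event, old_out, new_out))  # surfaced to an analyst on a spike

def ready_to_promote(stats, max_rate: float = 0.02) -> bool:
    """Promote only if divergence stayed under threshold for the whole window."""
    return stats["events"] > 0 and stats["diverged"] / stats["events"] <= max_rate
```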

Step 5: Promote and ledger

Promotion is a single transaction. The Translation engine reads through a versioned mapping table. The next event for that schema picks up the new mapping with no restart and no cache invalidation API. Every step in the loop (detected, proposed, calibrated, shadow-started, promoted, retired, rolled back, escalated, approved, rejected) writes a signed bundle to the audit ledger.
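
A sketch of that transaction against a SQLite-style mapping table; the schema and the `ledger` interface are illustrative assumptions:

```python
import sqlite3

def promote(conn: sqlite3.Connection, source_id: str, version: int, ledger):
    """One-transaction promotion against a versioned mapping table."""
    with conn:  # retire the old row and activate the new one atomically
        conn.execute(
            "UPDATE mappings SET active = 0, retired_at = CURRENT_TIMESTAMP"
            " WHERE source_id = ? AND active = 1", (source_id,))
        conn.execute(
            "UPDATE mappings SET active = 1"
            " WHERE source_id = ? AND version = ?", (source_id, version))
    ledger.append({"kind": "promoted", "source": source_id, "version": version})
    # Rollback is the same transaction with the version numbers swapped.
```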

The Post-OCSF Differentiator

Detection is table stakes. The loop is the moat.

Maximum Mean Discrepancy has been in the academic literature since 2012. Every credible competitor in the May 2026 landscape ships a drift detector: Stellar Cyber, Arctic Wolf Aurora, Splunk anomaly engines, Datadog Cloud SIEM. They all surface "this distribution looks different from last week." That statement is now a commodity.

What no one ships is the rest of the sentence. Detect, and then what? In every other platform the answer is a Slack alert and a human. The mapping stays broken until an engineer picks it up. For an MSSP running thirty regulated end-clients on different stacks, that ticket queue is the entire problem.

Concord closes the loop. Detect, re-derive, calibrate, shadow, ledger, ship. A human sits in the loop only when the confidence gate kicks the proposal out. The pipeline catches itself when it breaks. That sentence is the headline. Everything else on this page exists to make it true and defensible.

Self-healing, not self-alerting

The closed loop is the novelty. Detection alone is a commodity. Repair without a human in the loop, ledgered and reversible, is what no other SOC platform ships in 2026.

Less ops drag for the MSSP

Vendor releases stop being a Tuesday-morning fire drill. The mapping table is versioned and reversible, and the audit trail explains exactly why each version exists.

Fewer false negatives for the end-client

Coverage gaps close in minutes, not weeks. Regulated banks, healthcare payers, and insurance carriers stop discovering silent failures during an exam.

Honest Status

What ships today. What we're building for V1.

Shipped

  • Streaming MMD detector with RBF kernel and full permutation testing.
  • Schema-shape drift wired into the live ingestion path on every event.
  • Distribution drift on entity baselines with reference-window snapshotting.
  • Drift governance workflow: alerts, SLAs, acknowledgement, resolution.
  • Ed25519-signed provenance bundles for drift events, ready to thread into the audit ledger.

V1 Build: Closing the Loop

  • Three-channel correlator collapsing input, output, and schema events into one logical drift event.
  • Auto-repair worker that re-derives the mapping off the live path.
  • Calibrated confidence gate, shadow-mode runner, reversible promotion.
  • Versioned mapping table with full provenance and one-row rollback.
  • Backpressure and circuit breakers to prevent a haywire upstream from thrashing the mapping table.

Architecture and Trust

Built for regulated environments and air-gapped deployments.

No ML in the hot path

The MMD test is a cheap statistical check that runs in process on a one-minute schedule per source. The LLM-driven re-mapping happens out-of-band on a worker queue. Promotion to a live mapping requires a confidence threshold and a ledger entry. The live event stream never waits on a model.

Air-gapped deployable

The statistical detector is numpy plus scipy. The auto-repair worker runs against a local Ollama model by default. Cloud LLMs are an optional fallback when an API key is set, never required. The whole loop runs without leaving the customer's network.

Determinism for the audit story

Every MMD result records the kernel parameter, the permutation seed, and the reference and test window event IDs. Repair proposals record the LLM model and a hash of the inference prompt. The ledger entry is enough to replay any decision the engine made.
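
A sketch of the records that make replay possible; the field names are illustrative:

```python
import hashlib
from dataclasses import dataclass

def prompt_hash(prompt: str) -> str:
    return hashlib.sha256(prompt.encode()).hexdigest()

@dataclass(frozen=True)
class DriftProvenance:
    """Everything needed to replay one MMD decision."""
    kernel_sigma2: float          # median-heuristic bandwidth actually used
    permutation_seed: int         # seed for the permutation test
    reference_event_ids: tuple    # exact reference-window membership
    test_event_ids: tuple         # exact test-window membership

@dataclass(frozen=True)
class RepairProvenance:
    model: str                    # e.g. the local Ollama model tag
    prompt_sha256: str            # hash of the inference prompt, not the prompt itself
```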

Reversible by design

Mapping versions are conceptually append-only. Promotion is a transaction. Rollback is a single-row update. The Translation engine always reads through the active row and picks up version flips on the next event for that schema. No restart, no cache invalidation API.

Patent posture

Drift Detection + Auto-Repair is a patent candidate; the filing strategy is in active development as a likely continuation on the existing Drift Detection family. The novelty is the closed loop (three-channel correlation, confidence-gated propose-shadow-promote workflow, and ledger-anchored reversibility), not any one component on its own.

Stop reconciling. Start trusting one timeline.

30-minute walkthrough. Your tools. Your tenants. Your audit cycle. We will show you exactly where Concord earns its keep.