

Status: Stage 1 — shadow mode in production. The behavioral engine runs alongside the rule-based engine but enforcement decisions are still made by Stage 0 (deterministic rules). See the ML Roadmap for the stage ladder and advancement triggers.

Behavioral Intelligence

Quint’s behavioral engine builds a per-agent behavioral envelope — a compact probabilistic fingerprint of what each agent normally does. Every action is evaluated against this envelope in real time. The system never classifies intent — it classifies deviation from established behavior.

Core Principle: Envelopes, Not Intent

The behavioral model never classifies intent. Intent is unknowable. Instead, it classifies whether an action fits the agent’s established behavioral envelope — the region of capability-space it normally operates in. A backup agent that reads all files and sends them to S3 every night? After a week, that’s inside its envelope. Zero signal. The same behavior from a code-review agent? Outside its envelope. Strong signal. Same structure. Same capability pair. Different envelope. Different outcome.
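
As a minimal illustration of the principle, here is a Go sketch in which the same action fits one agent's envelope and not another's. The names (Envelope, Fits) and the capability-pair strings are invented for this example, not Quint's actual API:

```go
// Illustrative only: the same action, two envelopes, two outcomes.
package main

import "fmt"

// Envelope records which capability pairs an agent has established as normal.
type Envelope map[string]bool

func (e Envelope) Fits(capabilityPair string) bool { return e[capabilityPair] }

func main() {
	backup := Envelope{"read_all+net_send": true} // learned over a week of nightly backups
	review := Envelope{"read_diff+comment": true}

	action := "read_all+net_send" // bulk read followed by an outbound send
	fmt.Println("backup agent fits:", backup.Fits(action)) // true: zero signal
	fmt.Println("review agent fits:", review.Fits(action)) // false: strong signal
}
```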

Architecture Components

Scoring Pipeline

4-gate fast-rejection pipeline: 95% of actions produce zero output in under 300ns
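
A sketch of what a fast-rejection chain can look like, assuming the gate ordering implied by the performance table below (Gate 0 deny list, Gate 1 known-safe exit, later gates only for the residue). All function names here are invented:

```go
package main

import "fmt"

type Verdict int

const (
	Block Verdict = iota // Gate 0 outcome
	KnownSafe            // Gate 1 outcome
	Uncertain            // Gate 2 outcome
	Anomalous            // Gate 3 outcome
)

func score(action string, denied, knownSafe map[string]bool) Verdict {
	if denied[action] { // Gate 0: deny list, ~0.1% of traffic
		return Block
	}
	if knownSafe[action] { // Gate 1: ~95% of actions exit here with zero output
		return KnownSafe
	}
	if cheapSignals(action) < 2 { // Gate 2: a few lightweight checks
		return Uncertain
	}
	return Anomalous // Gate 3: full signal computation, <0.5% of actions
}

// cheapSignals stands in for the inexpensive per-action checks.
func cheapSignals(action string) int { return len(action) % 3 }

func main() {
	knownSafe := map[string]bool{"read_config": true}
	fmt.Println(score("read_config", nil, knownSafe) == KnownSafe) // true: exits at Gate 1
}
```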

Agent Fingerprint

~3.1KB probabilistic data structure per agent using Bloom filters, Count-Min Sketch, HyperLogLog, EWMA, and Markov chains
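
The layout below is a guess at how those five structures could share a fixed budget of a few KB. Field names and sizes are assumptions for illustration, not Quint's actual memory layout:

```go
package main

import "fmt"

type Fingerprint struct {
	SeenPairs   [256]byte      // Bloom filter: has this capability pair ever occurred?
	PairCounts  [4][128]uint16 // Count-Min Sketch: approximate per-pair frequencies
	Cardinality [64]byte       // HyperLogLog registers: distinct resources touched
	RatePerMin  float64        // EWMA of action rate
	Transitions [12][12]uint8  // Markov chain over the 12 capability types
}

// ObserveRate folds a new measurement into the EWMA.
func (f *Fingerprint) ObserveRate(actionsPerMin float64) {
	const alpha = 0.1 // smoothing factor (assumed)
	f.RatePerMin = alpha*actionsPerMin + (1-alpha)*f.RatePerMin
}

func main() {
	var f Fingerprint
	f.ObserveRate(42)
	fmt.Printf("rate estimate: %.1f actions/min\n", f.RatePerMin)
}
```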

Session Relationships

128-entry ring buffer detecting temporal adjacency, resource sharing, causality, and data flow between actions
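
A sketch of the ring-buffer idea: each new action is compared against the last 128. The 128-entry size comes from this page; the two relationship checks shown (shared resource, sub-second adjacency) are simplified stand-ins for the four relationship types:

```go
package main

import (
	"fmt"
	"time"
)

type Action struct {
	Resource string
	At       time.Time
}

type Ring struct {
	buf  [128]Action
	next int
}

// Relate counts how many buffered actions share a resource with a, or
// happened within a second of it, then records a in the buffer.
func (r *Ring) Relate(a Action) (sharedResource, temporallyAdjacent int) {
	for _, prev := range r.buf {
		if prev.Resource != "" && prev.Resource == a.Resource {
			sharedResource++
		}
		if !prev.At.IsZero() && a.At.Sub(prev.At) < time.Second {
			temporallyAdjacent++
		}
	}
	r.buf[r.next] = a
	r.next = (r.next + 1) % len(r.buf)
	return
}

func main() {
	var r Ring
	r.Relate(Action{"s3://bucket/x", time.Now()})
	shared, adjacent := r.Relate(Action{"s3://bucket/x", time.Now()})
	fmt.Println(shared, adjacent) // 1 1: same resource, sub-second apart
}
```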

Confidence Bands

Three-state classification (KNOWN_SAFE / UNCERTAIN / ANOMALOUS) with configurable enforcement per security profile
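
A sketch of band classification with per-profile enforcement. The band names come from this page; the cut points and profile names are assumptions:

```go
package main

import "fmt"

type Band string

const (
	KnownSafe Band = "KNOWN_SAFE"
	Uncertain Band = "UNCERTAIN"
	Anomalous Band = "ANOMALOUS"
)

func classify(deviation float64) Band {
	switch { // thresholds are illustrative
	case deviation < 0.3:
		return KnownSafe
	case deviation < 0.7:
		return Uncertain
	default:
		return Anomalous
	}
}

// enforcement maps band -> action per security profile (hypothetical profiles).
var enforcement = map[string]map[Band]string{
	"permissive": {KnownSafe: "allow", Uncertain: "allow", Anomalous: "alert"},
	"strict":     {KnownSafe: "allow", Uncertain: "alert", Anomalous: "block"},
}

func main() {
	band := classify(0.82)
	fmt.Println(band, "->", enforcement["strict"][band]) // ANOMALOUS -> block
}
```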

Deviation Signals

6 independent statistical signals computed in ~433ns, requiring corroboration before alerting
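
A sketch of the corroboration requirement: six independent scores, and an alert only when enough of them agree. The quorum of 2 is an assumption, not a documented value:

```go
package main

import "fmt"

func shouldAlert(signals [6]float64, threshold float64) bool {
	firing := 0
	for _, s := range signals {
		if s > threshold {
			firing++
		}
	}
	return firing >= 2 // a single-signal spike is ignored; corroboration is required
}

func main() {
	// One hot signal alone is noise...
	fmt.Println(shouldAlert([6]float64{0.9, 0.1, 0.2, 0.1, 0.0, 0.1}, 0.5)) // false
	// ...two corroborating signals trigger.
	fmt.Println(shouldAlert([6]float64{0.9, 0.8, 0.2, 0.1, 0.0, 0.1}, 0.5)) // true
}
```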

Group Envelopes

New agents inherit group baselines from day one, eliminating cold-start noise while the agent builds its own history
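
One way this inheritance could work is a blend that shifts from the group baseline toward the agent's own history as observations accumulate. The blend rule and the 500-action ramp below are assumptions for illustration:

```go
package main

import "fmt"

// blendedDivergence weights the agent's own divergence against the group's,
// by how much history the agent has accumulated.
func blendedDivergence(ownDiv, groupDiv float64, ownObservations int) float64 {
	w := float64(ownObservations) / 500.0 // full self-weight at 500 actions (assumed)
	if w > 1 {
		w = 1
	}
	return w*ownDiv + (1-w)*groupDiv
}

func main() {
	// Day one: the group baseline dominates, so no cold-start noise.
	fmt.Println(blendedDivergence(0.9, 0.1, 5)) // ~0.108
	// Mature agent: its own history dominates.
	fmt.Println(blendedDivergence(0.9, 0.1, 600)) // 0.9
}
```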

Threat Detections

Persistent detection records with severity classification, signal context, and resolution workflow

Envelope Lifecycle

From cold start to mature baseline — how envelopes learn, evolve, sync across proxies, and detect drift

Baseline Floors

Minimum divergence thresholds that agent learning can never lower — the anti-poisoning layer
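
One plausible reading of the floor, sketched below: for a dangerous behavior class, the divergence credited to an action can never drop below a fixed minimum, however normal the agent's learned history makes that behavior look. The floor value here is illustrative:

```go
package main

import "fmt"

func flooredDivergence(learned, floor float64) float64 {
	if learned < floor {
		return floor // slow-drip poisoning cannot erase the signal
	}
	return learned
}

func main() {
	const bulkExfilFloor = 0.4 // hypothetical floor for a bulk-exfil shape
	fmt.Println(flooredDivergence(0.05, bulkExfilFloor)) // 0.4: the floor holds
	fmt.Println(flooredDivergence(0.80, bulkExfilFloor)) // 0.8: unaffected
}
```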

Threat Signatures

Known-dangerous structural shapes matched via JSD — updatable from local file or global intelligence
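
JSD here is Jensen-Shannon divergence. Below is a self-contained implementation of the metric itself; the distributions and the 0.1 match threshold are illustrative, not Quint's values:

```go
package main

import (
	"fmt"
	"math"
)

// kl computes Kullback-Leibler divergence in bits, skipping zero entries
// (safe here because the midpoint m is positive wherever p is).
func kl(p, q []float64) float64 {
	var d float64
	for i := range p {
		if p[i] > 0 && q[i] > 0 {
			d += p[i] * math.Log2(p[i]/q[i])
		}
	}
	return d
}

// jsd is symmetric and bounded in [0, 1] with log base 2.
func jsd(p, q []float64) float64 {
	m := make([]float64, len(p))
	for i := range p {
		m[i] = (p[i] + q[i]) / 2
	}
	return kl(p, m)/2 + kl(q, m)/2
}

func main() {
	signature := []float64{0.7, 0.2, 0.1} // capability mix of a known-bad shape
	observed := []float64{0.65, 0.25, 0.1}
	if score := jsd(observed, signature); score < 0.1 {
		fmt.Printf("signature match (JSD=%.3f)\n", score)
	}
}
```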

BI Service

Per-tenant cloud brain — consumes events, scores via rules, computes baselines, pushes corrections at 780K events/sec

Shadow Mode

Observe-only mode for calibration — behavioral scorer runs alongside the existing risk engine without enforcing
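
A sketch of the shadow pattern: the behavioral scorer sees every action, its verdict is logged for calibration, but the enforced decision still comes from the rule engine. The function names and verdict strings are placeholders:

```go
package main

import "fmt"

func handle(action string) string {
	ruleVerdict := ruleEngine(action)        // Stage 0: this decision is enforced
	shadowVerdict := behavioralScore(action) // Stage 1: observe-only

	if shadowVerdict != ruleVerdict {
		// Disagreements are the calibration signal: logged, never enforced.
		fmt.Printf("shadow divergence on %q: rules=%s behavioral=%s\n",
			action, ruleVerdict, shadowVerdict)
	}
	return ruleVerdict
}

func ruleEngine(action string) string      { return "allow" }
func behavioralScore(action string) string { return "alert" }

func main() { fmt.Println("enforced:", handle("exec custom_executor")) }
```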

Performance

| Path | % of Actions | Latency |
|---|---|---|
| Gate 1 -> KNOWN_SAFE | ~95% | ~114ns |
| Gate 1 -> Gate 2 -> UNCERTAIN | ~4% | ~550ns |
| Gate 1 -> Gate 2 -> Gate 3 -> ANOMALOUS | <0.5% | ~850ns |
| Gate 0 -> BLOCK (deny list) | ~0.1% | ~200ns |
Scale tested: 100K fingerprint updates (numerically stable), 100 concurrent agents (no races, 99% cache hit rate), memory flat at 2.0MB across 50K actions (zero leaks). False positive rate: 0.07%.

Cloud Intelligence (Tier 2)

While the proxy scores every action locally in under 1 microsecond, the BI Service provides deeper analysis via Memgraph and a GNN:

Multi-Level Detection (4 levels)

The GNN doesn’t rely on a single model. Four independent detection levels run in parallel:
| Level | Method | What it catches | Learned weight |
|---|---|---|---|
| Node scoring | Per-action VGAE reconstruction error | Suspicious individual actions | 8.3% |
| GAT classifier | Supervised graph attention network | Known attack patterns (10 types, 35 variants) | 47.7% |
| Mahalanobis distance | Session embedding distance from normal centroid | Novel/zero-day attacks never seen in training | 30.5% |
| Node-max | Highest single-node anomaly score in session | Sessions with any extreme outlier action | 13.5% |
Weights are learned via logistic regression on validation data — not hand-tuned. The ensemble combines levels non-linearly so a strong signal from any single level can raise the alarm.
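
A linear-logistic sketch of the four-level combination, using the learned weights from the table above. The sigmoid link and the bias value are assumptions; per this page, the production ensemble combines levels non-linearly:

```go
package main

import (
	"fmt"
	"math"
)

func ensembleScore(nodeScoring, gat, mahalanobis, nodeMax float64) float64 {
	// Learned weights: 8.3%, 47.7%, 30.5%, 13.5% (logistic regression, not hand-tuned).
	z := 0.083*nodeScoring + 0.477*gat + 0.305*mahalanobis + 0.135*nodeMax
	const bias = -0.5 // illustrative intercept
	return 1 / (1 + math.Exp(-(z + bias)))
}

func main() {
	// A novel attack: the supervised GAT misses it, Mahalanobis flags it hard.
	fmt.Printf("session score: %.3f\n", ensembleScore(0.2, 0.1, 0.95, 0.8))
}
```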

Detection Quality

| Metric | Value |
|---|---|
| AUROC | 1.000 (synthetic); above baseline (0.982) |
| FPR @ 0.65 | 0.0% |
| Separation (attack - normal score) | +0.437 |
| Novel/zero-day detection | 100% (via Mahalanobis) |
| Features | 143-dim (capability, temporal, structural, n-gram, rule bits) |
| Training data | 50K sessions, 8 archetypes, 10 attack types |

Rule Engine (Stage 1)

Before graph analysis, every event passes through the GraphReasoner: 90 inference rules across 7 categories, mapped to 11 compliance frameworks (SOC2, NIST 800-53, ISO 27001, OWASP LLM Top 10, MITRE ATT&CK, GDPR, EU AI Act, NIST AI RMF, PCI DSS, HIPAA, CCPA).

Baseline-aware scoring eliminates ~60% of false positives: each rule's score is modulated by how surprising the action is for the specific agent. A DevOps agent running exec scores 0 (normal for that agent); a coding assistant running exec scores at full weight (it never does this).

Capability-based tool detection prevents evasion: rules classify tools by which of the 12 capability types they exercise, not by name. Renaming bash to custom_executor doesn't help; the capability is still exec. Both mechanisms are sketched after the latency table below.

Rule firing bits (90-dim) are wired directly into the GNN as node features, extending the feature vector from 38-dim to 143-dim. The GNN learns which rules correlate with attacks and how they interact.
| Component | Latency |
|---|---|
| Rule evaluation (90 rules) | 0.133ms per event |
| Phase 4 encoding (143-dim) | 8.1ms per 1000 nodes |
| GNN inference per session | 0.44ms avg |
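
A sketch of the two mechanisms described above: tools are classified by capability rather than name, and a rule's score is scaled by how surprising the action is for this specific agent. The capability table excerpt and the surprise term are illustrative:

```go
package main

import "fmt"

// toolCapability maps a tool to one of the 12 capability types (excerpt, assumed).
func toolCapability(tool string) string {
	switch tool {
	case "bash", "sh", "custom_executor": // renaming does not change the capability
		return "exec"
	case "curl", "http_client":
		return "net_send"
	default:
		return "unknown"
	}
}

// baselineAwareScore scales a rule's base score by agent surprise in [0, 1]:
// 0 = this agent does it all the time, 1 = this agent has never done it.
func baselineAwareScore(base, surprise float64) float64 {
	return base * surprise
}

func main() {
	fmt.Println(toolCapability("custom_executor")) // exec: evasion by rename fails

	const execRuleBase = 10.0
	fmt.Println(baselineAwareScore(execRuleBase, 0.0)) // DevOps agent: 0
	fmt.Println(baselineAwareScore(execRuleBase, 1.0)) // coding assistant: 10
}
```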

Deployment Tiers

| Tier | Memgraph | GNN Model | Signature Learning |
|---|---|---|---|
| Local | None | None | 5 built-in FlowMatrix signatures |
| Team | Shared (1-2GB) | Quint pre-trained | Receives learned signatures |
| Enterprise | Dedicated (8-32GB) | Custom-trained per tenant | Distills + receives signatures |
| Global | Aggregated (anonymized) | Universal model | Cross-org threat intelligence |
The intelligence loop completes in ~30 seconds: an attack detected at one proxy becomes a learned signature pushed to every proxy in the fleet.