Documentation Index
Fetch the complete documentation index at: https://quintsecurity.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Status: Stage 1 — shadow mode in production. The behavioral engine runs alongside the rule-based engine but enforcement decisions are still made by Stage 0 (deterministic rules). See the ML Roadmap for the stage ladder and advancement triggers.
Behavioral Intelligence
Quint’s behavioral engine builds a per-agent behavioral envelope — a compact probabilistic fingerprint of what each agent normally does. Every action is evaluated against this envelope in real time. The system never classifies intent — it classifies deviation from established behavior.

Core Principle: Envelopes, Not Intent
The behavioral model never classifies intent. Intent is unknowable. Instead, it classifies whether an action fits the agent’s established behavioral envelope — the region of capability-space it normally operates in. A backup agent that reads all files and sends them to S3 every night? After a week, that’s inside its envelope. Zero signal. The same behavior from a code-review agent? Outside its envelope. Strong signal. Same structure. Same capability pair. Different envelope. Different outcome.

Architecture Components
Scoring Pipeline
4-gate fast-rejection pipeline: 95% of actions produce zero output in under 300ns
Agent Fingerprint
~3.1KB probabilistic data structure per agent using Bloom filters, Count-Min Sketch, HyperLogLog, EWMA, and Markov chains
Session Relationships
128-entry ring buffer detecting temporal adjacency, resource sharing, causality, and data flow between actions
Confidence Bands
Three-state classification (KNOWN_SAFE / UNCERTAIN / ANOMALOUS) with configurable enforcement per security profile
Deviation Signals
6 independent statistical signals computed in ~433ns, requiring corroboration before alerting
Group Envelopes
New agents inherit group baselines from day one — eliminates cold-start noise until the agent builds its own history
Threat Detections
Persistent detection records with severity classification, signal context, and resolution workflow
Envelope Lifecycle
From cold start to mature baseline — how envelopes learn, evolve, sync across proxies, and detect drift
Baseline Floors
Minimum divergence thresholds that agent learning can never lower — the anti-poisoning layer
Threat Signatures
Known-dangerous structural shapes matched via JSD — updatable from local file or global intelligence
BI Service
Per-tenant cloud brain — consumes events, scores via rules, computes baselines, pushes corrections at 780K events/sec
Shadow Mode
Observe-only mode for calibration — behavioral scorer runs alongside the existing risk engine without enforcing
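The agent fingerprint above combines several probabilistic sketches into a compact envelope. As an illustration of the idea — not Quint’s actual data layout — here is a minimal Python sketch pairing a Bloom filter (set membership of seen capability pairs) with an EWMA activity rate; all class names, sizes, and parameters are hypothetical:

```python
import hashlib

class BloomFilter:
    """Fixed-size Bloom filter for membership of seen (tool, resource) pairs."""
    def __init__(self, bits=2048, hashes=4):
        self.bits = bits
        self.hashes = hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item: str):
        for i in range(self.hashes):
            h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "big") % self.bits

    def add(self, item: str):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str) -> bool:
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(item))

class Fingerprint:
    """Toy behavioral envelope: seen capability pairs plus an EWMA action rate."""
    def __init__(self, alpha=0.1):
        self.seen = BloomFilter()
        self.rate = 0.0            # EWMA of actions per time window
        self.alpha = alpha

    def observe(self, tool: str, resource_class: str, window_count: int):
        self.seen.add(f"{tool}|{resource_class}")
        self.rate = self.alpha * window_count + (1 - self.alpha) * self.rate

    def in_envelope(self, tool: str, resource_class: str) -> bool:
        return f"{tool}|{resource_class}" in self.seen
```

A production structure would additionally carry a Count-Min Sketch for frequencies, HyperLogLog for cardinalities, and a Markov chain over action transitions, per the component list above.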
Performance
| Path | % of Actions | Latency |
|---|---|---|
| Gate 1 -> KNOWN_SAFE | ~95% | ~114ns |
| Gate 1 -> Gate 2 -> UNCERTAIN | ~4% | ~550ns |
| Gate 1 -> Gate 2 -> Gate 3 -> ANOMALOUS | <0.5% | ~850ns |
| Gate 0 -> BLOCK (deny list) | ~0.1% | ~200ns |
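The latency profile follows from fast rejection: most actions exit at the first gate and never touch the slower checks. A schematic of the control flow, with hypothetical thresholds and corroboration counts (the real gates and their criteria are internal to Quint):

```python
from enum import Enum

class Band(Enum):
    BLOCK = "BLOCK"
    KNOWN_SAFE = "KNOWN_SAFE"
    UNCERTAIN = "UNCERTAIN"
    ANOMALOUS = "ANOMALOUS"

def classify(action: str, deny_list: set, envelope: set, signals: list) -> Band:
    # Gate 0: hard deny list — blocks regardless of learned behavior (~0.1%).
    if action in deny_list:
        return Band.BLOCK
    # Gate 1: inside the agent's envelope — fast accept, no further work (~95%).
    if action in envelope:
        return Band.KNOWN_SAFE
    # Gates 2-3: independent deviation signals must corroborate before raising
    # ANOMALOUS; a single firing signal alone only yields UNCERTAIN.
    firing = sum(1 for s in signals if s > 0.5)   # 0.5 is an illustrative threshold
    return Band.ANOMALOUS if firing >= 2 else Band.UNCERTAIN
```

The design choice mirrored here is that cheap checks run first and each gate can terminate scoring, so the expensive path is only paid by the rare anomalous tail.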
Cloud Intelligence (Tier 2)
While the proxy scores every action locally in under 1 microsecond, the BI Service provides deeper analysis via Memgraph and a GNN:

Multi-Level Detection (4 levels)
The GNN doesn’t rely on a single model. Four independent detection levels run in parallel:

| Level | Method | What it catches | Learned weight |
|---|---|---|---|
| Node scoring | Per-action VGAE reconstruction error | Suspicious individual actions | 8.3% |
| GAT classifier | Supervised graph attention network | Known attack patterns (10 types, 35 variants) | 47.7% |
| Mahalanobis distance | Session embedding distance from normal centroid | Novel/zero-day attacks never seen in training | 30.5% |
| Node-max | Highest single-node anomaly score in session | Sessions with any extreme outlier action | 13.5% |
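Assuming each level emits a score in [0, 1], the learned weights in the table combine into a single session score via a weighted sum (a sketch of the fusion step; the actual combination inside the BI Service may differ):

```python
# Learned weights from the table above; they sum to 1.0.
WEIGHTS = {
    "node_scoring": 0.083,   # VGAE reconstruction error per action
    "gat": 0.477,            # supervised graph attention classifier
    "mahalanobis": 0.305,    # distance from the normal-session centroid
    "node_max": 0.135,       # worst single-node anomaly in the session
}

def session_score(levels: dict) -> float:
    """Fuse per-level scores (each in [0, 1]) into one session-level score."""
    return sum(WEIGHTS[name] * levels[name] for name in WEIGHTS)
```

With these weights, a session flagged only by the GAT classifier scores 0.477, while corroboration across multiple levels pushes the fused score toward 1.0.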
Detection Quality
| Metric | Value |
|---|---|
| AUROC | 1.000 (synthetic); exceeds the 0.982 baseline |
| FPR @ 0.65 threshold | 0.0% |
| Separation (attack - normal score) | +0.437 |
| Novel/zero-day detection | 100% (via Mahalanobis) |
| Features | 143-dim (capability, temporal, structural, n-gram, rule bits) |
| Training data | 50K sessions, 8 archetypes, 10 attack types |
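The 100% novel/zero-day figure is attributed to the Mahalanobis level, which needs no attack examples: it is fit on normal sessions only and measures how far a session embedding sits from the centroid of normal behavior. A minimal version (the real detector operates on the full session embeddings with a calibrated threshold; the threshold below is illustrative):

```python
import numpy as np

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Distance of a session embedding from the centroid of normal sessions."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def is_novel(x: np.ndarray, mean: np.ndarray, cov: np.ndarray,
             threshold: float = 3.0) -> bool:
    # Fit mean/cov on normal sessions only — a zero-day attack is flagged
    # simply by landing far from everything seen during training.
    return mahalanobis(x, mean, cov) > threshold
```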
Rule Engine (Stage 1)
Before graph analysis, every event passes through the GraphReasoner — 90 inference rules across 7 categories, mapped to 11 compliance frameworks (SOC2, NIST 800-53, ISO 27001, OWASP LLM Top 10, MITRE ATT&CK, GDPR, EU AI Act, NIST AI RMF, PCI DSS, HIPAA, CCPA). Baseline-aware scoring eliminates ~60% of false positives: each rule’s score is modulated by how surprising the action is for the specific agent. A DevOps agent running `exec` scores 0 (normal for that agent). A coding assistant running `exec` scores at full weight (it never does this).
Capability-based tool detection prevents evasion: rules classify tools by their 12-capability type, not by name. Renaming `bash` to `custom_executor` doesn’t help — the capability is still `exec`.
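The evasion-resistance claim can be shown with a toy capability map. The tool names and the map itself are hypothetical — in practice classification would derive from a tool's declared schema and behavior rather than a static name lookup — but the principle is the same:

```python
# Rules key on capability, not tool name, so renaming a shell tool
# does not change how the rule engine sees it.
CAPABILITY = {
    "bash": "exec",
    "custom_executor": "exec",   # bash renamed — still classified as exec
    "read_file": "fs_read",
    "http_post": "network",
}

def triggers_exec_rules(tool_name: str) -> bool:
    return CAPABILITY.get(tool_name) == "exec"
```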
Rule firing bits (90-dim) are wired directly into the GNN as node features, extending the feature vector from 38-dim to 143-dim. The GNN learns which rules correlate with attacks and how they interact.
| Component | Latency |
|---|---|
| Rule evaluation (90 rules) | 0.133ms per event |
| Phase 4 encoding (143-dim) | 8.1ms per 1000 nodes |
| GNN inference per session | 0.44ms avg |
Deployment Tiers
| Tier | Memgraph | GNN Model | Signature Learning |
|---|---|---|---|
| Local | None | None | 5 built-in FlowMatrix signatures |
| Team | Shared (1-2GB) | Quint pre-trained | Receives learned signatures |
| Enterprise | Dedicated (8-32GB) | Custom-trained per tenant | Distills + receives signatures |
| Global | Aggregated (anonymized) | Universal model | Cross-org threat intelligence |