Documentation Index

Fetch the complete documentation index at: https://quintsecurity.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

Platform Coverage

Quint is not one product per platform. It is one behavioral intelligence engine fed by platform-specific collectors that all produce the same event envelope into the same cloud pipeline. The detection engine, the divergence detector, the dashboard, and the data model are identical regardless of collector. Adding a new platform means adding a new collector, not rebuilding the product.

The source-agnostic architecture

Every event Quint processes is a QuintEvent — a flat structure with roughly thirty fields. None of them are platform-specific. A tool call intercepted by the macOS forward proxy becomes a QuintEvent. A process exec observed by the Linux eBPF collector will become a QuintEvent. A request intercepted by the Kubernetes sidecar will become a QuintEvent. The source field identifies the collector that produced the event, and the pipeline uses it for routing, but downstream services do not branch on source.

This is not a default — it is a load-bearing architectural constraint. If any top-level QuintEvent field were platform-specific, every consumer would need to handle the “what if this is null because the event came from a different source” case. Platform-specific data goes into Labels (a string-to-string map for forensics display) or Arguments (a JSON blob for raw context). The top-level fields remain universally meaningful.

The five categories of AI agent deployment

A: Desktop apps

AI coding agents on developer workstations: Claude Code, Cursor, Copilot, Windsurf, Aider, Continue, Cline, and dozens of others.

B: Web chats

Browser-based AI: ChatGPT web, Claude web, Gemini web, Perplexity.

C: Cloud agents

AI agents running in Lambda, ECS, Kubernetes, or serverless platforms. LangChain, LangGraph, Bedrock Agents, OpenAI Assistants.

D: CI/CD agents

AI agents running in GitHub Actions, GitLab CI, Jenkins, or other build pipelines.

E: API-integrated AI

Vendor-managed AI in SaaS tools: customer support copilots, sales AI, code review bots.

Collector per category

Category A: macOS and Windows desktop

The macOS collector is the current shipping implementation. It consists of the Go daemon (forward proxy, risk engine, forwarder), the Swift Endpoint Security extension, and the Swift Network Extension. The daemon runs as a LaunchDaemon; the extensions activate via MDM-managed system extension approval. The CA certificate for HTTPS interception is distributed by MDM or installed during setup.

The Windows collector reuses the Go daemon unchanged. Windows Filtering Platform replaces the macOS Network Extension for transparent HTTPS interception. Event Tracing for Windows replaces Endpoint Security for process and file events. The daemon runs as a Windows Service. Installer packaging is MSIX for Intune, MSI for SCCM and Workspace ONE.

Category B: browsers

Browser AI sessions on machines with the macOS (or eventually Windows) collector installed are already covered — the Network Extension transparent proxy intercepts browser HTTPS traffic to LLM API endpoints, and the LLM parsers extract tool calls and prompts. The gap is attribution: sessions appear as the browser process rather than a specific web agent. A dedicated browser extension provides tighter attribution, plus coverage on machines without an endpoint collector. The extension intercepts fetch and XHR traffic to configured LLM API domains, extracts prompts and responses, and forwards events to the cloud ingest endpoint. On managed browsers it is deployed via the ExtensionInstallForcelist policy through MDM.

Category C: cloud agents

Cloud-deployed AI agents run without a macOS or Windows endpoint to protect. The collector is a Kubernetes sidecar for K8s workloads or a Lambda layer for serverless functions. The sidecar runs the same forward proxy as the macOS daemon, intercepting outbound LLM API calls from the agent container. An eBPF DaemonSet on each node provides the OS truth stream for containers on that node. Helm chart installation: a mutating webhook injects the Quint sidecar into pods annotated for monitoring, a DaemonSet installs the eBPF collector, and a ConfigMap carries the deploy token and ingest endpoint.

Category D: CI/CD agents

CI/CD agents are ephemeral — they spawn at the start of a build and terminate when the build completes. The collector is a container image or a GitHub Action that wraps the agent’s execution: it starts the Quint proxy, redirects LLM API traffic through it via iptables rules, and streams events to the ingest endpoint for the duration of the job. Because runners are Linux, CI/CD coverage depends on the Linux collector. Once the Linux collector ships, a GitHub Action wrapper is a few weeks of work.

Category E: API-integrated AI

Vendor-managed AI tools (Intercom Fin, Zendesk AI, Gong) expose audit APIs rather than running on infrastructure Quint can instrument. Integration is a SaaS-to-SaaS API hook: Quint consumes the vendor’s audit feed, normalizes events into the QuintEvent envelope, and runs them through the same detection pipeline. Each vendor integration is bespoke; this category has a long tail of potential integrations and is the lowest priority for initial deployment.

Current status and advancement

| Category | Collector | Status | Advancement trigger |
| --- | --- | --- | --- |
| A (macOS) | Go daemon + Swift extensions | Live | |
| A (Windows) | Go daemon + WFP + ETW | Planned | Customer requirement |
| A (Linux) | Go daemon + eBPF + iptables | Planned | Unlocks C and D simultaneously |
| B (browser) | Chrome/Edge extension | Designed | Customer without endpoint coverage |
| C (K8s) | Sidecar + DaemonSet + Helm | Designed | Production cloud agent customer |
| C (Lambda) | Lambda layer | Designed | Serverless AI customer |
| D (CI/CD) | GitHub Action wrapper | Designed | Depends on Linux |
| E (SaaS) | Per-vendor webhook adapter | Deferred | Not a priority for initial deployment |

What stays constant across collectors

Every collector, regardless of platform or delivery model, produces QuintEvents into the same cloud pipeline. The detection engine is unchanged. The divergence detector is unchanged. The multi-tenancy and row-level security model is unchanged. The dashboard is unchanged. The API is unchanged. This is the architectural decision that makes platform expansion possible for a small team. Detection logic written today for macOS tool calls will run unchanged on Linux, Kubernetes, and CI/CD agent events in the future. Model training data collected today from macOS deployments will train models that run against all future collectors. The data moat compounds across every platform because the data schema is one schema.

When to say no to a new platform

Not every AI deployment surface justifies a collector. The decision framework:
  1. Does the new surface map to an existing category, or is it a genuinely new category?
  2. How many customers ask for it? One is a request; three is a pattern; five is a requirement.
  3. Is it blocking revenue or nice-to-have?
  4. Does it require a new collector type, or just fingerprint additions to an existing collector?
  5. Does it break the source-agnostic architecture?
New fingerprints within an existing collector (e.g., adding support for a new AI coding agent on macOS) cost roughly a week and should almost always ship. New collector types require a full platform implementation and are tier decisions, not feature requests. Anything that would require platform-specific fields in the QuintEvent envelope is rejected: there is always a way to represent the data in Labels or Arguments without forking the schema.