Platform Test Matrix
As of 2026-05-03, Quint has only ever been end-to-end tested against Claude Code. Every other “supported” platform is a fingerprint entry that has never been exercised. This document is the plan to change that.
The Six Detection Layers
Every platform Quint claims to detect is detectable through some combination of these signals. Detection strength is determined by which layers fire.
| Layer | Source | Strength | Spoofable? |
|---|---|---|---|
| 1. Code-signing | macOS ES (teamID, signingID) | Cryptographic | No (verified by macOS kernel) |
| 2. Dedicated domain | MITM destination host | Cryptographic | No (agent API endpoint) |
| 3. Custom HTTP headers | MITM request headers | Strong | Yes (trivially) |
| 4. System prompt fingerprint | Parsed LLM request body | Strong | Yes (just a string) |
| 5. Process name / path | ES exec event | Weak | Yes (rename binary) |
| 6. User-Agent string | MITM request header | Weak | Yes (any HTTP client) |
A strong detection is at least one cryptographic signal + one supporting signal. Everything else is best-effort.
The Registry, Audited
Inventory of every platform in internal/agentdetect/fingerprints.go:44-258:
| # | Platform | TeamID | SigningID | Domain | Headers | Prompt | UA | Strength |
|---|---|---|---|---|---|---|---|---|
| 1 | claude-code | ✅ Q6L2SF6YDW | ✅ com.anthropic.claude-code | — | — | ✅ | ✅ | 🟢 strong |
| 2 | cursor | ✅ VDXQ22DGB9 | ✅ | ✅ *.cursor.sh | ✅ 5 headers | ✅ | ✅ | 🟢 strong |
| 3 | copilot | ✅ VS Code UBF8T346G9 | ✅ | ✅ api.githubcopilot.com | ✅ 5 headers | ✅ | ✅ | 🟢 strong |
| 4 | windsurf | — | — | ✅ *.codeium.com | ✅ 2 headers | ✅ | ✅ | 🟡 medium |
| 5 | aider | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 6 | cline | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 7 | continue | — | — | — | ✅ 1 header | ✅ | ✅ | 🟡 medium |
| 8 | codex | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 9 | goose | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 10 | gemini-cli | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 11 | amp | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 12 | kiro | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 13 | augment | — | — | — | — | ✅ | ✅ | 🟡 medium |
| 14 | zed | — | — | ✅ zed.dev | — | — | ✅ | 🔴 weak |
| 15 | opencode | — | — | — | — | — | ✅ | 🔴 weak |
| 16 | pearai | — | — | — | — | — | ✅ | 🔴 weak |
| 17 | trae | — | — | — | — | — | ✅ | 🔴 weak |
| 18 | void | — | — | — | — | — | ✅ | 🔴 weak |
| 19 | devin | — | — | — | — | — | ✅ | 🔴 weak |
Desktop AI applications — all missing
Not in the registry, but expected by every CISO after CrowdStrike/Microsoft Agent 365 marketing:
| Platform | Signal available | Priority |
|---|---|---|
| Claude Desktop (Anthropic) | code-signing + anthropic.com domain | 🔴 must-have |
| ChatGPT Desktop (OpenAI) | code-signing + chatgpt.com domain | 🔴 must-have |
| Gemini Desktop (if exists yet) | code-signing | 🟡 nice-to-have |
| M365 Copilot Desktop (Microsoft) | code-signing + office.com | 🟡 nice-to-have |
The Verification Checklist
For each platform, verify all seven checks. A platform is validated only when all seven are green.
| # | Check | How to verify | Pass criteria |
|---|---|---|---|
| 1 | ES sees process exec | `curl -s localhost:8080/debug/es-events \| jq '.[] \| select(.pid==<PID>)'` | PID appears in event stream within 5s of launch |
| 2 | Agent identified | `curl -s localhost:8080/debug/sessions \| jq '.[] \| select(.pid==<PID>) \| .platform'` | Returns expected platform name, not `unknown` |
| 3 | Code-signing verified (if applicable) | `.signingID` field on session | Matches registry entry |
| 4 | NE intercepts LLM traffic | `curl -s localhost:8080/debug/flows \| jq '.[] \| select(.pid==<PID>)'` | Flow exists with `relay: true` |
| 5 | LLM parser succeeds | `curl -s localhost:8080/api/sessions/timeline?pid=<PID>` | Returns `tool_call` entries, not `PARSE_FAILED` warnings |
| 6 | Session reaches cloud | `curl https://api.quintai.dev/v1/sessions -H "Authorization: Bearer $TOK" \| jq '.[] \| select(.session_id=="<PID>-<TS>")'` | Session row exists in Postgres |
| 7 | Events attributed | Same endpoint + `/events` | All tool calls have the correct `agent_id` stamped |
Per the audit, the three most strategic platforms to validate right now are:
1. Cursor 🟢 strong detection claim
Why first: Second-most common developer choice, strongest competitive story (we have TeamID + headers + domain + prompt = 4 signals), and Cursor’s Connect RPC protocol is a distinct test of the llmparse router’s protocol detection.
Install path: Download from https://cursor.com/download, sign in, open a folder, trigger an AI chat.
Known unknowns:
- Does Cursor use Connect RPC or plain HTTPS in May 2026? (May have changed from the `identifyFromProtocol` code path in fingerprints.go:444-467.)
- Do the 5 custom `x-cursor-*` headers still exist?
- Is `cursor.com` or `*.cursor.sh` the current API domain?
2. GitHub Copilot in VS Code 🟢 strong detection claim
Why second: Literally every Fortune 500 enterprise dev has this. If Quint can’t see Copilot, the pitch to 80% of buyers falls apart. Strong signal claims (VS Code TeamID + api.githubcopilot.com + 5 headers + UA) — maximum validation value.
Install path: VS Code + GitHub Copilot extension + valid subscription, open a file, trigger a Copilot completion or Copilot Chat.
Known unknowns:
- Does Copilot Chat use the same endpoints as inline completions?
- Are the `copilot-integration-id` / `editor-version` headers still present in 2026 versions?
- Does the VS Code teamID `UBF8T346G9` attribute correctly, or does it over-match to any VS Code extension?
3. Claude Desktop 🔴 NOT IN REGISTRY — must add before testing
Why third: Quint claims “20+ platforms” but has zero desktop AI coverage. CrowdStrike and Microsoft both explicitly support Claude Desktop. This is the biggest demo-day gap. Adding Claude Desktop is ~1 hour of fingerprint work + validation.
Install path: Download from https://claude.ai/download, sign in, open a chat (this is NOT the CLI).
What to add to fingerprints.go:
```go
{
	name:       "claude-desktop",
	teamID:     "Q6L2SF6YDW", // Same Anthropic team as claude-code
	signingIDs: []string{"com.anthropic.claudefordesktop"},
	processNames: []string{"claude"},
	processPathPatterns: []string{
		"Claude.app", "/Claude",
	},
	// System prompt likely differs from claude-code — needs capture
	uaPatterns: []string{"claude-desktop"},
},
```
Known unknowns: Entire signal set. This is discovery work, not verification.
The Output — What to Log
Every test run should produce a test-results.yml with:
```yaml
platform: cursor
version: "0.43.2"
macos_version: "15.5"
tested_at: "2026-05-03T14:22:11Z"
checks:
  es_process_exec: pass
  agent_identified: pass
  code_signing: pass
  ne_intercepts: pass
  llm_parser: pass # or "fail: PARSE_FAILED on openai_responses format"
  cloud_session: pass
  events_attributed: pass
signals_fired:
  - teamID
  - signingID
  - domain
  - headers (3/5)
  - prompt
  - UA
notes: "x-cursor-client-version no longer present in v0.43"
```
Check these results into docs/operations/test-results/ — they become the source of truth for the platform coverage table, replacing the aspirational list.
Escalation Path When Something Fails
| Failure | Likely cause | File to investigate |
|---|---|---|
| ES doesn’t see exec | ES buffer overflow, ColdStart race | internal/eslistener/ + memory project_ingestion_gaps.md |
| Agent identified as unknown | Fingerprint mismatch (platform changed) | internal/agentdetect/fingerprints.go |
| NE doesn’t intercept | Not in NE domain list, or HTTP/2 reject | internal/netdetect/ + memory project_ingestion_gaps.md |
| Parser returns PARSE_FAILED | Wrong format router, new API shape | internal/llmparse/router.go + specific parser file |
| Cloud session missing | Session ID mismatch, ingest 4xx | Check /debug/flows and CloudWatch ingest logs |
| Wrong agent_id attributed | Detection priority ordering, parent-cascade | internal/agentdetect/detector.go |
Expand the Matrix, Honestly
Once the first 3 platforms are validated, the dashboard “Platform Coverage” page should stop listing 19 names and start listing:
- ✅ Validated — test passed within the last 30 days (link to test-results.yml)
- 🟡 Aspirational — fingerprint exists, never tested
- 🔴 Known broken — tested and failed, has an open bug
This is what CISOs want to see. “We actually tested these 5 platforms yesterday” is infinitely more credible than “we support 20+.”