

Platform Test Matrix

As of 2026-05-03, Quint has only ever been end-to-end tested against Claude Code. Every other “supported” platform is a fingerprint entry that has never been exercised. This document is the plan to change that.

The Six Detection Layers

Every platform Quint claims to detect is detectable through some combination of these signals. Detection strength is determined by which layers fire.

| Layer | Source | Strength | Spoofable? |
| --- | --- | --- | --- |
| 1. Code-signing | macOS ES (teamID, signingID) | Cryptographic | No (verified by the macOS kernel) |
| 2. Dedicated domain | MITM destination host | Cryptographic | No (agent API endpoint) |
| 3. Custom HTTP headers | MITM request headers | Strong | Yes (trivially) |
| 4. System prompt fingerprint | Parsed LLM request body | Strong | Yes (just a string) |
| 5. Process name / path | ES exec event | Weak | Yes (rename the binary) |
| 6. User-Agent string | MITM request header | Weak | Yes (any HTTP client) |

A strong detection requires at least one cryptographic signal plus at least one supporting signal. Everything else is best-effort.
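The strength rule can be stated as a small predicate. A minimal sketch, assuming illustrative names (the real detector's types live in internal/agentdetect and are not shown here) and reading "supporting signal" as any second fired layer:

```go
package main

import "fmt"

// Signal mirrors the six layers in the table above (names are
// illustrative, not the actual identifiers in fingerprints.go).
type Signal int

const (
	SigCodeSigning Signal = iota // layer 1: cryptographic
	SigDomain                    // layer 2: cryptographic
	SigHeaders                   // layer 3: strong
	SigPrompt                    // layer 4: strong
	SigProcessName               // layer 5: weak
	SigUserAgent                 // layer 6: weak
)

// classify applies the rule: at least one cryptographic signal plus at
// least one other fired signal = strong; anything else is best-effort.
func classify(fired []Signal) string {
	crypto := 0
	for _, s := range fired {
		if s == SigCodeSigning || s == SigDomain {
			crypto++
		}
	}
	if crypto >= 1 && len(fired) >= 2 {
		return "strong"
	}
	return "best-effort"
}

func main() {
	fmt.Println(classify([]Signal{SigCodeSigning, SigHeaders}))   // strong
	fmt.Println(classify([]Signal{SigUserAgent, SigProcessName})) // best-effort
}
```

Note that under this reading a lone User-Agent match, or even a lone code-signing match with nothing corroborating it, stays best-effort.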

The Registry, Audited

Inventory of every platform in internal/agentdetect/fingerprints.go:44-258:
| # | Platform | TeamID | SigningID | Domain | Headers | Prompt | UA | Strength |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | claude-code | Q6L2SF6YDW | com.anthropic.claude-code | — | — | — | — | 🟢 strong |
| 2 | cursor | VDXQ22DGB9 | — | *.cursor.sh | ✅ 5 headers | — | — | 🟢 strong |
| 3 | copilot | ✅ UBF8T346G9 (VS Code) | — | api.githubcopilot.com | ✅ 5 headers | — | — | 🟢 strong |
| 4 | windsurf | — | — | *.codeium.com | ✅ 2 headers | — | — | 🟡 medium |
| 5 | aider | — | — | — | — | — | — | 🟡 medium |
| 6 | cline | — | — | — | — | — | — | 🟡 medium |
| 7 | continue | — | — | — | ✅ 1 header | — | — | 🟡 medium |
| 8 | codex | — | — | — | — | — | — | 🟡 medium |
| 9 | goose | — | — | — | — | — | — | 🟡 medium |
| 10 | gemini-cli | — | — | — | — | — | — | 🟡 medium |
| 11 | amp | — | — | — | — | — | — | 🟡 medium |
| 12 | kiro | — | — | — | — | — | — | 🟡 medium |
| 13 | augment | — | — | — | — | — | — | 🟡 medium |
| 14 | zed | — | — | zed.dev | — | — | — | 🔴 weak |
| 15 | opencode | — | — | — | — | — | — | 🔴 weak |
| 16 | pearai | — | — | — | — | — | — | 🔴 weak |
| 17 | trae | — | — | — | — | — | — | 🔴 weak |
| 18 | void | — | — | — | — | — | — | 🔴 weak |
| 19 | devin | — | — | — | — | — | — | 🔴 weak |

Desktop AI applications — all missing

Not in the registry, but expected by every CISO after CrowdStrike/Microsoft Agent 365 marketing:
| Platform | Signal available | Priority |
| --- | --- | --- |
| Claude Desktop (Anthropic) | code-signing + anthropic.com domain | 🔴 must-have |
| ChatGPT Desktop (OpenAI) | code-signing + chatgpt.com domain | 🔴 must-have |
| Gemini Desktop (if it exists yet) | code-signing | 🟡 nice-to-have |
| M365 Copilot Desktop (Microsoft) | code-signing + office.com | 🟡 nice-to-have |

The Verification Checklist

For each platform, verify all seven checks. A platform is validated only when all seven are green.
| # | Check | How to verify | Pass criteria |
| --- | --- | --- | --- |
| 1 | ES sees process exec | `curl -s localhost:8080/debug/es-events \| jq '.[] \| select(.pid==<PID>)'` | PID appears in the event stream within 5 s of launch |
| 2 | Agent identified | `curl -s localhost:8080/debug/sessions \| jq '.[] \| select(.pid==<PID>) \| .platform'` | Returns the expected platform name, not `unknown` |
| 3 | Code-signing verified (if applicable) | `.signingID` field on the session | Matches the registry entry |
| 4 | NE intercepts LLM traffic | `curl -s localhost:8080/debug/flows \| jq '.[] \| select(.pid==<PID>)'` | Flow exists with `relay: true` |
| 5 | LLM parser succeeds | `curl -s localhost:8080/api/sessions/timeline?pid=<PID>` | Returns `tool_call` entries, not `PARSE_FAILED` warnings |
| 6 | Session reaches cloud | `curl https://api.quintai.dev/v1/sessions -H "Authorization: Bearer $TOK" \| jq '.[] \| select(.session_id=="<PID>-<TS>")'` | Session row exists in Postgres |
| 7 | Events attributed | Same endpoint + `/events` | All tool calls have the correct `agent_id` stamped |
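The "all seven green" gate is simple enough to encode directly. A sketch, reusing the check names from the test-results.yml format described later in this document; the `validated` helper is hypothetical, not part of the codebase:

```go
package main

import "fmt"

// requiredChecks lists all seven verification checks; every one must
// pass before a platform counts as validated.
var requiredChecks = []string{
	"es_process_exec", "agent_identified", "code_signing",
	"ne_intercepts", "llm_parser", "cloud_session", "events_attributed",
}

// validated returns true only when every check result is exactly "pass".
// Any other value (including a "fail: ..." description) blocks validation.
func validated(results map[string]string) bool {
	for _, c := range requiredChecks {
		if results[c] != "pass" {
			return false
		}
	}
	return true
}

func main() {
	run := map[string]string{
		"es_process_exec": "pass", "agent_identified": "pass",
		"code_signing": "pass", "ne_intercepts": "pass",
		"llm_parser":        "fail: PARSE_FAILED on openai_responses format",
		"cloud_session":     "pass",
		"events_attributed": "pass",
	}
	fmt.Println(validated(run)) // false: one red check blocks validation
}
```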

The First-Pass Manual Test (3 platforms)

Per the audit, the three most strategic platforms to validate right now are:

1. Cursor 🟢 strong detection claim

Why first: Cursor is the second-most common developer choice, has the strongest competitive story (TeamID + headers + domain + prompt = 4 signals), and its Connect RPC protocol is a distinct test of the llmparse router’s protocol detection.

Install path: Download from https://cursor.com/download, sign in, open a folder, trigger an AI chat.

Known unknowns:
  • Does Cursor use Connect RPC or plain HTTPS in May 2026? (May have changed from the identifyFromProtocol code path in fingerprints.go:444-467)
  • Do the 5 custom x-cursor-* headers still exist?
  • Is cursor.com or *.cursor.sh the current API domain?
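How identifyFromProtocol actually decides is exactly the unknown to verify, but a first-pass check can key on markers the public Connect protocol defines: the Connect-Protocol-Version request header and the application/connect+ content-type prefix used by streaming calls. A hypothetical sketch, not a copy of Quint's router:

```go
package main

import (
	"fmt"
	"strings"
)

// detectProtocol guesses whether an intercepted request is Connect RPC
// or plain JSON-over-HTTPS. The markers below come from the public
// Connect protocol, not from fingerprints.go; whatever Cursor ships in
// May 2026 must be captured and compared.
func detectProtocol(headers map[string]string) string {
	if headers["Connect-Protocol-Version"] != "" {
		return "connect-rpc"
	}
	if strings.HasPrefix(headers["Content-Type"], "application/connect+") {
		return "connect-rpc"
	}
	return "plain-https"
}

func main() {
	fmt.Println(detectProtocol(map[string]string{"Connect-Protocol-Version": "1"}))      // connect-rpc
	fmt.Println(detectProtocol(map[string]string{"Content-Type": "application/json"}))   // plain-https
}
```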

2. GitHub Copilot in VS Code 🟢 strong detection claim

Why second: Virtually every Fortune 500 enterprise dev has this. If Quint can’t see Copilot, the pitch to 80% of buyers falls apart. Strong signal claims (VS Code TeamID + api.githubcopilot.com + 5 headers + UA) give it maximum validation value.

Install path: VS Code + the GitHub Copilot extension + a valid subscription; open a file and trigger a Copilot completion or Copilot Chat.

Known unknowns:
  • Does Copilot Chat use the same endpoints as inline completions?
  • Are the copilot-integration-id / editor-version headers still present in 2026 versions?
  • Does the VS Code teamID UBF8T346G9 attribute correctly, or does it over-match to any VS Code extension?
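If the shared teamID does over-match, the natural mitigation is to require a second signal before attributing to Copilot, since every VS Code extension runs under UBF8T346G9. A hypothetical sketch of that guard (the `matchCopilot` helper does not exist in the codebase; the teamID and domain come from the registry table above):

```go
package main

import "fmt"

// matchCopilot refuses to attribute on the shared VS Code teamID alone:
// it also demands the Copilot API domain as a corroborating signal.
// This is one possible over-match guard, not the actual detector logic.
func matchCopilot(teamID, destHost string) bool {
	return teamID == "UBF8T346G9" && destHost == "api.githubcopilot.com"
}

func main() {
	fmt.Println(matchCopilot("UBF8T346G9", "api.githubcopilot.com")) // true
	fmt.Println(matchCopilot("UBF8T346G9", "api.openai.com"))        // false: some other VS Code extension
}
```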

3. Claude Desktop 🔴 NOT IN REGISTRY — must add before testing

Why third: Quint claims “20+ platforms” but has zero desktop AI coverage. CrowdStrike and Microsoft both explicitly support Claude Desktop, so this is the biggest demo-day gap. Adding Claude Desktop is ~1 hour of fingerprint work plus validation.

Install path: Download from https://claude.ai/download, sign in, open a chat (this is NOT the CLI).

What to add to fingerprints.go:
```go
{
    name:         "claude-desktop",
    teamID:       "Q6L2SF6YDW",   // Same Anthropic team as claude-code
    signingIDs:   []string{"com.anthropic.claudefordesktop"},
    processNames: []string{"claude"},
    processPathPatterns: []string{
        "Claude.app", "/Claude",
    },
    // System prompt likely differs from claude-code — needs capture
    uaPatterns: []string{"claude-desktop"},
},
```
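The entry above assumes processPathPatterns are matched as substrings of the exec path; whether the real matcher uses substrings, globs, or regexes is part of the discovery work. A self-contained sketch of the substring interpretation (`matchesPath` is illustrative, not the actual matcher):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesPath reports whether any pattern appears as a substring of the
// ES-reported executable path. This mirrors the apparent intent of
// processPathPatterns; confirm the real semantics before relying on it.
func matchesPath(execPath string, patterns []string) bool {
	for _, p := range patterns {
		if strings.Contains(execPath, p) {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"Claude.app", "/Claude"}
	fmt.Println(matchesPath("/Applications/Claude.app/Contents/MacOS/Claude", patterns)) // true
	fmt.Println(matchesPath("/usr/local/bin/claude", patterns))                          // false: lowercase CLI path
}
```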
Known unknowns: Entire signal set. This is discovery work, not verification.

The Output — What to Log

Every test run should produce a test-results.yml with:
```yaml
platform: cursor
version: "0.43.2"
macos_version: "15.5"
tested_at: "2026-05-03T14:22:11Z"
checks:
  es_process_exec: pass
  agent_identified: pass
  code_signing: pass
  ne_intercepts: pass
  llm_parser: pass     # or "fail: PARSE_FAILED on openai_responses format"
  cloud_session: pass
  events_attributed: pass
signals_fired:
  - teamID
  - signingID
  - domain
  - headers (3/5)
  - prompt
  - UA
notes: "x-cursor-client-version no longer present in v0.43"
```
Check these results into docs/operations/test-results/ — they become the source of truth for the platform coverage table, replacing the aspirational list.

Escalation Path When Something Fails

| Failure | Likely cause | File to investigate |
| --- | --- | --- |
| ES doesn’t see exec | ES buffer overflow, ColdStart race | internal/eslistener/ + memory project_ingestion_gaps.md |
| Agent identified as unknown | Fingerprint mismatch (platform changed) | internal/agentdetect/fingerprints.go |
| NE doesn’t intercept | Not in NE domain list, or HTTP/2 reject | internal/netdetect/ + memory project_ingestion_gaps.md |
| Parser returns PARSE_FAILED | Wrong format router, new API shape | internal/llmparse/router.go + specific parser file |
| Cloud session missing | Session ID mismatch, ingest 4xx | Check /debug/flows and CloudWatch ingest logs |
| Wrong agent_id attributed | Detection priority ordering, parent-cascade | internal/agentdetect/detector.go |
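A test harness could route a red check straight to the right file by keying the escalation table on the check names from test-results.yml. A sketch; the key-to-failure pairing is an assumption (each key is the check whose failure row it maps to), and the hint strings are copied from the table above:

```go
package main

import "fmt"

// investigationHint maps a failed check name (as recorded in
// test-results.yml) to the first place to look, per the escalation table.
var investigationHint = map[string]string{
	"es_process_exec":   "internal/eslistener/ + memory project_ingestion_gaps.md",
	"agent_identified":  "internal/agentdetect/fingerprints.go",
	"ne_intercepts":     "internal/netdetect/ + memory project_ingestion_gaps.md",
	"llm_parser":        "internal/llmparse/router.go + specific parser file",
	"cloud_session":     "/debug/flows and CloudWatch ingest logs",
	"events_attributed": "internal/agentdetect/detector.go",
}

func main() {
	// A PARSE_FAILED result points at the format router first.
	fmt.Println(investigationHint["llm_parser"])
}
```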

Expand the Matrix, Honestly

Once the first 3 platforms are validated, the dashboard “Platform Coverage” page should stop listing 19 names and start listing:
  • 🟢 Validated — test passed within the last 30 days (link to test-results.yml)
  • 🟡 Aspirational — fingerprint exists, never tested
  • 🔴 Known broken — tested and failed, has an open bug
This is what CISOs want to see. “We actually tested these 5 platforms yesterday” is infinitely more credible than “we support 20+.”