

Shadow AI Discovery

Last updated: 2026-05-03. Research sources cited inline.

The Market: What Exists Today

CrowdStrike Falcon — Shadow AI Discovery + AI-SPM

CrowdStrike’s approach is sensor-first. The Falcon agent already sits on every managed endpoint, so they extended it to fingerprint AI processes. As of their May 2026 announcements, Falcon detects 1,800+ unique AI applications across enterprise devices (160M+ installations). Their Shadow AI Discovery surfaces:
  • AI applications and agents (ChatGPT, Claude, Cursor, GitHub Copilot, DeepSeek, Gemini)
  • LLM runtimes (local model servers, inference frameworks)
  • MCP servers (explicitly called out as a discovery target)
  • IDE extensions and dev tooling with AI capabilities
Detection works by analyzing Falcon sensor telemetry in real time — process signatures, command-line behaviors, package manifests, network connections, and DNS queries. Discovered components are classified and linked into Falcon Exposure Management’s Enterprise Graph, where they gain context: privilege level, network connectivity, proximity to critical assets, blast radius of a potential compromise.
The dashboard lives inside Falcon Exposure Management (requires the Falcon for IT add-on). A single toggle activates AI Discovery. The UI ties each AI asset to its host, user, and risk priority.
CrowdStrike also ships a Shadow AI Visibility Service — a professional-services engagement where their team audits your environment. Their finding: “One customer counted 150 agents in its inventory. We found over 500.”
Cloud-side, Falcon Cloud Security AI-SPM extends discovery to AWS SageMaker, Azure Cognitive Services, and GCP Vertex AI — scanning for misconfigurations, exposed endpoints, and shadow AI workloads.
Sales pitch: “You can’t secure what you can’t see. Falcon discovers every AI agent, runtime, and MCP server across your endpoints, SaaS, and cloud — and links them to the blast radius of a compromise.”
Sources: CrowdStrike blog — Secure AI Agents, Govern Shadow AI; CrowdStrike Shadow AI Visibility Service; Techzine coverage

Microsoft Defender for Cloud Apps + Entra Global Secure Access

Microsoft takes a network-traffic approach. Their Shadow AI discovery in Entra Global Secure Access inspects internet and Microsoft 365 traffic to detect connections to known generative AI applications, SaaS MCP servers, and AI Model Provider APIs (explicitly naming ChatGPT, Claude, DeepSeek, Anthropic Claude API). Discovered apps are matched against the Defender for Cloud Apps catalog — now 31,000+ apps, scored on 90+ risk factors across security, compliance, and legal categories. The catalog includes dedicated AI categories:
  • AI — MCP Server: public cloud services implementing Model Context Protocol
  • AI — Model Provider: platforms/APIs delivering access to foundation models
The dashboard (Cloud Discovery > Discovered Apps > Generative AI filter) shows: app name, risk score (1-10), user count, bytes sent/received, and a drill-down with security details (encryption, audit logging, compliance certifications). Admins can sanction/unsanction apps and create automated policies.
For agentic AI specifically, Defender now discovers Copilot Studio agents and Azure AI Foundry agents, ingesting their audit logs into Advanced Hunting for custom threat queries. Microsoft Purview DSPM for AI layers on top — monitoring actual user interactions with AI apps, detecting sensitive data in prompts, and flagging policy violations.
Sales pitch: network-level discovery (no endpoint agent needed for SaaS AI), massive app catalog, native integration with M365 governance. Weakness: blind to local AI agents and runtimes that don’t make network calls to known endpoints.
Sources: Microsoft Learn — Shadow AI discovery in Global Secure Access; Defender for Cloud Apps risk scores; Defender release notes — AI Agent Protection

Wiz AI-SPM

Wiz takes the agentless cloud-scanning route. Their AI-SPM discovers AI infrastructure across AWS, GCP, and Azure without deploying agents:
  • AI services: SageMaker, Vertex AI, Bedrock, Azure Cognitive Services
  • Libraries/SDKs: Hugging Face, OpenAI SDK, LangChain in deployed workloads
  • Training data: sensitive data feeding ML pipelines
  • Inference endpoints: deployed models and serving infrastructure
The output is an AI-BOM (Bill of Materials) — a full-stack inventory visible in the Wiz Security Graph. Each resource is classified as approved, unwanted, or unreviewed. Misconfiguration checks run automatically (unencrypted SageMaker endpoints, public Vertex notebooks). Attack path analysis correlates AI-specific risks with the broader cloud context.
Positioning: cloud-native, agentless, integrated into their existing CNAPP. Blind spot: zero visibility into developer laptops, local agents, or IDE-based AI tools.
Source: Wiz AI-SPM blog

Other Vendors

| Vendor | Feature Name | Approach | Differentiator |
| --- | --- | --- | --- |
| Harmonic Security | Shadow AI Detection | Browser extension + endpoint agent + MCP Gateway | Purpose-built SLMs that evaluate prompt sensitivity in milliseconds; inline blocking |
| Noma Security | AI Asset Discovery | Maps “every model, every agent, every MCP server, every data source” | Full dependency chain + approved AI supply chain concept; continuous red teaming |
| Portal26 | Shadow AI Engine | Network-based, 30-minute activation | 35+ risk detectors; GenAI audit vault (NIST/SOC2); intent analysis on prompts |
| Reco AI | Shadow AI Discovery | SaaS-layer discovery | Found OpenAI = 53% of shadow AI usage; 400+ day persistence of unsanctioned tools |
| Lasso Security | AI Agents Discovery | Network signals + CrowdStrike Falcon integration | Unified inventory across SaaS agents, copilots, and homegrown AI apps |

Analyst Framing

Gartner’s category is AI TRiSM (AI Trust, Risk and Security Management), defined in their February 2025 Market Guide. The framework has four layers:
  1. AI Governance — policy, roles, accountability
  2. AI Runtime Inspection & Enforcement — real-time monitoring of prompts/outputs
  3. Information Governance — data classification, DLP for AI
  4. Infrastructure & Stack — securing the AI supply chain
Within this, the AI Catalog is explicitly called out: “an inventory of all AI entities (models, agents, and applications) used in the organization.” Gartner’s January 2026 “Emerging Tech: Top-Funded Startups in AI TRiSM” report organizes the landscape into five categories: AI Security Platforms, Agentic AI Security, Information Governance, AI Governance, AI Security Testing. Gartner predicts >60% of enterprises will secure the AI lifecycle through AI security platforms by 2030 (up from <10% in 2025). The AI Security market is projected at $0.69B in 2025, growing to $2.48B by 2030 (~29% CAGR).
The term to use in sales: “AI Asset Discovery” or “AI Catalog” when speaking Gartner; “Shadow AI Discovery” when speaking to CISOs who feel the pain.

What CISOs Actually Ask For

The research is unambiguous:
  • 92% lack full visibility into AI identities operating in their environment (Cybersecurity Insiders/Saviynt, April 2026)
  • 86% don’t enforce formal access policies for AI identities
  • Only 5% feel confident they could contain a compromised AI agent
  • 75% have already found unsanctioned AI tools running in their environment
  • 44% struggle with business units deploying AI without involving security (Delinea 2025)
  • 54% have experienced data privacy incidents from Gen AI adoption (ISC2 2024)
  • 39% of CISOs plan to increase DLP spending specifically because of Shadow AI (Cribl 2025)
The ask boils down to three things: What AI is running? Who’s using it? What data is it touching?

Quint’s Design: Shadow AI Discovery

Why Quint Wins This

Every other vendor is bolting AI discovery onto an existing product (EDR, CASB, CNAPP). Quint is the only product that already sits at the execution layer of AI agents — intercepting every process, file operation, network call, and tool invocation. We don’t need to fingerprint AI apps from DNS logs or scan cloud APIs. We watch them work. What we already capture today per session:
  • Agent identity (Claude Code, Cursor, GitHub Copilot, Aider, etc.)
  • PID, parent PID, process tree
  • MCP tool calls (tool name, arguments, results)
  • Files read/written/modified with sensitivity classification
  • Network destinations (model API endpoints)
  • Session duration, command count, risk score
Shadow AI Discovery is packaging what we already have.
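The per-session capture listed above can be sketched as a record type. A minimal Python sketch — field names are illustrative, not Quint’s actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class McpToolCall:
    tool_name: str          # e.g. "create_pull_request"
    arguments: dict         # raw tool arguments as captured
    result_summary: str     # truncated result payload


@dataclass
class AgentSession:
    agent_type: str         # "claude_code", "cursor", "github_copilot", "aider", ...
    pid: int
    parent_pid: int
    mcp_tool_calls: list[McpToolCall] = field(default_factory=list)
    # (path, sensitivity classification) pairs for files read/written/modified
    files_touched: list[tuple[str, str]] = field(default_factory=list)
    network_destinations: list[str] = field(default_factory=list)  # model API endpoints
    duration_s: float = 0.0
    command_count: int = 0
    risk_score: float = 0.0
```

Everything the inventory API needs is a roll-up over records shaped like this.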

API: /v1/inventory

GET /v1/inventory?org_id={org}&group_by=machine|agent|team

{
  "org_id": "uuid",
  "generated_at": "2026-05-03T12:00:00Z",
  "summary": {
    "total_machines": 47,
    "total_agents": 12,
    "total_mcp_servers": 8,
    "total_sessions_30d": 2341,
    "ungoverned_agents": 3,
    "ungoverned_mcp_servers": 2,
    "risk_score_p95": 7.2
  },
  "machines": [
    {
      "machine_id": "uuid",
      "hostname": "amer-mbp.local",
      "user": "amerabbadi",
      "team": "engineering",
      "agents": [
        {
          "agent_type": "claude_code",
          "version": "1.0.26",
          "first_seen": "2026-03-15T09:00:00Z",
          "last_seen": "2026-05-03T11:45:00Z",
          "sessions_30d": 89,
          "status": "sanctioned",
          "risk_score_avg": 3.1,
          "models_used": ["claude-sonnet-4-20250514"],
          "mcp_servers": [
            {
              "name": "github",
              "transport": "stdio",
              "tools_available": 34,
              "tools_invoked_30d": ["create_pull_request", "search_code"],
              "status": "sanctioned",
              "first_seen": "2026-04-01T10:00:00Z"
            }
          ],
          "data_classifications_touched": ["source_code", "config_secrets"],
          "frameworks": ["anthropic-sdk"],
          "network_destinations": ["api.anthropic.com", "api.github.com"]
        }
      ]
    }
  ]
}
Key design decisions:
  • status field: sanctioned | unsanctioned | unreviewed — mirrors Wiz’s classification model
  • data_classifications_touched: derived from our existing file-path sensitivity heuristics, upgraded to proper classification tiers
  • MCP servers as first-class objects: we’re the only vendor that can enumerate tool names and invocation frequency per server
  • risk_score_avg: aggregate of per-session risk scores we already compute
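The summary block in the response above is derivable from the per-machine records. A hedged sketch of that aggregation — the nearest-rank p95 and the definition of “ungoverned” as anything not explicitly sanctioned are assumptions, not the service’s actual logic:

```python
import math


def _ungoverned(asset: dict) -> bool:
    # Assumption: "ungoverned" = unsanctioned or unreviewed (anything not sanctioned)
    return asset.get("status") != "sanctioned"


def summarize(machines: list[dict]) -> dict:
    """Recompute the /v1/inventory summary block from the machines array."""
    agents = [a for m in machines for a in m.get("agents", [])]
    mcp_servers = [s for a in agents for s in a.get("mcp_servers", [])]
    scores = sorted(a["risk_score_avg"] for a in agents if "risk_score_avg" in a)
    # Nearest-rank p95 (assumption: the real service may interpolate instead)
    p95 = scores[max(0, math.ceil(0.95 * len(scores)) - 1)] if scores else 0.0
    return {
        "total_machines": len(machines),
        "total_agents": len(agents),
        "total_mcp_servers": len(mcp_servers),
        "ungoverned_agents": sum(_ungoverned(a) for a in agents),
        "ungoverned_mcp_servers": sum(_ungoverned(s) for s in mcp_servers),
        "risk_score_p95": p95,
    }
```

In production this would be one SQL aggregation over session/event tables rather than a Python pass, but the shape of the roll-up is the same.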

Dashboard: “Shadow AI” Tab

Navigation: Dashboard sidebar gets a new “AI Inventory” tab between “Sessions” and “Settings”. Top-level view — Fleet Heatmap:
  • Grid of machines (rows) x agent types (columns), color-coded by risk score (green/yellow/red)
  • Summary cards at top: Total Agents | Ungoverned Agents | MCP Servers | Ungoverned MCP Servers | Sessions (30d)
  • Filter bar: team, machine, agent type, status (sanctioned/unsanctioned), date range
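Cell color in the heatmap can be a pure function of risk score. A sketch — the 4.0/7.0 cutoffs are assumptions, not product defaults:

```python
def heatmap_color(risk_score: float) -> str:
    """Map a 0-10 risk score to a heatmap cell color (thresholds are illustrative)."""
    if risk_score < 4.0:
        return "green"
    if risk_score < 7.0:
        return "yellow"
    return "red"
```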
Drill-down — Agent Detail:
  • Click any cell to see: agent version, all MCP servers connected, tools used, models called, data classifications accessed, session timeline
  • “Mark as Sanctioned/Unsanctioned” action button per agent and per MCP server
  • Risk trend sparkline (30-day)
Drill-down — MCP Server Detail:
  • Tools available vs. tools actually invoked
  • Which agents connect to this server
  • Data flow summary: what data types pass through
Alert Rules:
  • New unsanctioned agent detected (fires on first-seen of unreviewed agent type)
  • MCP server with sensitive data access exceeds risk threshold
  • Agent connecting to unapproved model endpoint
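Each rule above reduces to a predicate over an inventory record. A minimal sketch — field names follow the /v1/inventory schema, while the endpoint allowlist, risk threshold, and sensitive-tier set are assumed, org-configurable values:

```python
APPROVED_ENDPOINTS = {"api.anthropic.com", "api.openai.com"}  # assumption: org allowlist
RISK_THRESHOLD = 7.0                                          # assumption: org-configured
SENSITIVE_TIERS = {"config_secrets", "pii", "credentials"}    # assumption: sensitive tiers


def evaluate_alerts(agent: dict, known_agent_types: set[str]) -> list[str]:
    """Return alert messages for one agent record (rule logic is a sketch)."""
    alerts = []
    # Rule 1: first sighting of an unreviewed agent type
    if agent["agent_type"] not in known_agent_types and agent.get("status") == "unreviewed":
        alerts.append(f"new unsanctioned agent: {agent['agent_type']}")
    # Rule 2: sensitive data access combined with a risk score over threshold
    sensitive = SENSITIVE_TIERS & set(agent.get("data_classifications_touched", []))
    if sensitive and agent.get("risk_score_avg", 0.0) > RISK_THRESHOLD:
        alerts.append(f"risk threshold exceeded with sensitive access: {sorted(sensitive)}")
    # Rule 3: connection to an endpoint outside the approved model-API list
    for dest in agent.get("network_destinations", []):
        if dest not in APPROVED_ENDPOINTS:
            alerts.append(f"unapproved endpoint: {dest}")
    return alerts
```

Rule 3 would need the allowlist to distinguish model APIs from ordinary SaaS endpoints (api.github.com, say) so MCP traffic doesn’t fire false positives.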

Sales Pitch

“Your developers are running AI agents with root-level access to your codebase, your secrets, and your production configs — and you have zero visibility into which agents, which tools, or which data they’re touching. Quint discovers every AI agent, every MCP server, and every tool invocation across your fleet in real time. Not from DNS logs. Not from cloud scans. From the execution layer — where the agent actually works.”
Three sentences for a CISO who has never heard of Quint:
Quint is an endpoint sensor purpose-built for AI agents. It discovers every coding assistant, every MCP server, and every model API call across your developer fleet — showing you exactly what AI is running, what data it touches, and whether it’s sanctioned. Think CrowdStrike Falcon, but designed from day one for the AI attack surface.

What It Takes to Ship

| Work Item | Owner | Days | Dependencies |
| --- | --- | --- | --- |
| /v1/inventory API endpoint | Amer | 2 | Existing session + event data; new SQL aggregation queries |
| Sanctioned/unsanctioned status model | Amer | 1 | New ai_asset_status table, migration |
| MCP server inventory extraction | Amer | 2 | Parse existing MCP_TOOL_CALL events to extract server identity |
| Agent type + version normalization | Amer | 1 | Extend existing agent fingerprinting in session processor |
| Dashboard: AI Inventory tab (heatmap + summary) | Hamza | 3 | API endpoint complete; reuse dashboard-v2 primitives |
| Dashboard: Agent detail drill-down | Hamza | 2 | API endpoint complete |
| Dashboard: MCP server detail view | Hamza | 1 | API endpoint complete |
| Dashboard: Sanctioned/unsanctioned toggle | Hamza | 1 | Status model API |
| Alert rules (new agent, risk threshold) | Amer | 1 | Existing risk scoring pipeline |
| Data classification tier upgrade | Amer | 2 | Replace path heuristics with proper 4-tier classification |
Total: ~16 engineering-days. Backend and frontend can run in parallel after day 2.
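The MCP server inventory extraction line item amounts to a group-by over tool-call events. A sketch assuming a flat event stream — the MCP_TOOL_CALL event shape shown here is an assumption, not Quint’s actual event format:

```python
from collections import Counter, defaultdict


def extract_mcp_inventory(events: list[dict]) -> list[dict]:
    """Deduplicate MCP servers from MCP_TOOL_CALL events and tally tool usage."""
    tool_counts: dict[tuple[str, str], Counter] = defaultdict(Counter)
    first_seen: dict[tuple[str, str], str] = {}
    for ev in events:
        if ev.get("type") != "MCP_TOOL_CALL":
            continue
        # Dedupe on (server name, transport); assumes events carry both
        key = (ev["server_name"], ev.get("transport", "stdio"))
        tool_counts[key][ev["tool_name"]] += 1
        first_seen.setdefault(key, ev["timestamp"])  # events assumed time-ordered
    return [
        {
            "name": name,
            "transport": transport,
            "first_seen": first_seen[(name, transport)],
            "tools_invoked": sorted(counts),
            "invocations": sum(counts.values()),
        }
        for (name, transport), counts in tool_counts.items()
    ]
```

This is the per-server record the dashboard’s “tools available vs. tools actually invoked” view would consume, joined against the tool list advertised by the server at connect time.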

Sprint Plan: Week of 2026-05-05

Monday (Day 1)

  • Amer: ai_asset_status migration + table. Sanctioned/unsanctioned CRUD API.
  • Hamza: Wireframe AI Inventory tab. Set up route + nav entry in dashboard-v2.

Tuesday (Day 2)

  • Amer: /v1/inventory endpoint — machine-level aggregation from existing session/event data.
  • Hamza: Heatmap component (machines x agents, color by risk). Summary cards.

Wednesday (Day 3)

  • Amer: MCP server inventory extraction — parse MCP_TOOL_CALL events, deduplicate servers, build tool usage stats. Agent type normalization.
  • Hamza: Wire heatmap to live API. Filter bar (team, status, agent type).

Thursday (Day 4)

  • Amer: Data classification tier upgrade. Alert rules for new-agent and risk-threshold.
  • Hamza: Agent detail drill-down view. MCP server detail view.

Friday (Day 5)

  • Amer: Integration testing. Edge cases (machines with no agents, agents with no MCP).
  • Hamza: Sanctioned/unsanctioned toggle UI. Polish + responsive.
  • Both: End-of-week demo with real fleet data. Screenshot for sales deck.