AI Coding Agents Are Insider Threats: Prompt Injection, MCP Exploits, and Supply Chain Attacks
Your AI coding agent has the same file system access, shell execution privileges, and database credentials that you do. A systematic analysis of 78 studies published in January 2026 (arXiv:2601.17548 ) found that every tested coding agent - Claude Code, GitHub Copilot, Cursor - is vulnerable to prompt injection, with adaptive attack success rates exceeding 85%. This is not a theoretical concern. CVE-2026-23744 gave attackers remote code execution on MCPJam Inspector (CVSS 9.8). A crafted PDF triggered physical pump activation through a Claude MCP integration at an industrial facility. GitHub’s MCP server was exploited to exfiltrate private repository data via malicious issues . And 47 enterprise deployments were compromised through a poisoned plugin ecosystem that went undetected for six months.







