<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Mcp - Tag - Botmonster Tech</title><link>https://botmonster.com/tags/mcp/</link><description>Mcp - Tag - Botmonster Tech</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Mon, 20 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://botmonster.com/tags/mcp/" rel="self" type="application/rss+xml"/><item><title>MCP vs. A2A: The Two Protocols Powering the Agentic Web</title><link>https://botmonster.com/posts/mcp-vs-a2a-protocols-agentic-web/</link><pubDate>Mon, 20 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/posts/mcp-vs-a2a-protocols-agentic-web/</guid><description><![CDATA[<div class="featured-image">
                <img src="/layered-city-infrastructure.png" referrerpolicy="no-referrer">
            </div><p><a href="https://modelcontextprotocol.io" target="_blank" rel="noopener noreferrer ">Model Context Protocol (MCP)</a>
 and <a href="https://a2a-protocol.org" target="_blank" rel="noopener noreferrer ">Agent-to-Agent Protocol (A2A)</a>
 aren&rsquo;t rivals. They solve different layers of the same problem. MCP sets how an AI agent connects to tools and data. A2A sets how agents talk to each other and hand off tasks. Together they form the base plumbing of the agentic web.</p>
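<p>The split shows up in plain configuration. A rough sketch, with made-up names throughout: the first object is the <code>mcpServers</code> shape a client like Claude Desktop uses to point one agent at a tool server, and the second is an illustrative A2A agent card of the kind peers fetch to discover each other (the two wrapper keys just label the sketch):</p>

```json
{
  "mcp_layer_client_config": {
    "mcpServers": {
      "postgres": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-postgres"]
      }
    }
  },
  "a2a_layer_agent_card": {
    "name": "billing-agent",
    "url": "https://agents.example.com/billing",
    "capabilities": { "streaming": true },
    "skills": [{ "id": "refund", "name": "Process refunds" }]
  }
}
```

<p>One file wires a single agent to a tool; the other advertises a whole agent to its peers. That is the layering in miniature.</p>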
<p>If you&rsquo;re building past a single chatbot in 2026, you need to grasp both.</p>
<h2 id="the-fragmentation-problem">The Fragmentation Problem</h2>
<p>Before these protocols, the AI tooling space was a mess of clashing integrations. Every major framework had its own way to plug into outside tools: <a href="https://www.langchain.com/" target="_blank" rel="noopener noreferrer ">LangChain</a>
, <a href="https://www.crewai.com/" target="_blank" rel="noopener noreferrer ">CrewAI</a>
, and <a href="https://microsoft.github.io/autogen/" target="_blank" rel="noopener noreferrer ">AutoGen</a>
. Giving a LangChain agent access to the Slack API meant writing a LangChain-only tool wrapper. Wanting the same in a CrewAI workflow meant starting over. None of the adapters carried across.</p>]]></description></item><item><title>AI Coding Agents Are Insider Threats: Prompt Injection, MCP Exploits, and Supply Chain Attacks</title><link>https://botmonster.com/posts/ai-coding-agent-insider-threat-prompt-injection-mcp-exploits/</link><pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/posts/ai-coding-agent-insider-threat-prompt-injection-mcp-exploits/</guid><description><![CDATA[<div class="featured-image">
                <img src="/ai-coding-agent-insider-threat-prompt-injection-mcp-exploits.png" referrerpolicy="no-referrer">
            </div><p>Your AI coding agent has the same file access, shell rights, and database keys you do. A review of 78 studies from January 2026 (<a href="https://arxiv.org/abs/2601.17548" target="_blank" rel="noopener noreferrer ">arXiv:2601.17548</a>
 ) tested every big coding agent, including <a href="/posts/claude-code-vs-cursor-vs-github-copilot-ai-coding-tool-workflow/" rel="">Claude Code, GitHub Copilot, and Cursor</a>
. All fell to prompt injection. Adaptive attacks landed more than 85% of the time. This isn&rsquo;t theory. CVE-2026-23744 gave attackers remote code execution on <a href="https://www.practical-devsecops.com/mcp-security-vulnerabilities/" target="_blank" rel="noopener noreferrer ">MCPJam Inspector</a>
 at CVSS 9.8. A booby-trapped PDF tripped a physical pump through a Claude <a href="https://modelcontextprotocol.io/" target="_blank" rel="noopener noreferrer ">MCP</a>
 connection at an industrial plant. Attackers hit GitHub&rsquo;s MCP server to <a href="https://www.docker.com/blog/mcp-horror-stories-github-prompt-injection/" target="_blank" rel="noopener noreferrer ">exfiltrate private repository data via malicious issues</a>
. And 47 firms fell to a poisoned plugin ecosystem that hid for six months.</p>]]></description></item><item><title>Claude Code Is Built Entirely on MCP - What the Source Leak Revealed</title><link>https://botmonster.com/posts/how-claude-code-uses-mcp-under-the-hood/</link><pubDate>Fri, 03 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/posts/how-claude-code-uses-mcp-under-the-hood/</guid><description><![CDATA[<div class="featured-image">
                <img src="/how-claude-code-uses-mcp-under-the-hood.png" referrerpolicy="no-referrer">
            </div><p>Claude Code doesn&rsquo;t use <a href="https://modelcontextprotocol.io/" target="_blank" rel="noopener noreferrer ">MCP</a>
 as a plugin system. It <em>is</em> MCP. On March 31, 2026, Anthropic shipped a 59.8 MB source map by accident in npm package <code>@anthropic-ai/claude-code</code> v2.1.88. Developers got a rare look at how a real AI coding agent works. Every capability in Claude Code (file reads, bash, web fetches, Computer Use, IDE bridges) runs as a single permission-gated MCP tool call. There is no special internal API. Third-party MCP servers you connect get the same execution path, permission checks, and error handling as Anthropic&rsquo;s own built-in tools.</p>]]></description></item><item><title>Claude Code with MCP: Local Agent for Files, SQL, APIs</title><link>https://botmonster.com/posts/build-local-ai-coding-agent-claude-code-mcp/</link><pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/posts/build-local-ai-coding-agent-claude-code-mcp/</guid><description><![CDATA[<div class="featured-image">
                <img src="/build-local-ai-coding-agent-claude-code-mcp.png" referrerpolicy="no-referrer">
            </div><p>Claude Code combined with custom <a href="https://modelcontextprotocol.io/" target="_blank" rel="noopener noreferrer ">MCP</a>
 (Model Context Protocol) servers creates a local AI coding agent that can read and write files, query databases, call APIs, and execute shell commands - all orchestrated by Claude through a standardized tool-use interface. You set up the Claude Code CLI, configure MCP servers in your project or user settings, and the agent automatically discovers and uses the tools you expose. The result is a development workflow where you describe tasks in natural language and Claude executes multi-step coding operations with full access to your project context.</p>]]></description></item><item><title>MCP Server Development: Build Custom Tools for Claude and Local LLMs</title><link>https://botmonster.com/posts/mcp-server-development-build-custom-tools/</link><pubDate>Tue, 31 Mar 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/posts/mcp-server-development-build-custom-tools/</guid><description><![CDATA[<div class="featured-image">
                <img src="/retro-switchboard-operator.png" referrerpolicy="no-referrer">
            </div><p>The <a href="https://modelcontextprotocol.io" target="_blank" rel="noopener noreferrer ">Model Context Protocol</a>
 gives LLMs a standard way to call external tools, read files, and query databases. You skip the rewrite each time you switch models. You can build a working MCP server in Python with the official <code>mcp</code> SDK in under 100 lines. It runs with Claude Desktop or Claude Code in minutes. This guide walks the full path, from a tiny first server to production.</p>
<h2 id="what-mcp-is-and-why-it-changes-tool-use">What MCP Is and Why It Changes Tool Use</h2>
<p>MCP is a JSON-RPC 2.0 protocol. It lets an LLM client (like <a href="https://claude.ai/download" target="_blank" rel="noopener noreferrer ">Claude Desktop</a>
, Claude Code, or Cursor) find and call tools exposed by a server process. The big shift from older function-calling is the discovery step. Instead of hard-coding tool defs into every prompt, the client sends a <code>tools/list</code> request when it connects. It gets back the full schema for everything the server exposes. Add a new tool, restart the server, and any client sees it on the next connect.</p>]]></description></item></channel></rss>