Practical guides on Linux, AI, self-hosting, and developer tools

AI Coding Agents Are Insider Threats: Prompt Injection, MCP Exploits, and Supply Chain Attacks

Your AI coding agent has the same file system access, shell execution privileges, and database credentials that you do. A systematic analysis of 78 studies published in January 2026 (arXiv:2601.17548) found that every tested coding agent - Claude Code, GitHub Copilot, Cursor - is vulnerable to prompt injection, with adaptive attack success rates exceeding 85%. This is not a theoretical concern. CVE-2026-23744 gave attackers remote code execution on MCPJam Inspector (CVSS 9.8). A crafted PDF triggered physical pump activation through a Claude MCP integration at an industrial facility. GitHub's MCP server was exploited to exfiltrate private repository data via malicious issues. And 47 enterprise deployments were compromised through a poisoned plugin ecosystem that went undetected for six months.

Best USB-C Docking Stations for a Dual-Monitor Linux Desk Setup in 2026

The best USB-C docking stations for a dual-monitor Linux setup in 2026 are the CalDigit TS4 (Thunderbolt 4, dual 4K@60Hz, rock-solid kernel 7.0 support) and the Anker 777 (USB4 Gen 2, excellent driver compatibility, more affordable at $149). The deciding factor is whether your laptop supports Thunderbolt 4 or only USB4. Thunderbolt provides guaranteed DisplayPort alt-mode bandwidth for dual 4K; USB4 solutions share that bandwidth with USB traffic and may require Multi-Stream Transport (MST) support from both the dock and the kernel.

Claude Code Skills Ecosystem: 1,340+ Installable Agent Skills for AI Coding Assistants

The Claude Code skills ecosystem passed 1,340 installable skills in early 2026, and the number keeps climbing. These skills use the universal SKILL.md format - folders of structured instructions that teach AI coding assistants how to complete specialized tasks. They work across Claude Code, Cursor, Codex CLI, Gemini CLI, and other tools without modification. Contributions have come from teams at Anthropic, Trail of Bits, Vercel, Stripe, and Cloudflare, as well as from dozens of independent developers. Installation takes a single npx command.

Home Assistant MQTT: How to Control Custom DIY Devices Without ESPHome

You can integrate any microcontroller with Home Assistant over MQTT by publishing sensor data to discovery-compatible topics and subscribing to command topics. This gives you complete control over the firmware without ESPHome's abstraction layer. The approach works with any language and any chip that can speak MQTT - ESP32, STM32, or an RP2040-based board like the Raspberry Pi Pico W - and it is the right choice when your device needs custom protocols, bare-metal timing, or firmware features that ESPHome simply does not support.
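The discovery half of that workflow can be sketched in a few lines. This builds the config message Home Assistant expects on its `homeassistant/sensor/.../config` discovery topic; the `greenhouse` node name and the sensor details are made-up examples, and the actual publish would go through whatever MQTT client your firmware uses:

```python
import json


def discovery_message(node_id: str, sensor: str, unit: str, device_class: str):
    """Build a Home Assistant MQTT discovery config message for one sensor."""
    object_id = f"{node_id}_{sensor}"
    topic = f"homeassistant/sensor/{object_id}/config"
    payload = {
        "name": f"{node_id} {sensor}",
        # The device will publish readings to this topic after registering.
        "state_topic": f"{node_id}/{sensor}/state",
        "unit_of_measurement": unit,
        "device_class": device_class,
        "unique_id": object_id,
    }
    return topic, json.dumps(payload)


topic, payload = discovery_message("greenhouse", "temperature", "°C", "temperature")
# With any MQTT client (e.g. paho-mqtt) you would then do:
#   client.publish(topic, payload, retain=True)
# retain=True lets Home Assistant rediscover the sensor after a restart.
```

Once the retained config message is on the broker, Home Assistant creates the entity automatically and starts tracking whatever the firmware publishes to the state topic.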

How to Build a Local Package Registry for Python and Node.js

You can self-host a private PyPI registry with pypiserver and a private npm registry with Verdaccio, both running on a single machine or inside Docker containers. This gives you three things that relying on public registries alone cannot: faster installs by caching packages on your local network, a place to publish proprietary packages without exposing them to the public internet, and protection against upstream outages, typosquatting, and supply chain attacks. Both tools are free, open-source, and take under 30 minutes to configure.
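Once both registries are running, pointing your clients at them is a small config change. A sketch assuming the default ports (8080 for pypiserver, 4873 for Verdaccio) on localhost:

```ini
# ~/.config/pip/pip.conf - route pip through the local registry;
# by default pypiserver redirects to pypi.org for packages it does not host
[global]
index-url = http://localhost:8080/simple/

# ~/.npmrc - route npm through Verdaccio, which proxies npmjs.org
# for anything not published or cached locally
registry=http://localhost:4873/
```

With these in place, `pip install` and `npm install` hit the local registry first, which is where the caching and private-publishing benefits come from.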

How to Write Effective Integration Tests with Testcontainers

Testcontainers lets you spin up real databases, message queues, and services as Docker containers directly inside your test suite. Your integration tests run against the same PostgreSQL, Redis, or Kafka that your application uses in production instead of flaky mocks or in-memory substitutes. In Python, testcontainers-python (currently at v4.14.2) integrates with pytest fixtures that start a container before tests and tear it down after. You get isolated, reproducible, and parallelizable integration tests that catch bugs that unit tests and mocks cannot.

Running Gemma 4 Locally with Ollama: All Four Model Sizes Compared

Google’s Gemma 4 is not one model - it is a family of four, each targeting different hardware and different use cases. The smallest runs on a Raspberry Pi. The largest ranks #3 on LMArena across all models, open and closed. All four ship under the Apache 2.0 license, a first for the Gemma family. This guide walks through installing each variant with Ollama (currently at v0.20.2), benchmarks them on real consumer hardware, and helps you decide which one fits your setup.

Self-Hosted AI Search: Combine SearXNG and a Local RAG Pipeline

You can build a private AI search engine modeled on Perplexity by combining SearXNG with a local language model running through Ollama. The pipeline has three stages: SearXNG aggregates results from multiple search engines simultaneously, a Python scraper fetches and cleans the actual page content, and the LLM synthesizes everything into a cited answer with inline references like [1] and [2]. No API keys, no telemetry, no query logging to third-party AI services. A machine with 12 GB of VRAM handles the whole pipeline, and most queries come back in 5-15 seconds.
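A condensed sketch of that pipeline, assuming SearXNG (with `format=json` enabled in its settings) and Ollama on their default local ports; the URLs and the model name are placeholders, and for brevity it feeds the model SearXNG's result snippets rather than running the full page scraper:

```python
import json
import urllib.parse
import urllib.request

SEARX = "http://localhost:8080"    # assumed SearXNG instance
OLLAMA = "http://localhost:11434"  # assumed Ollama instance


def searx_search(query: str, limit: int = 5) -> list:
    """Fetch aggregated results from SearXNG's JSON API."""
    url = f"{SEARX}/search?" + urllib.parse.urlencode(
        {"q": query, "format": "json"})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["results"][:limit]


def build_prompt(query: str, docs: list) -> str:
    """Number each source so the model can cite [1], [2], ... inline."""
    sources = "\n\n".join(
        f"[{i}] {d['title']}\n{d.get('content', '')}"
        for i, d in enumerate(docs, 1))
    return ("Answer the question using only the sources below. "
            f"Cite them inline as [1], [2], etc.\n\n{sources}\n\n"
            f"Question: {query}")


def answer(query: str, model: str = "llama3.1") -> str:
    """Search, then ask the local model for a cited synthesis."""
    req = urllib.request.Request(
        f"{OLLAMA}/api/generate",
        data=json.dumps({"model": model,
                         "prompt": build_prompt(query, searx_search(query)),
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Swapping the snippet-based `build_prompt` input for scraped full-page text is the main upgrade path; the numbering scheme is what makes the inline [1], [2] citations come back reliably.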