Botmonster Tech
Hands-on experience with AI, self-hosting, Linux, and the developer tools I actually use

Newest

Editorial diagram showing three industrial cranes labeled Google, Bing, and Brave scooping web pages from layered strata, with chatbot robots tethered to them by colored hoses.

AI Web Search Backends: Who Owns, Who Rents

Only Google Gemini and Microsoft Copilot run on a search index their parent actually crawls. Anthropic Claude rents Brave Search, Mistral Le Chat rents Brave too, OpenAI ChatGPT rents Bing plus its own crawler, and Meta AI rents both. The non-obvious tell: Claude’s web_search tool exposes a literal BraveSearchParams field, and citation overlap with Brave runs around 86.7%.

Key Takeaways

  • Only Google and Microsoft own a true web-scale search index.
  • Claude and Mistral both reportedly run on the Brave Search API.
  • ChatGPT pulls from Bing, OpenAI’s own crawler, and publisher licensing deals.
  • IndexNow tells Bing about new pages but not Brave or Google.
  • Brave is now AI’s third search pole, alongside Google and Bing.
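On the IndexNow bullet: a minimal sketch of what an IndexNow submission looks like, based on the public protocol (a JSON POST of host, key, and URL list to a shared endpoint). The host, key, and URLs below are placeholders, not values from this post.

```python
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"  # shared endpoint; participating engines (Bing, Yandex) pull from it

def build_indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    """Build the JSON body for an IndexNow submission.

    `key` is a site-owned token that must also be served at
    https://<host>/<key>.txt so the engine can verify ownership.
    """
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

# To actually submit (requires network access):
#   import json, urllib.request
#   req = urllib.request.Request(
#       INDEXNOW_ENDPOINT,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json; charset=utf-8"},
#   )
#   urllib.request.urlopen(req)
```

As the takeaway notes, this notifies Bing's index (and other IndexNow participants) but does nothing for Google or Brave, which crawl on their own schedules.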

Only Five Companies Actually Crawl the Open Web

Before mapping each AI lab to its backend, the underlying constraint matters: the open web at scale is crawled by exactly five operators. Everything else marketed as a “search engine” is a reseller of one of those five indexes. The five are Google, Microsoft Bing, Yandex, Baidu, and Brave Search, with Mojeek sometimes counted as a niche sixth that maintains its own (much smaller) index.

 Ai, Search, Claude, Llm
Claude Code vs COBOL: The AI Migration Controversy That Crashed IBM's Stock 13%


On February 23, 2026, Anthropic published a blog post titled “How AI Helps Break the Cost Barrier to COBOL Modernization” alongside a Code Modernization Playbook. By market close that day, IBM’s stock had fallen 13.2% to $223.35 per share - its worst single-day performance since October 2000 - wiping more than $31 billion in market capitalization. Accenture fell 6.5%, Cognizant dropped 6%. The entire legacy modernization consulting sector was rattled by a single marketing document.

 Ai, Claude, Ai-Coding, Llm
Editorial infographic of an engineer at a control panel splitting glowing data flow between a sealed OAuth gate and an open brass pipe feeding a glowing terminal monolith

OpenClaw on Your $20 Claude Sub After Anthropic Banned It

OpenClaw’s bundled claude-cli backend is officially sanctioned by Anthropic, while OAuth-token extraction tools stay blocked. The carve-out works because shelling out to claude -p preserves prompt caching, so a $20 Pro or $200 Max sub routes through OpenClaw without four-figure API bills. The catch: a roughly 5-hour cap that cron jobs exhaust in minutes.

Key Takeaways

  • OpenClaw’s CLI backend is allowed by Anthropic; the older OAuth-token tools are not.
  • The reason it is allowed: it preserves Anthropic’s prompt caching exactly like Claude Code does.
  • Pro and Max plans cap usage near 5 hours per window, so cron jobs need a cheaper backup.
  • Use Claude for planning and chat, route automated tasks to GLM, MiniMax, or Codex.
  • Setup is three commands and one config edit on any Mac or Linux host running Claude Code.

What Changed in Anthropic’s Third-Party Tool Policy?

Most users found out about the policy change when their Anthropic bill jumped, not from a press release. Heavy agentic workflows that previously billed against a flat Pro or Max subscription suddenly tracked toward $1,500 a month on Opus 4.6 once Anthropic forced third-party orchestrators onto the pay-per-token API. The original concern was narrower than the community read it as. Anthropic’s target was a specific class of tool that extracts the OAuth token from a local Claude Code install and calls the Anthropic API directly under that identity. That pattern bypasses Anthropic’s prompt caching and pushes load to the API tier without the caching benefit Anthropic gets when Claude Code itself runs the request.
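The sanctioned pattern is the inverse of the banned one: instead of extracting the OAuth token and calling the API directly, you shell out to the locally installed `claude` binary so the CLI itself makes the request. A minimal Python sketch, assuming a logged-in Claude Code install; `-p` is the CLI's non-interactive print mode.

```python
import subprocess

def build_claude_cmd(prompt: str) -> list[str]:
    # Non-interactive "print" mode: claude -p "<prompt>" answers once and exits.
    return ["claude", "-p", prompt]

def ask_claude(prompt: str) -> str:
    """Run one prompt through the local claude CLI and return stdout.

    Because Claude Code makes the request itself, prompt caching is
    preserved and billing stays on the flat Pro/Max subscription
    rather than the pay-per-token API tier.
    """
    result = subprocess.run(
        build_claude_cmd(prompt), capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```

An orchestrator built this way never touches the OAuth token, which is exactly the distinction Anthropic's policy draws.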

 Ai-Agents, Claude, CLI, Automation
URL Shortener in 200 Lines of Python


You can build a fully functional, production-ready URL shortener in under 200 lines of Python. The ingredients are FastAPI for the HTTP layer, SQLite for persistence, and base62 encoding to convert auto-incremented database IDs into short codes. Add a redirect endpoint that issues 301 or 302 responses, a click counter, and rate limiting through SlowAPI middleware, and you have a service that handles millions of URLs on a single server.
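The base62 step in that recipe fits in a few lines: encode the auto-increment row ID on shorten, decode it on redirect. The alphabet ordering below is one common choice, not necessarily the post's.

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n: int) -> str:
    """Turn an auto-increment database ID into a short code."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def decode_base62(code: str) -> int:
    """Invert encode_base62 so the redirect endpoint can look up the row."""
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Seven characters of base62 cover 62^7 (about 3.5 trillion) IDs, which is why a single auto-increment column scales to millions of URLs without collision handling.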

 Python, Databases, Docker, Automation
Zig 1.0 Tutorial: Build a Systems Programming Project Without C


Zig is a modern systems programming language designed to replace C while keeping manual memory management and zero hidden control flow: no garbage collector, no runtime, and a single statically-linked binary that runs anywhere. You can install Zig from ziglang.org/download, scaffold a project with zig init, and have a working command-line tool in about 50 lines that takes advantage of Zig’s comptime, error unions, and first-class C interop. The killer feature: zig build-exe -target x86_64-linux-musl cross-compiles to any target from any host with zero toolchain setup.

 Developer-Tools, Linux, Optimization, Webassembly
Towering brass clockwork robot on a cracked pedestal leaking forgotten paper notes from its memory chamber while handing down a tidy morning news briefing

1,000 OpenClaw Deploys Later

After publishing a 7-minute OpenClaw deploy video and watching roughly 1,000 isolated VMs spin up afterward, one r/LocalLLaMA cloud-infra operator concluded the only OpenClaw workflow that survives unsupervised execution is a daily news digest. Memory is the load-bearing failure mode, not a fixable bug. OpenClaw sits at 370K+ GitHub stars, but the working-workflow count has barely moved.

Key Takeaways

  • A cloud-infra operator watched roughly 1,000 OpenClaw deploys and found one reliable use case.
  • Memory unreliability is built into how the agent works, not a bug a patch can fix.
  • Daily news digests are the exception because they keep no state between runs.
  • The same digest can be built with a cron job and any LLM API in about ten lines.
  • OpenClaw’s founder admitted that recent releases were a “rough week”.
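The cron-job alternative from the last bullet can be sketched in roughly the advertised ten lines. This is an illustrative shape, not code from the post: the LLM call is injected as a plain callable so any API client or local model slots in, and the statelessness that makes digests reliable is explicit in the signature.

```python
from datetime import date

def build_digest(headlines: list[str], complete) -> str:
    """Summarize today's headlines into a short digest.

    `complete` is any prompt -> text callable (an LLM API client, a
    local model, whatever). Nothing persists between runs, which is
    exactly why this workflow survives unsupervised execution.
    """
    prompt = (
        f"Summarize these headlines for {date.today():%Y-%m-%d} "
        "in five bullet points:\n" + "\n".join(f"- {h}" for h in headlines)
    )
    return complete(prompt)

# Scheduling is one crontab line, e.g. every day at 07:00:
#   0 7 * * *  /usr/bin/python3 /opt/digest/run.py
```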

The 1,000-Deploy Post That Broke the Consensus

The contrarian thesis is anchored to one specific source: an r/LocalLLaMA post titled “OpenClaw has 250K GitHub stars. The only reliable use case I’ve found is daily news digests”, with 335 comments and 891 votes. The OP is not a casual skeptic. He runs cloud infrastructure where strangers spin up Linux VMs, published a deploy walkthrough that took off, and now has a dataset most reviewers do not have access to.

 Ai-Agents, Llm, Automation, Production-Ai
Deploy Ceph with cephadm: 3-node, 12 OSD storage cluster


Yes, you can build a self-healing, redundant distributed storage cluster using Ceph across three Linux nodes, and it is less painful than its reputation suggests - especially with the modern cephadm deployment tool. The result gives you block storage (RBD) for VMs, a shared POSIX filesystem (CephFS) for multiple clients, and even S3-compatible object storage if you need it later. Your data survives the loss of any single node, rebalances automatically when hardware changes, and scales from a homelab experiment to petabyte-class production by adding more disks.

 Linux, Storage, Homelab, Networking
Best Lightweight Tactile Switches for Thocky Sound: Under 45g Mechanical Keyboard Guide


You don’t have to sacrifice a deep, thocky sound to get a light typing feel. The best lightweight tactile switches under 45g include the Input Club Hako Violet (28g), Akko V3 Creamy Purple Pro (30g), Chilkey Sprout Green (35g), HMX Valerian Light (48g actuation but exceptionally light feel), and TTC Bluish White (42g) - all of which deliver satisfying tactile feedback with a deep bottom-out sound when paired with the right housing materials and lubing technique.

 Hardware, Keyboards, Mechanical-Keyboards

Most Popular

Gemma 4 vs Qwen 3.5 vs Llama 4: Which Open Model Should You Actually Use? (2026)


A head-to-head comparison of Gemma 4, Qwen 3.5, and Llama 4 across benchmarks, licensing, inference speed, multimodal capabilities, and hardware requirements. Covers the full model families from edge to datacenter scale.

5 Open Source Repos That Make Claude Code Unstoppable


Five GitHub repositories released in March 2026 push Claude Code into new territory. From autonomous ML experiments running overnight to multi-agent communication and full Google Workspace access, these open source tools solve real workflow gaps that Claude Code cannot handle alone.

Cross-section of a translucent crystal brain threaded by red, gold, and teal attention ribbons resting on a doubly-stochastic matrix pedestal beside a guitar-tuning lab figure.

DeepSeek V4 Tech Report: 3 Tricks That Cut Compute 73%

DeepSeek V4 ships 1.6T parameters and 1M context using only 27% of V3.2's inference FLOPs. Inside the hybrid attention, mHC residuals, and Muon optimizer.

Cracked stone tablet engraved with a bulleted system prompt, four crossed-out goblin silhouettes repeated, a tiny goblin escaping with upvote-arrow sparks, a giant dollar-sign price tag, and figures refusing to step onto a glossier pedestal.

GPT 5.5 Reddit Reception: Goblins and the Cost Backlash

A two-week Reddit reception snapshot of GPT-5.5 covering the launch window from April 23 to May 8, 2026. Drawn from primary threads on r/OpenAI, r/ChatGPT, and r/ChatGPTPro, with verifiable upvote counts. Three fault lines emerge: a leaked Codex system prompt about goblins, doubled output pricing, and a stubborn 5.4 holdout faction.

What X and Reddit Users Are Saying about Claude Opus 4.7


A 48-hour snapshot of how power users on X and Reddit reacted to Anthropic's Claude Opus 4.7 release on April 16, 2026. Covers the dominant praise for agentic coding and the new Claude Design tool, the three loudest complaints, token-burn economics, and the practical prompting habits teams are already adopting.

Qwen3.6-35B-A3B: Alibaba's Open-Weight Coding MoE


Alibaba's Qwen3.6-35B-A3B is a sparse Mixture-of-Experts model with 35B total and 3B active parameters, released April 2026 under Apache 2.0. It scores 73.4 on SWE-bench Verified, matches Claude Sonnet 4.5 on vision, and runs locally as a 20.9GB Q4 quantization on an M5 MacBook. A close look at the architecture, benchmarks, features, and honest trade-offs.

Alacritty vs. Kitty: Best High-Performance Linux Terminal


A practical comparison of Alacritty and Kitty for high-performance Linux terminal workflows in 2026, including latency, startup time, memory use, and heavy-output responsiveness. The analysis covers design philosophy differences between minimalist and feature-rich terminal environments, plus Wayland behavior and real-world configuration trade-offs. It also situates Ghostty and WezTerm in the current landscape and explains when each terminal model fits best for daily development.

MiniMax M2.7: Model That Almost Matches Claude Opus 4.6


A practical review of MiniMax M2.7: the 230B-parameter Mixture-of-Experts reasoning model that scores 50 on the Artificial Analysis Intelligence Index, runs on a 128GB Mac Studio, and costs roughly a tenth of Claude Opus 4.6. Covers benchmarks, self-hosting hardware, the license catch, and when to pick the API over local inference.

Best OLED Monitors for Coding 2026: WOLED Beats QD-OLED for Text


OLED monitors for coding in 2026: LG 32GS95UE leads on text clarity, WOLED beats QD-OLED at 140 PPI, and KDE Plasma 6.3 finally fixes Linux HDR.

Like what you read?

Get new posts on Linux, AI, and self-hosting delivered to your inbox weekly.

Privacy Policy · Terms of Service
© 2026 Botmonster