<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title>Coding - Category - Botmonster Tech</title><link>https://botmonster.com/coding/</link><description>Coding - Category - Botmonster Tech</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Thu, 30 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://botmonster.com/coding/" rel="self" type="application/rss+xml"/><item><title>Defensive Coding in Rust: Error Handling Patterns That Scale</title><link>https://botmonster.com/coding/defensive-coding-rust-error-handling-patterns/</link><pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/defensive-coding-rust-error-handling-patterns/</guid><description><![CDATA[<div class="featured-image">
                <img src="/defensive-coding-rust-error-handling-patterns.png" referrerpolicy="no-referrer">
            </div><p>Rust&rsquo;s error handling ecosystem in 2026 centers on four patterns: <code>Result&lt;T, E&gt;</code> with custom enums for libraries, <a href="https://docs.rs/thiserror/latest/thiserror/" target="_blank" rel="noopener noreferrer ">thiserror</a>
 for ergonomic enum derivation, <a href="https://docs.rs/anyhow/latest/anyhow/" target="_blank" rel="noopener noreferrer ">anyhow</a>
 for application-level error propagation, and <a href="https://docs.rs/miette/latest/miette/" target="_blank" rel="noopener noreferrer ">miette</a>
 or <a href="https://docs.rs/color-eyre/latest/color_eyre/" target="_blank" rel="noopener noreferrer ">color-eyre</a>
 for human-friendly diagnostic reports. The right choice depends on whether you are writing a library (where callers need to match on specific error variants) or an application (where you need to propagate errors with context and print them readably). Most non-trivial Rust projects use both thiserror in their library crates and anyhow in their binary crates.</p>]]></description></item><item><title>Custom Linter Rules: JavaScript, Python, Go ASTs</title><link>https://botmonster.com/coding/write-custom-linter-rule-ast-parsing/</link><pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/write-custom-linter-rule-ast-parsing/</guid><description><![CDATA[<div class="featured-image">
                <img src="/write-custom-linter-rule-ast-parsing.png" referrerpolicy="no-referrer">
            </div><p>You can catch domain-specific anti-patterns that <a href="https://eslint.org/" target="_blank" rel="noopener noreferrer ">ESLint</a>
, <a href="https://docs.astral.sh/ruff/" target="_blank" rel="noopener noreferrer ">Ruff</a>
, or <a href="https://golangci-lint.run/" target="_blank" rel="noopener noreferrer ">golangci-lint</a>
 miss by writing custom linter rules that parse your code into an Abstract Syntax Tree (AST), walk the tree to match specific node patterns, and report violations with auto-fix suggestions. The process is the same regardless of language; only the tooling changes. In JavaScript/TypeScript, this means writing an ESLint plugin with a visitor-pattern rule. In Python, you write a flake8 plugin using the <code>ast</code> module, or contribute a rule to Ruff itself in Rust, since Ruff has no third-party plugin API. In Go, you use the <code>go/ast</code> and <code>go/analysis</code> packages.</p>]]></description></item><item><title>Redis Streams vs Kafka: 100K-500K ops/sec alternative</title><link>https://botmonster.com/coding/build-real-time-data-pipeline-redis-streams/</link><pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/build-real-time-data-pipeline-redis-streams/</guid><description><![CDATA[<div class="featured-image">
                <img src="/build-real-time-data-pipeline-redis-streams.png" referrerpolicy="no-referrer">
            </div><p>Redis Streams give you a lightweight, self-hosted alternative to <a href="https://kafka.apache.org/" target="_blank" rel="noopener noreferrer ">Apache Kafka</a>
 for event-driven data pipelines. You get append-only log semantics, consumer groups with ack tracking, and sub-millisecond latency on a single <a href="https://redis.io/" target="_blank" rel="noopener noreferrer ">Redis</a>
 7.4+ instance. Producers <code>XADD</code> events to a stream. Consumer groups read with <code>XREADGROUP</code> in Python via <a href="https://github.com/redis/redis-py" target="_blank" rel="noopener noreferrer ">redis-py</a>
. Manual <code>XACK</code> calls plus a pending entry list (PEL) give you at-least-once processing.</p>
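<p>To make the at-least-once guarantee concrete, here is a toy in-memory model of the consumer-group mechanics - a sketch of the semantics only, not the redis-py API (a real pipeline would call <code>xadd</code>, <code>xreadgroup</code>, and <code>xack</code> on a connected client, and reclaim stale entries with <code>XAUTOCLAIM</code>):</p>

```python
# Toy in-memory model of Redis Streams consumer-group semantics.
# It mirrors XADD / XREADGROUP / XACK and the pending entry list (PEL)
# to show why manual acks give at-least-once processing. Names and
# structure are illustrative, not the redis-py API.
from itertools import count

class MiniStream:
    def __init__(self):
        self.entries = []        # append-only log: (entry_id, payload)
        self.pel = {}            # entry_id -> consumer (delivered, unacked)
        self.next_id = count(1)
        self.last_delivered = 0  # group cursor, like the group's last-delivered-id

    def xadd(self, payload):
        entry_id = next(self.next_id)
        self.entries.append((entry_id, payload))
        return entry_id

    def xreadgroup(self, consumer, batch=10):
        # Deliver entries past the group cursor; each lands on the PEL
        # until the consumer explicitly acks it.
        delivered = [(i, p) for i, p in self.entries
                     if i > self.last_delivered][:batch]
        for entry_id, _ in delivered:
            self.pel[entry_id] = consumer
            self.last_delivered = entry_id
        return delivered

    def xack(self, entry_id):
        # Ack removes the entry from the PEL; unacked entries stay
        # claimable after a crash - that is the at-least-once guarantee.
        return self.pel.pop(entry_id, None) is not None

stream = MiniStream()
for n in range(3):
    stream.xadd({"n": n})

batch = stream.xreadgroup("worker-1")
stream.xack(batch[0][0])  # process + ack only the first entry
# The two unacked entries stay pending; if worker-1 died here, another
# consumer could claim and reprocess them instead of losing them.
```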
<p>What follows covers stream basics, consumer groups with failure recovery, a full producer and consumer pipeline with a dead-letter queue, and the ops practices to keep Redis Streams healthy in production.</p>]]></description></item><item><title>Git Worktrees: The Underused Feature for Multi-Branch Development</title><link>https://botmonster.com/coding/git-worktrees-multi-branch-development/</link><pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/git-worktrees-multi-branch-development/</guid><description><![CDATA[<div class="featured-image">
                <img src="/parallel-seasonal-tree-roots.png" referrerpolicy="no-referrer">
            </div><p><code>git worktree</code> lets you check out multiple branches of the same repository simultaneously into separate directories - no stashing, no cloning, no context switching overhead. Each worktree shares the same <code>.git</code> object store, so you get independent working trees instantly without re-downloading any history. Run <code>git worktree add ../my-repo-hotfix hotfix/urgent-fix</code> and you have a fully functional working tree on a separate branch, ready to build and test while your feature branch stays untouched in the original directory.</p>]]></description></item><item><title>FastAPI Webhook Bot: GitHub and Gitea Automation</title><link>https://botmonster.com/coding/write-github-gitea-bot-python-webhooks/</link><pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/write-github-gitea-bot-python-webhooks/</guid><description><![CDATA[<div class="featured-image">
                <img src="/write-github-gitea-bot-python-webhooks.png" referrerpolicy="no-referrer">
            </div><p>You can build a bot that labels issues, enforces PR naming, posts review comments, and triggers workflows. Write a <a href="https://fastapi.tiangolo.com/" target="_blank" rel="noopener noreferrer ">FastAPI</a>
 app that takes webhooks from GitHub or <a href="https://gitea.com/" target="_blank" rel="noopener noreferrer ">Gitea</a>
, verifies the signature, and calls the right API in response. Header names and payload shapes differ slightly between the two forges, but a single handler can serve both.</p>
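<p>The signature check is the part worth getting exactly right. Both forges send an HMAC-SHA256 hex digest of the raw request body, keyed by the webhook secret: GitHub prefixes it with <code>sha256=</code> in the <code>X-Hub-Signature-256</code> header, while Gitea sends the bare digest in <code>X-Gitea-Signature</code>. A minimal stdlib sketch - the helper name is illustrative, and in FastAPI it would sit in a route that reads the raw body via <code>await request.body()</code>:</p>

```python
# Shared signature check for GitHub and Gitea webhooks. Both compute
# HMAC-SHA256 over the raw request body with the webhook secret as key;
# only the header name and the "sha256=" prefix differ.
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, headers: dict) -> bool:
    digest = hmac.new(secret, body, hashlib.sha256).hexdigest()
    if sig := headers.get("X-Hub-Signature-256"):   # GitHub
        expected = f"sha256={digest}"
    elif sig := headers.get("X-Gitea-Signature"):   # Gitea
        expected = digest
    else:
        return False  # no recognized signature header: reject
    # compare_digest avoids leaking match position via timing differences
    return hmac.compare_digest(sig, expected)

# Example: what a valid GitHub-style signature looks like for a payload.
secret = b"s3cret"
body = b'{"action": "opened"}'
good = hmac.new(secret, body, hashlib.sha256).hexdigest()
```

<p>Reject the request with a 401 before parsing the JSON body; parsing untrusted payloads first defeats the point of the check.</p>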
<h2 id="how-repository-webhooks-work-on-github-and-gitea">How Repository Webhooks Work on GitHub and Gitea</h2>
<p>Both GitHub and Gitea let you set up webhooks at the repo, org, or (for Gitea) system level. When an event fires (someone opens an issue, pushes a commit, opens a PR) the forge sends an HTTP POST to a URL you control. The body is JSON and describes what happened.</p>]]></description></item><item><title>Rust for Python Developers: Rewrite Your Hot Paths for 10x Speed</title><link>https://botmonster.com/coding/rust-for-python-developers-rewrite-hot-paths/</link><pubDate>Sun, 19 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/rust-for-python-developers-rewrite-hot-paths/</guid><description><![CDATA[<div class="featured-image">
                <img src="/racecar-engine-replacement.png" referrerpolicy="no-referrer">
            </div><p>Python is excellent for most of what developers throw at it - API servers, data pipelines, automation scripts, machine learning glue code. But CPU-bound work is a different story. When you&rsquo;re parsing 500MB log files, running simulation loops, or crunching millions of rows in a tight inner loop, you&rsquo;re going to hit a wall. Not always, but often enough that it becomes a real problem.</p>
<p>The solution is not to rewrite your entire application in Rust. That&rsquo;s dramatic and usually unnecessary. The better approach is to profile your code, find the 5-10% that consumes most of the CPU time, and rewrite just that part in Rust. The rest of your codebase stays Python. Your interfaces stay Python. You just swap out the slow function for a fast one.</p>]]></description></item><item><title>SQLite Scales to Production: 10K TPS, WAL Mode, Real Benchmarks</title><link>https://botmonster.com/coding/sqlite-application-database-when-how-to-use/</link><pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/sqlite-application-database-when-how-to-use/</guid><description><![CDATA[<div class="featured-image">
                <img src="/sqlite-application-database-when-how-to-use.png" referrerpolicy="no-referrer">
            </div><p>SQLite is the right default database for most applications. With WAL mode enabled, it supports unlimited concurrent readers alongside a single writer that can sustain thousands of transactions per second on modern NVMe storage, handles databases up to 281 TB, and requires zero configuration, zero separate processes, and zero network latency. Unless your application specifically needs horizontal write scaling, multi-node replication, truly concurrent writers from multiple processes, or sustained write rates beyond roughly 50,000 per second, you should start with SQLite and migrate to <a href="https://www.postgresql.org/" target="_blank" rel="noopener noreferrer ">PostgreSQL</a>
 only when you hit a concrete, measured limitation - not a theoretical one.</p>]]></description></item><item><title>Feature Flags DIY: 100-Line SDK vs. LaunchDarkly Cost</title><link>https://botmonster.com/coding/implement-feature-flags-from-scratch/</link><pubDate>Wed, 15 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/implement-feature-flags-from-scratch/</guid><description><![CDATA[<div class="featured-image">
                <img src="/implement-feature-flags-from-scratch.png" referrerpolicy="no-referrer">
            </div><p>You can build a fully functional feature flag system using a JSON configuration file, environment variable overrides, and a single evaluation function in roughly 100 lines of Python. This gives you gradual rollouts, kill switches, and per-environment toggles without paying for <a href="https://launchdarkly.com/" target="_blank" rel="noopener noreferrer ">LaunchDarkly</a>
, <a href="https://www.getunleash.io/" target="_blank" rel="noopener noreferrer ">Unleash</a>
, or any other SaaS platform. The core pattern is straightforward: define each flag with a name, a boolean or percentage-based rule, and a list of target environments, then evaluate it at runtime through a thin SDK you own and control completely.</p>]]></description></item><item><title>Build Powerful TUI Apps in Python with Textual and Rich</title><link>https://botmonster.com/coding/build-tui-apps-python-textual-rich/</link><pubDate>Tue, 14 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/build-tui-apps-python-textual-rich/</guid><description><![CDATA[<div class="featured-image">
                <img src="/mechanical-terminal-neural-web.png" referrerpolicy="no-referrer">
            </div><p>Terminal apps used to mean raw <code>curses</code> calls and a lot of pain. Today, Python&rsquo;s <a href="https://textual.textualize.io/" target="_blank" rel="noopener noreferrer ">Textual</a>
 and <a href="https://rich.readthedocs.io/" target="_blank" rel="noopener noreferrer ">Rich</a>
 libraries have flipped that experience entirely. In under 50 lines of Python you can have a full-screen app with styled layouts, interactive widgets, keyboard navigation, and live data updates - no web browser, no Electron, no JavaScript. This post walks through both libraries, shows you how they fit together, and builds up to a complete working example you can extend immediately.</p>]]></description></item><item><title>Python Memory Optimization: 50-80% Reduction with memray</title><link>https://botmonster.com/coding/profile-optimize-python-memory-usage/</link><pubDate>Wed, 08 Apr 2026 00:00:00 +0000</pubDate><author>Botmonster</author><guid>https://botmonster.com/coding/profile-optimize-python-memory-usage/</guid><description><![CDATA[<div class="featured-image">
                <img src="/profile-optimize-python-memory-usage.png" referrerpolicy="no-referrer">
            </div><p>You can find and fix Python memory leaks with three tools that pair well: <a href="https://github.com/bloomberg/memray" target="_blank" rel="noopener noreferrer ">memray</a>
 for flame graphs, <a href="https://docs.python.org/3/library/tracemalloc.html" target="_blank" rel="noopener noreferrer ">tracemalloc</a>
 for line-level tracking, and <a href="https://mg.pov.lt/objgraph/" target="_blank" rel="noopener noreferrer ">objgraph</a>
 for object reference maps. Start with memray to spot the hungry functions. Drop into tracemalloc to find the exact lines. End with objgraph to see why objects won&rsquo;t get collected. Pair this with generators, <code>__slots__</code>, memory-mapped files, and chunked reads to cut peak memory by 50-80% in data-heavy apps.</p>]]></description></item></channel></rss>