SQLite as Your Application Database: When and How to Use It

SQLite is the right default database for most applications. With WAL mode enabled, it supports unlimited concurrent readers alongside a single writer that can sustain thousands of transactions per second on modern NVMe storage. It handles databases up to 281 TB and requires zero configuration, zero separate processes, and zero network latency. Unless your application specifically needs horizontal write scaling, multi-node replication, or concurrent writes from multiple processes, start with SQLite and migrate to PostgreSQL only when you hit a concrete, measured limitation - not a theoretical one.

Most developers reflexively reach for PostgreSQL or MySQL even for applications that will never need their complexity. What follows covers why SQLite deserves to be your first choice, how to configure it for production workloads, what its actual concurrency limits are, and how to handle operational concerns like backups and migrations.

Why SQLite Deserves to Be Your Default

The case for SQLite starts with deployment simplicity. SQLite is a C library linked directly into your application, not a separate server process. You skip daemon management, port configuration, authentication setup, and connection pool tuning entirely. Your entire database is a single file. Backups are cp database.db database-backup.db (or better, the .backup API for a consistent snapshot). Deployments mean copying a binary and a file. Atomic commits are handled by SQLite's own journaling - nothing for you to configure.

Performance for typical workloads is more than sufficient. A single SQLite database on an NVMe SSD handles 50,000+ reads per second and 10,000+ writes per second in WAL mode. That is more than enough for applications serving up to 100,000 daily active users. Most applications never come close to these limits.

Every major language has battle-tested SQLite bindings. Python ships with sqlite3 in its standard library. Go has modernc.org/sqlite (pure Go, no CGo required). Rust has rusqlite. Node.js has better-sqlite3 with a synchronous API that runs roughly 3x faster than the older sqlite3 package. Java has xerial/sqlite-jdbc. You are not going to hit a language compatibility wall.

[Image: SQLite's layered architecture - SQL compiles to bytecode, runs in a virtual machine backed by B-tree storage and a page cache. Credit: SQLite]

SQLite already powers more deployments than all other databases combined. Every Android phone, every iOS device, every Firefox and Chrome browser, every macOS system runs SQLite. It is the most tested, most deployed database engine in existence. The code has 100% branch test coverage, which is something no other database can claim.

Then there is the financial argument. PostgreSQL on a managed service - RDS, Supabase, Neon - costs $25 to $500+ per month depending on your tier. SQLite costs nothing. The server you are already paying for has more than enough resources to run it. For a solo developer or a small team, that cost difference compounds fast over the lifetime of a project.

Configuring SQLite for Production Performance

Out-of-the-box SQLite settings are conservative, tuned for embedded devices and backward compatibility. Changing a handful of PRAGMAs gets you production-grade performance. Apply these on every connection:

PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
PRAGMA cache_size=-64000;
PRAGMA mmap_size=268435456;
PRAGMA busy_timeout=5000;
PRAGMA foreign_keys=ON;
PRAGMA journal_size_limit=67108864;

Here is what each one does and why it matters.

journal_mode=WAL switches from the default rollback journal to Write-Ahead Logging, enabling concurrent reads during writes. This is a one-time, persistent setting that survives database close and reopen. Set it once and forget about it. This single change is the most important performance improvement you can make.

synchronous=NORMAL reduces fsync calls during commits. The default FULL mode fsyncs after every transaction. In WAL mode, NORMAL is safe against power loss for all committed transactions - only uncommitted WAL entries can be lost during a crash, which is acceptable for essentially all web applications. Expect 2-5x faster writes.

cache_size=-64000 sets a 64MB page cache (negative values are in kilobytes). The default is roughly 2MB, which is too small for any real workload. Size this to 10-25% of your available RAM. If your server has 4GB of RAM, a 512MB cache is reasonable.

mmap_size=268435456 enables a 256MB memory-mapped region. SQLite reads the database file via mmap instead of read() system calls, reducing kernel context switches by 50% or more for read-heavy workloads. Adjust based on your database file size and available memory.

busy_timeout=5000 makes writers wait up to 5 seconds for the write lock instead of immediately returning SQLITE_BUSY. Without this, you will see sporadic write failures under any concurrent load.

foreign_keys=ON must be set on every connection because it is not persistent. SQLite ships with foreign key enforcement disabled by default for backward compatibility. Enforce this in your connection initialization code, ORM configuration, or a connection factory function.

journal_size_limit=67108864 caps the WAL file at 64MB. Without this limit, the WAL file can grow unbounded during write-heavy bursts. SQLite truncates it after checkpoint when it exceeds this size.

In Python, a production-ready connection setup looks like this:

import sqlite3

def get_connection(db_path: str, readonly: bool = False) -> sqlite3.Connection:
    # "ro" opens an existing database read-only; "rwc" opens read-write,
    # creating the file if it does not exist.
    mode = "ro" if readonly else "rwc"
    conn = sqlite3.connect(
        f"file:{db_path}?mode={mode}",
        uri=True,
        check_same_thread=False,  # safe as long as each connection is used by one thread at a time
    )
    # journal_mode is persistent; the remaining PRAGMAs must be reapplied
    # on every new connection.
    conn.execute("PRAGMA journal_mode=WAL;")
    conn.execute("PRAGMA synchronous=NORMAL;")
    conn.execute("PRAGMA cache_size=-64000;")
    conn.execute("PRAGMA mmap_size=268435456;")
    conn.execute("PRAGMA busy_timeout=5000;")
    conn.execute("PRAGMA foreign_keys=ON;")
    conn.execute("PRAGMA journal_size_limit=67108864;")
    return conn

Concurrent Access: What Actually Works

The biggest misconception about SQLite is that it cannot handle concurrency. It can, but with constraints you need to understand.

WAL mode gives you unlimited concurrent readers plus one writer at a time. Readers never block writers. Writers never block readers. The single-writer constraint means only one transaction can write at any given moment, but that constraint is less limiting than it sounds.

A typical INSERT or UPDATE transaction completes in 0.1 to 1 millisecond on NVMe storage. That means the theoretical write throughput is 1,000 to 10,000 transactions per second with a single writer. Most web applications generate a few hundred write transactions per second at peak load. You have significant headroom.

For multi-threaded applications, the practical architecture is straightforward. Use multiple read-only connections in a pool for SELECT queries. Each read connection can execute concurrently with every other read connection and with the writer. The write connection should be a singleton with a queue of pending write operations. In Python, this means a dedicated writer thread with a queue.Queue feeding it transactions.

import threading
import queue

class SQLiteWriter:
    """Funnels all writes through one connection on a dedicated thread."""

    def __init__(self, db_path: str):
        self._conn = get_connection(db_path)
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        # Writer loop: apply queued transaction functions one at a time.
        while True:
            func, event, result_holder = self._queue.get()
            try:
                result_holder["result"] = func(self._conn)
            except Exception as e:
                result_holder["error"] = e
            finally:
                event.set()

    def execute(self, func):
        # func receives the writer's connection and handles its own
        # BEGIN/COMMIT; the calling thread blocks until it finishes.
        event = threading.Event()
        result_holder = {}
        self._queue.put((func, event, result_holder))
        event.wait()
        if "error" in result_holder:
            raise result_holder["error"]
        return result_holder.get("result")
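The read side can be sketched as a small pool of read-only connections. This is a minimal version under simplifying assumptions - the pool size of four is arbitrary, and in practice you would open the connections through get_connection so the per-connection PRAGMAs are applied:

```python
import queue
import sqlite3

class ReadPool:
    """Fixed-size pool of read-only connections for SELECT traffic."""

    def __init__(self, db_path: str, size: int = 4):
        # Pool size is an arbitrary starting point; tune to your core count.
        self._pool: queue.Queue = queue.Queue()
        for _ in range(size):
            conn = sqlite3.connect(
                f"file:{db_path}?mode=ro", uri=True, check_same_thread=False
            )
            self._pool.put(conn)

    def query(self, sql: str, params=()):
        # Borrow a connection, run the SELECT, and return it to the pool.
        conn = self._pool.get()
        try:
            return conn.execute(sql, params).fetchall()
        finally:
            self._pool.put(conn)
```

Because readers never block each other or the writer in WAL mode, every connection in this pool can execute queries concurrently.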

A common gotcha: use BEGIN IMMEDIATE for write transactions instead of BEGIN DEFERRED. The default deferred mode acquires the write lock only on the first write statement, which means a transaction can do several reads, then fail with SQLITE_BUSY when it tries to write. BEGIN IMMEDIATE grabs the lock upfront, so you know right away if the writer is busy.
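A write function using this pattern might look like the following sketch. The events table is hypothetical, and the connection is assumed to have isolation_level = None so the explicit BEGIN is the only transaction control in play:

```python
import sqlite3

def insert_event(conn: sqlite3.Connection, user_id: int, payload: str) -> None:
    # Grab the write lock upfront: a busy writer fails here (after
    # busy_timeout expires) rather than partway through the transaction.
    conn.execute("BEGIN IMMEDIATE;")
    try:
        conn.execute(
            "INSERT INTO events (user_id, payload) VALUES (?, ?)",
            (user_id, payload),
        )
        conn.execute("COMMIT;")
    except Exception:
        conn.execute("ROLLBACK;")
        raise
```

A function like this is exactly the shape of transaction you would hand to the writer queue described above.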

When You Actually Need PostgreSQL

SQLite does not cover every use case. You need PostgreSQL (or another client-server database) when:

  • Multiple application instances need to write to the same database. SQLite does not support network access. If you are running three replicas of your API server behind a load balancer and they all need write access, SQLite is not the answer.
  • Your write workload consistently exceeds 10,000 transactions per second. This is rare for most applications but does happen with high-frequency data ingestion, real-time analytics, or large-scale event processing.
  • You need features like LISTEN/NOTIFY for real-time push notifications, logical replication for data pipelines, row-level security for multi-tenant isolation, or parallel execution of large analytical queries.

All of these are requirements you should be able to point to in your current architecture, not things you might hypothetically need in two years. Premature migration to PostgreSQL is just as much of an anti-pattern as premature optimization.

Migrations, Backups, and Operational Concerns

If you are going to run SQLite in production, you need a plan for schema migrations, backups, and monitoring. The good news is that the tooling has caught up.

Schema Migrations

[Image: DB Browser for SQLite - a visual tool for inspecting and editing SQLite databases during development. Credit: DB Browser for SQLite]

Use Dbmate (language-agnostic, uses plain SQL files), Goose (Go), or Alembic (Python) to manage migrations. SQLite 3.35.0 (2021) and later supports ALTER TABLE DROP COLUMN; combined with ADD COLUMN and RENAME COLUMN (3.25.0+), any reasonably current SQLite covers the common ALTER TABLE operations. For older versions, the standard workaround is creating a new table, copying data, dropping the old table, and renaming.
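On those older versions, the rebuild can run as a single script. This sketch drops a hypothetical legacy_col from a users table; if other tables reference users via foreign keys, wrap the call with PRAGMA foreign_keys=OFF/ON outside the transaction:

```python
import sqlite3

def drop_legacy_col(conn: sqlite3.Connection) -> None:
    # Pre-3.35 DROP COLUMN workaround: rebuild the table without the
    # unwanted column inside a single transaction.
    conn.executescript("""
        BEGIN;
        CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT);
        INSERT INTO users_new (id, name) SELECT id, name FROM users;
        DROP TABLE users;
        ALTER TABLE users_new RENAME TO users;
        COMMIT;
    """)
```

Migration tools like Dbmate generate essentially this same sequence when asked to drop a column on an old SQLite.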

Backups and Replication

The SQLite backup API (sqlite3_backup_init(), exposed as .backup() in Python) creates a consistent snapshot while the database is in use. This is the correct way to back up a running SQLite database - do not just copy the file while it is open, as the WAL file might not be checkpointed.
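In Python that is a short function; Connection.backup() wraps the C backup API and produces a consistent copy even while other connections are writing:

```python
import sqlite3

def snapshot(src_path: str, dest_path: str) -> None:
    # Copy the database page-by-page via the backup API; readers and
    # the writer on src stay unblocked throughout.
    src = sqlite3.connect(src_path)
    try:
        dest = sqlite3.connect(dest_path)
        try:
            src.backup(dest)
        finally:
            dest.close()
    finally:
        src.close()
```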

For continuous replication, Litestream v0.4 (2025) streams WAL changes to S3, GCS, or SFTP in near-real-time with sub-second replication lag. It provides point-in-time recovery without manual backup scheduling. Setup is minimal:

# litestream.yml
dbs:
  - path: /data/app.db
    replicas:
      - url: s3://my-bucket/app.db

Restore with a single command: litestream restore -o restored.db s3://my-bucket/app.db.

For multi-node read replicas, LiteFS (by Fly.io) is a FUSE-based filesystem that replicates a primary SQLite database to read-only replicas across multiple nodes. The primary handles writes and replicas serve reads with less than 100ms replication lag. This gives you horizontal read scaling while keeping the simplicity of SQLite.

Monitoring

There are three metrics worth tracking in production, plus one maintenance task.

Run PRAGMA wal_checkpoint(PASSIVE); periodically - it attempts a non-blocking checkpoint and reports how many WAL frames were written and checkpointed. If the WAL file exceeds 100MB, checkpointing is falling behind and you need to investigate what is blocking it (usually a long-running read transaction).

Monitor database file size growth for unexpected bloat, which could indicate missing vacuuming or an unintended data retention pattern.

Instrument your writer queue to track how long transactions wait for the write lock. Alert if the P95 wait time exceeds 100ms - that is your early warning sign that you are approaching SQLite’s write concurrency limits.

For vacuuming, set PRAGMA auto_vacuum=INCREMENTAL; at database creation time. Then run PRAGMA incremental_vacuum(1000); periodically to free pages from deleted rows. For full compaction, run VACUUM during off-peak hours since it locks the entire database for the duration.
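The checkpoint and vacuum advice above can be combined into one periodic job. A sketch, with an illustrative 100MB WAL threshold and print standing in for your real alerting:

```python
import os
import sqlite3

def maintain(db_path: str, wal_limit: int = 100 * 1024 * 1024) -> None:
    conn = sqlite3.connect(db_path)
    try:
        # PASSIVE checkpoints without blocking readers or writers; the
        # result row is (busy, wal_frames, checkpointed_frames).
        busy, wal_frames, ckpt_frames = conn.execute(
            "PRAGMA wal_checkpoint(PASSIVE);"
        ).fetchone()
        wal_path = db_path + "-wal"
        wal_size = os.path.getsize(wal_path) if os.path.exists(wal_path) else 0
        if busy or wal_size > wal_limit:
            # A growing WAL usually means a long-running read transaction
            # is blocking checkpoints; hook up real alerting here.
            print(f"checkpoint falling behind: WAL={wal_size} bytes")
        # Reclaim up to 1000 free pages; requires auto_vacuum=INCREMENTAL
        # to have been set when the database was created.
        conn.execute("PRAGMA incremental_vacuum(1000);")
    finally:
        conn.close()
```

Run something like this from a scheduler (cron, a background thread) every few minutes.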

Real-World SQLite in Production

SQLite in production is not a fringe choice. Plenty of well-known companies and profitable products run on it.

Expensify runs its entire backend on SQLite, serving millions of users with an architecture where each user’s data lives in its own SQLite database file. This eliminates multi-tenant query complexity and enables per-user backup and restore.

Fly.io’s LiteFS powers their platform’s internal services. They report SQLite handling 200,000+ reads per second per node with P99 latency under 1ms for cached queries.

The Tailscale coordination server uses SQLite as its primary datastore, handling millions of connected devices with a single-writer architecture.

Pieter Levels runs multiple profitable SaaS products (over $2M ARR) on single-server SQLite setups, demonstrating that SQLite scales well beyond what most applications will ever need. No managed database service, no connection pooling layer, no replica management.

For concrete benchmarks: on an AMD Ryzen 9 9900X workstation with a Gen5 NVMe SSD, SQLite in WAL mode achieves roughly 85,000 point reads per second, 18,000 range queries per second (100 rows each), and 12,000 inserts per second with a single writer.

The Migration Decision Framework

Instead of guessing when to move off SQLite, measure. Monitor three things:

Metric                   Threshold                                  Action
Write lock contention    >5% of writes wait >100ms                  Investigate query optimization first
Write throughput         Sustained >10K TPS                         Evaluate connection architecture
Multi-process writes     More than one app instance needs writes    Consider PostgreSQL

Migrate to PostgreSQL when you hit two or more of these thresholds simultaneously. Not before. The odds are good that you never will.

Start with SQLite

The default choice for a new application’s database should be the one that adds the least operational complexity while meeting your performance requirements. For the vast majority of applications - from personal projects to SaaS products with tens of thousands of users - that choice is SQLite. Configure WAL mode, set the right PRAGMAs, use a single-writer pattern with pooled readers, and you have a production database that costs nothing, requires no infrastructure management, and outperforms what your application actually needs.

For most projects, SQLite is the pragmatic default. Migrate later if and when your metrics tell you to - not because a blog post (other than this one) scared you into it.