Private Package Registries: PyPI, npm, Supply Chain Control

You can self-host a private PyPI registry with pypiserver and a private npm registry with Verdaccio. Both run on a single box or inside Docker containers. You get three wins that public registries cannot match: faster installs from a LAN cache, a safe home for private packages, and cover against outages, typosquatting, and supply chain attacks. Both tools are free, open-source, and take under 30 minutes to set up.
Why Self-Host a Package Registry
Public registries go down. PyPI had several partial outages across 2024 and 2025. The npm registry has had its own incidents that slowed installs or knocked them out. When either one is down, every CI/CD pipeline and dev box that depends on it stops installing packages. If your deploys lean on pip install or npm install, a registry outage becomes your outage.
Then there are supply chain attacks. Typosquatting attacks publish bad packages with names that look like popular libraries. Dependency confusion attacks abuse how package managers resolve names across public and private registries. Hijacked maintainer accounts can push backdoored versions of real packages. A local registry with an allowlist cuts out these attack types. Your tools never reach the public registry for anything off your list.
Network speed is the other win. A registry on your LAN answers in single-digit milliseconds per request, while a round trip to a public CDN typically takes tens to hundreds of milliseconds. The gap looks small for one install, but CI pipelines that run pip install or npm install dozens or hundreds of times a day pile up real savings from a local cache.
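To put rough numbers on that claim, here is a back-of-envelope calculation. Every figure in it is an assumption for illustration, not a measurement from any real pipeline:

```shell
# Back-of-envelope latency savings; every number here is an assumption, not a measurement
awk 'BEGIN {
  installs_per_day     = 200   # CI jobs running pip/npm install per day (assumed)
  requests_per_install = 100   # index + download requests per install (assumed)
  saved_ms_per_request = 150   # latency gap, public CDN vs LAN cache (assumed)
  total_min = installs_per_day * requests_per_install * saved_ms_per_request / 1000 / 60
  printf "~%.0f minutes of waiting removed per day\n", total_min
}'
```

With these assumed inputs the math works out to roughly 50 minutes of cumulative wait time per day; plug in your own pipeline counts to see whether the cache pays off for you.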
Private packages are another common driver. Internal libraries, shared utilities, and company-only tools can ship to a local registry without ever touching a public server. That skips the pain of git-based dependency URLs or vendored code checked into repos. In regulated fields like finance, healthcare, and government contracting, the rules can require that all third-party code is reviewed and stored in an audited internal repo before it ships. A local registry meets that bar.
The software cost is zero. pypiserver and Verdaccio both run on tiny hardware. A Raspberry Pi 5 or any small VPS with 1 GB of RAM and enough disk for your cached packages is plenty.
Setting Up pypiserver for Python Packages
pypiserver (now at version 2.4.x) is the simplest self-hosted PyPI server. It serves packages from a plain directory. It supports uploads and proxying from upstream PyPI. It needs Python 3.10 or newer.
Installation and Basic Usage
Install it with pip, plus the passlib extra for auth support:
pip install pypiserver[passlib]

Create a directory to hold your packages and start the server:

mkdir -p /data/packages
pypi-server run -p 8080 /data/packages

This serves all .tar.gz and .whl files in /data/packages as a PEP 503 simple index. Any pip client pointed at this server can install from it.
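A detail worth knowing about that PEP 503 index: pip normalizes project names before looking them up (lowercase, with runs of `-`, `_`, and `.` collapsed into a single `-`), and pypiserver applies the same matching, so `My_Package.Name` and `my-package-name` resolve to the same project. A sketch of the normalization rule in shell:

```shell
# PEP 503 name normalization: lowercase, then collapse runs of '-', '_', '.' into one '-'
normalize() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed -E 's/[-_.]+/-/g'
}

normalize "My_Package.Name"; echo   # prints: my-package-name
```

This is why the filename casing of the wheels you drop into /data/packages does not have to match what clients type in `pip install`.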
For Docker, the equivalent command is:
docker run -p 8080:8080 \
  -v /data/packages:/data/packages \
  pypiserver/pypiserver:latest run /data/packages

Authentication
By default, pypiserver allows open access for everything. To force login for uploads while keeping downloads open, make an htpasswd file and start the server with auth flags:
# Requires apache2-utils (apt install apache2-utils)
htpasswd -sc /data/.htpasswd admin

pypi-server run -p 8080 \
  -P /data/.htpasswd \
  -a update \
  /data/packages

The -a update flag means auth is needed only for uploads. Downloads stay open.
Uploading and Installing Packages
Upload packages with twine. First, set up ~/.pypirc:
[distutils]
index-servers =
    local

[local]
repository = http://localhost:8080
username = admin
password = yourpassword

Then upload:

twine upload --repository local dist/*

To install from your registry, pass the URL directly:
pip install --index-url http://localhost:8080/simple/ \
  --trusted-host localhost \
  mypackage

Or set it for good in pip.conf (Linux: ~/.config/pip/pip.conf; macOS: ~/Library/Application Support/pip/pip.conf):
[global]
index-url = http://localhost:8080/simple/
trusted-host = localhost

Caching Upstream Packages
pypiserver has a --fallback-url flag. It proxies requests for packages it does not have to an upstream registry:
pypi-server run -p 8080 \
  --fallback-url https://pypi.org/simple/ \
  /data/packages

When a client asks for a package that does not exist in /data/packages, pypiserver redirects the client to PyPI. This does not cache the package on disk: the client downloads from PyPI directly. To build a real local cache, pre-download packages with pip download:

pip download -d /data/packages -r requirements.txt

This pulls every package and its deps into your local directory. pypiserver will then serve them on later requests.
Systemd Service for Non-Docker Deployments
If you run pypiserver on a Linux host instead of Docker, a systemd unit file keeps it up across reboots:
[Unit]
Description=pypiserver - Private PyPI Repository
After=network.target
[Service]
Type=simple
User=www-data
Group=www-data
ExecStart=/usr/local/bin/pypi-server run -p 8080 -P /data/.htpasswd -a update /data/packages
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

Save this as /etc/systemd/system/pypiserver.service, then enable and start it:

sudo systemctl daemon-reload
sudo systemctl enable --now pypiserver

Setting Up Verdaccio for npm Packages
Verdaccio (now at version 6.3.x) is the top self-hosted npm registry. It acts as a caching proxy for the public npm registry. It also handles private package publishing, scoped package rules, and fine-grained access control. It needs Node.js 18 or higher; Node.js 20 LTS is the safe pick.
Installation and Basic Usage
Install globally and run:
npm install -g verdaccio
verdaccio

Verdaccio starts on port 4873 by default. It writes its config to ~/.config/verdaccio/config.yaml.
For Docker:
docker run -p 4873:4873 \
  -v /data/verdaccio/storage:/verdaccio/storage \
  -v /data/verdaccio/conf:/verdaccio/conf \
  verdaccio/verdaccio:6

Configuration
The config file controls storage path, upstream registry links, package access rules, and auth. Here is a working config.yaml with scoped package support:
storage: /verdaccio/storage

auth:
  htpasswd:
    file: /verdaccio/conf/htpasswd
    max_users: 100

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@mycompany/*':
    access: $authenticated
    publish: $authenticated
    unpublish: $authenticated
    # Empty proxy means these packages are never fetched from upstream
    proxy: ''
  '**':
    access: $all
    publish: $authenticated
    proxy: npmjs

listen: 0.0.0.0:4873

middlewares:
  audit:
    enabled: true

log:
  type: stdout
  format: pretty
  level: warn

The @mycompany/* block makes sure any package in your org's scope is never proxied to the public npm registry. The ** catch-all proxies everything else to npmjs and caches the result on disk. Once a package version is cached, later installs are served from disk.
User Management and Client Configuration
Add a user to your registry:
npm adduser --registry http://localhost:4873

This uses Verdaccio's built-in htpasswd auth plugin. For larger teams, Verdaccio has LDAP and GitLab auth plugins too.

Point npm at your registry globally:
npm set registry http://localhost:4873

Or use a project-level .npmrc file:

registry=http://localhost:4873/

Publish packages to the local registry:

npm publish --registry http://localhost:4873

Docker Compose Setup for Both Registries
Running both registries as Docker services makes the setup easy to copy and move. Here is a full docker-compose.yml:
services:
  pypiserver:
    image: pypiserver/pypiserver:latest
    container_name: pypiserver
    ports:
      - "8080:8080"
    volumes:
      - pypi-data:/data/packages
      - ./pypi-auth:/data/auth
    command: run -p 8080 -P /data/auth/.htpasswd -a update /data/packages
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 128M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]
      interval: 30s
      timeout: 5s
      retries: 3

  verdaccio:
    image: verdaccio/verdaccio:6
    container_name: verdaccio
    ports:
      - "4873:4873"
    volumes:
      - verdaccio-storage:/verdaccio/storage
      - ./verdaccio-conf:/verdaccio/conf
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 256M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4873/-/ping"]
      interval: 30s
      timeout: 5s
      retries: 3

volumes:
  pypi-data:
  verdaccio-storage:

pypiserver usually sits at around 50 MB of RAM. Verdaccio runs at 100 to 200 MB, based on cached packages and active users. The memory caps above give each service room to breathe.
Reverse Proxy with Caddy
To serve both registries over HTTPS with auto TLS certs, add Caddy as a reverse proxy:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
    depends_on:
      - pypiserver
      - verdaccio

With a Caddyfile like:
pypi.internal.example.com {
    reverse_proxy pypiserver:8080
}

npm.internal.example.com {
    reverse_proxy verdaccio:4873
}

Caddy fetches TLS certs from Let’s Encrypt on its own. For internal-only setups, use self-signed certs or a private CA, then point pip and npm to trust them.
Storage Estimates and Backup
Storage needs depend on your dependency tree. A typical Python web project (Django or Flask with 50 to 100 transitive deps) takes about 200 to 500 MB of wheel files. A typical Node.js project can be much larger. A React app with its full dependency tree might use 500 MB to 1 GB of cached tarballs. If you cache packages for many projects, plan for 5 to 20 GB of disk space and watch growth over time.
Backup is simple. Both registries store packages as plain files on disk. Mount the storage volumes to host directories and add them to your usual backup tool, whether that is restic, borgbackup, rsync, or anything else. Restoring is just copying the files back. To watch both registries and get a ping when either drops, a light tool like Gatus can poll your endpoints and alert over Slack, Discord, or email.
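The backup step above can be sketched as a small shell function that tars one store into a dated archive. The paths and naming scheme here are assumptions; point it at whatever host directories your volumes are mounted to:

```shell
# backup_store STORE_DIR BACKUP_DIR - tar one registry store into a dated archive.
# Paths and naming are assumptions; adjust to your own volume mounts.
backup_store() {
  local store=$1 dest=$2
  mkdir -p "$dest"
  tar -czf "$dest/$(basename "$store")-$(date +%Y%m%d).tar.gz" \
      -C "$(dirname "$store")" "$(basename "$store")"
}

# e.g. backup_store /data/packages /backups
#      backup_store /data/verdaccio/storage /backups
```

Run it from cron or a systemd timer, and let restic or borgbackup handle retention of the resulting archives.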
Integrating with CI/CD and Developer Workflows
Setting up a registry is the easy part. The harder part is making sure devs and CI pipelines use it instead of hitting the public registries out of habit.
CI/CD Configuration
In GitHub Actions, point pip and npm at your registries:
steps:
  - name: Install Python dependencies
    run: |
      pip install --index-url https://pypi.internal.example.com/simple/ \
        --trusted-host pypi.internal.example.com \
        -r requirements.txt
  - name: Install Node dependencies
    run: npm install
    env:
      NPM_CONFIG_REGISTRY: https://npm.internal.example.com/

For npm, you can also check a .npmrc file into the repo root:

registry=https://npm.internal.example.com/

Pre-Populating the Cache
Before CI pipelines hit the registry for the first time, warm the cache on the host:
# Python: download all packages from requirements.txt
pip download -d /data/packages -r requirements.txt

# Node: install and let Verdaccio cache automatically
cd your-project && npm install --registry http://localhost:4873

For Verdaccio, the first npm install against it pulls and caches packages from the upstream registry on its own. Later installs by any user or pipeline are served from the local cache.
Publishing Internal Packages from CI
Add a publish step to your CI pipeline. It runs after tests pass on the main branch:
- name: Publish to internal PyPI
  if: github.ref == 'refs/heads/main'
  run: twine upload --repository local dist/*

- name: Publish to internal npm
  if: github.ref == 'refs/heads/main'
  run: npm publish --registry https://npm.internal.example.com/

If you want to check that your app installs and runs against the local registry in your tests, Testcontainers can spin up its own pypiserver or Verdaccio for each test run, without touching your shared registry. For the version-parsing and metadata code in your own internal packages, property-based testing with Hypothesis generates randomized inputs that catch edge cases handwritten tests miss.
Dependency Allowlisting
For tighter supply chain control, set Verdaccio to block all upstream packages unless approved. In config.yaml, drop the proxy: npmjs line from the catch-all rule. Then add one entry per approved package:
packages:
  '@mycompany/*':
    access: $authenticated
    publish: $authenticated
  'react':
    access: $all
    proxy: npmjs
  'express':
    access: $all
    proxy: npmjs
  '**':
    access: $all
    # No proxy - blocks all unapproved packages

For pypiserver, you control the allowlist by only dropping approved packages into the packages directory. If a package is not on disk and no fallback URL is set, pip will get a 404.
Air-Gapped Environments
For systems with no internet at all, pre-load every dep on a connected machine. Then move the files across:
# Python: download wheels for the target platform
pip download -d ./offline-pypi \
  --platform manylinux2014_x86_64 \
  --python-version 3.12 \
  --only-binary=:all: \
  -r requirements.txt

# Node: pack tarballs
npm pack react express lodash   # produces .tgz files

# Transfer the files to the air-gapped system
# Then point pip/npm at the local directory or registry

Copy the downloaded packages into pypiserver’s package directory and Verdaccio’s storage directory on the air-gapped system. Developers and CI pipelines on that network install the same way they would from any registry. The tools do not know or care that the registry has no internet.
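One step worth adding on either side of the transfer is a checksum manifest, so you can prove the files arrived intact. A sketch, where the MANIFEST.sha256 file name is my own convention:

```shell
# make_manifest DIR   - write sha256 sums of every file in DIR to DIR/MANIFEST.sha256
# verify_manifest DIR - re-check them on the far side; non-zero exit on any mismatch
make_manifest() {
  (cd "$1" && find . -type f ! -name MANIFEST.sha256 -exec sha256sum {} +) \
    > "$1/MANIFEST.sha256"
}
verify_manifest() {
  (cd "$1" && sha256sum -c --quiet MANIFEST.sha256)
}
```

Run make_manifest on the connected machine before burning the media, carry the manifest with the packages, and run verify_manifest on the air-gapped side before loading anything into the registries.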
Comparison of Registry Options
If pypiserver and Verdaccio do not fit, here are the main alternatives worth a look:
| Feature | pypiserver | devpi | Nexus | Artifactory |
|---|---|---|---|---|
| Language focus | Python only | Python only | Multi-format | Multi-format |
| Caching proxy | Redirect only | Full cache | Full cache | Full cache |
| Setup time | 5 minutes | 15 minutes | 30+ minutes | 30+ minutes |
| Resource usage | ~50 MB RAM | ~200 MB RAM | 1+ GB RAM | 1+ GB RAM |
| Cost | Free | Free | Free (OSS) / Paid (Pro) | Free (limited) / Paid |
| Multiple indexes | No | Yes | Yes | Yes |
| Web UI | Basic | Yes | Yes | Yes |

| Feature | Verdaccio | Nexus | GitHub Packages |
|---|---|---|---|
| Language focus | npm only | Multi-format | Multi-format |
| Caching proxy | Full cache | Full cache | No |
| Setup time | 5 minutes | 30+ minutes | 0 (hosted) |
| Resource usage | ~150 MB RAM | 1+ GB RAM | N/A |
| Cost | Free | Free (OSS) / Paid | Free (public) / Paid |
| Scoped packages | Yes | Yes | Scoped only |
| Auth plugins | htpasswd, LDAP, GitLab | LDAP, SAML | GitHub auth |
For small to medium teams, pypiserver and Verdaccio are hard to beat. They use almost no resources, take minutes to set up, and cost nothing. If you need one tool that handles Python, npm, Docker images, Maven, and more, Sonatype Nexus or JFrog Artifactory are the big-ticket options. They need at least 1 GB of RAM, and full licenses can run several hundred dollars a month.
Either way, once your registries are up and your clients are set, you stop fretting about public registry outages. You also stop wondering whether that new transitive dep is what it claims to be.