How to Use systemd-analyze to Fix Slow Boot Times

Slow Linux boots are rarely caused by one dramatic failure. Most of the time, they come from a handful of small delays that stack up: firmware taking longer than expected, an oversized initramfs, a wait-online unit blocking the session, or hardware drivers initializing long before you need them. The good news is that modern Linux gives you first-class tooling to diagnose this precisely, and systemd-analyze is still the best starting point.
By the end of this guide, you will have a repeatable workflow to profile your boot path, identify true bottlenecks, and apply fixes that are safe for daily systems, including Secure Boot-enabled machines. You will also see distro-specific commands for Debian/Ubuntu, Arch, and Fedora families so you can apply the same method regardless of packaging ecosystem.
Prerequisites and Distro-Specific Setup
Before optimizing, confirm you have the right tooling and a recovery plan. Boot optimization is usually safe, but service masking or initramfs changes can leave systems unbootable when done carelessly.
Minimum prerequisites:
- You are running a systemd-based distro.
- You have admin access (sudo).
- You can recover from boot issues using a live USB or a known-good fallback boot entry.
- You keep at least one older kernel installed.
Install or verify tools by distro family:
| Distro family | Core tools | Optional helpers |
|---|---|---|
| Debian/Ubuntu | sudo apt update && sudo apt install -y systemd systemd-sysv graphviz linux-tools-common | sudo apt install -y initramfs-tools |
| Arch/Manjaro | sudo pacman -Syu --needed systemd graphviz | sudo pacman -S --needed mkinitcpio |
| Fedora/RHEL/openSUSE | sudo dnf install -y systemd graphviz (Fedora/RHEL) or sudo zypper install -y systemd graphviz (openSUSE) | sudo dnf install -y dracut |
Quick baseline capture commands:
systemd-analyze time
systemd-analyze blame | head -n 20
systemd-analyze critical-chain
systemd-analyze plot > boot-baseline.svg
If the plot output is huge, compress and archive it with your notes. When tuning, you need hard before/after evidence rather than memory.
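A small wrapper can collect all four outputs into one dated directory per boot. This is a sketch, not a systemd facility: the function name snapshot_boot_profile and the directory layout are my own convention.

```shell
#!/bin/sh
# Sketch: archive one boot's profiling evidence into a single directory
# so before/after comparisons run against files rather than memory.
snapshot_boot_profile() {
  dir=${1:-boot-profile-$(date +%F-%H%M%S)}
  mkdir -p "$dir"
  # Only capture data if systemd-analyze is actually available.
  if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze time           > "$dir/time.txt"           2>/dev/null
    systemd-analyze blame 2>/dev/null | head -n 20 > "$dir/blame.txt"
    systemd-analyze critical-chain > "$dir/critical-chain.txt" 2>/dev/null
    systemd-analyze plot           > "$dir/plot.svg"           2>/dev/null
  fi
  echo "$dir"
}
```

Run it once per configuration change and keep the directories; they become the before/after evidence the rest of this guide relies on.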
Understanding the Linux Boot Sequence in 2026
You get better results when you know what you are optimizing. A modern Linux boot path has five stages:
- Firmware (UEFI/BIOS): hardware initialization and handoff to the EFI loader.
- Bootloader: GRUB or systemd-boot loads the kernel and initramfs.
- Kernel and initramfs: early device bring-up and root filesystem handoff.
- systemd userspace startup: targets, services, sockets, mounts.
- Session initialization: display manager and desktop/user services.
systemd-analyze time summarizes this as firmware, loader, kernel, and userspace. The key detail: these clocks are sequential but different subsystems dominate each segment.
- A high firmware time points to motherboard settings, USB probing, PXE checks, or TPM measurements.
- A high loader time often reflects boot menu timeout, cryptographic verification work, or loader scans.
- A high kernel time usually means a heavy initramfs, module probing, storage waits, or microcode overhead.
- A high userspace time typically means blocking service dependencies.
Unified Kernel Images (UKIs) are common in 2026 and can simplify the boot path by bundling kernel, initramfs, and command line into one EFI binary. That often reduces loader complexity and makes Secure Boot workflows cleaner, but it does not automatically eliminate slow service chains later in userspace.
It is also worth resetting expectations: NVMe Gen5 throughput does not guarantee a fast boot. Startup delay is more often coordination latency than raw I/O bandwidth. Waiting for a network-online target, a slow crypto unlock prompt, or an unnecessary service dependency can cost more time than reading gigabytes from disk.
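To track the per-stage split across many boots, the summary line from systemd-analyze time can be split mechanically. The sketch below assumes every stage is reported in plain seconds (systemd-analyze can also emit "Xmin Ys" forms on slow machines, which this simple regex deliberately ignores); parse_boot_stages is an assumed helper name.

```shell
#!/bin/sh
# Sketch: turn a `systemd-analyze time` summary line on stdin into
# "stage seconds" pairs, one per line, for easy logging and averaging.
parse_boot_stages() {
  # Matches fragments like "3.204s (kernel)"; the final "= Ns" total has
  # no parenthesized label, so it is skipped automatically.
  grep -oE '[0-9]+\.[0-9]+s \([a-z]+\)' |
    awk '{ gsub(/[s()]/, ""); print $2, $1 }'
}
```

Example: piping `Startup finished in 2.1s (firmware) + 1.5s (loader) + 3.2s (kernel) + 4.0s (userspace) = 10.8s` through the function yields four lines, starting with `firmware 2.1`.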
Profiling the NPU Init Delay
A 2026-specific delay class is AI accelerator bring-up. Systems with Intel Core Ultra (intel_vpu) and AMD Ryzen AI (amdxdna) may spend measurable time initializing NPU paths during boot even if you do not run AI workloads at startup.
Start by collecting data:
journalctl -b -k | grep -Ei "intel_vpu|amdxdna|npu|vpu"
systemd-analyze time
systemd-analyze plot > boot-npu-check.svg
In the SVG timeline, look for long early kernel segments around module loading. If the delay appears consistently and your workload does not require instant NPU availability after login, defer module loading.
Example deferred-load approach for Intel VPU:
echo "blacklist intel_vpu" | sudo tee /etc/modprobe.d/blacklist-intel-vpu.conf
sudo update-initramfs -u # Debian/Ubuntu
# or: sudo dracut --force # Fedora
# or: sudo mkinitcpio -P # Arch
Load on demand after login:
sudo modprobe intel_vpu
Trade-off: the first inference task pays the module init latency later. For systems where AI tasks are occasional, this is usually acceptable. For local AI workstations, keep default early initialization and optimize elsewhere.
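If you prefer not to remember the modprobe step, a one-shot unit ordered after graphical.target can load the module automatically once the session is up. The unit name npu-late-load.service and the staging-directory workflow below are illustrative conventions of mine, not a standard systemd facility:

```shell
#!/bin/sh
# Sketch: write a deferred-load unit into a staging directory ($1) so it
# can be reviewed before being installed to /etc/systemd/system.
stage_npu_unit() {
  stage=$1
  mkdir -p "$stage"
  cat > "$stage/npu-late-load.service" <<'EOF'
[Unit]
Description=Load the NPU driver after the graphical session is up
After=graphical.target

[Service]
Type=oneshot
ExecStart=/sbin/modprobe intel_vpu

[Install]
WantedBy=graphical.target
EOF
}
```

After reviewing the generated file, copy it into /etc/systemd/system, run sudo systemctl daemon-reload, and enable it with sudo systemctl enable npu-late-load.service.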
Reading systemd-analyze blame Like a Pro
systemd-analyze blame is useful, but easy to misread. It sorts units by activation time, not by true impact on the final “desktop ready” moment.
Use these commands together:
systemd-analyze blame
systemd-analyze critical-chain
systemd-analyze dot --to-pattern='*.service' | dot -Tsvg > services-graph.svg
How to interpret correctly:
- A service that takes 5 seconds is only a top priority if it is on the critical path.
- A long-running service off the critical path may start in parallel and not delay login.
- critical-chain tells you which chain actually gates your target.
Common culprits on contemporary desktops:
- NetworkManager-wait-online.service
- plymouth-quit-wait.service
- snapd.service
- apt-daily.service / dnf-makecache.service
Example: if NetworkManager-wait-online.service adds 4 seconds and your desktop does not require full network online before graphical login, you can often trim that safely:
sudo systemctl disable NetworkManager-wait-online.service
If you need it for specific services, keep it enabled and scope dependencies precisely instead of forcing a global wait.
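Scoping the dependency might look like the following sketch, where my-backup.service is a stand-in for whichever unit genuinely needs the network; the drop-in is written to a staging directory first so you can review it before installing:

```shell
#!/bin/sh
# Sketch: generate a drop-in that orders one specific unit after
# network-online.target, instead of keeping the global wait in the boot path.
write_network_scope_override() {
  dir=$1
  mkdir -p "$dir"
  cat > "$dir/10-network-online.conf" <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
}
```

Installed, the drop-in would live at /etc/systemd/system/my-backup.service.d/10-network-online.conf, followed by sudo systemctl daemon-reload.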
Trimming the Initramfs with dracut
A generic initramfs includes drivers and hooks for hardware you may never use. On fast systems, decompression and probing can become the dominant part of kernel-stage boot.
Measure first:
du -sh /boot/initr* /boot/initramfs* 2>/dev/null
systemd-analyze time
Then rebuild with host-specific content:
- Fedora/RHEL/openSUSE (dracut):
sudo dracut --hostonly --force
- Arch (mkinitcpio):
sudo sed -i 's/^HOOKS=.*/HOOKS=(base udev autodetect modconf block filesystems keyboard fsck)/' /etc/mkinitcpio.conf
sudo mkinitcpio -P
- Debian/Ubuntu (initramfs-tools):
echo 'MODULES=dep' | sudo tee /etc/initramfs-tools/conf.d/driver-policy
sudo update-initramfs -u -k all
Compression strategy matters. lz4 generally decompresses faster than zstd at the cost of a larger image. On modern NVMe systems, this is often a favorable trade for boot latency.
Check your setting and tune if supported:
# dracut example
cat /etc/dracut.conf.d/*.conf 2>/dev/null | grep -i compress
# mkinitcpio example
grep '^COMPRESSION=' /etc/mkinitcpio.conf
Reboot and compare kernel time in systemd-analyze time against baseline.
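To make the size comparison explicit, a tiny helper (list_initramfs_sizes is my own name) can print sizes only for images that actually exist, which avoids errors from glob patterns that match nothing on a given distro:

```shell
#!/bin/sh
# Sketch: print "size<TAB>path" for each initramfs image that exists,
# silently skipping missing paths.
list_initramfs_sizes() {
  for f in "$@"; do
    [ -f "$f" ] && printf '%s\t%s\n' "$(du -h "$f" | cut -f1)" "$f"
  done
  return 0
}
# Usage: list_initramfs_sizes /boot/initrd.img-* /boot/initramfs-*.img
```

Record the output before and after rebuilding, alongside the systemd-analyze numbers.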
Disabling and Masking Non-Essential Services
Once profiling identifies real blockers, apply the least-destructive change first.
Three levels of service control:
- disable: remove autostart symlinks.
- mask: hard-block any start attempt by linking the unit to /dev/null.
- Remove the package: uninstall the software and its service units.
Check reverse dependencies before changes:
systemctl list-dependencies --reverse bluetooth.service
Desktop services frequently safe to disable when unused:
- bluetooth.service
- ModemManager.service
- cups.service (if no printing)
- geoclue.service
Examples:
sudo systemctl disable bluetooth.service
sudo systemctl disable ModemManager.service
Use masking sparingly:
sudo systemctl mask cups.service
Deferring instead of disabling is often cleaner. You can use overrides to relax ordering:
sudo systemctl edit my-heavy.service
Override example:
[Unit]
After=graphical.target
Wants=graphical.target
This keeps functionality while moving startup later.
GRUB and systemd-boot Command-Line Optimization
Bootloader tuning will not save massive time by itself, but it removes avoidable noise and logging overhead.
Useful kernel parameters for quieter/faster practical boot:
- quiet
- loglevel=3
- rd.udev.log_level=3 (especially relevant on dracut-based systems)
For GRUB (Debian/Ubuntu/Fedora variants):
sudo editor /etc/default/grub
# Example:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet loglevel=3 rd.udev.log_level=3"
Regenerate config:
# Debian/Ubuntu
sudo update-grub
# Fedora/RHEL (BIOS/UEFI paths vary)
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# or
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
For systemd-boot:
sudo editor /etc/kernel/cmdline
# Add: quiet loglevel=3 rd.udev.log_level=3
Note that /etc/kernel/cmdline is consumed by kernel-install and UKI builders, not read by systemd-boot at runtime, so re-run kernel-install (or rebuild the relevant UKI) after editing it; bootctl update only upgrades the systemd-boot binary itself. On setups that write loader entries directly, edit the options line in /boot/loader/entries/*.conf instead.
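After rebooting, it is worth confirming the parameters actually reached the kernel. The helper below tokenizes the live command line; cmdline_has is an assumed name, and the CMDLINE_FILE override exists only so the function can be exercised outside of /proc:

```shell
#!/bin/sh
# Sketch: succeed only if every argument appears as an exact token on the
# kernel command line (set CMDLINE_FILE to test against a sample file).
cmdline_has() {
  file=${CMDLINE_FILE:-/proc/cmdline}
  for p in "$@"; do
    # Split on spaces and require an exact whole-line match per token.
    tr ' ' '\n' < "$file" | grep -qx "$p" || return 1
  done
}
# Usage: cmdline_has quiet loglevel=3 && echo "parameters active"
```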
fstab Mount Option Optimization
Filesystem mount policy can add background write pressure and metadata churn that indirectly affects boot and early session responsiveness. Two options worth testing:
- noatime: do not update file access times on reads.
- lazytime: cache inode timestamp updates in memory and flush them later.
Example /etc/fstab line for ext4 root:
UUID=xxxx-xxxx / ext4 defaults,noatime,lazytime 0 1
Apply safely:
sudo cp /etc/fstab /etc/fstab.bak.$(date +%F)
sudo editor /etc/fstab
sudo mount -aIf mount -a returns cleanly, reboot and re-measure. Do not use aggressive options blindly on databases or workloads that rely on strict timestamp semantics.
Secure Boot and Signed Chain Safety
Performance tweaks must not break trust guarantees. On Secure Boot systems, prioritize changes that preserve signed binaries and measured boot.
Safe practices:
- Prefer service-level tuning and initramfs slimming over unsigned custom kernels.
- If you build custom UKIs, sign them with enrolled Machine Owner Keys (MOK) where required.
- Keep a known-good signed boot entry.
- Avoid disabling Secure Boot purely for convenience unless this is an offline lab machine.
Why this matters: reducing boot time by bypassing verification steps can compromise platform integrity. For many users, especially laptops with TPM2-backed disk unlock workflows, preserving signed chain correctness is more important than shaving one extra second.
Benchmarking and Verifying Your Results
Optimization without measurement is guessing. Run repeated trials and compare aggregate numbers.
Suggested workflow:
# Run after each change batch
systemd-analyze time
systemd-analyze blame | head -n 15
systemd-analyze critical-chain
systemd-analyze plot > boot-after.svg
Use at least 5 cold boots for each configuration. Single-run improvements can be noise from cache state, firmware variance, or network race conditions.
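Averaging those repeated runs can be as simple as the awk sketch below (mean_seconds is an assumed helper name); feed it the per-boot second values you recorded, whitespace- or newline-separated:

```shell
#!/bin/sh
# Sketch: print the mean of all numeric values read from stdin,
# rounded to two decimal places.
mean_seconds() {
  awk '{ for (i = 1; i <= NF; i++) { s += $i; n++ } }
       END { if (n) printf "%.2f\n", s / n }'
}
# Usage: echo "11.8 12.4 11.9 12.1 12.0" | mean_seconds
```

Compare the before and after means, not individual best runs, when deciding whether a change earned its place.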
For stubborn cases, add boot-time diagnostics:
- Kernel cmdline: systemd.log_level=debug
- Inspect with: journalctl -b -o short-monotonic
Target expectations for 2026 hardware:
- Desktop NVMe Gen4/Gen5: under 8 seconds to usable session is realistic.
- Laptop with Secure Boot + TPM2 attestation: under 12 seconds is a practical target.
Realistic Before/After Results
The table below shows representative improvements from typical desktop tuning. Your exact numbers depend on firmware, distro defaults, and installed services, but these are realistic ranges seen across recent systems.
| Change | Before impact | After impact | Typical savings |
|---|---|---|---|
| Disable NetworkManager-wait-online on desktop | 3.8s block in critical path | 0.2s residual | 3.6s |
| Rebuild host-only initramfs | 1.9s kernel/initramfs stage | 1.1s stage | 0.8s |
| Defer unused NPU module init | 0.7s early kernel overhead | near 0s at boot | 0.6-0.7s |
| Disable unused ModemManager + bluetooth | 0.9s aggregate userspace delays | 0.2s | 0.7s |
| Add quiet loglevel=3 rd.udev.log_level=3 | heavy early log churn | reduced chatter | 0.1-0.3s |
| fstab with noatime,lazytime | higher metadata updates during early session | lower metadata churn | 0.1-0.4s |
Combined gains of 4-7 seconds are common on systems that started with 12-18 second boots.
A Repeatable Optimization Checklist
If you want a practical sequence you can reuse on every machine, use this order:
- Capture baseline (time, blame, critical-chain, SVG plot).
- Remove obvious critical-path blockers (wait-online, unused services).
- Slim initramfs with distro-appropriate tooling.
- Tune kernel command line and bootloader settings.
- Evaluate NPU defer strategy if applicable.
- Apply conservative fstab timestamp tuning.
- Reboot multiple times and compute averages.
- Keep rollback notes and a known-good boot entry.
This sequencing minimizes risk while prioritizing high-yield changes first.
A fast boot is less about one magic command and more about evidence-driven trimming. systemd-analyze gives you that evidence: where time is actually spent, which services really block boot, and whether a change helped or only looked helpful. Treat each optimization as a measured experiment and your Linux system will get both faster and easier to maintain.