ZFS RAIDZ Calculator

Enter the RAIDZ Level, Number of Drives per vdev, Drive Capacity, and Number of vdevs in your ZFS storage pool. Click Calculate to see total pool usable capacity, raw capacity, overhead percentage, fault tolerance per vdev, and estimated usable capacity after ZFS metadata overhead. A RAIDZ level comparison table is also shown for the current drive configuration.

ZFS Pool Configuration

When a pool contains multiple vdevs, data is striped across them.

RAIDZ Level Comparison

RAIDZ Level | Min Drives/vdev | Parity Drives | Fault Tolerance | Usable per vdev (current config) | Overhead % | Best For
Stripe | 1 | 0 | None | (calculated) | 0% | Temp data, scratch pools
Mirror | 2 | N-1 | N-1 drives | (calculated) | (calculated) | Critical data, high read performance
RAIDZ1 | 3 | 1 | 1 drive per vdev | (calculated) | (calculated) | General storage, low cost
RAIDZ2 | 4 | 2 | 2 drives per vdev | (calculated) | (calculated) | Enterprise storage, large drives
RAIDZ3 | 5 | 3 | 3 drives per vdev | (calculated) | (calculated) | Highest redundancy, archive storage
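The "Usable per vdev" and "Overhead %" columns are filled in by the calculator from your drive configuration. A minimal sketch of the math behind them, assuming N drives of a given capacity per vdev (function names are illustrative, not a ZFS API):

```python
# Per-vdev capacity math behind the comparison table.
# Parity drive counts per RAIDZ level match the table above.
PARITY = {"stripe": 0, "raidz1": 1, "raidz2": 2, "raidz3": 3}

def usable_per_vdev(level, drives, capacity_tb):
    """Usable TB per vdev, before ZFS metadata overhead."""
    if level == "mirror":
        return capacity_tb  # an N-way mirror keeps one drive's worth of data
    return (drives - PARITY[level]) * capacity_tb

def overhead_pct(level, drives):
    """Share of raw vdev capacity consumed by parity or mirror copies."""
    if level == "mirror":
        return 100 * (drives - 1) / drives
    return 100 * PARITY[level] / drives

# Example: 6 x 10 TB drives in one RAIDZ2 vdev
print(usable_per_vdev("raidz2", 6, 10))  # 40 (TB usable)
print(round(overhead_pct("raidz2", 6), 1))  # 33.3 (% overhead)
```

The same formulas generalise to every row: a stripe has zero parity (0% overhead), while a 2-way mirror always pays 50%.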

Understanding ZFS RAIDZ Storage

ZFS RAIDZ is a software RAID implementation built into the ZFS filesystem that eliminates the RAID-5 write hole vulnerability present in traditional hardware and software RAID. Unlike standard RAID 5 or RAID 6, RAIDZ uses variable-width stripes that always span all drives in the vdev, with each stripe containing exactly one parity unit. This means ZFS RAIDZ never writes partial stripes, eliminating the need for a non-volatile cache (NVRAM) to protect against partial stripe writes during a power failure. ZFS usable capacity calculations follow the same basic math as traditional RAID, but RAIDZ has specific performance characteristics and optimal vdev sizing rules that differ from hardware RAID arrays. Learn more: compare ZFS RAIDZ vs traditional RAID overhead.

ZFS storage pool capacity is organised into vdevs (virtual devices), and multiple vdevs can be combined in a zpool to increase total capacity. A ZFS pool can contain RAIDZ1 vdevs (1 parity drive), RAIDZ2 vdevs (2 parity drives), RAIDZ3 vdevs (3 parity drives), mirror vdevs (N-way mirroring), and stripe vdevs (no redundancy, like RAID 0). The total pool usable capacity equals the sum of usable capacity across all vdevs. For optimal RAIDZ performance, vdev size should follow a power-of-2-plus-parity pattern — RAIDZ1 vdevs work best at 3, 5, or 9 drives; RAIDZ2 at 4, 6, or 10 drives; RAIDZ3 at 5, 7, or 11 drives. Related: Plan total storage capacity requirements.
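The pool-level math above can be sketched in a few lines: total usable capacity is (N − parity) × drive size summed over vdevs, and a vdev width is "optimal" when its data drive count (N − parity) is a power of two. This is an illustrative sketch, not a ZFS API:

```python
def pool_usable_tb(parity, drives_per_vdev, capacity_tb, vdev_count):
    """Total usable TB: (N - parity) * drive size per vdev, summed over vdevs."""
    return (drives_per_vdev - parity) * capacity_tb * vdev_count

def is_optimal_width(parity, drives_per_vdev):
    """True if the data drive count (N - parity) is a power of two,
    matching the power-of-2-plus-parity sizing guideline."""
    data = drives_per_vdev - parity
    return data > 0 and (data & (data - 1)) == 0

# Two RAIDZ2 vdevs (parity=2) of 6 x 10 TB drives each:
print(pool_usable_tb(2, 6, 10, 2))  # 80 (TB usable)
print(is_optimal_width(2, 6))       # True: 4 data drives
print(is_optimal_width(2, 7))       # False: 5 data drives
```

Note how the recommended widths in the text check out: RAIDZ1 at 3, 5, or 9 drives leaves 2, 4, or 8 data drives, all powers of two.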

ZFS metadata and internal bookkeeping consume approximately 2–5% of raw pool capacity for typical workloads. Snapshots, deduplication tables, and ZFS Intent Log (ZIL) can consume additional space beyond what is shown as usable in a ZFS pool. When planning ZFS storage capacity, add at least 20% headroom above your expected data volume — ZFS performance degrades significantly when pool utilisation exceeds 80%, because the block allocator struggles to find contiguous free space for new writes, increasing fragmentation and reducing write throughput on spinning media. See also: estimate RAID rebuild time and data risk window.

ZFS-Specific Notes

ashift Setting: ZFS ashift should match the physical sector size of your drives: ashift=9 for 512-byte sectors, ashift=12 for 4K native drives (most modern HDDs and SSDs). Wrong ashift causes write amplification — set it correctly at pool creation as it cannot be changed later.
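The ashift value is simply the base-2 logarithm of the sector size, which is why 512-byte sectors give 9 and 4K sectors give 12. A small illustrative helper (on OpenZFS it is typically set at creation, e.g. `zpool create -o ashift=12 ...`):

```python
import math

def ashift_for(sector_bytes):
    """Return the ZFS ashift for a physical sector size in bytes.
    Sector sizes must be powers of two (512 -> 9, 4096 -> 12)."""
    a = int(math.log2(sector_bytes))
    if 2 ** a != sector_bytes:
        raise ValueError("sector size must be a power of two")
    return a

print(ashift_for(512))   # 9
print(ashift_for(4096))  # 12
```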

80% Rule: Never fill a ZFS pool beyond 80% capacity. ZFS uses copy-on-write and needs free space for its block allocator to work efficiently. Above 80%, fragmentation causes severe write performance degradation on spinning media.

Snapshot Space: ZFS snapshots share blocks with the live filesystem. As data changes, snapshots grow. A heavily snapshotted pool can silently fill to capacity even when live data usage appears low.

RAIDZ vs Mirror: Mirror vdevs offer significantly better random read and rebuild performance than RAIDZ. RAIDZ offers better capacity efficiency. For databases and VMs, mirror vdevs are preferred. For archival NAS, RAIDZ2 is preferred.
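The capacity-efficiency trade-off above can be made concrete. For the same six drives, three 2-way mirror vdevs keep 50% of raw capacity while one RAIDZ2 vdev keeps about 67% (a worked sketch, assuming 6 × 10 TB drives):

```python
def efficiency(usable_tb, raw_tb):
    """Fraction of raw capacity that remains usable."""
    return usable_tb / raw_tb

raw = 6 * 10                  # 6 x 10 TB drives, 60 TB raw
mirror_usable = 3 * 10        # three 2-way mirror vdevs: 30 TB
raidz2_usable = (6 - 2) * 10  # one 6-wide RAIDZ2 vdev: 40 TB

print(efficiency(mirror_usable, raw))  # 0.5
print(round(efficiency(raidz2_usable, raw), 2))  # 0.67
```

The mirror layout gives up 10 TB but gains rebuild speed and random-read throughput, which is the trade described above.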

dRAID: OpenZFS 2.1+ introduces dRAID, a distributed-spare RAIDZ variant that speeds up rebuilds by spreading spare capacity across all drives in the vdev, so the rebuild reads and writes in parallel across every drive instead of bottlenecking on a single hot spare. Use dRAID for large vdevs with many drives.

StorageMath.org — Free data storage calculators and unit converters for storage professionals. Convert GB to TB, Mbps to MB/s, calculate RAID capacity, IOPS, transfer time, storage cost per TB, and deduplication ratios. Supports decimal (SI) and binary (IEC) standards.