RAID Rebuild Time Calculator
Enter the Number of Drives, Drive Capacity, Drive Type, and RAID Level. Set the I/O Load During Rebuild to account for production workload competition. Click Calculate Rebuild Time to see estimated rebuild duration, data-at-risk window, Uncorrectable Read Error (URE) probability, and recommended actions based on your configuration.
Array Configuration
Auto-set by drive type; editable for custom
0% = no load, 30% = moderate, 70% = heavy
Typical Rebuild Speed by Drive Type
| Drive Type | Base Read Speed | Rebuild Speed (30% load) | 8 TB Drive Rebuild Time | URE Rate (1 error per N bits) |
|---|---|---|---|---|
| 7200 RPM HDD | 150 MB/s | ~105 MB/s | ~21 hrs | 10^14–10^15 |
| 10K RPM HDD | 200 MB/s | ~140 MB/s | ~16 hrs | 10^15 |
| 15K RPM HDD | 250 MB/s | ~175 MB/s | ~13 hrs | 10^15 |
| SATA SSD | 550 MB/s | ~385 MB/s | ~5.8 hrs | 10^17 |
| NVMe SSD | 3,500 MB/s | ~2,450 MB/s | ~54 min | 10^17 |
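The rebuild-time column follows from a simple throughput calculation: capacity divided by the read bandwidth left over after production I/O. A minimal Python sketch (the function name is illustrative, and the 30%-load rows assume 70% of base speed remains available):

```python
def rebuild_hours(capacity_tb: float, base_read_mbs: float, io_load: float) -> float:
    """Estimate rebuild duration for one drive.

    capacity_tb:   drive capacity in TB (10^12 bytes)
    base_read_mbs: sustained sequential read speed in MB/s (10^6 bytes/s)
    io_load:       fraction of bandwidth consumed by production I/O (0.0-1.0)
    """
    effective_mbs = base_read_mbs * (1.0 - io_load)  # bandwidth left for the rebuild
    seconds = capacity_tb * 1e12 / (effective_mbs * 1e6)
    return seconds / 3600.0

# 8 TB 7200 RPM HDD at 30% load: ~21 hours, matching the table row above
print(round(rebuild_hours(8, 150, 0.30), 1))
```

The same function reproduces the NVMe row: `rebuild_hours(8, 3500, 0.30)` comes out to about 0.9 hours, roughly 54 minutes.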
Understanding RAID Rebuild Risk
RAID rebuild time is the duration during which a degraded RAID array reconstructs data onto a replacement drive after a disk failure. During a rebuild, the array operates in degraded mode, a state of elevated risk in which a second drive failure causes irreversible data loss in RAID 5, RAID 50, and other single-parity configurations. The rebuild must read every surviving member drive in full, recompute the missing data from parity, and write the regenerated data to the replacement drive, making the process I/O-intensive and placing maximum stress on the remaining disks at exactly the moment they are most likely to fail. Learn more: calculate RAID array usable capacity and fault tolerance.
RAID 5 rebuild time for large-capacity drives has become a critical risk factor in modern storage arrays. A 16 TB nearline SAS or SATA HDD in a RAID 5 array may take 24–72 hours to rebuild, because effective rebuild throughput falls well below the drive's sequential read speed once production I/O load and the physical limits of spinning media are factored in. During this entire rebuild window the array has zero fault tolerance: if any remaining drive fails or returns an uncorrectable read error (URE), all data in the array is lost. This vulnerability is why RAID 6 and RAID 10 are now recommended for drives larger than 4 TB. You might also need: estimate SSD write endurance during RAID rebuild.
Uncorrectable read errors (UREs) present the most underappreciated risk during RAID rebuilds. A standard enterprise HDD has a URE rate of approximately 1 bit error per 10^15 bits read. For a RAID 5 array with eight 8 TB drives, the rebuild reads the seven surviving drives in full: 56 TB of data, about 4.48 × 10^14 bits, giving a URE probability of roughly 36% per rebuild event. With consumer-grade drives rated at one error per 10^14 bits, the same rebuild carries a roughly 99% chance of encountering a URE. NVMe SSD and SATA SSD arrays have URE rates of 10^17 or better, making SSD RAID 5 significantly safer than HDD RAID 5 during rebuilds, even at large capacities. See also: calculate ZFS RAIDZ usable capacity.
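The URE probabilities follow from the formula listed under Important Notes below. A minimal Python sketch (the function name is illustrative); `log1p`/`expm1` keep the arithmetic stable when 1/rate is as small as 10^-15:

```python
import math

def ure_probability(surviving_drives: int, capacity_tb: float, ure_rate: float) -> float:
    """Probability of hitting at least one URE while reading all surviving drives.

    ure_rate is expressed as bits read per error, e.g. 1e15 for enterprise HDDs.
    P = 1 - (1 - 1/rate)^bits, computed via log1p/expm1 to avoid precision loss.
    """
    bits_read = surviving_drives * capacity_tb * 1e12 * 8  # total bits to read
    return -math.expm1(bits_read * math.log1p(-1.0 / ure_rate))

# Eight-drive RAID 5 with 8 TB drives: seven survivors are read in full.
print(f"enterprise (10^15): {ure_probability(7, 8, 1e15):.0%}")  # → 36%
print(f"consumer   (10^14): {ure_probability(7, 8, 1e14):.0%}")  # → 99%
```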
Important Notes
- Rebuild times assume sequential read throughput. Random I/O workloads reduce effective rebuild speed by 30–60%.
- RAID 6 provides a second parity drive, allowing one additional drive failure during rebuild without data loss. Always use RAID 6 for HDD arrays with drives larger than 4 TB.
- Hot spares reduce rebuild start latency from hours (manual replacement) to seconds (automatic rebuild trigger).
- URE probability formula: P ≈ 1 − (1 − 1/URE_rate)^total_bits_read, where total_bits_read = surviving_drives × capacity_bytes × 8.
- Enterprise storage controllers throttle rebuild I/O to 20–30% of drive bandwidth to limit production impact, significantly extending rebuild time.
- RAID 10 rebuilds only need to read from a single mirror partner drive, making it far faster and safer than RAID 5/6 for the same drive capacity.
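The last two notes can be made concrete by comparing read volume per layout and the effect of controller throttling. A hedged sketch in Python (the eight-drive, 8 TB, 150 MB/s figures are illustrative, and 25% is the midpoint of the 20–30% throttle range mentioned above):

```python
DRIVES = 8          # total drives in the array (illustrative)
CAPACITY_TB = 8.0   # per-drive capacity in TB
READ_MBS = 150.0    # 7200 RPM HDD sequential read speed in MB/s

# RAID 5 must read every surviving drive; RAID 10 reads one mirror partner.
raid5_read_tb = (DRIVES - 1) * CAPACITY_TB   # 56 TB of reads
raid10_read_tb = CAPACITY_TB                 # 8 TB of reads

# Controller throttled to 25% of drive bandwidth (midpoint of 20-30%).
throttled_mbs = READ_MBS * 0.25
hours = CAPACITY_TB * 1e12 / (throttled_mbs * 1e6) / 3600

print(f"RAID 5 reads {raid5_read_tb:.0f} TB; RAID 10 reads {raid10_read_tb:.0f} TB")
print(f"throttled 8 TB rebuild: {hours:.0f} hours")  # ≈ 59 hours
```

Reading seven times less data is also why a RAID 10 rebuild exposes the array to roughly one-seventh of the URE risk of the RAID 5 rebuild above.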