The Bigger They Are, The Harder They Fall
It was suggested many, many years ago that as the size of HDDs and the RAID arrays they are part of increases, there will come a time when it is impossible not to have bad bits. The sheer number of bits in a petabyte array means that at least some of the drives have bad sectors, likely including some they don't even know about. As with painting an incredibly long bridge, by the time a self-scan has completed, new problems will be cropping up in the sectors where the scan began.
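To see why scale makes bad bits nearly inevitable, here is a minimal back-of-envelope sketch. It assumes a hypothetical but typical consumer-class spec of one unrecoverable read error (URE) per 10^14 bits read; the numbers are illustrative, not drawn from any specific drive.

```python
# Back-of-envelope sketch of unrecoverable read errors when reading
# a full petabyte, assuming a spec of 1 URE per 1e14 bits read
# (an assumed, typical consumer-drive figure).
URE_RATE = 1e-14           # errors per bit read (assumed spec)
ARRAY_BYTES = 1e15         # one petabyte
bits_read = ARRAY_BYTES * 8

# Expected number of errors over one full read of the array.
expected_errors = bits_read * URE_RATE

# Probability of at least one error, modeling bit errors as independent.
p_at_least_one = 1 - (1 - URE_RATE) ** bits_read

print(f"expected errors per full read: {expected_errors:.0f}")
print(f"P(at least one error): {p_at_least_one:.6f}")
```

At these assumed rates a single full pass over a petabyte expects dozens of read errors, so the question is not whether bad bits exist but whether the scrub finds them before they matter.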
While this might not stop arrays from growing, it certainly shortens the lifespan of the drives in the array. Backblaze saw a report from Secure Data Recovery that found the average failed drive lasted just 2 years and 10 months. Backblaze questioned the accuracy of the findings, given the relatively small sample size and the upsettingly short lifespan. As it turns out, SDR was being generous: the 17,155 failed HDDs Backblaze examined died after an average of only 2 years and 6 months!
Backblaze's analysis covered 72 different models and excluded failed boot drives, drives with no SMART data, and drives with out-of-bounds data. Take a look at the models tested and the number of failures over at Ars Technica.