Western Digital recently announced new data center HDDs that increase Shingled Magnetic Recording (SMR) capacity to 32TB and Conventional Magnetic Recording (CMR) capacity to 26TB.
When will it be commercially available though? Supposedly Seagate has had 30TB drives out for the better part of a year, but I can't find anything larger than 24TB actually available for purchase.
I'd guess that they're commercially available, but only to hyperscalers - large companies like Google, Amazon (AWS), etc. that need a huge amount of storage.
I've been waiting for a 32TB to become available as well, Seagate announced that drive last year and it's still not available outside data centers. I suspect the WD one will be the same.
Assuming these have a fairly impressive 100 MB/s sustained write speed, it's going to take about 89 hours to write the whole contents of the disk - nearly four days. That's a long time to replace a failed drive in a RAID array; you'd want multiple disks of redundancy in case another one fails while you're resilvering the first.
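A quick sanity check on that arithmetic (using the decimal units drive vendors use; the 100 MB/s sustained rate is the assumption from above):

```python
# Time to rewrite a full 32 TB drive at a sustained 100 MB/s.
# Both figures are assumptions carried over from the comment above.
capacity_bytes = 32e12   # 32 TB, decimal terabytes as drive vendors count them
write_rate = 100e6       # 100 MB/s sustained
hours = capacity_bytes / write_rate / 3600
print(f"{hours:.1f} hours")  # ≈ 88.9 hours, i.e. nearly four days
```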
I'm guessing that only works while the file is smaller than the drive's RAM cache. Transfer a file bigger than that and it will go fast at first, but once the cache fills, the rate drops back toward 100 MB/s.
My data hoarder drives are a pair of 18TB WD Ultrastar SAS drives in RAID 1, and that's how they tend to behave.
This is one of the reasons I use unRAID with two parity disks. If one fails, I'll still have access to my data while I rebuild the data on the replacement drive.
Although parity checks with these would take forever, of course.
That's a pretty common failure scenario in SANs. If you buy a bunch of drives, they're almost guaranteed to come from the same batch, meaning they're likely to fail around the same time. The extra load of a rebuild can kill drives that are already close to failure.
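For a rough sense of scale, here's a back-of-envelope sketch. The 2% annual failure rate, 8-drive array, and ~89-hour rebuild window are all my assumptions, not figures from the thread - and correlated batch failures would make the real number worse than this independent-failure estimate:

```python
# Chance that at least one more drive fails during a rebuild window,
# assuming independent failures (same-batch correlation breaks this assumption).
afr = 0.02            # assumed 2% annual failure rate per drive
rebuild_hours = 89    # assumed: full 32 TB rewrite at ~100 MB/s
survivors = 7         # assumed: 8-drive array minus the failed drive

p_one = afr * rebuild_hours / 8760    # per-drive failure chance in the window
p_any = 1 - (1 - p_one) ** survivors  # at least one survivor fails
print(f"{p_any:.2%}")                 # ≈ 0.14% under these assumptions
```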
Which is why SANs have hot spares that can be allocated instantly on failure. And you should use a RAID level with enough redundancy to meet your reliability needs. And RAID is not backup, you should have backups too.
Also why you should schedule periodic parity scrubs: that way the "extra load of a rebuild" is exercised regularly, and weak drives get found long before a rebuild is needed.
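On Linux software RAID (md), a scrub can be triggered by hand through sysfs; `md0` here is a placeholder for your array. Many distros already ship a monthly cron job or systemd timer that does exactly this:

```shell
# Start a full parity/consistency check on the md0 array (needs root)
echo check > /sys/block/md0/md/sync_action

# Watch scrub progress
cat /proc/mdstat
```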
It's more likely if you bought all the drives from the same store (since that increases the likelihood that they're from the same batch), so you should make sure that you buy them from different stores.
Drives like this are hermetically sealed, with an inert gas (typically helium) inside. Even the presence of oxygen and nitrogen molecules can compromise the drive. If dust is getting to the moving parts of your hard drive, it's toast no matter where it's installed.
My NAS uses a pair of SAS drives, and they make noises at boot up that would be concerning in a desktop. They're quite obnoxious. But I keep them in part of the house where they don't bother me.
Well, I have no experience with these particular drives, but they do seem to have 11 platters, which is beyond insane as far as I'm concerned. More platters means more moving parts, more friction, and more noise (all other things being equal).
I think it's kinda not possible for any storage to be quieter than an SSD, since solid state stuff doesn't make noise at all - unless there's a fan attached to cool it down.
Though the ones in the article are not SSDs, but plain old magnetic HDDs.
I've found that the only thing you can hear through a closed basement door is noisy high-speed fans, e.g. from used 19" servers; disks produce much less noise.
There is already a Samsung 8 TB SSD being sold on Amazon. Buying 4 of those will be far cheaper than this monstrosity. And it will be silent, actually useful as a home server, and much faster too.
FWIW, in July last year Amazon was selling these for as low as $320. My biggest fear with a 26 TB HDD is getting all 26 TB of data off of it, if I needed to, without the drive dying.