First off, I'd normally ask this question on a datahoarding forum, but this one is way more active than those and I'm sure there's considerable overlap.
I have a Synology DS218+ that I got in 2020, so it's a 6-year-old model by now but only 4 years into its service life. There's absolutely no reason to believe it'll start failing anytime soon; it's been completely reliable. I'm just succession planning.
I'm looking ahead to my next NAS, wondering whether I should get the new version of the same model (whenever that arrives) or expand to a 4-bay.
The drives are 14 TB shucked Easystores, for what it's worth, and not even half full.
The NAS itself will likely outlive the drives inside it; that's just the nature of things. Hard drives follow a sort of curve when it comes to failure: most fail either almost immediately or after a few tens of thousands of hours of runtime. Other factors include the drives running too hot, the number of hard power events, and vibration.
Lots of info on drive failure can be found on Backblaze's Drive Stats page. I know you have shucked drives; these are likely white-label WD Red drives, which are close to the 14TB WD drives Backblaze uses.
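For reference, the AFR figure in those Backblaze reports is just failures per drive-year of runtime, expressed as a percent. A quick sketch (the fleet numbers below are made up for illustration, not real Backblaze data):

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Backblaze-style AFR: failures per drive-year, as a percent."""
    drive_years = drive_days / 365
    return 100 * failures / drive_years

# Hypothetical fleet: 1,000 drives running a full year, 14 failures.
print(annualized_failure_rate(14, 1000 * 365))  # → 1.4
```

Counting drive-days rather than drives is what lets them compare models that were deployed at different times.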
But 10,000 hours seems on the low side. I have 4 Toshiba 10TB datacenter disks with 40k hours and expect them to do at least 80k, but you can have bad luck and one fails prematurely.
If it's within warranty, you can get it replaced; if not, tough luck.
Always have stuff protected in RAID/ZFS and backed up if you value the data or don't want a weekend ruined because you now have to reinstall.
And with big disks, consider having more disks as redundancy, since another might hit a bit error while rebuilding the failed one. (Check the unrecoverable read error rate in the disk's datasheet.)
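To put rough numbers on that rebuild risk: consumer datasheets often quote an unrecoverable read error (URE) rate of 1 in 1e14 bits, and a rebuild reads every surviving drive end to end. A sketch of the worst-case arithmetic (real drives usually do far better than the spec-sheet rate, so treat this as pessimistic):

```python
import math

def rebuild_ure_probability(drive_bytes: float, ure_per_bit: float = 1e-14) -> float:
    """P(at least one unrecoverable read error) while reading a whole drive
    once, assuming independent bit errors at the datasheet's worst-case rate."""
    bits = drive_bytes * 8
    return 1 - math.exp(-bits * ure_per_bit)

# Reading one 14 TB drive end to end during a rebuild:
print(round(rebuild_ure_probability(14e12), 2))  # → 0.67
```

This pessimistic math is why dual redundancy (RAID6/SHR-2) is commonly suggested once individual drives get this large.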
I've got a 12TB Seagate Desktop Expansion which contains a Seagate IronWolf drive. According to the link you shared, I'd better look for a backup drive ASAP.
Edit: the ones in the Backblaze reference are all Exos models, but I still have no profound trust in Seagate.
Yes, according to their historical data Seagate drives appear to be on the higher side of failure rates. I've also experienced it myself, my Seagate drives have almost always failed before my WD drives.
My Synology NAS was running for 6+ years before I replaced it last year. And the only reason I replaced it was to upgrade the hardware to be able to act more like a home server running some more demanding services.
I've since given the NAS away to a friend who is still running it... As always back up your data just in case, but I wouldn't expect the hardware to crap out on you too soon
As others have said, you should really be careful treating your RAID as a backup. I for one do all of my backing up on Playstation 1 memory cards... I had to buy a couple storage containers to store them all, but I guess that technically counts as off-site
Same here. Last year I upgraded from a DS214+ and it was still running great. The only reason I upgraded to the DS220+ was so I could run docker containers.
I sold it for $200 which meant I ran it for 9 years for about $57 a year (CAD). I'm hoping to get even better bang for the buck with the new unit.
I still have my DS1812 which I bought for ~$1200 when it came out in 2012/2013 as well.
It only runs NFS/SMB storage services. Still an amazing unit. It has been through 7 house moves, 2 complete failures, and about 4 RAID rebuilds.
Considering it's 2024 now and it's been running for nearly 12 years, it's the reason I recommend paying out the arse for Synology hardware even if it is overpriced. I still get security patches, and I got a recent (2 years ago?) OS upgrade.
It can still run the occasional Docker container for when I need to get the latest ISOs or for running rclone for backups.
If I bought a new unit I would be happy with it for another 10+ years, no doubt, as long as I put as much RAM in it as possible, because the 3GB of RAM in this unit is what really limits it, aside from the now-slow CPU.
I have an 1813+ and it's also been a champ. Unless the computer inside it dies, I will continue to use it indefinitely.
However, I have offloaded all server duties other than storage to other devices. I don't ask my Syno to run Plex or anything else besides DNS. As long as those SMB shares stay up, it's doing what it needs to do. And gigabit will be fast enough for a long time to come.
Not the batch of WD Red SSDs I got in 2022. 3 of the 4 have failed. I'm assuming the 4th is going to die any day now. Fortunately WD honors their warranties, and only one drive died at a time, so my RAID was able to stay intact.
I feel like I must have gotten 4 from the same bad batch or something. One dying felt like bad luck, but when another died every 3 months it seemed like more than a coincidence. And none of the replaced ones have died, just the original batch.
Imho there's no reason to change or upgrade if your current setup works and you're satisfied with it. Keep your money, you'll see what the market has to offer when you need it.
I bought a Synology DS415+ back in December 2014, so it just turned 9 and it's still kicking. (Even with the C2000 fix.)
Although Synology stopped delivering updates, I'll keep it as long as it does what I need it to. However, my next device will be a TerraMaster that I'll install OMV on. You can't get a NAS with a custom OS in a smaller form factor.
I'd say 6-12 years, maybe including about 1 hard disk failing; I forget what the mean time to failure is for a hard disk. And in a decade I'll probably have all the disks filled to the brim, my usage pattern will have changed, and a new unit will have 10x the network speed, 4x the storage, and be way faster in every aspect.
I had my DS213+ for a bit over 10 years, with no failures of any kind, just a bit of drive swapping for more storage space. Finally upgraded last year to a 4-bay with better performance and Docker support, but I would have kept using the old one otherwise.
What do you mean by "last"? I know it's a common term, but when you dig deeper, you'll see why it doesn't really make sense. For this discussion, I'm assuming you mean "How long until I need to buy a newer model?"
First, consider the reasons you might have for buying a newer model. The first is hardware failure. Second is obsolescence: the device cannot keep up with newer needs, such as speed, capacity, or interface. The third is losing security updates/support from the vendor.
The last one is easy enough to check from a vendor's product lifecycle page. I'll assume this isn't what you're concerned about. Up next is obsolescence. Obviously it meets your needs today, but only you can predict your future needs. Maybe it's fine for a single 1080p* stream today, and that's all you use it for. It will continue to serve that purpose forever. But if your household grows and suddenly you need 3x 4k streams, it might not keep up. Or maybe you'll only need that single 1080p stream for the next 20 years. Maybe you'll hit drive capacity limits, or maybe you won't. We can't answer any of that for you.
That leaves hardware failure. But electronics don't wear out (mechanical drives do, to an extent, but you asked about the NAS). They don't really have an expected life span in the same way as a car battery or an appliance. Instead, they have a failure rate. XX% fail in a given time frame. Even if we assume a bathtub curve (which is a very bold assumption), the point where failures climb is going to be very unclear. The odds are actually very good that it will keep working well beyond that.
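That "failure rate, not lifespan" point can be put into a toy model: assume a constant annualized failure rate (the flat bottom of the bathtub curve) and compute survival odds. The 2% AFR below is purely an illustrative assumption, not a measured figure for any NAS:

```python
import math

def survival_probability(afr_percent: float, years: float) -> float:
    """Chance a device is still alive after `years`, assuming a constant
    annualized failure rate (i.e. no wear-out phase)."""
    hazard = -math.log(1 - afr_percent / 100)  # per-year hazard matching the AFR
    return math.exp(-hazard * years)

# At an assumed 2% AFR, odds of surviving a full decade:
print(round(survival_probability(2.0, 10), 2))  # → 0.82
```

Note there's no cliff anywhere in this model: the device is just as likely to die in year 1 as in year 10, which is exactly the "could fail any day, could outlast all of us" situation.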
Also of note, very few electronics fail before they are obsolete.
*Technically it's about bitrate, but let's just ignore that detail for simplicity. We'll assume that 4k uses 4x as much space as 1080p
TL;DR: It could fail at any moment from the day it was manufactured, or it could outlast all of us. Prepare for that scenario with a decent backup strategy, but don't actually replace it until needed.
What about building your own NAS? If you have the time/skills you can pick a very small mini-ITX motherboard and a NAS case and build your own. This way you can run open-source software and it will have more features/expandability and potentially last way longer.
Do you have any examples of a NAS case? I'm looking at possibilities for redoing my setup which is currently an old Ryzen PC stuffed with 9 or so drives running Windows, SnapRAID, and DrivePool. I'd love to scale it down horsepower wise to make it more efficient and reliable (Windows sucks for this) along with separating it from my general PC usage. Some sort of 8-bay drive enclosure that can directly connect to a thin client PC, and handle different capacity drives (6TB-14TB) would be ideal.
Not always, and that's the reason why I would never buy their hardware. There are some older models that can be hacked to install a generic Linux, but the majority can't. It's just easier to get something truly open.
I just recently upgraded from my 2 bay NAS, simply because I ran out of storage and attaching more drives via USB just seemed silly at this point (I was already at 5).
I now have a DS2422+ 12-bay with 6x 20TB drives. And I very much expect the NAS to last past 10 years. HDDs can be added and replaced if you have RAID set up, which isn't very feasible in a 2-bay NAS.
I’m still running a DS414 filled with WD Red drives. I’ve only swapped out one of the drives as it was starting to have issues. I’ve considered upgrading for more features (Docker, Home Assistant etc) but can’t justify the expense just for “nice to have” instead of “need”. Realistically it only stores Linux ISOs that I get with Download Station.
I'd be more concerned about the longevity of the drives than any NAS itself.
I moved from commercial NAS appliances to a self-built one. It turns out they cost about the same (depending on the hardware configuration you end up choosing, of course), but are MUCH better performance-wise.
Using Unraid is nice because you can keep replacing drives with larger ones as you need, or adding new drives to the array. It's very flexible that way, despite some of its shortcomings.
Still running a DS210+ I bought second hand about 8 years ago... It hosts a website and downloads torrents, not much else. Think it's about time I upgraded.
Both the DS220+ and DS224+ have been a pleasure to set up, but I wouldn't replace your DS218+ just because. Just make sure your RAID is healthy, and your backup too.
An alternative to a standalone NAS is to set up your own little ITX server. Only if you enjoy tinkering though; Synology is definitely easier.
At home I'm currently running Server/NAS/Gaming PC all in one.
It's a Debian 12 KVM/QEMU host with an M.2 NVMe disk for the host OS + VM OSes and 2x16TB Seagate Exos disks in RAID1 for data storage. The rest of the hardware is a B650 ITX motherboard, an AMD Ryzen 7600 CPU, 2x32GB DDR5 RAM, an AMD Radeon 6650 XT, and a Seasonic FOCUS PX 750W PSU.
With my KVM/QEMU host, Game Server and Jellyfin Server online it eats about 60W-65W, so not that bad.
The GPU and a USB controller are passed through with VFIO to a virtual Fedora machine that I use as a desktop and gaming client.
Just make sure to have a sound-dampening PC case so you can keep the servers online without being bothered. The GPU goes silent when the gaming VM is off.
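If anyone wants to try a VFIO setup like this, the first sanity check is that the GPU and USB controller sit in their own IOMMU groups. A small sketch that walks the standard sysfs layout (it assumes IOMMU support is enabled in firmware and on the kernel command line, e.g. `amd_iommu=on`; otherwise the directory is empty):

```python
from pathlib import Path

def iommu_groups(sysfs: str = "/sys/kernel/iommu_groups") -> dict[str, list[str]]:
    """Map each IOMMU group number to the PCI addresses it contains,
    using the standard sysfs layout .../iommu_groups/<group>/devices/<addr>."""
    groups: dict[str, list[str]] = {}
    for dev in Path(sysfs).glob("*/devices/*"):
        groups.setdefault(dev.parts[-3], []).append(dev.name)
    return groups

if __name__ == "__main__":
    for group, devices in sorted(iommu_groups().items(), key=lambda kv: int(kv[0])):
        print(f"Group {group}: {', '.join(sorted(devices))}")
```

A device can only be handed to VFIO cleanly along with everything else in its group, so if the GPU shares a group with, say, the SATA controller, passthrough gets messy.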
I had a DS212j for about 10 years before I replaced it, and it was working just fine, so I sold it on eBay. It just couldn't keep up with the Plex transcoding I was using it for. Heck, for 7 of those years it was running on a shelf in my garage getting covered in dust and spiderwebs.
I imagine a + model will last even longer than that.
I had a Helios that literally just started having trouble powering SATA disks a few days ago. I got it in 2019 I think, so only 5 years of life.
I use Linux LVM and either ext4 (for older volumes) or btrfs (for newer volumes, because I want the checksums across the data) so in principle I could throw the disks in a PC as a temporary solution.
I have put the disks in SATA-to-USB 2.0 caddies, and the Helios 4 kind of still works, but I'm ordering a couple of Orange Pi 5s with USB 3.0 disk enclosures to replace it. It was kind of time anyway, since Nextcloud has dropped support for 32-bit CPUs.
I've had my Synology DS215 for almost ten years. I've recently thought about replacing it, but I don't really see the benefit. I'll just replace the drives some time.
I built my 10ish TB (usable after raidz2) system in 2015. I did some drive swaps but I think it might have actually been a shoddy power cable that was the problem and the disks may have been fine.
It'll last as long as it's useful to you, barring any disasters. I've got an HP Gen8 MicroServer that I've been running as a FreeNAS/TrueNAS box for 8 or 9 years now, and I'm only thinking of replacing it because I need more performance than its CPU can give.
I have the same model as you and I also wonder when it will explode lol (mostly because I have it in my room and can hear when it is struggling).
I have it with lots of docker containers (I can't help it, it is my only server) and the drives never cease to spin.
I actually don't recall how long I've had it, but it must be about as long as yours...
Just recently I started cleaning up containers and such, mostly because I did a fuck-up: I deleted all my unused volumes with Portainer, which, strangely enough, got rid of Portainer's own volume, so I had to recreate all my stacks/composes in Portainer one by one (and cleaned up some stuff in the process).
I've got a DS416 that I've had for almost a decade, and it's still going strong. The worst thing I've had to deal with is a shucked Easystore drive that died, but the other 3 are running fine.
It's running RAID (SHR, I think), so it just screamed at me for a while until I figured out what was going on. Once a new disk was obtained, I was able to power down, swap disks, and resilver the array with no data loss. I turned off a number of the services I run while I was down a disk to make sure I didn't put too much stress on the array, but otherwise I was still able to run my NAS/media servers without much issue.
People have tested them long term at this point. Outside of a few rare exceptions, there's no noticeable difference in reliability between shucked drives and 'normal' drives. They're the same stock, just rebranded, and priced cheaper because they're marketed primarily at retail buyers as opposed to enthusiast/enterprise customers who are willing to pay more.