Personally, I'll be trying to transform my server, which is currently in a Fractal R5 case, into a small-ish homelab rack combined with all my network equipment. It will require a complete relocation of all the network equipment in the house, as well as the cables, so it will be a bit of a project. I'm also on the lookout for a good quality rack, so let me know if you have any recs. Still unsure if I want to do a full-width rack or a mini. Part of me really wants the UDM Pro from Unifi...
What are your goals and the things you want to accomplish during 2025?
I want to move my whole server to NixOS. It's gotten to the point where I have no idea where all the Ubuntu config files went, or which half of it is handled via Docker vs. bare metal. I hope this will allow me to set up proper backups as well, and maybe get better at Nix!
I started using the VM feature a few days ago, but it's tricky to work with for now; perhaps I haven't found the right workflow.
NixOS and Restic are an amazing combination, full backups in 20 lines of config. This article was my best find for this: https://francis.begyn.be/blog/nixos-restic-backups . Tip: you can easily write systemd services to trigger each software's preferred backup strategy and simply schedule them to run before the Restic backup - I have them all copy the backups to one folder that then Restic backs up, works great for me!
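To make that concrete, the shape of it in my config is roughly this (the service name, staging path and repo URL are made up, and I'm going from memory on the exact module options, so double-check against the NixOS restic module docs):

```nix
{ pkgs, ... }: {
  # One example "preferred backup strategy" service: dump Postgres into the
  # staging folder that restic then backs up. (Staging dir must exist and be
  # writable by this user.)
  systemd.services.dump-postgres = {
    serviceConfig = {
      Type = "oneshot";
      User = "postgres";
    };
    # Order it before the unit the restic module generates below.
    wantedBy = [ "restic-backups-daily.service" ];
    before = [ "restic-backups-daily.service" ];
    script = ''
      ${pkgs.postgresql}/bin/pg_dumpall > /var/backup/staging/postgres.sql
    '';
  };

  services.restic.backups.daily = {
    repository = "sftp:backup-user@backup-host:/srv/restic-repo"; # hypothetical
    passwordFile = "/etc/secrets/restic-password";
    paths = [ "/var/backup/staging" ];
    initialize = true;
    timerConfig.OnCalendar = "03:00";
  };
}
```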
Hope this helps a bit. I found the effort to be very worth it, but it took me almost half a year to get comfortable with it.
Thank you! It definitely does, I will be using that Restic article for sure!
I actually use NixOS on my main laptop, which I found via Vimjoyer's videos. It's great, though I wish documentation for more advanced usage was more readily available. I've started building the server; currently my biggest roadblock is testing the infrastructure without going live (I made the flake generate a VM for now, but it takes a long time to rebuild on every edit and I can't even get SSH working) and figuring out how I'll eventually install it with minimal downtime.
Is there a reason (or reasons) you're doing NixOS over something like Proxmox? A friend of mine has been moving his lab over to Proxmox containers, so I was thinking of doing the same thing, but I'm curious about NixOS since I've seen a few people mention it. Thanks!
The entirety of Nix configuration is in somewhere between 1 and 3 files depending on how you like your poison.
It's immutable, so stuff can't just change on you.
Every change you make is stored into a new configuration and you can roll back to any configuration you've ever done with a reboot, so it's kind of hard to brick it.
Apps can't just go in and modify your users or your host table or any of the other configs so it's got an extra layer of security. But then, the package system has more packages than God and is maintained by a million randos with very little oversight.
It has some substantially neat tricks. I moved from one box to another by just doing a fresh install, moving its three configuration files and letting syncthing rebuild my home directory from my other box.
I think, if I were going to use Nix as a home server, I'd just install all of the services directly on the OS. Updates and configurations for everything would be maintained by Nix itself.
Nix is great if you're fine with the packages and configuration they provide. If you want other stuff or features not provided, it is a giant pain in the ass and not worth it.
And you'll get "oh, just write a flake" or "just write a package file for it".
I think what I need to do on my homelab this year is set up off-site backups. I currently only back up to separate drives and machines inside my own home. I need to set up something at my parents' place to take weekly and monthly backups.
Other than that, my media server needs a bigger storage drive.
Hetzner Storage Box is super cheap and works with rclone. They have a web interface for configuring regular ZFS snapshots too, so you don't have to worry about accidental deletions/ransomware.
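If you go that route, rclone just treats it as an SFTP remote; roughly something like this (the hostname and username are placeholders, and double-check which port Hetzner wants for SFTP in their docs):

```bash
# Create an SFTP remote pointing at the storage box (placeholder credentials).
rclone config create storagebox sftp \
    host u123456.your-storagebox.de user u123456

# Push local backups up to the box.
rclone sync /srv/backups storagebox:backups
```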
True. I'd have to get the €11/month box for it though. It's cheaper to set up one of my Raspberry Pis with an external drive I already have. I just need to figure out the best way to transfer and deduplicate the data. :)
I did this recently. Opendrive is free up to 5 GB and works with rclone. All I'm backing up is the config and data needed to recreate my containerized services. I've even had to recreate them from the backup once.
Moving to a rack is nice, I love my rack. If you're in or near a city, I suggest keeping an eye on Craigslist and eBay (search by distance, nearest first, and lowball the ones that have been sitting for months), because it's not uncommon for nice racks to go real cheap as long as you come get them. I got my rack realllll cheap ($40, 42U, fully enclosed with a massive PDU) because it's a 90s IBM rack and it's welded steel, so it's like 450 lbs. Moving it was a nightmare, but it's real sturdy and I'm never moving it again now that it's in my basement.
For my goals, in the short term I have to replace a SAS cable that caused a CRC error on one drive; it only happened once per the SMART data, but I still want to get that done ASAP. I also have another drive that's beginning to show some SMART issues; it's on the same SAS cable, so it may be related because the errors didn't increase (they were all related to an unclean shutdown, which confuses things), but it's old anyway, so better safe than sorry I guess.
Medium term, I want to finally upgrade my UPS. The one I have now is not a rack mount, which is part of what led to the unclean shutdown. It's also a bit undersized. I have a generator for my house so I don't need something massive, but the one I have is 450VA and several years old, so with the tired battery I only get about 5 minutes of runtime. That's more than enough to cover the transfer from power cutting out to generator power, but I want something that's a bit more reliable in case of generator failure. This is pricey though, because my array is pretty huge, so it'll probably be held off unless I find a good deal on a dead one that has cheap batteries available.
I also want to put the rack on its own circuit. This is something I should do ASAP because it's cheap; I just gotta find time and rearrange my panel a bit because it's pretty full. This would fix the other part of the unclean shutdown, as the outlet would be in a much better location and I could also install a locking outlet.
It would also be nice to pick up a super cheap monitor locally, like something for $15-20 from a pawn shop or Craigslist, for the rack. Earlier this year I had nginx crash on my server and the web UI became inaccessible; I had to drag my nice and kind of large desktop monitor down to the basement to solve the issue. It would be nice to just have a shitty small monitor on the rack for that.
Speaking of nginx, I keep meaning to set up some kind of reverse proxy or mDNS for all my Docker containers so that I can just use whatever.whatever instead of ipaddress:3993, which makes my password managers barf, but I'll probably just be lazy and edit my hosts file.
Longer term, I want to add a secondary low-power server that can run something like pfSense to handle my routing, then turn my current wireless routers into access points because they kind of suck as routers.
And of course the array could always be bigger, especially if drive prices fall
Realistically, I will probably only do the drive and cable replacement, the circuit thing since that'll be like $40 and a half hour of work, the monitor if I can find one, and maybe the hosts file thing. If I run into cash (unlikely) or a crazy deal (you never know), the UPS would be my next priority, but there are a million other things going on in life (deductibles just reset for health insurance, hooray).
"I'm never moving it again...". As a larger guy that owns a pickup truck, I wish I had a nickel for everytime I heard that about a big rack I help move. (Or a baby grand piano, pool table, or gun safe) :)
For the nginx reverse proxy - that's how I ran things prior to moving to microk8s. If you want I can dig out some config examples. The trick for me was to set up host based stanzas, then update my internal DNS to have A records for each docker service pointing to the same docker host.
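Roughly, each host-based stanza looked something like this (the hostname, IP, and port here are made up; one server block per service, all pointing at the same Docker host):

```nginx
server {
    listen 80;
    server_name jellyfin.home.lan;   # internal DNS A record -> the docker host

    location / {
        proxy_pass http://192.168.1.50:8096;   # docker host : published container port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```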
With Kubes + external-dns + nginx ingress, I can just do a deployment/service/ingress and things automatically work now.
Nginx is pretty simple to run as a reverse proxy. Caddy is even easier but not as scalable.
HAProxy looks intimidating at first but it's pretty easy and very scalable and performant. Wendell from Level1Techs has a nice writeup on their forums
From a hardware perspective, I need a NAS. I got myself some Acer OEM box that's not too shit; I just need a case and some drives (I don't wanna just make a stack of drives on top of the stack of old OEMs I call a homelab).
Am getting Starlink installed cos shitty rural Aussie internet is shit. So gonna have to do some fucking around to make that work.
Would like some local media recommendation algorithm (can probs just write some code to dump Jellyfin into OpenWebUI and task an LLM).
Gotta set up an image-gen AI and hook that up to OpenWebUI.
Gotta set up an email server so Authelia notifications aren't just dumped to a file.
Ohh, and I've got literally no backups of anything (well, except my Docker Composes, which are on git).
From a hardware perspective, I need more storage. I'm thinking I'll probably end up with a second Synology NAS unit before the end of the year, with 4 hard drives at whatever the reasonable price-vs-size point is at the time I do it (likely 12-14TB drives at this stage). I bought drives 2 at a time last time, so I'm running two RAID1 pairs right now on the existing unit; adding 4 new drives at once to the homelab will let me move all that content to the new drives and reformat the existing ones into a RAID5 array, gaining an extra 12TB of storage.
The one I already have does support adding the 5-drive expansion bay, but I figure that with a second NAS I can move some of my Docker instances currently running on a dedicated laptop onto the second NAS, which takes one computer out of the setup as well.
Maintenance-wise, I've just finished the 2024 round of maintenance stuff that I do each year. This year it was going through my password vault and making sure everything was synced up, had complex passwords, had two-factor enabled where applicable, etc., as well as setting up unique email addresses for every service I'm using (they just forward to the same inbox) to help me track who's been selling my info. I've already caught a local fast food outlet from that.
I've also rotated all my SSH keys, made sure they were all upgraded from RSA to Ed25519, set up unique keys for the three devices I regularly use so I can revoke one individually if required, made sure all my hardware was running the latest updates (the RPi running my Pi-hole instance was still on Buster, so I had to get that updated before I could even update Pi-hole), etc.
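For anyone doing the same, the per-device rotation is basically just this (hostnames here are placeholders):

```bash
# Generate one Ed25519 key per device, with a comment so it's obvious which is which.
ssh-keygen -t ed25519 -C "laptop-2025" -f ~/.ssh/id_ed25519

# Push the new public key to each server, then remove the old RSA key
# from ~/.ssh/authorized_keys on that server.
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server
```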
Also swapped my Mullvad connection on my gateway over to WireGuard instead of OpenVPN, since they're dropping OpenVPN support later this year.
Honestly, I'd love to invest in some sort of rack mounting for home; it's something I should look into some more, but right now I just have a whole section of the wardrobes in my study for equipment and tech storage. It's working for now, although I worry about it in summer since there's not a massive amount of heat dissipation in there. This weekend is supposed to be close to 40 degrees Celsius both days 🥵
I want to replace my single-drive QNAP NAS with a DIY one. It still works, but I also want to redo my backup process, and it would be a good point to start.
To start - moving services from bare metal to rootless Podman containers running via quadlets. It's something I've had in mind for a while, but I keep second-guessing the distro choice. My requirements: a long-ish release cadence, systemd-networkd and a recent Podman version in the native repos, well supported, and not Ubuntu.
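For reference, a rootless quadlet is just a small unit file that Podman turns into a regular systemd service; something like this (image, port and path are placeholders, going from memory):

```ini
# ~/.config/containers/systemd/uptime-kuma.container  (rootless user unit)
[Unit]
Description=Uptime Kuma via quadlet

[Container]
Image=docker.io/louislam/uptime-kuma:1
PublishPort=3001:3001
Volume=/home/me/uptime-kuma:/app/data

[Install]
WantedBy=default.target
```

Then `systemctl --user daemon-reload` and `systemctl --user start uptime-kuma.service` should pick it up.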
So far openSUSE Leap seems like the winner. A testing machine is up to install everything, write some deployment scripts, and decide on a storage layout and partitioning scheme.
If anyone has another distro to recommend that checks these boxes let me know!
I like rolling release for the desktop, but only want critical patches in any given month for this server, and a major upgrade no more than every 3-4 years. Or an immutable server distro. But it doesn't seem like networkd is an option for the ones I've looked at (Fedora CoreOS, openSUSE MicroOS), and I am not sure if I want to figure out Ignition/Combustion right now.
Next project - VLANs on Mikrotik.
OP - Navepoint makes good racks for reasonable money. I have a Pro series 9U from them and it went together without any problems. It's on the wall with a pretty big UPS in it.
Transition my main host to Linux, maybe move Plex to Jellyfin, set up a switch (I have an RS900 and access to acquire a free CS2960), and a UPS or two. I may also wind up getting my hands on some PoE cameras and APs. Run some cable too.
While not really for my hosting, I want to upgrade the Wi-Fi speeds in my home, currently running an eero setup that provides good coverage, but the speed seems poor when transferring large files around the home.
As a networking noob: what are the benefits to having/using an IPv6 stack? I realize that eventually we all have to move to IPv6, but any point in being early on it?
IPv6 is pretty much identical to IPv4 in terms of functionality.
The biggest difference is that there is no more need for NAT with IPv6 because of the sheer amount of IPv6 addresses available. Every device in an IPv6 network gets their own public IP.
For example: I get 1 public IPv4 address from my ISP, but 4,722,366,482,869,645,213,696 IPv6 addresses. That's a number I can't even pronounce, and it's just for me.
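(That count works out to a /56 prefix; the arithmetic, if anyone wants to check it:)

```python
# Addresses in an IPv6 prefix = 2 ** (128 - prefix_length)
print(2 ** (128 - 64))  # a single /64 subnet: 18,446,744,073,709,551,616
print(2 ** (128 - 56))  # a /56 delegation:    4,722,366,482,869,645,213,696
```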
There are a few advantages that this brings:
Any client in the network can get a fresh IP every day to reduce tracking
It is pretty much impossible to run a full network scan on this amount of IP addresses
Every device can expose their own service on their own IP (For example: You can run multiple web servers on the same port without a reverse proxy or multiple people can host their own game server on the same port)
There are some more smaller changes that improve performance compared to IPv4, but it's minimal.
Btw: does anybody know what bad things actually happen if there is no metal cage blocking all the radio emissions?
Noise happens. It could be no problem, or it could hurt your WiFi or mobile data connections, or maybe raise a neighbor's ham radio noise floor. I saw this recently when setting up a Pi to run BirdNET-Pi: the USB 3 connection to an SSD caused enough noise in the 2.4GHz band that the onboard WiFi radio could only connect on the 5GHz band.
I am doing exactly the same as the OP. In addition to that, I will unify my Beelink mini PC Proxmox server and our old Intel Atom NAS into one rack server with an AMD EPYC, Proxmox, and TrueNAS in a VM.
I sure hope our landlord and the internet operator can agree on the operator finally bringing fiber to all the apartments. Then I would have a fast enough uplink to my homelab.
Yeah... So I'm in Berlin, and in Germany the internet operators are finally building fiber everywhere. The provider laying the fiber to our street is Deutsche Telekom, and they promise to pay for everything - laying the fiber, bringing it to our house and bringing it to every apartment - in exchange for a two-year monopoly on fiber internet, after which it's open to competition over their cables. What needs to happen next is for our landlord (a Swiss company) and the house management company to agree to let these guys come in, put little fiber dividers on every floor and drill holes in the walls so we get the fiber cable into our apartment.
Of course, this being Germany, they are very slow in agreeing to that; we might need to go to court, and for sure we need to talk to the neighbors who own their apartments to push them a bit. I'd expect us to get the connection maybe before the end of 2025. But eventually it will happen...
Rebuilding my main router to work with the 10GbE fiber that recently became available here. Although it is a tad expensive, so I am not actually sure yet if I will upgrade my contract.
Top priority for me would be a strong backup mechanism, and by that I mean something that is tested. Currently I have restic in place, but I don't even know whether the backups would be okay in case of a disaster.
And considering my lack of time, I would be happy with just that.
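If it helps, a low-effort way to sanity-check an existing restic repo (assuming the repository and credentials are already configured via env vars or flags) is roughly:

```bash
restic check                                       # verify repo structure/metadata
restic check --read-data-subset=10%               # spot-check a sample of the actual data
restic restore latest --target /tmp/restore-test  # trial restore to a scratch directory
```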
Nothing fancy, but I found an old RPi 3 and want to self-host Vaultwarden and Piped on that thing, to give my parents a way to watch YouTube without those nasty ads and a proper, easy way to store their passwords (over a WireGuard tunnel).
Also, if the universe aligns, buy an N100 (or 200?) to host my own router/switch setup and finally take advantage of my 5Gbit optic fiber 🫤. I still need to figure out how to get a WiFi AP working with an N100...
Not much, but I have a lot of other things to figure out first, mostly software-wise :).
Buy a NAS, sell my old gaming PC (acting as one node in my Proxmox cluster of two), buy a second mini PC, and learn more about backups and fallbacks and all that fun stuff.
Get VLANs working, with proper IoT network isolation, and Nextcloud as my primary document storage. If that first one didn't bring down my homelab every time I try, I'd be more inclined.
VLANs for the win! It was a difficult process for me too when I first set up my Omada stack, but I got there in the end. Very nice to have it sorted. While you're at it, you might want to look into having a separate WiFi for guests! I at least have a very limited guest WiFi, with a QR code guests can scan when they come into my house - a neat little thing for them, plus I don't have to worry about their devices on my network.
Add an NVMe cache to my server and upgrade RAM if pricing permits.
From the software side there are a lot of open feature requests I keep adding to my backlog, like setting up a mail archive, reconfiguring my network (separate IoT devices into separate VLANs), maybe reconfigure some of my containers, …
upgrade to microOS from Leap, without violating step 1
reduce the physical footprint of my server (currently in a massive case, would like to go to mini-ITX)
My city is also planning to roll out fiber, so upgrading my network may become a priority if that happens. My current ISP is limited to 100Mbps, but I should be able to get 10Gbit once they hook me up (though I'll probably stop well short of that).
I want to improve my notifications. By that I mean emails coming from the server when updates are available, when something happens during my rsync backup routines, or just when they are completed, and so on. Right now I don't really know when something is happening, only when the server stops working altogether.
I just got my notification system up and running yesterday, actually! Although I went with NTFY. Because I use Proton, I cannot use that for notifications, plus I'd like to keep my homelab separate. NTFY is quite well documented and works with almost any service you throw at it - highly recommend it! ✨
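Getting services to talk to it is usually just a curl call; something like this (the topic URL here is a made-up self-hosted one):

```bash
# Publish a plain message to a topic.
curl -d "rsync backup finished" https://ntfy.example.com/homelab

# With a title and priority, e.g. from a backup script's failure branch.
curl -H "Title: Backup failed" -H "Priority: high" \
     -d "rsync exited non-zero" https://ntfy.example.com/homelab
```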
Any reason to pick NTFY over Gotify? I've been using Gotify for quite a while with good luck, but I would switch if there was a compelling reason.
I'm still in the middle of a K8s migration. It's overkill for a home user, but I want the upskilling.
I've got a QNAP NAS running self-managed Linux for storage, and an MS-01 with an RTX A2000 for compute. They're connected over 10Gb SFP+. I'm more than halfway done, especially considering I mostly know what I'm doing now.
I still need to figure out the idiomatically correct way to schedule pods with their storage, but I got GPU workloads going recently. Next up is migrating the last of the docker-compose stuff off the storage node.
Finish my migration to my local Kubernetes cluster. Tired of running a mix of VMs, Docker, and bare metal. I've got it set up with a few things running; just have to power through.
I also need to bump the drive size in my NAS as I’m running low and want to leverage it more, not less. (Pods use PVs hosted on the NAS over NFS or iSCSI).
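For the NFS side, the static PV/PVC pairs are roughly this shape (names, size, and the NAS address are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas-media
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10      # NAS IP (placeholder)
    path: /volume1/media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-media-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the static PV above, not a dynamic class
  volumeName: nas-media
  resources:
    requests:
      storage: 500Gi
```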
And get my offsite backups going again. I had to move this past year and it put a real damper on my goals, so there's a lot of "got the stuff, just have to make it work".
Edit: the UDM Pro is pretty nice. That, a rack and a 2.5G enterprise switch were last year’s acquisitions.
I want to move my 4x SFP+ links from their current MikroTik switch to my new Brocade. Then I'm very strongly debating running both VM and Ceph traffic over the same 10Gbps connections, removing the ugly USB Ethernet dongles from my three Proxmox Lenovo M920q boxes.
After that? Maybe look at finally migrating Vault off my ClusterHat to Kubernetes.
A pain in the ass. Great, but it did not fit my needs. Dependent containers would fail a lot during upgrades. I kept trying to figure it out and then just said, WTF am I doing, this all works fine in Docker.
I got a 600 G3 with the 4560 processor, installed Debian on it and hooked it up to my 4K TV, mainly to run Immich and Stremio.
Immich runs just fine, though I have fallen too far behind on its upgrades and, having little knowledge of Docker, I'm afraid to update Immich. Need to figure that out.
But what disappointed me was that my good quality videos (even the downloaded ones) are choppy (unlike the fluid playback shown in the video above), and I don't really know what I should look into to make it better.
Moving my servers to Arch (EOS), as my trial with one during 2024 was successful - rock solid. Swapping my router to a Unifi Express, as I am switching to an ISP which finally allows me to do so.
I will be moving my entire homelab to a different country; it currently consists of two Kubernetes nodes, a NAS, and various home automation devices.
I will be scaling down gradually, taking cold-storage backups of everything, and I plan to resurrect everything on new hardware once I have moved.