Unfortunately I could not get the same type of case for the NAS as I had for the servers, due to the dimensions, so I have chosen the case shown below.
The case can easily fit 8 disks, which is enough for what I want to do with it. To save money I re-used the disks that were already in the server, and organized them in the following way:
– 2x 3TB disk in raid1, to be used for data from Proxmox
– 2x 4TB disk in raid1, to be used for data storage and backups
– 2x 6TB disk in raid0, to be used for media
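In TrueNAS you create these pools through the web UI, but for the curious, the equivalent layout would look roughly like this on the command line. This is only a sketch: the pool names and device names are placeholders (check your own with `lsblk` or the TrueNAS disk overview), not my actual values.

```shell
# 2x 3TB in a mirror (raid1) for the Proxmox data
zpool create proxmox mirror /dev/sda /dev/sdb

# 2x 4TB in a mirror (raid1) for data storage and backups
zpool create storage mirror /dev/sdc /dev/sdd

# 2x 6TB striped (raid0) for media -- note: no redundancy,
# losing one disk loses the whole pool
zpool create media /dev/sde /dev/sdf
```

Note that a mirror gives you the capacity of one disk (3TB and 4TB here), while the stripe gives you the combined 12TB, which is why it is a fine choice for replaceable media but not for backups.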
The Proxmox pool disks I want to replace with SSDs in the future, to get more speed.
The setup was fairly easy, as was the rebuild, and now I can move the three remaining important services (rsync, Time Machine and Samba) to the NAS.
In the next posts I will cover the individual services in more detail.
So why did I choose to go for Proxmox, you might ask. Of course this is very personal, but I will share my reasoning behind it:
Kubernetes
You cannot ignore it, as it is heavily used all around us, but it was a bit too professional for me, in the sense that it took a lot of work to get it running and I could not get comfortable enough to have it in my home network.
Docker
Docker I was comfortable with, and switching from a single Docker node to a Docker swarm was reasonably easy. But as you go along, you want some applications to have their own IP address on the network instead of the swarm address, or you want the swarm to have a single IP through which clients reach your services, whichever node they run on. We used keepalived and macvlan, which worked, but for some reason it was not stable.
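To give an idea of the macvlan part of that setup: a macvlan network lets a container appear on the LAN with its own address, next to the host. Below is a minimal single-node sketch (our swarm setup was more involved, with keepalived on top for the shared IP); the interface name, subnet, gateway and addresses are examples, not our actual network.

```shell
# Create a macvlan network bridged onto the host's LAN interface
# (replace eth0 and the addresses with your own)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 lan_macvlan

# Run a container with its own fixed address on the LAN
docker run -d --name web --network lan_macvlan \
  --ip 192.168.1.200 nginx
```

One known quirk of macvlan is that the host itself cannot reach the container on that address without an extra macvlan interface on the host, which is part of why setups like this can feel fragile.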
Proxmox
Having looked at a few YouTube channels, I learned about Proxmox and decided to look into it. Following the training on “Learn Linux TV”, I could see the potential for my home network: being able to run both Docker and virtual machines. The possibility to have multiple Proxmox nodes in a cluster and then make applications highly available was also very appealing to me.
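Forming such a cluster is surprisingly little work. As a sketch (run as root on the nodes; the cluster name, IP address and VM ID below are examples):

```shell
# On the first node: create the cluster
pvecm create homelab

# On each additional node: join, pointing at the first node's IP
pvecm add 192.168.1.11

# Verify that all nodes see each other
pvecm status

# Mark a VM as highly available (VM ID 100 is an example),
# so the cluster restarts it on another node if its host fails
ha-manager add vm:100
```

With three nodes the cluster also has proper quorum, which is why three is a common minimum for HA setups.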
Moving the important services . . .
So now that I had three Proxmox nodes in a cluster, it was time to migrate some important applications in order to free up the main server, so it could be rebuilt into a NAS, and in the process free up the swarm nodes, which will move to my son for his own server projects.
– On the main server we had Nginx Proxy Manager running; moved that to Proxmox
– On the swarm we had Vaultwarden running; moved that to Proxmox
– On the swarm we had ownCloud running; moved that to Proxmox
We also had an rsync server, Time Machine and a Samba server running on the swarm, but these can all be moved to TrueNAS, so we need to leave the swarm running a little bit longer . . .
It took some strength to get the cabinet up the stairs and into its final spot, but in the end it was all worth it. I used some L-profiles to support the servers, as this is much cheaper than rails (rails around 70 euros, 2 L-profiles around 12 euros). The cabling is still a bit messy, but I will have this sorted at some point.
The cabinet as shown in the picture uses around 100W, and the temperatures of the servers stay below 30 degrees (with the door closed). I have some stuff running on the servers, but that is covered in later posts.
The power is organized with two power strips, one on each side. Be aware of the space you have in the back . . .
This turned out to be somewhat more difficult than anticipated. It is not as standard as you might think, and there is a lot to consider dimension-wise. It is good to know that you need to watch the depth of the server cases in relation to the depth of the cabinet. Also, there are cabinets that are not 19 inches wide, as I found out when mine arrived and it turned out I had not checked this well enough on the site.
This is the cabinet I have ordered:
With dimensions of 60x60x100cm it is not too big, and with 18U of rack space it has more than enough room for the project:
– UniFi Dream Machine Pro – 1U
– 3x server case (2U each) – 6U
– NAS case – 4U
– Test server – 2U
– Possibly a UniFi router – 1U
Which leaves 4U . . .
The cabinet is very nice. It has wheels and stands; I use two stands at the front to prevent the cabinet from rolling. It has a glass door at the front and a metal door at the back, both lockable with a key. Both side panels are also removable and can be locked with a key.
For the case I have chosen a 2U model from Inter-Tech:
The case has enough room for the hardware, and knowing the hardware will not take much space, it will hopefully not get very warm, with the help of two case fans. For convenience I have chosen a 500W power supply from the same vendor.
The server hardware consists of the following items:
– ASRock B450M-HDV R4.0 motherboard
– AMD Ryzen 5 4600G processor, with cooler
– G.Skill Aegis DDR4 2x 8GB 3200MHz memory
– WD Blue SN750 500GB M.2 SSD storage
The build was not too difficult. It took some fiddling to mount the cooler on the CPU, and I noticed that the power supply has its fan on top, which means it is covered by the lid of the case. This had me worried a little, but it turns out not to be a problem, as the system does not get very warm. The highest temperature I got from the motherboard is below 30 degrees, and the case does not feel hot. It will be interesting to monitor this when there are three of them stacked in the cabinet.
Installing Proxmox was very easy, and I will publish an article on the subject at a later stage.
On the network side: being a big fan of Ubiquiti UniFi products, and knowing there would be a 19 inch rack, I decided to replace my UniFi Dream Router with the UniFi Dream Machine Pro. This boosts the throughput and means I do not have to worry about having multiple networks.
On the server side: three brand new servers with decent capacity, but also budget friendly. The main server to be rebuilt into a NAS.
Optionally: A test server, to be able to play around with new things
First steps . . .
I started out by playing with TrueNAS for the NAS part, and Proxmox for the server part. TrueNAS was fairly easy to install and work with. The only choice to make is between the Core version and the Scale version. In the end I have chosen the Core version, as it is the older of the two.
The followers of this site are probably disappointed by the lack of updates. There are several reasons (I will not bother you with those), but one of them is the struggle with the chosen paths, plus a long-standing desire to have my own server rack.
So we started out with a single server with enough power to run several services and still have power to spare. At the time I was doing this together with my son, which resulted in this beast:
We had this running for a long time, and it was fulfilling our needs and those of the rest of the household. But whenever we were working on the server and things started to go wrong, we ended up with complaints because the services were down. Not a good spot to be in . . .
This is when we started our next endeavor, and decided to buy some second-hand HP workstations to build us a fully redundant Docker swarm.
This was working fine as well, and we had our redundancy in order, but it was not entirely the way I wanted it. In the meantime my son was getting busier with his own projects and spent less time on the swarm. This made me decide to start my own project and at the same time fulfill the wish to have my own server rack.
Keep visiting this site, so you can follow along as this long-held wish materializes . . .