This post is NOT sponsored, the products were bought with my hard-earned money.
I’ve been running an all-SSD storage setup in my home server for a few years and I’ve been happy with it, except for the storage anxiety that comes with running small pools of fast storage. That’s why I started looking at how the hard drive market is doing.
Half of tech YouTube has been sponsored by companies like ServerPartDeals, so they were one of the first places I looked at, but they seem to only operate within the US, and the shipping and taxes destroy any price advantage of ordering from there to Estonia (which is in Europe).
At some point I stumbled upon datablocks.dev, which seems to operate within a similar niche, but in Europe and on a much smaller scale. What caught my eye were their white label hard drive offerings. Their website has a good explanation on the differences between recertified and white label hard drives. In short: white label drives have no branding, have no or very low number of power-on hours, may have small scratches or dents, but are in all other aspects completely functional and usable.
White label drives also have a price advantage compared to branded recertified drives. Here’s one example with 18 TB drives: the recertified one is 16.7% more expensive than the white label one, and the only obvious difference seems to be the sticker on the drive. I strongly suspect that the white label one is also manufactured by Seagate, based on the physical similarities.
The price difference between a recertified and a white label drive. I took some time to think things over and compared the pricing of various drives. The drives were all competitively priced relative to each other, with the price per terabyte hovering around 13 EUR/TB, so it didn’t matter much which drive size you picked; you’d still get a pretty solid deal. It was also a better deal compared to using a WD Elements/My Book drive of the same size.
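To put those numbers in context, here’s a quick back-of-the-envelope calculation. The absolute prices are illustrative, derived from the ~13 EUR/TB figure; only the 16.7% markup comes from the listings:

```python
# Back-of-the-envelope price comparison (illustrative numbers).
# Assumption: a white label 18 TB drive priced at ~13 EUR/TB.
capacity_tb = 18
eur_per_tb = 13.0

white_label = capacity_tb * eur_per_tb  # ~234 EUR
recertified = white_label * 1.167      # 16.7% markup from the listing

print(f"white label: {white_label:.0f} EUR")
print(f"recertified: {recertified:.0f} EUR")
print(f"extra cost:  {recertified - white_label:.0f} EUR")
```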
I decided to go with two 18 TB hard drives. I considered buying the 20 TB or 22 TB capacities, but decided to go with 18 TB because it’s the largest single hard drive that I can easily and quickly buy a replacement for in the form of a WD Elements/My Book drive.
The stock on datablocks.dev is quite volatile: drives are in stock when new batches arrive, but they can also quickly sell out. I saw this live with the 22 TB hard drives: one day there were 35 left, the next day 7, and then only one lone drive.
At the time of writing, the 18 TB model that I bought is out of stock, so my choice to go with a slightly smaller but more easily replaceable one is validated.
Those who have followed my blog for a while will know that I’m a huge fan of all-SSD server builds, especially this one by Jeff Geerling that I still consider building from time to time. If I dislike noise, higher power usage and slower performance, then why did I get the hard drives? It’s simple, really: I now have an actual closet that I can stash my home server in, meaning that noise isn’t that big of a worry, and as long as my home server takes about the same amount of power as my refrigerator or dishwasher, that’s fine. SSD prices still haven’t come down as much as I’d hoped over the years, so the all-SSD build ideas that I have are way outside my budget.
The drives arrived in a reasonable time window. The packaging was adequate, although I was slightly concerned with the cardboard box showing signs of something hitting it hard. The drives were packaged within sealed antistatic bags, and with ample bubble wrap surrounding them.
The cardboard box with a slight dent. Plenty of paper inside to prevent the drives from flying around. Drives were wrapped in bubble wrap, with the drives themselves also separated with a few layers of it for maximum protection. Drives in anti-static bags. Just as described, the drives did have slight scratches and very minor dents in them, but in all other aspects they looked like new.
One of the hard drives. It does have slight dents and scratches, matching the description. The second drive had a more noticeable bump in it. The backside of the drives. Those USB-SATA adapters from shucking are really darn handy now. Adapter courtesy of my brother-in-law.
Before putting them to use, I formatted the drives using badblocks. It took a full 24 hours to do a full drive write. The write performance peaked at 275 MB/s and slowed down to 123 MB/s at the end, which is expected.1
The performance of the drive during the full drive format.
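Those throughput numbers line up nicely with the roughly 24-hour duration. A quick sanity check, crudely averaging the start and end speeds as an approximation of a single sequential write pass:

```python
# Rough estimate of a full-drive sequential write time for an 18 TB disk,
# assuming throughput falls roughly linearly from 275 MB/s to 123 MB/s.
capacity_bytes = 18e12
peak_mb_s = 275  # MB/s at the outer tracks (start of drive)
end_mb_s = 123   # MB/s at the inner tracks (end of drive)
avg_mb_s = (peak_mb_s + end_mb_s) / 2  # crude average over the whole drive

hours = capacity_bytes / (avg_mb_s * 1e6) / 3600
print(f"estimated write pass: {hours:.1f} hours")  # ~25 hours
```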
I also had to choose a larger block size for badblocks, because with the default block size it could not handle a drive this large, resulting in the command being badblocks -wsv -b 8192 /dev/sdX.
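My understanding of the limitation is that badblocks tracks block counts in 32-bit variables, so the number of blocks on the drive has to fit below 2^32. With the default 1 KiB block size an 18 TB drive has too many blocks; 8 KiB blocks bring the count back under the limit:

```python
# Why badblocks needs a larger block size on an 18 TB drive:
# the block count must fit in a 32-bit counter (my understanding).
capacity_bytes = 18 * 10**12

default_blocks = capacity_bytes // 1024  # default 1 KiB block size
larger_blocks = capacity_bytes // 8192   # with -b 8192

limit = 2**32
print(default_blocks > limit)  # True  -> default block size overflows
print(larger_blocks < limit)   # True  -> 8 KiB blocks fit
```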
This is what peak jank looks like. I unfortunately did not save the SMART data from when I received the drives, but the contents were as expected: no more than a few power-on hours, and the other metrics were OK. Keep in mind that it’s also possible to reset the SMART data on a drive, so this information cannot be taken at face value.
The drives are noisy, as expected. They run at 7200 RPM and make the usual clicks and clacks that a normal hard drive does. If this bothers you, use foam padding to dampen it. The soft side of a sponge can work just as well.
With these drives I’ve now followed my own advice and tiered my storage: two 1 TB SSDs for the things that benefit from good speed and latency (databases, containers), and the 18 TB hard drives for bulk storage, backups and less frequently used data. Coming from an all-SSD build, I expected the performance to drop in day-to-day operations, but in most cases I cannot tell the difference. My family photos load just fine, media plays back well, and backups take slightly longer, which isn’t noticeable since they run during the night. Only when I look at the Prometheus node exporter graphs do I notice that the server sometimes waits on the disks a bit more, showing up as higher iowait.
During full backups or disk scrubs, the iowait is more prevalent on the graphs (the red part), but it doesn’t seem to impact my other workloads in a significant way. The drives are connected via two WD Elements/My Book USB-SATA adapters over USB 3.0, and sit right below my ThinkPad T430, which proudly runs as my home server. I added glue-on rubber feet to the stand to make sure the drives don’t accidentally slip off anywhere. It does nothing to reduce the noise, though; I’m convinced it’s actually making the noise worse. I’m not proud of the lack of cable management, but this setup works well, and given how often I get new ideas, it doesn’t make sense to organize it too much anyway. The power usage did shoot up as a result, by roughly 10-20 W. Not ideal, but my whole networking and home server setup idles at below 45 W, and I’ve had less efficient home servers in the past, so it’s not that big of a deal.
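For reference, the iowait share in those graphs comes from the node exporter’s per-mode CPU counters. A typical PromQL expression for it looks something like this (assuming the standard node_cpu_seconds_total metric exposed by node exporter):

```promql
# Fraction of CPU time spent waiting on disk I/O, averaged over all cores
avg(rate(node_cpu_seconds_total{mode="iowait"}[5m]))
```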
The power usage was elevated while I was formatting and copying files over to the new drives, but after that it has stabilized at around 1.2 kWh per day. In this configuration, the drives run quite cool. During formatting on a hot day, I saw them go up to a maximum of 51°C, but in general use they sit at around 38-42°C.
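That daily figure is consistent with the idle power numbers above; converting energy per day back to average draw:

```python
# Sanity check: does 1.2 kWh/day match the reported idle power draw
# (sub-45 W setup plus roughly 10-20 W for the drives)?
kwh_per_day = 1.2
avg_watts = kwh_per_day * 1000 / 24
print(f"average draw: {avg_watts:.0f} W")  # 50 W
```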
Overall, I’m reasonably happy with the drives. I expect them to last me at least 5 years, and I’ll probably swap one of them out a bit sooner to reduce the risk of a full drive pool failure. They’ve made it through the first 50 days, so that’s good!
hard drives are expected to be slower at the end of the drive because of their geometry: the platter rotates at a constant 7200 RPM, but the end of the drive’s address space sits on the inner tracks of the platter, near the center of the spindle, where the linear velocity under the head, and therefore the data rate, is lower. Math is cool! ↩︎
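The effect is easy to quantify: at constant RPM the linear velocity under the head scales with track radius, so (to a first approximation, ignoring zone-bit recording details) the measured speeds imply the ratio between the outermost and innermost data track radii:

```python
# At constant angular velocity, the linear speed under the head is
# v = 2 * pi * r * (rpm / 60), so the data rate scales roughly with
# track radius r.
outer_speed = 275  # MB/s, start of drive (outer tracks)
inner_speed = 123  # MB/s, end of drive (inner tracks)

# Implied outer/inner track radius ratio, assuming speed ∝ radius:
radius_ratio = outer_speed / inner_speed
print(f"implied outer/inner track radius ratio: {radius_ratio:.2f}")  # ~2.24
```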