Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…

Currently Viewing Posts Tagged ZFS

Day 61 of #100daysofhomelab – swapping disks in a Hetzner Dedicated Machine

It’s been a while… So, for Day 61 of #100daysofhomelab, I thought I should write up how to swap a disk in a Hetzner dedicated machine.

I have a dedicated server I rent from Hetzner in Germany. It has a Xeon E5-1650 V2 processor (6 cores, 12 threads, 3.5GHz base, 3.9GHz turbo), 128GB RAM, and a pretty impressive 15 6TB HDDs. All drives are hooked to a MegaRAID controller, but because I am running Proxmox, I left it in JBOD mode and set up the 15 drives in RAIDZ2. All 15 drives are in a single pool (probably not ideal, but it works for me). Every now and again, I get a message from Proxmox telling me about bad blocks… and every time it happens, I have to remember what to do: find the bad drive, report it to Hetzner, wait for them to replace the drive, and then add it back to the pool. Today, it happened again, so I thought I had better document it, to help future me, and hopefully someone else out there…
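For the curious, a pool like that could be built with something along these lines (a rough sketch, not my exact command; the wwn-0x6 pattern is a placeholder for your real /dev/disk/by-id paths):

# grab the 15 whole-disk ids (pattern illustrative; adjust for your drives)
DISKS=$(ls /dev/disk/by-id/wwn-0x6* | grep -v -- -part)
# one big RAIDZ2 vdev across all 15 drives
zpool create -o ashift=12 rpool raidz2 $DISKS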

First, we need to find the drive in question. Usually, in my alerts, I get the serial number of the drive causing problems. So, I ran the following command:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

This gives me a full list of drives along with the Slot Number (needed when sending to Hetzner) and the serial number. Each drive’s block of output starts with “Enclosure Device ID:”, so when you find the serial number, look above it for the Slot Number… In my case, the issue was with the disk in Slot 10. I opened a support ticket with Hetzner requesting a replacement disk. It can take an hour or more, sometimes faster, depending on their load…
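If you don’t fancy eyeballing the output, a small pipeline can pair each slot with its inquiry data directly (this assumes the usual PDList output layout, so treat it as a convenience rather than gospel):

megacli -PDList -aAll | awk -F': ' '/Slot Number/ {slot=$2} /Inquiry Data/ {print "Slot " slot ": " $2}'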

Once you get confirmation that the disk swap is done, you need to bring the new disk into the zpool.

First, we must check that the new drive is set up correctly. Run the following:

megacli -PDList -a0 | grep Firmware

We are looking for “Firmware state: Online, Spun Up”. If anything is still marked as configured, we need to run the following:

megacli -CfgForeign -Scan -a0

This shows us any foreign configurations. If the count is more than 0, we run:

megacli -CfgForeign -Clear -a0

This clears out that configuration. Next, we need the Enclosure ID and Slot number for the new drive from:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

because we need them for the following command:

megacli -PDMakeGood -PhysDrv [<enclosure>:<slot>] -a0
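For example, if the listing showed Enclosure Device ID 9 for the new drive in Slot 10 (values illustrative; use your own), that would be:

megacli -PDMakeGood -PhysDrv [9:10] -a0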

Finally, run:

megacli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -a0

Note: If that fails with a message about cache data, you may need to run:

megacli -DiscardPreservedCache -L"10" -a0

This clears that cache, and you can then rerun the CfgEachDskRaid0 command. It marks each new disk as a single-drive RAID 0 volume (effectively JBOD), which is what I use for ZFS. If you have something different, check the docs from Hetzner below.

Next, we need to swap disks in ZFS. Run

zpool status

to get the info about the missing disk. The missing disk will show as unavailable. Next, find the ID of the disk that was added:

cd /dev/disk/by-id/

ls
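To spot the bare disk quicker, you can filter out anything that already has partitions (assuming the usual by-id naming, where partitions get a -partN suffix):

ls -l /dev/disk/by-id/ | grep -v -- -part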

The new disk usually won’t have any partitions on it. Once you have found it, it’s a matter of running the following:

zpool replace rpool /dev/disk/by-id/scsi-3600605b008f498802aa37da51674ea7e-part3 /dev/disk/by-id/wwn-0x600605b008f498802b2a3a683752e088

Swap the scsi-36xxx and wwn-0x6xxx parts for the ones you found, and rpool for your ZFS pool name.

Finally, run

zpool status

to see the status. Running

zpool status -v 1

shows the status with more info and refreshes every second. ZFS now resilvers in the background, rebuilding the data onto the new drive. Since the old one is missing, it will wait until the new drive is sorted and then drop the old one from the pool. This can take some time, depending on your disks and the amount of data.

Hopefully, this helps someone!

Some links for info:

LSI RAID Controller – Hetzner Docs

Day 56 of #100daysofhomelab

Day 56 of #100daysofhomelab, and I managed to fix some stuff on my TrueNAS box. There was a lot of messing when it came to permissions, but it works now. Some speeds are below. Not quite the speeds I was expecting, but then I have not tweaked anything yet… This is going from my MacBook Pro with a 10Gb adapter. The reads are quite good, but the writes… well, the HDDs are FASTER than the NVMe… No idea why… I did get a new card to add another 4 NVMe drives, so we’ll see what happens when that gets built.

[Screenshot: NVMe drive speeds]
[Screenshot: spinning disk speeds]

And now, the links:

ZFS over multiple DVD/BD-R images

A couple of days back, I started thinking about archiving and backup software. I kind of have backups “sorted”: my MacBook Pro uses Backblaze to back up to the cloud, Time Machine backs it up to my Synology, my VMs on Proxmox are backed up to an off-site Proxmox Backup Server, my Synology and QNAPs are backed up to B2 and Hetzner, and some other bits and bobs… But for archiving, I am not really set up… So, I went looking for archiving software. I couldn’t find anything, so I asked on r/DataHoarder. Still no options at the time of posting, but someone did reply with the idea of using DVDs (or Blu-rays) for ZFS...

OK, that’s just crazy, but in kind of a good way… kind of like the floppy-disk RAID builds I have seen… It does help with storing the data, and it tolerates losing some of the discs… but it needs some automation to make it properly workable…

Assuming you are using this for archiving, you could automate building 5 ISOs, each just shy of 100GB, once a month, create a ZFS RAIDZ1, RAIDZ2 or RAIDZ3 pool over them (depending on how paranoid you are) and then write your data to it. RAIDZ1 lets you lose 1 disc, giving you around 400GB of usable space. RAIDZ2 brings that up to 2 losable discs and 300GB, and RAIDZ3 is 3 discs and 200GB. I think RAIDZ2 would be your best bet, especially if you are using something like M-DISC and storing the discs safely.
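As a rough sketch of how the pool creation could look (file and pool names are made up; ZFS happily builds pools on file-backed vdevs, which is what makes this trick possible):

# five sparse ~95GB images, each sized to fit a 100GB BD-R XL
truncate -s 95G /archive/2023-03-disc{1..5}.img
# RAIDZ2 across the five files: any two discs can be lost
zpool create march2023 raidz2 /archive/2023-03-disc{1..5}.img
# ...copy the month's data in, then take the pool down cleanly
zpool export march2023
# now burn each .img to its own disc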

Once finished, unmount the pool and send an email saying the ISOs need writing to disc. Label each disc with a unique serial number (this is where the archiving software would be handy) plus the set details and number (so, March 2023, Disc 1/5).

If you need something from that backup, you stick the discs in the drives… You don’t even need all of them: with 5 discs and RAIDZ1, you need to mount a minimum of 4; RAIDZ2 needs 3 and RAIDZ3 a minimum of 2… Ideally, you would want all 5, allowing you to check the full set (a ZFS scrub) and then get your files off.
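Continuing the sketch above, a restore would mean copying however many disc images you have back to a directory and importing from there (ZFS will run the pool degraded as long as enough members are present):

# with at least 3 of the 5 RAIDZ2 members copied into /mnt/recovered:
zpool import -d /mnt/recovered march2023
zpool status march2023
# with all 5 present, verify the whole set
zpool scrub march2023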

A year of archiving would require 5 drives (say 100 quid a pop; USB makes things easier, internal is possibly cheaper) and 60 discs (I found 25-packs of 100GB M-DISC discs on Amazon for around 500 EUR), costing a total of maybe 2k, and leaving you with 15 spare discs…

Follow-up questions:

  • Does ZFS allow mounting a pool read-only? (See the note after this list.)
  • Could you do this with rewritable Blu-ray discs? Could they be mounted directly and written to? Leave them in the drives for the month, let writes do their thing, and then archive them once a month? It’s an archive, so it doesn’t need to be fast…
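On the first question: as far as I know, yes, ZFS pools can be imported read-only, which would suit write-once media (pool name continues the sketch above):

zpool import -o readonly=on -d /mnt/recovered march2023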

Day 36 of #100daysofhomelab

Day 36 of #100daysofhomelab, and after yesterday’s post about RAID 10 on my external array, I found ZFS on OSX, and, well, now I have a ZFS RAIDZ pool set up. It is showing around 28.8TB of usable space, and so far, so good.


Other than that, I have been looking into Ubuntu Landscape to monitor my fleet of Ubuntu machines. If you host it in-house, you get 10 machines for free, so hopefully that’s enough for me to start with… I am working on getting it running on 22.04, using these beta install steps. The RB5009 install is still pending… I keep hitting stupid blocks stopping me from doing it, but hopefully this week…

Day 11 of #100daysofhomelab

Day 11 of #100daysofhomelab, and I am trying to fix my Plex server… It seems that when I moved from Fedora to Ubuntu, my ZFS pool did not import. I did not notice this, since most of what was on it was temp files and logs… Well, the main drive is running out of space, so I checked the ZFS pool, and it was failing because the version of OpenZFS I was running on Fedora (from master in their GitHub repo) is not compatible with the one on Ubuntu… (facepalm) So, I have to rebuild and install OpenZFS from code… hopefully this works… [Edit: it did not work… ugh] [Edit 2: This did work though: Installing ZFS on Ubuntu (uptrace.dev), especially the part about building from code.]
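If you hit something similar, it’s worth comparing the OpenZFS builds on both systems before moving a pool (the version subcommand exists in OpenZFS 0.8 and newer):

# prints the userland tools version and the loaded kernel module version
zfs version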

In other news, some links below.

Backups, Backups, Backups!

I have posted about backups a few times on this site in recent years, and it’s still something I make tweaks to every now and again. The latest setup is probably over the top, but I will give you a walkthrough, and some of it could be useful to some of you.

I have a couple of different machines and storage devices running that need backups. Some need daily backups, some could get away with weekly. The list is as follows:

  • GodBoxV1 (2x4-core Xeon, 82GB RAM, Fedora, 512GB boot SSD, 5x4TB HDD in ZFS RAIDZ1)
  • GodBoxV3 (2x20-core Xeon, 192GB RAM, Ubuntu, 2x512GB NVMe SSD in RAID 0 for boot, 4x512GB NVMe SSD in a ZFS stripe for FAST storage, 8x8TB HDD in ZFS RAIDZ2 for bulk storage)
  • Docker Box (VM, runs a LOT of different containers on the network)
  • Synology DS1817+ (8x8TB HDD in SHR with 48TB usable, 2x10Gb + 4x1Gb NICs)
  • QNAP TS-932X (5x8TB HDD in RAID 6 along with 4x512GB SSDs in RAID 5, 2x10Gb NICs)


GodBoxV2 and the 4 C6100 boxes are running Windows Server 2019, and I have 4 new C6220s which, when in production, may be running either Server 2019 or VMware ESXi. More on this in a future post. GodBoxV1 and V3 are being backed up with Borg/Borgmatic; the Server 2019 boxes are running Hyper-V, and their VMs are not backed up on a nightly basis, but that is planned for the future…

Borgmatic is basically a very nice and handy wrapper for Borg itself. It lets you configure, in a YAML file, what you want to back up, what you want to exclude, where you want it backed up to (multiple locations if required) and details on retention, etc. It can also send notifications when a backup completes or fails. I have 3 main machines backed up using Borgmatic, but will probably add more at some stage. These three back up to 3 different locations: local ZFS storage in-house (currently on GodBoxV1), RSync.net, and Hetzner’s Storage Box. [Note: Hetzner has 2 types of storage: Storage Box and Storage Share. Storage Share seems to be NextCloud and does not have BorgBackup installed; Storage Box can be used with BorgBackup, though.]
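To give a flavour of the config, here is a minimal sketch (paths, repo URLs and retention numbers are all made up; the sectioned YAML layout matches the Borgmatic releases from around this time):

# write a minimal borgmatic config (contents illustrative)
cat > /etc/borgmatic/config.yaml <<'EOF'
location:
    source_directories:
        - /home
        - /etc
    repositories:
        - ssh://user@example.rsync.net/./backups.borg
        - /tank/borg/local.borg
retention:
    keep_daily: 7
    keep_weekly: 4
    keep_monthly: 6
EOF
# then run it nightly, e.g. from cron
borgmatic --verbosity 1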

[Note: RSync.net have an offer for Borg storage: 1.5c per GB per month, so 100GB for a year costs only $18. On their signup page, if you enter referral code 2019-09-13_05-27-04, I get some extra storage for backups on my end, and you can help me continue writing random stuff here!]

Nightly, Borgmatic runs and backs up everything important on GodBoxV1, V3 and the Docker Box to all three locations. Then, on GodBoxV3, some larger files (photos, video and other large data from my cameras) are backed up to Hetzner. The reason the large files only go to one location at the moment is size: they weigh in at around 300GB, give or take, and I only have around 200GB of usable space with RSync.net. My plan is to use the QNAP or Synology box as a secondary backup for this storage at some stage.

On a nightly basis, the Synology also backs up to Backblaze B2, Wasabi and Hetzner using Hyper Backup. Finally, on a weekly basis, some folders on the Synology are backed up to AWS Glacier.

This gives me a fairly good set of backup options, but there are some tweaks I want to make:

  • Important VMs on the Hyper-V cluster should be backed up: a daily backup to local storage (QNAP, Synology, ZFS) and a weekly backup off-site (Hetzner, B2, RSync.net).
  • Large media files backed up to a second location, either local or remote.
  • The Intel NUC, home laptop and Mac Mini should also be backed up. 99% of the time they use storage from the ZFS pool or the NAS devices, but they still have local storage.
  • Look into backing up iPhones, Android phones, iPads, etc., to local storage too. I do use PhotoSync to copy photos from my iPhone to the ZFS storage, which is backed up, but having something to back up the rest of the data, other than iCloud, would be handy.

So, that’s my 2020 backup plan. Any comments, questions, etc., shout in the comments section.