Tiernan's Comms Closet

Geek, Programmer, Photographer, Network Engineer…


Building Cloud Images for Proxmox

I needed to create a few Ubuntu VMs for a Kubernetes cluster for testing, and wanted to make this as easy as possible using Proxmox and some (minor) automation… Here is what I have done:

First, download the base image:

wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

Then tweak the image… I’m using my apt-cacher-ng proxy here, so I set the proxy for all VMs. You can tweak it as required, or drop it entirely by removing the --append-line option. I am also installing qemu-guest-agent here. You can add extra items at this point if you want.

sudo virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent --append-line '/etc/apt/apt.conf.d/00proxy:Acquire::http { Proxy "http://10.244.71.182:3142"; };'
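If you want more than the guest agent baked in, the same command takes additional options. A sketch (the extra packages and timezone here are just examples, not what I actually used):

# Hedged sketch: bake a few extra packages and a timezone into the image.
# The package names and timezone are examples only; swap in whatever you need.
sudo virt-customize -a jammy-server-cloudimg-amd64.img \
  --install nfs-common,curl \
  --timezone "Europe/Dublin" \
  --run-command 'systemctl enable qemu-guest-agent'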

Next, sysprep the image. This resets it to a default state; if you don’t do this and clone the machine 2 or 3 times, they all get the same machine ID and IP address… [Note: This is not fully working for me… See below where I make changes to the machine ID…]

sudo virt-sysprep -a jammy-server-cloudimg-amd64.img

Create the template. I used ID 9000 and set a name; you can change these. I have mine tagged with VLAN 72 (my Kubernetes VLAN), which you can change or remove as required. I also grow the disk by 50GB. Any mention of godboxv2-tank should be changed to your storage name…

sudo qm create 9000 --name "ubuntu-2204-cloudinit-template" --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0,tag=72

sudo qm importdisk 9000 jammy-server-cloudimg-amd64.img godboxv2-tank

sudo qm set 9000 --scsihw virtio-scsi-pci --scsi0 godboxv2-tank:vm-9000-disk-0

sudo qm set 9000 --boot c --bootdisk scsi0

sudo qm disk resize 9000 scsi0 +50G

sudo qm set 9000 --ide2 godboxv2-tank:cloudinit

sudo qm set 9000 --serial0 socket --vga serial0

sudo qm set 9000 --agent enabled=1

sudo qm template 9000
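Before cloning anything from it, a quick sanity check of the template never hurts; qm config dumps all the settings we just applied:

# Quick check: dump the template's settings and confirm the disk, cloud-init drive and agent are all there.
sudo qm config 9000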

Clone the template into a new VM.

sudo qm clone 9000 2001 --name k8s-01

sudo qm set 2001 --sshkey godboxv3.pub

sudo qm set 2001 --memory 4096

sudo qm set 2001 --ciuser tiernano

sudo qm set 2001 --ipconfig0 ip=dhcp

Change tiernano and godboxv3.pub to your own settings, and change names and memory as required.
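Since I needed a few nodes for the cluster, a small loop saves retyping the same commands. This is just a sketch: the extra VM IDs (2002, 2003) and names (k8s-02, k8s-03) are assumptions, so adjust to your own numbering:

# Hedged sketch: clone a few Kubernetes nodes from the template in one go.
# VM IDs 2002/2003 and names k8s-02/k8s-03 are assumptions; change them to suit your setup.
for i in 1 2 3; do
  sudo qm clone 9000 200$i --name k8s-0$i
  sudo qm set 200$i --sshkey godboxv3.pub --memory 4096 --ciuser tiernano --ipconfig0 ip=dhcp
done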

As mentioned above, I am still having the issue with IPs being shared… To fix this, log into the boxes and run the following:

echo -n > /etc/machine-id

rm /var/lib/dbus/machine-id

ln -s /etc/machine-id /var/lib/dbus/machine-id

and then reboot. The problem should now be solved.
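If you have a few clones to fix, something like this does the same job over SSH (a sketch: the host names beyond k8s-01 are assumptions):

# Hedged sketch: reset the machine ID on each clone over SSH, then reboot it.
# Host names k8s-02 and k8s-03 are assumptions; adjust the list and the user to your setup.
for host in k8s-01 k8s-02 k8s-03; do
  ssh tiernano@"$host" 'sudo sh -c "echo -n > /etc/machine-id && rm -f /var/lib/dbus/machine-id && ln -sf /etc/machine-id /var/lib/dbus/machine-id" && sudo reboot'
done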

Some network Upgrades going on

I am in the middle of a fairly large network upgrade for the CloudShed. I have bought 2 Ubiquiti Unifi Hi-Capacity Aggregation Switches, a 24-port Switch Pro PoE, a Switch Enterprise 8 PoE, a couple of U7 Pro access points, and a U6 In-Wall.

The 2 Aggregation Switches have 4 25Gb ports each, along with 28 10Gb ports. 2 of the 25Gb ports are going to be used for the link between the house and the CloudShed. The U6 In-Wall is going into the office, the 2 U7 Pros are in the house already, powered by the Switch Enterprise 8 PoE (2.5Gb Ethernet on that!), and the 24-port PoE switch will replace my older 16-port one, which does not have 10Gb Ethernet. More on this coming when I get more time to install it all.

Day 61 of #100daysofhomelab – swapping disks in a Hetzner Dedicated Machine

It’s been a while… So, for Day 61 of #100daysofhomelab, I thought I should write up how to swap a disk in a Hetzner Dedicated Machine.

I have a dedicated server I rent from Hetzner in Germany. It has a Xeon E5-1650 V2 processor (6 cores, 12 threads, 3.5GHz base, 3.9GHz turbo), 128GB RAM, and a pretty impressive 15 6TB HDDs. All drives are hooked to a MegaRAID controller, but because I am running Proxmox, I left it in JBOD mode and set up the 15 drives in RAIDZ-2. All 15 drives are in a single pool (probably not ideal, but it works for me). Now and again, I get a message from Proxmox telling me about bad blocks… and every time it happens, I have to remember what to do to find the bad drive, report it to Hetzner, wait for them to replace the drive, and then add it back to the pool… Today, it happened, so I thought I had better document it, to help future me, and hopefully someone else out there…

First, we need to find the drive in question. Usually, in my alerts, I get the serial number of the drive causing problems. So, I ran the following command:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

This gives me a full list of drives, along with the Slot Number (needed when sending to Hetzner) and the serial number. Each drive’s output starts with “Enclosure Device ID:”, so when you find the serial number, look above it for the Slot Number… In my case, the issue was with the disk in Slot 10. I opened a support ticket with Hetzner requesting a replacement disk. It can take an hour or more for this, but sometimes it’s faster, depending on their load…
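If the list is long, you can also jump straight to the problem drive by searching for its serial. A sketch (the serial below is a placeholder; use the one from your alert):

# Hedged sketch: filter down to the failing drive by its serial number.
# "WD-WMC4N0D12345" is a placeholder serial; -B 5 keeps the Enclosure Device ID and Slot Number lines above it (bump it up if your output differs).
megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state" | grep -B 5 "WD-WMC4N0D12345"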

Once you get confirmation that the disk has been swapped, you need to bring it into the zpool.

First, we must check that the new drive is set up correctly. Run the following:

megacli -PDList -a0 | grep Firmware

We are looking for “Firmware state: Online, Spun Up”. If anything is marked as configured, we need to run the following:

megacli -CfgForeign -Scan -a0

This shows us any foreign configurations. If that’s more than 0, we run:

megacli -CfgForeign -Clear -a0

This clears out that configuration. Next, we need the Enclosure ID and Slot number for the new drive from:

megacli -PDList -aAll | egrep "Enclosure Device ID:|Slot Number:|Inquiry Data:|Error Count:|state"

because we need them to run:

megacli -PDMakeGood -PhysDrv [<enclosure>:<slot>] -a0

Finally, run:

megacli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -a0

Note: If that fails with a message about cache data, you may need to run:

megacli -DiscardPreservedCache -L"10" -a0

This clears the cache, after which you can run the CfgEachDskRaid0 command again. This marks all new disks as JBOD disks, which is what I use for ZFS. If you have something different, check the Hetzner docs linked below.
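For future me, the whole make-good sequence chained together looks something like this (just a sketch; the enclosure and slot values are placeholders for whatever the PDList output showed):

# Hedged sketch: the make-good steps from above, with the enclosure and slot as variables.
# ENC=252 and SLOT=10 are placeholders; use the values from your own PDList output.
ENC=252
SLOT=10
megacli -CfgForeign -Scan -a0
# Only needed if the scan above reports more than 0 foreign configs:
megacli -CfgForeign -Clear -a0
megacli -PDMakeGood -PhysDrv "[$ENC:$SLOT]" -a0
megacli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -a0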

Next, we need to swap disks in ZFS. Run

zpool status

to get the info about the missing disk; it will show as unavailable. Next, find the ID of the disk that was added:

cd /dev/disk/by-id/

ls
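If the listing is long, a quick filter helps narrow it down to entries with no partition suffix (just a sketch; it assumes the new disk has not been partitioned yet):

# Hedged sketch: list by-id entries without a partition suffix; the new, blank disk is usually among them.
ls /dev/disk/by-id/ | grep -E '^(wwn|scsi)-' | grep -v -- '-part'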

Find the new disk (it usually won’t have any partitions on it). Now it’s a matter of running the following:

zpool replace rpool /dev/disk/by-id/scsi-3600605b008f498802aa37da51674ea7e-part3 /dev/disk/by-id/wwn-0x600605b008f498802b2a3a683752e088

Swap the scsi-36xxx and wwn-0x6xxx parts for the ones you found, and rpool for your ZFS pool name.

Finally, run

zpool status

to see the current status. Running:

zpool status -v 1

shows the status with more info and refreshes every second. ZFS is now resilvering in the background, rebuilding the data onto the new drive. Since the old one is missing, it will wait till the new drive is fully resilvered and then drop the old one from the pool. This can take some time, depending on your disks and how much data is on them.

Hopefully, this helps someone!

Some links for info:

LSI RAID Controller – Hetzner Docs

Day 60 of #100daysofhomelab

Day 60 of #100daysofhomelab, and I have been sick for most of the last 2 weeks, which is why I haven’t been posting much… Today is going to be links only too…

Day 59 of #100daysofhomelab – Proxmox Updates, LTT Hacked, New Framework Laptops

Day 59 of #100daysofhomelab, and Proxmox released version 7.4 of their Virtual Environment. I have not upgraded any of my machines to it just yet, but that’s the plan for the weekend. Other than that, some links:

Day 58 of #100daysofhomelab

Day 58 of #100daysofhomelab, and today is mostly a retrospective of what I did over the last few days, with some links thrown in for good measure…

Given I am going to keep GodBoxV3 running Windows Server 2022 for the foreseeable future, I installed Veeam Availability Suite (through their NFR program) and got it backing up my Hyper-V VMs, along with my ESXi VMs, to both local and Backblaze B2 storage. So far, so good.

Also, Ubiquiti released Unifi OS 3.0 for the UDM Pro, which I upgraded this morning. Links for that are below. Some nice bits in here, like:

  • Added Wireguard VPN Server support.
  • Added VPN Client Routing.
  • Added Ad-blocking feature.
  • Added support for OpenVPN tunnel in Traffic Routes.
  • Allow adding multiple VPN Clients.

The 2.5 release of the OS had the VPN Client option, but ALL traffic went over the VPN, whether you wanted it to or not. This release gives you the option to say that traffic from a given host or network, or even traffic to a given IP or range, goes over the VPN link. The Ad-blocking feature is nice too, but I have not tried it yet (still using PiHole for the moment), and the Wireguard VPN option is going to be VERY handy. More testing coming soon…

Anyway, on to the links.

Day 57 of #100daysofhomelab

Day 57 of #100daysofhomelab, and it’s a link dump for today:

Day 55 of #100daysofhomelab

Day 55 of #100daysofhomelab, and it’s mostly going to be a link dump today… Still working on my TrueNAS stuff; hopefully I can get a couple of blog posts about it up soon enough… Anyway, on to the links.

Day 54 of #100daysofhomelab

Day 54 of and it’s going to be a very quick one… My head is wrecked with TrueNAS… Swapped TrueNAS Core (FreeBSD) to TrueNAS Scale (Linux). Trying to get Resilio Sync to work on it, but getting permissions issues… It’s after 2 am here, so giving up for the moment, but hopefully, I can figure it out tomorrow… On a different note, I ordered a load of storage upgrades (Another Hyper M.2 x16 card, some new NVMe drives, and some other stuff) for GodBoxV3… More details soon…